docs: Unmerge ECS architecture from overview

This commit is contained in:
Paul Glass 2022-01-18 16:17:49 -06:00
parent fbbe71415a
commit 446355ff69
5 changed files with 116 additions and 96 deletions

View File

@ -0,0 +1,97 @@
---
layout: docs
page_title: Architecture - AWS ECS
description: >-
Architecture of Consul Service Mesh on AWS ECS (Elastic Container Service).
---
# Architecture
The following diagram shows the main components of the Consul architecture when deployed to an ECS cluster:
![Consul on ECS Architecture](/img/consul-ecs-arch.png)
1. **Consul servers:** Production-ready Consul server cluster
1. **Application tasks:** Each application task runs user application containers along with two helper containers:
1. **Consul client:** The Consul client container runs Consul. The Consul client communicates
with the Consul server and configures the Envoy proxy sidecar. This communication
is called _control plane_ communication.
1. **Sidecar proxy:** The sidecar proxy container runs [Envoy](https://envoyproxy.io/). All requests
to and from the application container(s) run through the sidecar proxy. This communication
is called _data plane_ communication.
1. **Mesh Init:** Each task runs a short-lived container, called `mesh-init`, which sets up initial configuration
for Consul and Envoy.
1. **Health Syncing:** Optionally, an additional `health-sync` container can be included in a task to sync health statuses
from ECS into Consul.
1. **ACL Controller:** Automatically provisions Consul ACL tokens for Consul clients and service mesh services
in an ECS Cluster.
For more information about how Consul works in general, see Consul's [Architecture Overview](/docs/architecture).
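For illustration, the Consul client container described above conceptually runs a Consul agent in client mode, similar to the command below. This is only a sketch: the retry-join value, data directory, and gRPC port are assumptions, not the exact configuration generated by the Consul ECS Terraform modules.

```shell
# Illustrative sketch of the consul-client container's agent process.
# The retry-join value, data directory, and gRPC port are assumptions.
consul agent \
  -data-dir /consul/data \
  -client 0.0.0.0 \
  -retry-join "provider=aws tag_key=consul-server tag_value=true" \
  -hcl 'ports { grpc = 8502 }'   # gRPC port Envoy uses for xDS (control plane)
```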
## Task Startup
This diagram shows the timeline of a task starting up and all its containers:
<img alt="Task Startup Timeline" src="/img/ecs-task-startup.svg" style={{display: "block", maxWidth: "400px"}} />
- **T0:** ECS starts the task. The `consul-client` and `mesh-init` containers start:
- `consul-client` uses the `retry-join` option to join the Consul cluster
- `mesh-init` registers the service for the current task and its sidecar proxy with
Consul. It runs `consul connect envoy -bootstrap` to generate Envoy's
bootstrap JSON file and write it to a shared volume. `mesh-init` exits after completing these operations. (A sketch of these commands appears after this timeline.)
- **T1:** The following containers start:
- The `sidecar-proxy` container starts and runs Envoy by executing `envoy -c <path-to-bootstrap-json>`.
- If applicable, the `health-sync` container syncs health checks from ECS to Consul (see [ECS Health Check Syncing](#ecs-health-check-syncing)).
- **T2:** ECS marks the `sidecar-proxy` container as healthy, using a health check that detects whether its public listener port is open. At this point, your application containers are started, since all of the Consul machinery is ready to service requests. The running containers are now `consul-client`, `sidecar-proxy`, the optional `health-sync` container, and your application container(s).
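Taken together, the startup steps above correspond roughly to the following commands, each running in a different container. The file paths and service ID shown here are assumptions made for this sketch; the real invocations are assembled by the Consul ECS tooling.

```shell
# Rough sketch of the startup sequence; each command runs in its own container.
# File paths and the service ID are assumptions.

# T0 (consul-client): join the Consul cluster.
consul agent -retry-join "<consul-server-address>" -data-dir /consul/data &

# T0 (mesh-init): register the service and its sidecar proxy, then generate
# Envoy's bootstrap config into a shared volume and exit.
consul services register /consul/service-registration.hcl
consul connect envoy -sidecar-for my-service-id -bootstrap > /consul/envoy-bootstrap.json

# T1 (sidecar-proxy): run Envoy from the generated bootstrap file.
envoy -c /consul/envoy-bootstrap.json
```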
## Task Shutdown
This diagram shows an example timeline of a task shutting down:
<img alt="Task Shutdown Timeline" src="/img/ecs-task-shutdown.svg" style={{display: "block", maxWidth: "400px"}} />
- **T0**: ECS sends a TERM signal to all containers. Each container reacts to the TERM signal:
- `consul-client` begins to gracefully leave the Consul cluster (see the sketch after this timeline).
- `health-sync` stops syncing health status from ECS into Consul checks.
- `sidecar-proxy` ignores the TERM signal and continues running until the `user-app` container exits. This allows the application container to continue making outgoing requests through the proxy to the mesh.
- `user-app` exits, unless it is configured to ignore the TERM signal, in which case it continues running.
- **T1**:
- `health-sync` updates its Consul checks to critical status and exits. This ensures this service instance is marked unhealthy.
- `sidecar-proxy` notices the `user-app` container has stopped and exits.
- **T2**: `consul-client` finishes gracefully leaving the Consul datacenter and exits.
- **T3**:
- ECS notices that all containers have exited and will soon change the task status to `STOPPED`.
- Updates about this task have reached the rest of the Consul cluster, so downstream proxies have been updated to stop sending traffic to this task.
- **T4**: At this point task shutdown should be complete. Otherwise, ECS will send a KILL signal to any containers still running. The KILL signal cannot be ignored and will forcefully stop containers. This will interrupt in-progress operations and possibly cause errors.
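As a conceptual illustration only, the graceful-leave behavior between T0 and T2 can be thought of as a signal handler wrapped around the Consul agent. The consul-ecs containers implement this internally; the server address below is a placeholder.

```shell
# Conceptual sketch: turn ECS's TERM signal into a graceful Consul leave.
consul agent -retry-join "<consul-server-address>" -data-dir /consul/data &
AGENT_PID=$!

on_term() {
  consul leave          # ask the local agent to gracefully leave the datacenter
  wait "$AGENT_PID"     # then wait for the agent process to exit
}
trap on_term TERM

wait "$AGENT_PID"
```

The window between the TERM signal at T0 and the KILL signal at T4 is bounded by ECS's stop timeout (the `stopTimeout` container setting, which typically defaults to 30 seconds), so all shutdown steps must complete within it.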
## Automatic ACL Token Provisioning
Consul ACL tokens secure communication between agents and services.
The following containers in a task require an ACL token:
- `consul-client`: The Consul client uses a token to authorize itself with Consul servers.
All `consul-client` containers share the same token.
- `mesh-init`: The `mesh-init` container uses a token to register the service with Consul.
This token is unique for the Consul service, and is shared by instances of the service.
The ACL controller automatically creates ACL tokens for mesh-enabled tasks in an ECS cluster.
The `acl-controller` Terraform module creates the ACL controller task. The controller creates the
ACL token used by `consul-client` containers at startup and then watches for tasks in the cluster. It checks tags
to determine if the task is mesh-enabled. If so, it creates the service ACL token for the task, if the
token does not yet exist.
The ACL controller stores all ACL tokens in AWS Secrets Manager, and tasks are configured to pull these
tokens from AWS Secrets Manager when they start.
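For example, you can inspect one of these tokens with the AWS CLI. The secret name below is hypothetical; real names are derived from the ACL controller's configured prefix. In practice, tasks do not call the CLI themselves; ECS injects the secret value through the `secrets` section of the container definition when the task starts.

```shell
# Read a controller-managed token from AWS Secrets Manager.
# The secret name is a placeholder for this example.
aws secretsmanager get-secret-value \
  --secret-id my-consul-client-token \
  --query SecretString \
  --output text
```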
## ECS Health Check Syncing
ECS health checks automatically sync with Consul health checks for all application containers that meet the following conditions:
* The container is marked as `essential`.
* The container has an ECS `healthCheck` defined.
* The container is not configured with native Consul health checks.
The `mesh-init` container creates a TTL health check for
every container that meets these criteria, and the `health-sync` container ensures
that the ECS and Consul health check statuses stay in sync.
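As a minimal sketch of the syncing mechanism, the updates that `health-sync` performs are equivalent to TTL check updates against the local Consul agent's HTTP API. The check ID below is hypothetical, and the default agent HTTP port (8500) is assumed.

```shell
# Minimal sketch of a TTL check update against the local Consul agent.
curl --silent --request PUT \
  --data '{"Status": "passing", "Output": "ECS health check: HEALTHY"}' \
  http://localhost:8500/v1/agent/check/update/my-app-container-check
```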

View File

@ -8,9 +8,8 @@ description: >-
# AWS ECS
Consul can be deployed on [AWS ECS](https://aws.amazon.com/ecs/) (Elastic Container Service) using our official Terraform modules.
![Consul on ECS Architecture](/img/consul-ecs-arch.png)
Consul service mesh applications can be deployed on [AWS Elastic Container Service](https://aws.amazon.com/ecs/) (ECS)
using our official Terraform modules.
## Service Mesh
@ -18,6 +17,17 @@ Using Consul on AWS ECS enables you to add your ECS tasks to the service mesh an
take advantage of features such as zero-trust-security, intentions, observability,
traffic policy, and more.
## Architecture
![Consul on ECS Architecture](/img/consul-ecs-arch.png)
Consul on ECS follows an [architecture](/docs/internals/architecture) similar to other platforms, but each ECS task is a
Consul node. An ECS task runs the user application container(s), as well as a Consul client container for control plane
communication and an [Envoy](https://envoyproxy.io/) sidecar proxy container to facilitate data plane communication for
[Consul Connect](/docs/connect).
For a detailed architecture overview, see the [Architecture](/docs/ecs/architecture) page.
## Getting Started
There are several ways to get started with Consul on ECS.
@ -28,94 +38,3 @@ There are several ways to get started with Consul with ECS.
* The [Consul with Dev Server on EC2](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/examples/dev-server-ec2) example installation deploys a sample application in ECS using the EC2 launch type.
See the [Requirements](/docs/ecs/requirements) and the full [Install Guide](/docs/ecs/install) when you're ready to install Consul on an existing ECS cluster and add existing tasks to the service mesh.
## Architecture
The following diagram shows the main components of the Consul architecture when deployed to an ECS cluster:
![Consul on ECS Architecture](/img/consul-ecs-arch.png)
1. **Consul servers:** Production-ready Consul server cluster
1. **Application tasks:** Each application task runs user application containers along with two helper containers:
1. **Consul client:** The Consul client container runs Consul. The Consul client communicates
with the Consul server and configures the Envoy proxy sidecar. This communication
is called _control plane_ communication.
1. **Sidecar proxy:** The sidecar proxy container runs [Envoy](https://envoyproxy.io/). All requests
to and from the application container(s) run through the sidecar proxy. This communication
is called _data plane_ communication.
1. **ACL Controller:** Automatically provisions Consul ACL tokens for Consul clients and service mesh services
in an ECS Cluster.
For more information about how Consul works in general, see Consul's [Architecture Overview](/docs/architecture).
In addition to the long-running Consul client and sidecar proxy containers, the `mesh-init` container runs
at startup and sets up initial configuration for Consul and Envoy.
### Task Startup
This diagram shows the timeline of a task starting up and all its containers:
<img alt="Task Startup Timeline" src="/img/ecs-task-startup.svg" style={{display: "block", maxWidth: "400px"}} />
- **T0:** ECS starts the task. The `consul-client` and `mesh-init` containers start:
- `consul-client` uses the `retry-join` option to join the Consul cluster
- `mesh-init` registers the service for the current task and its sidecar proxy with
Consul. It runs `consul connect envoy -bootstrap` to generate Envoy's
bootstrap JSON file and write it to a shared volume. `mesh-init` exits after completing these operations.
- **T1:** The following containers start:
- The `sidecar-proxy` container starts and runs Envoy by executing `envoy -c <path-to-bootstrap-json>`.
- If applicable, the `health-sync` container syncs health checks from ECS to Consul (see [ECS Health Check Syncing](#ecs-health-check-syncing)).
- **T2:** The `sidecar-proxy` container is marked as healthy by ECS. It uses a health check that detects if its public listener port is open. At this time, your application containers are started since all Consul machinery is ready to service requests. The only running containers are `consul-client`, `sidecar-proxy`, and your application container(s).
### Task Shutdown
This diagram shows an example timeline of a task shutting down:
<img alt="Task Shutdown Timeline" src="/img/ecs-task-shutdown.svg" style={{display: "block", maxWidth: "400px"}} />
- **T0**: ECS sends a TERM signal to all containers. Each container reacts to the TERM signal:
- `consul-client` begins to gracefully leave the Consul cluster.
- `health-sync` stops syncing health status from ECS into Consul checks.
- `sidecar-proxy` ignores the TERM signal and continues running until the `user-app` container exits. This allows the application container to continue making outgoing requests through the proxy to the mesh.
- `user-app` exits, unless it is configured to ignore the TERM signal, in which case it continues running.
- **T1**:
- `health-sync` updates its Consul checks to critical status and exits. This ensures this service instance is marked unhealthy.
- `sidecar-proxy` notices the `user-app` container has stopped and exits.
- **T2**: `consul-client` finishes gracefully leaving the Consul datacenter and exits.
- **T3**:
- ECS notices that all containers have exited and will soon change the task status to `STOPPED`.
- Updates about this task have reached the rest of the Consul cluster, so downstream proxies have been updated to stop sending traffic to this task.
- **T4**: At this point task shutdown should be complete. Otherwise, ECS will send a KILL signal to any containers still running. The KILL signal cannot be ignored and will forcefully stop containers. This will interrupt in-progress operations and possibly cause errors.
### Automatic ACL Token Provisioning
Consul ACL tokens secure communication between agents and services.
The following containers in a task require an ACL token:
- `consul-client`: The Consul client uses a token to authorize itself with Consul servers.
All `consul-client` containers share the same token.
- `mesh-init`: The `mesh-init` container uses a token to register the service with Consul.
This token is unique for the Consul service, and is shared by instances of the service.
The ACL controller automatically creates ACL tokens for mesh-enabled tasks in an ECS cluster.
The `acl-controller` Terraform module creates the ACL controller task. The controller creates the
ACL token used by `consul-client` containers at startup and then watches for tasks in the cluster. It checks tags
to determine if the task is mesh-enabled. If so, it creates the service ACL token for the task, if the
token does not yet exist.
The ACL controller stores all ACL tokens in AWS Secrets Manager, and tasks are configured to pull these
tokens from AWS Secrets Manager when they start.
### ECS Health Check Syncing
ECS health checks automatically sync with Consul health checks for all application containers that meet the following conditions:
* The container is marked as `essential`.
* The container has an ECS `healthCheck` defined.
* The container is not configured with native Consul health checks.
The `mesh-init` container creates a TTL health check for
every container that meets these criteria, and the `health-sync` container ensures
that the ECS and Consul health check statuses stay in sync.

View File

@ -182,5 +182,5 @@ python manage.py runserver "127.0.0.1:8080"
- Configure a secure [Production Installation](/docs/ecs/production-installation).
- Now that your applications are running in the service mesh, read about
other [Service Mesh features](/docs/connect).
- View the [Architecture](/docs/ecs#architecture) documentation to understand
- View the [Architecture](/docs/ecs/architecture) documentation to understand
what's going on under the hood.

View File

@ -19,7 +19,7 @@ A secure Consul cluster should include the following:
## Deploy ACL Controller
Before deploying your service, you will need to deploy the [ACL controller](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/acl-controller) so that it can provision the necessary tokens
for tasks on the service mesh. To learn more about the ACL Controller, please see [Automatic ACL Token Provisioning](/docs/ecs#automatic-acl-token-provisioning).
for tasks on the service mesh. To learn more about the ACL Controller, please see [Automatic ACL Token Provisioning](/docs/ecs/architecture#automatic-acl-token-provisioning).
To deploy the controller, you will first need to store an ACL token with `acl:write` privileges
and a CA certificate for the Consul server in AWS Secrets Manager.
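For example, the two secrets might be created with the AWS CLI as shown below. The secret names are placeholders, and the token and CA certificate values must come from your own Consul server setup.

```shell
# Store the bootstrap ACL token and the Consul server CA certificate
# in AWS Secrets Manager. Secret names are placeholders.
aws secretsmanager create-secret \
  --name my-consul-bootstrap-token \
  --secret-string "<acl-token-with-acl:write-privileges>"

aws secretsmanager create-secret \
  --name my-consul-ca-cert \
  --secret-string file://consul-agent-ca.pem
```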

View File

@ -623,6 +623,10 @@
"title": "Migrate Existing Tasks",
"path": "ecs/migrate-existing-tasks"
},
{
"title": "Architecture",
"path": "ecs/architecture"
},
{
"title": "Consul Enterprise",
"path": "ecs/enterprise"