Integrate your AWS ECS services into Consul service mesh
Consul's multi-platform capabilities allow you to manage and connect your services in a consistent manner across different clouds, platforms, runtimes, and networks.
There are two ways to integrate your ECS tasks with Consul: the Consul ECS Terraform module method and the manual task definition method. This tutorial uses the Consul ECS Terraform module method, which lets you use your existing ECS task definitions so you can quickly integrate them into your Consul service mesh.
In this tutorial, you will extend your Consul service mesh to services running in ECS. In the process, you will learn how using Consul with ECS lets you simplify scaling your services, support different runtimes, and reduce operational overhead.
Scenario overview
HashiCups is a coffee shop demo application. It has a microservices architecture and uses Consul service mesh to securely connect the services. At the beginning of this tutorial, the HashiCups frontend services (nginx, frontend, and public-api) will be in the Consul service mesh on AWS EKS. The HashiCups backend services (product-api, product-db, and payments) will initially be outside of the Consul service mesh on AWS ECS.
You will connect the HashiCups backend services running on AWS ECS to the service mesh, leveraging Consul's multi-platform capabilities to run HashiCups across two distinct platforms. This enables secure service-to-service communication no matter where your application workloads run or what platform they run on.
In this tutorial, you will:
- Deploy the following resources with Terraform:
  - Elastic Kubernetes Service (EKS) cluster
  - Elastic Container Service (ECS) cluster
  - A self-managed Consul datacenter on EKS
- Investigate demo environment state
- Review Consul ECS Terraform module
- Integrate ECS tasks into Consul service mesh
- Verify all services are in the mesh
Prerequisites
For this tutorial, you will need:
- An AWS account configured for use with Terraform
- aws-cli >= 2.0
- terraform >= 1.0
- consul >= 1.16.0
- git >= 2.0
- kubectl <= 1.24
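If you want to confirm your local tooling before you begin, the following optional commands print the installed versions (output varies by platform):

$ aws --version
$ terraform version
$ consul version
$ kubectl version --client
$ git --version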
Clone GitHub repository
Clone the GitHub repository containing the configuration files and resources.
$ git clone https://github.com/hashicorp-education/learn-consul-ecs.git
Change into the directory that contains the complete configuration files for this tutorial.
$ cd learn-consul-ecs/self-managed/
Deploy infrastructure and demo application
With these Terraform configuration files, you are ready to deploy your infrastructure.
Issue the terraform init command from your working directory to download the necessary providers and initialize the backend.
$ terraform init

Initializing the backend...

Initializing provider plugins...

## ...

Terraform has been successfully initialized!

## ...
Then, deploy the resources. Confirm the run by entering yes.
$ terraform apply

## ...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

## ...

Apply complete! Resources: 94 added, 0 changed, 0 destroyed.
Note
The Terraform deployment could take up to 15 minutes to complete. Feel free to explore the next sections of this tutorial while waiting for the environment to complete initialization.
Connect to your infrastructure
Kubernetes stores cluster connection information in a file called kubeconfig. Retrieve the Kubernetes configuration settings for your EKS cluster and merge them into your local kubeconfig file by issuing the following command:
$ aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw kubernetes_cluster_id)
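Optionally, verify that kubectl can reach the cluster. The node names and count in your output will depend on your deployment:

$ kubectl get nodes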
Configure your CLI to interact with Consul datacenter
In this section, you will set environment variables in your terminal so your Consul CLI can interact with your Consul datacenter. The Consul CLI reads these environment variables for behavior defaults and will reference these values when you run consul commands.
Tokens are artifacts in the ACL system used to authenticate users, services, and Consul agents. Since ACLs are enabled in this Consul datacenter, entities requesting access to a resource must include a token that is linked with a policy, service identity, or node identity that grants permission to the resource. The ACL system checks the token and grants or denies access to resources based on the associated permissions. A bootstrap token has unrestricted privileges to all resources and APIs.
Retrieve the ACL bootstrap token from the respective Kubernetes secret and set it as an environment variable.
$ export CONSUL_HTTP_TOKEN=$(kubectl get --namespace consul secrets/bootstrap-token --template={{.data.token}} | base64 -d)
Set the Consul destination address. By default, Consul runs on port 8500 for http and 8501 for https.
$ export CONSUL_HTTP_ADDR=http://$(kubectl get services/consul-ui --namespace consul -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
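With both environment variables set, you can optionally confirm that the Consul CLI can reach your datacenter. The member names in your output will differ from environment to environment:

$ consul members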
Investigate demo environment state
In this section, you will investigate the current state of your demo environment.
Retrieve the current list of services in the Consul catalog. Notice that only the HashiCups frontend services (nginx, frontend, and public-api) have a corresponding sidecar proxy, indicating they are in the Consul service mesh.
$ consul catalog services
api-gateway
consul
frontend
frontend-sidecar-proxy
nginx
nginx-sidecar-proxy
public-api
public-api-sidecar-proxy
Retrieve the current list of services in ECS. Notice that the HashiCups backend services (product-api, payments, and product-db) are running in ECS, but are not currently integrated into the Consul service mesh.
$ aws ecs list-services --region $(terraform output -raw region) --cluster $(terraform output -raw ecs_cluster_name)
{
    "serviceArns": [
        "arn:aws:ecs:us-west-2:561656980159:service/learn-consul-whg5/product-api",
        "arn:aws:ecs:us-west-2:561656980159:service/learn-consul-whg5/product-db",
        "arn:aws:ecs:us-west-2:561656980159:service/learn-consul-whg5/payments"
    ]
}
Retrieve the Consul API Gateway public DNS address.
$ export CONSUL_APIGW_ADDR=http://$(kubectl get svc/api-gateway -o json | jq -r '.status.loadBalancer.ingress[0].hostname') && echo $CONSUL_APIGW_ADDR
http://a4cc3e77d86854fe4bbcc9c62b8d381d-221509817.us-west-2.elb.amazonaws.com
Open the Consul API Gateway's URL in your browser and notice that only the frontend HashiCups services are available.
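If you prefer the command line, you can perform a quick check from your terminal instead. This assumes curl is installed; an HTTP 200 response indicates the gateway is serving the frontend:

$ curl -s -o /dev/null -w "%{http_code}\n" $CONSUL_APIGW_ADDR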
This is expected because the backend HashiCups services running in AWS ECS are not yet part of the Consul service mesh. With the native Consul ECS integration, you can bring these ECS services into the mesh.
Review Consul ECS Terraform modules
The native Consul ECS integration allows you to use your existing ECS task container definitions within a Consul ECS Terraform module for simple integration into your Consul service mesh. Feel free to check out the Consul ECS Terraform documentation to learn more.
In this tutorial environment, you will deploy the existing HashiCups services integrated with the Consul ECS Terraform module. Each HashiCups service task definition will be wrapped within its own Consul ECS Terraform module.
Your ECS tasks integrated with Consul communicate with the Consul control plane's gRPC API to receive Envoy configuration information. They also communicate with the Consul control plane's HTTP API, which is the primary interface to all functionality available in Consul. You can specify these ports in your Consul ECS Terraform submodule definitions using the grpc_config and http_config inputs. See the self-managed Consul ports page for port defaults.
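As an illustration, a module that reaches Consul servers on their default ports might set these inputs as follows. This is a minimal sketch only; the tutorial environment instead exposes the Consul server on ports 32500 and 32502, as shown in the modules below.

  # Consul's default HTTP API port
  http_config = {
    port = 8500
  }

  # Consul's default gRPC (xDS) port
  grpc_config = {
    port = 8502
  }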
Review the Consul ECS Terraform module for the backend service product-api.
module "product-api" { source = "hashicorp/consul-ecs/aws//modules/mesh-task" version = "0.7.0" # The name this service will be registered as in Consul. consul_service_name = "product-api" # The port that this application listens on. port = 9090 # Address of the Consul server consul_server_hosts = "${data.kubernetes_nodes.node_data.nodes.0.metadata.0.name}" # Configures ACLs for the mesh-task. acls = true # The Consul HTTP port http_config = { port = 32500 https = false } # The Consul gRPC port grpc_config = { port = 32502 } # Upstream Consul services that this service will call. upstreams = [ { destinationName = "product-db" localBindPort = 5432 } ] family = "${local.name}-product-api" cpu = 512 memory = 1024 log_configuration = local.product_api_log_config # The ECS container definition container_definitions = [ { name = "product-api" image = "hashicorpdemoapp/product-api:v0.0.20" essential = true portMappings = [ { containerPort = 9090 protocol = "tcp" } ] environment = [ { name = "DB_CONNECTION" value = "host=localhost port=5432 user=postgres password=password dbname=products sslmode=disable" }, { name = "BIND_ADDRESS" value = "localhost:9090" } ] mountPoints = [] volumesFrom = [] logConfiguration = local.product_api_log_config } ] depends_on = [aws_ecs_cluster.ecs_cluster, module.controller]}
Note
In a production environment, we recommend enabling TLS within your Consul datacenter and Consul ECS modules. This demonstration environment does not use TLS due to constraints with self-signed certificates. Feel free to check out the documentation for Secure Configuration for Consul on AWS Elastic Container Service (ECS) with Terraform.
In addition to each of these modules, you must deploy the controller module. The controller provisions secure ACL tokens for all of your Consul service mesh tasks.
module "controller" { source = "hashicorp/consul-ecs/aws//modules/controller" version = "0.7.0" # Address of the Consul host consul_server_hosts = "${data.kubernetes_nodes.node_data.nodes.0.metadata.0.name}" # The Consul HTTP port http_config = { port = 32500 https = false } # The Consul gRPC port grpc_config = { port = 32502 } # The ARN of the AWS SecretsManager secret containing the token to be used by this controller. # The token needs to have at least `acl:write`, `node:write` and `operator:write` privileges in Consul consul_bootstrap_token_secret_arn = aws_secretsmanager_secret.bootstrap_token.arn name_prefix = local.name ecs_cluster_arn = aws_ecs_cluster.ecs_cluster.arn region = var.vpc_region subnets = module.vpc.private_subnets launch_type = "FARGATE" log_configuration = local.acl_controller_log_config depends_on = [ aws_secretsmanager_secret.bootstrap_token, aws_ecs_cluster.ecs_cluster]}
Register ECS tasks into Consul service mesh
Replace the contents of ecs-services-and-tasks.tf with your ECS tasks and services integrated with Consul.
$ cp -f hashicups-ecs/ecs-services-and-tasks-with-consul.tf ecs-services-and-tasks.tf
Now that your ECS tasks utilize Consul ECS modules, run terraform init to initialize these modules.
$ terraform init
Run terraform apply to deploy the HashiCups ECS tasks and services that are now integrated with Consul. Confirm the run by entering yes.
$ terraform apply

## ...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

## ...

Apply complete! Resources: 46 added, 5 changed, 14 destroyed.
Verify all services are in the mesh
Retrieve the current list of services in ECS. Notice that the HashiCups backend services (product-api, payments, and product-db), as well as the consul-ecs-controller, are running in ECS and integrated with Consul.
$ aws ecs list-services --region $(terraform output -raw region) --cluster $(terraform output -raw ecs_cluster_name)
{
    "serviceArns": [
        "arn:aws:ecs:us-west-2:561656980159:service/learn-consul-ebjq/product-db-consul",
        "arn:aws:ecs:us-west-2:561656980159:service/learn-consul-ebjq/product-api-consul",
        "arn:aws:ecs:us-west-2:561656980159:service/learn-consul-ebjq/payments-consul",
        "arn:aws:ecs:us-west-2:561656980159:service/learn-consul-ebjq/consul-ecs-controller"
    ]
}
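To confirm the new tasks have reached a steady state, you can optionally describe one of the services. This check assumes the service names match the output above:

$ aws ecs describe-services \
    --region $(terraform output -raw region) \
    --cluster $(terraform output -raw ecs_cluster_name) \
    --services product-api-consul \
    --query 'services[0].{running: runningCount, desired: desiredCount}'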
Retrieve the current list of services in the Consul catalog to verify all HashiCups services are now in the Consul service mesh. Notice that all HashiCups services include a sidecar proxy, which indicates that these services are integrated into the Consul service mesh.
$ consul catalog services
api-gateway
consul
frontend
frontend-sidecar-proxy
nginx
nginx-sidecar-proxy
payment-api
payment-api-sidecar-proxy
product-api
product-api-sidecar-proxy
product-db
product-db-sidecar-proxy
public-api
public-api-sidecar-proxy
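You can also query service health through the Consul HTTP API. For example, the following optional check lists the health check statuses for the product-api service (assuming curl and jq are installed):

$ curl -s -H "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
    $CONSUL_HTTP_ADDR/v1/health/checks/product-api | jq -r '.[].Status'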
Retrieve the Consul API Gateway public DNS address.
$ echo $CONSUL_APIGW_ADDR
http://a4cc3e77d86854fe4bbcc9c62b8d381d-221509817.us-west-2.elb.amazonaws.com
Open the Consul API Gateway's URL in your browser and notice that HashiCups is now fully functional.
Clean up resources
Destroy the Terraform resources to clean up your environment. Confirm the destroy operation by entering yes.
$ terraform destroy
Note
Due to race conditions among the various cloud resources created in this tutorial, you may need to run the destroy operation twice to ensure all resources are properly removed.
Next steps
In this tutorial, you integrated your AWS ECS services into the Consul service mesh. This integration offers simplified application scalability, development platform flexibility, and reduced operational overhead.
For more information about the topics covered in this tutorial, refer to the following resources: