Register your services to Consul
In the previous tutorial, you deployed a Consul server with all security features enabled and explored how to use Consul as a KV store and DNS server.
Consul's value shines when you use it with Consul clients, which together form a distributed health monitoring platform for your services and a centralized service catalog.
In this tutorial, you will deploy Consul client agents to your virtual machine (VM) workloads. Then, you will register the services to the Consul catalog and set up a distributed monitoring system using Consul health checks.
In this tutorial, you will:
- Deploy your VM environment on AWS EC2 using Terraform
- Configure Consul client agents for the different VMs
- Start Consul client instances on your workload VMs
- Configure your terminal to communicate with the Consul datacenter
- Verify Consul datacenter members
- Query Consul catalog using CLI, API and DNS interfaces
- Modify a service definition and update the service in Consul catalog
Note
This tutorial is part of the Get Started collection. For this reason, all the steps used to configure Consul agents and services are shown and must be executed manually. If you are setting up a production environment, you should codify and automate the installation and deployment process. Refer to the VM production patterns tutorial collection for Consul production deployment best practices.
Tutorial scenario
This tutorial uses HashiCups, a demo coffee shop application made up of several microservices running on VMs.
At the beginning of the tutorial, you have an instance of HashiCups running on four VMs and one Consul server (you deployed this in the previous tutorial).
By the end of this tutorial, you will have deployed and started a Consul client agent on each VM that hosts HashiCups. In addition, you will have registered the HashiCups services in the Consul service catalog and set up health checks for each service.
Prerequisites
If you completed the previous tutorial, the infrastructure is already in place with all prerequisites needed.
Log in to the bastion host VM
The Terraform output provides useful information, including the bastion host IP address.
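If you want to review everything the scenario exported, you can list all Terraform outputs first. This is standard Terraform behavior; the available output names depend on the scenario configuration.

$ terraform output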
Log in to the bastion host using SSH.
$ ssh -i certs/id_rsa.pem admin@`terraform output -raw ip_bastion`
Verify Consul binary on workload VMs
Verify that the VMs you want to deploy the Consul agents on have the Consul binary.
For example, to check the Consul installation on the Database VM, log in to the Database VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-db
Verify Consul binary is installed.
$ consul version
Consul v1.16.1
Revision e0ab4d29
Build Date 2023-08-05T21:56:29Z
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
Return to the bastion host by exiting the SSH session.
$ exit
Repeat the steps for all VMs (hashicups-nginx, hashicups-frontend, hashicups-api) you want to add to the Consul datacenter.
Configure environment
This tutorial and the interactive lab environment use scripts in the tutorial's GitHub repository to generate the Consul configuration files for your client agents.
Define scenario environment variables again.
$ export DATACENTER="dc1"; \
export DOMAIN="consul"; \
export OUTPUT_FOLDER="./assets/scenario/conf/"; \
export CONSUL_CONFIG_DIR="/etc/consul.d/"
Configure the Consul CLI to interact with the Consul server.
$ export CONSUL_HTTP_ADDR="https://consul-server-0:8443"; \
export CONSUL_HTTP_SSL=true; \
export CONSUL_CACERT="${OUTPUT_FOLDER}secrets/consul-agent-ca.pem"; \
export CONSUL_TLS_SERVER_NAME="server.${DATACENTER}.${DOMAIN}"
To interact with Consul, you need to set CONSUL_HTTP_TOKEN to a valid Consul token. For this tutorial, you will use the token you created during the ACL bootstrap.

If you completed the previous tutorial, the bootstrap token is located in the home directory in a file named acl-token-bootstrap.json.
$ export CONSUL_HTTP_TOKEN=`cat ./acl-token-bootstrap.json | jq -r ".SecretID"`
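As an optional sanity check, not part of the original steps, you can ask Consul to describe the token the CLI is currently using. consul acl token read -self is a standard Consul CLI command; if the variables above are set correctly, it prints the bootstrap token's accessor ID and policies.

$ consul acl token read -self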
Generate Consul client configuration
To be able to connect to the Consul servers, you need to set the retry_join value using the Terraform output.
$ export CONSUL_RETRY_JOIN="<use value of Terraform output retry_join here>"
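The exact value comes from your Terraform output. Purely as an illustration, on AWS the value often uses Consul's cloud auto-join syntax; the tag key and value below are hypothetical.

$ export CONSUL_RETRY_JOIN="provider=aws tag_key=ConsulJoinTag tag_value=auto-join"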
Since the Consul datacenter is configured with ACLs enabled by default, you need to define the ACL tokens to pass to the Consul clients when the configuration is created.
First, export the token you generated for DNS so you can use it as the default token for the clients.
$ export CONSUL_DNS_TOKEN=`cat ${OUTPUT_FOLDER}secrets/acl-token-dns.json | jq -r ".SecretID"`
Then, for the Consul service definitions to be created properly, define a Consul token to be included in the service definition file.
In this example, you will use the bootstrap token.
$ export CONSUL_AGENT_TOKEN="${CONSUL_HTTP_TOKEN}"
Generate configuration for Database agent
First, define the Consul node name.
$ export NODE_NAME="hashicups-db"
Then, generate the Consul configuration.
$ bash ./ops/scenarios/99_supporting_scripts/generate_consul_client_config.sh
[generate_consul_client_config.sh] - - [hashicups-db]
-- Parameter Check
-- Cleaning Scenario before apply.
-- [WARN] Removing pre-existing configuration in ./assets/scenario/conf/
-- Generate folder structure
-- Copy available configuration
-- Generating configuration for Consul agent hashicups-db
To complete the Consul agent configuration, you need to set up tokens for the client. For this tutorial, you are using the bootstrap token. We recommend you create more restrictive tokens for the client agents in production.
$ tee ${OUTPUT_FOLDER}${NODE_NAME}/agent-acl-tokens.hcl > /dev/null << EOF
acl {
  tokens {
    agent = "${CONSUL_HTTP_TOKEN}"
    default = "${CONSUL_DNS_TOKEN}"
    config_file_service_registration = "${CONSUL_HTTP_TOKEN}"
  }
}
EOF
Once the Consul agent configuration is generated, you can add the service configuration to the output folder. This ensures your services are configured automatically at Consul startup.
First, generate the service configuration for the agent.
$ bash ./ops/scenarios/99_supporting_scripts/generate_consul_service_config.sh
[generate_consul_service_config.sh] - [hashicups-db]
-- Parameter Check
-- Generating service definition for service discovery
-- Generating service definition for service mesh
Then, copy the service definition file into the general configuration directory.
$ cp ${OUTPUT_FOLDER}${NODE_NAME}/svc/service_discovery/*.hcl ${OUTPUT_FOLDER}${NODE_NAME}
Generate configuration for API agent
First, define the Consul node name.
$ export NODE_NAME="hashicups-api"
Then, generate the Consul configuration.
$ bash ./ops/scenarios/99_supporting_scripts/generate_consul_client_config.sh
[generate_consul_client_config.sh] - - [hashicups-api]
-- Parameter Check
-- Cleaning Scenario before apply.
-- [WARN] Removing pre-existing configuration in ./assets/scenario/conf/
-- Generate folder structure
-- Copy available configuration
-- Generating configuration for Consul agent hashicups-api
To complete the Consul agent configuration, you need to set up tokens for the client. For this tutorial, you are using the bootstrap token. We recommend you create more restrictive tokens for the client agents in production.
$ tee ${OUTPUT_FOLDER}${NODE_NAME}/agent-acl-tokens.hcl > /dev/null << EOF
acl {
  tokens {
    agent = "${CONSUL_HTTP_TOKEN}"
    default = "${CONSUL_DNS_TOKEN}"
    config_file_service_registration = "${CONSUL_HTTP_TOKEN}"
  }
}
EOF
Once the Consul agent configuration is generated, you can add the service configuration to the output folder. This ensures your services are configured automatically at Consul startup.
First, generate the service configuration for the agent.
$ bash ./ops/scenarios/99_supporting_scripts/generate_consul_service_config.sh
[generate_consul_service_config.sh] - [hashicups-api]
-- Parameter Check
-- Generating service definition for service discovery
-- Generating service definition for service mesh
Then, copy the service definition file into the general configuration directory.
$ cp ${OUTPUT_FOLDER}${NODE_NAME}/svc/service_discovery/*.hcl ${OUTPUT_FOLDER}${NODE_NAME}
Generate configuration for Frontend agent
First, define the Consul node name.
$ export NODE_NAME="hashicups-frontend"
Then, generate the Consul configuration.
$ bash ./ops/scenarios/99_supporting_scripts/generate_consul_client_config.sh
[generate_consul_client_config.sh] - - [hashicups-frontend]
-- Parameter Check
-- Cleaning Scenario before apply.
-- [WARN] Removing pre-existing configuration in ./assets/scenario/conf/
-- Generate folder structure
-- Copy available configuration
-- Generating configuration for Consul agent hashicups-frontend
To complete the Consul agent configuration, you need to set up tokens for the client. For this tutorial, you are using the bootstrap token. We recommend you create more restrictive tokens for the client agents in production.
$ tee ${OUTPUT_FOLDER}${NODE_NAME}/agent-acl-tokens.hcl > /dev/null << EOF
acl {
  tokens {
    agent = "${CONSUL_HTTP_TOKEN}"
    default = "${CONSUL_DNS_TOKEN}"
    config_file_service_registration = "${CONSUL_HTTP_TOKEN}"
  }
}
EOF
Once the Consul agent configuration is generated, you can add the service configuration to the output folder. This ensures your services are configured automatically at Consul startup.
First, generate the service configuration for the agent.
$ bash ./ops/scenarios/99_supporting_scripts/generate_consul_service_config.sh
[generate_consul_service_config.sh] - [hashicups-frontend]
-- Parameter Check
-- Generating service definition for service discovery
-- Generating service definition for service mesh
Then, copy the service definition file into the general configuration directory.
$ cp ${OUTPUT_FOLDER}${NODE_NAME}/svc/service_discovery/*.hcl ${OUTPUT_FOLDER}${NODE_NAME}
Generate configuration for NGINX agent
First, define the Consul node name.
$ export NODE_NAME="hashicups-nginx"
Then, generate the Consul configuration.
$ bash ./ops/scenarios/99_supporting_scripts/generate_consul_client_config.sh
[generate_consul_client_config.sh] - - [hashicups-nginx]
-- Parameter Check
-- Cleaning Scenario before apply.
-- [WARN] Removing pre-existing configuration in ./assets/scenario/conf/
-- Generate folder structure
-- Copy available configuration
-- Generating configuration for Consul agent hashicups-nginx
To complete the Consul agent configuration, you need to set up tokens for the client. For this tutorial, you are using the bootstrap token. We recommend you create more restrictive tokens for the client agents in production.
$ tee ${OUTPUT_FOLDER}${NODE_NAME}/agent-acl-tokens.hcl > /dev/null << EOF
acl {
  tokens {
    agent = "${CONSUL_HTTP_TOKEN}"
    default = "${CONSUL_DNS_TOKEN}"
    config_file_service_registration = "${CONSUL_HTTP_TOKEN}"
  }
}
EOF
Once the Consul agent configuration is generated, you can add the service configuration to the output folder. This ensures your services are configured automatically at Consul startup.
First, generate the service configuration for the agent.
$ bash ./ops/scenarios/99_supporting_scripts/generate_consul_service_config.sh
[generate_consul_service_config.sh] - [hashicups-nginx]
-- Parameter Check
-- Generating service definition for service discovery
-- Generating service definition for service mesh
Then, copy the service definition file into the general configuration directory.
$ cp ${OUTPUT_FOLDER}${NODE_NAME}/svc/service_discovery/*.hcl ${OUTPUT_FOLDER}${NODE_NAME}
Check generated files
Once you have generated the configuration files, your directory should look like the following:
$ tree ${OUTPUT_FOLDER}hashicups*
./assets/scenario/conf/hashicups-api
├── agent-acl-tokens.hcl
├── agent-gossip-encryption.hcl
├── consul-agent-ca.pem
├── consul.hcl
├── svc
│   ├── service_discovery
│   │   └── svc-hashicups-api.hcl
│   └── service_mesh
│       └── svc-hashicups-api.hcl
└── svc-hashicups-api.hcl
./assets/scenario/conf/hashicups-db
├── agent-acl-tokens.hcl
├── agent-gossip-encryption.hcl
├── consul-agent-ca.pem
├── consul.hcl
├── svc
│   ├── service_discovery
│   │   └── svc-hashicups-db.hcl
│   └── service_mesh
│       └── svc-hashicups-db.hcl
└── svc-hashicups-db.hcl
./assets/scenario/conf/hashicups-frontend
├── agent-acl-tokens.hcl
├── agent-gossip-encryption.hcl
├── consul-agent-ca.pem
├── consul.hcl
├── svc
│   ├── service_discovery
│   │   └── svc-hashicups-frontend.hcl
│   └── service_mesh
│       └── svc-hashicups-frontend.hcl
└── svc-hashicups-frontend.hcl
./assets/scenario/conf/hashicups-nginx
├── agent-acl-tokens.hcl
├── agent-gossip-encryption.hcl
├── consul-agent-ca.pem
├── consul.hcl
├── svc
│   ├── service_discovery
│   │   └── svc-hashicups-nginx.hcl
│   └── service_mesh
│       └── svc-hashicups-nginx.hcl
└── svc-hashicups-nginx.hcl

12 directories, 28 files
Each directory corresponds to a Consul client configuration for the respective node.
The scripts generated multiple configuration files so it is easier to read and tune them for your environment. The following are the generated files and a description of what they do:
- The agent-acl-tokens.hcl file contains tokens for the Consul agent.
- The agent-gossip-encryption.hcl file configures gossip encryption.
- The consul-agent-ca.pem file is the public certificate for the Consul CA.
- The consul.hcl file contains node-specific configuration and is needed, with this specific name, if you want to configure Consul as a systemd daemon.
- The svc-hashicups-*.hcl files contain the service definitions for the services that need to be registered on each node.
Visit the agent configuration and service configuration documentation to interpret the files or to modify them when applying them to your environment.
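For orientation, a client's consul.hcl usually contains settings along the following lines. This is a trimmed, illustrative sketch based on the scenario variables used in this tutorial, not the exact file the script generates; paths and values are assumptions.

# consul.hcl - illustrative client agent configuration (sketch, not the generated file)
datacenter = "dc1"
domain     = "consul"
node_name  = "hashicups-db"
server     = false                      # this agent runs as a client, not a server
data_dir   = "/opt/consul"
retry_join = ["<retry_join value from the Terraform output>"]

tls {
  defaults {
    ca_file         = "/etc/consul.d/consul-agent-ca.pem"  # Consul CA public certificate
    verify_outgoing = true
  }
}

acl {
  enabled        = true
  default_policy = "deny"
}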
Copy configuration to client VMs
After the script generates the client configuration, you will copy these files into the respective Consul configuration directory on each client node.
Tip
In the lab environment, the HashiCups application nodes have a running SSH server. As a result, you can use the ssh and scp commands to perform the following operations. If the nodes in your personal environment do not have an SSH server, you may need to use a different approach to create the configuration directories and copy the files.
First, define the Consul configuration directory.
$ export CONSUL_REMOTE_CONFIG_DIR=/etc/consul.d/
Then, remove existing configuration from the VMs.
$ ssh -i certs/id_rsa hashicups-db "sudo rm -rf ${CONSUL_REMOTE_CONFIG_DIR}*"; \
  ssh -i certs/id_rsa hashicups-api "sudo rm -rf ${CONSUL_REMOTE_CONFIG_DIR}*"; \
  ssh -i certs/id_rsa hashicups-frontend "sudo rm -rf ${CONSUL_REMOTE_CONFIG_DIR}*"; \
  ssh -i certs/id_rsa hashicups-nginx "sudo rm -rf ${CONSUL_REMOTE_CONFIG_DIR}*"
Finally, copy the configuration files into the different VMs.
$ scp -i certs/id_rsa ${OUTPUT_FOLDER}/hashicups-db/* hashicups-db:${CONSUL_REMOTE_CONFIG_DIR}; \
  scp -i certs/id_rsa ${OUTPUT_FOLDER}/hashicups-api/* hashicups-api:${CONSUL_REMOTE_CONFIG_DIR}; \
  scp -i certs/id_rsa ${OUTPUT_FOLDER}/hashicups-frontend/* hashicups-frontend:${CONSUL_REMOTE_CONFIG_DIR}; \
  scp -i certs/id_rsa ${OUTPUT_FOLDER}/hashicups-nginx/* hashicups-nginx:${CONSUL_REMOTE_CONFIG_DIR}
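Optionally, before starting the agents, you can confirm the files landed where expected. This check is not part of the original flow; it simply lists the remote configuration directory over SSH.

$ ssh -i certs/id_rsa hashicups-db "ls -1 ${CONSUL_REMOTE_CONFIG_DIR}"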
Start Consul on client VMs
Now that you have copied the configuration files to each client VM, you will start the Consul client agent on each VM.
Set up Database Consul client agent
Log in to the Database VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-db
Define the Consul configuration and data directories.
$ export CONSUL_CONFIG_DIR=/etc/consul.d/; \
export CONSUL_DATA_DIR=/opt/consul/
Ensure your user has write permission to the Consul data directory.
$ sudo chmod g+w ${CONSUL_DATA_DIR}
Finally, start the Consul client agent process.
$ consul agent -config-dir=${CONSUL_CONFIG_DIR} > /tmp/consul-client.log 2>&1 &
The command starts the Consul agent in the background so it does not lock the terminal. You can access the agent log through the /tmp/consul-client.log file.
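If you want to confirm the agent started cleanly before moving on, you can tail the log; the exact messages vary by Consul version, but join and sync messages indicate success.

$ tail /tmp/consul-client.log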
Once the Consul agent is started, exit the ssh session to return to the bastion host.
$ exit
Set up API Consul client agent
Log in to the API VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-api
Define the Consul configuration and data directories.
$ export CONSUL_CONFIG_DIR=/etc/consul.d/; \
export CONSUL_DATA_DIR=/opt/consul/
Ensure your user has write permission to the Consul data directory.
$ sudo chmod g+w ${CONSUL_DATA_DIR}
Finally, start the Consul client agent process.
$ consul agent -config-dir=${CONSUL_CONFIG_DIR} > /tmp/consul-client.log 2>&1 &
The process is started in the background so it does not lock the terminal. You can access the agent log in the /tmp/consul-client.log file.
Once the Consul agent is started, exit the ssh session to return to the bastion host.
$ exit
Set up Frontend Consul client agent
Log in to the Frontend VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-frontend
Define the Consul configuration and data directories.
$ export CONSUL_CONFIG_DIR=/etc/consul.d/; \
export CONSUL_DATA_DIR=/opt/consul/
Ensure your user has write permission to the Consul data directory.
$ sudo chmod g+w ${CONSUL_DATA_DIR}
Finally, start the Consul client agent process.
$ consul agent -config-dir=${CONSUL_CONFIG_DIR} > /tmp/consul-client.log 2>&1 &
The process is started in the background so it does not lock the terminal. You can access the agent log in the /tmp/consul-client.log file.
Once the Consul agent is started, exit the ssh session to return to the bastion host.
$ exit
Set up NGINX Consul client agent
Log in to the NGINX VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-nginx
Define the Consul configuration and data directories.
$ export CONSUL_CONFIG_DIR=/etc/consul.d/; \
export CONSUL_DATA_DIR=/opt/consul/
Ensure your user has write permission to the Consul data directory.
$ sudo chmod g+w ${CONSUL_DATA_DIR}
Finally, start the Consul client agent process.
$ consul agent -config-dir=${CONSUL_CONFIG_DIR} > /tmp/consul-client.log 2>&1 &
The process is started in the background so it does not lock the terminal. You can access the agent log in the /tmp/consul-client.log file.
Once the Consul agent is started, exit the ssh session to return to the bastion host.
$ exit
Verify Consul datacenter members
After starting all the Consul agents, verify that they successfully joined the Consul datacenter. If you are using the interactive lab environment, go to the Bastion Host tab.
Retrieve the agents in the Consul datacenter.
$ consul members
Node                Address          Status  Type    Build   Protocol  DC   Partition  Segment
consul-server-0     10.0.4.241:8301  alive   server  1.16.1  2         dc1  default    <all>
hashicups-api       10.0.4.179:8301  alive   client  1.16.1  2         dc1  default    <default>
hashicups-db        10.0.4.7:8301    alive   client  1.16.1  2         dc1  default    <default>
hashicups-frontend  10.0.4.59:8301   alive   client  1.16.1  2         dc1  default    <default>
hashicups-nginx     10.0.4.146:8301  alive   client  1.16.1  2         dc1  default    <default>
Query services in Consul catalog
When you started the Consul client agents, they registered the service running on their node in the Consul catalog. Each service definition also contained the service's health check. You can find the service definition files in each node's Consul configuration directory, which you defined earlier.
Query the healthy services using the Consul CLI, API, or DNS.
Use the Consul CLI to query the service catalog.
$ consul catalog services -tags
consul
hashicups-api         v1
hashicups-db          v1
hashicups-frontend    v1
hashicups-nginx       v1
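The same information is available through the HTTP API and DNS interfaces. The queries below are illustrative: the curl call reuses the environment variables exported earlier on the bastion host (depending on the certificate's SANs, you may need to adjust TLS verification), and the dig query assumes the server's DNS interface answers on the default port 8600.

$ curl --silent --cacert ${CONSUL_CACERT} \
    --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" \
    ${CONSUL_HTTP_ADDR}/v1/catalog/services

$ dig @consul-server-0 -p 8600 hashicups-db.service.consul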
Modify service definition tags
When using the Consul CLI or the API endpoints, Consul will also show you the metadata associated with the services. In this tutorial, you registered each service with the v1 tag.
In this section, you will update the database service definition to learn how to update Consul service definitions. You must run these commands on the virtual machine that hosts the services.
Log in to the Database VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-db
Create the new configuration file with the following contents. Notice that this configuration adds a v2 tag to the database service.
svc-hashicups-db.hcl
## -----------------------------
## svc-hashicups-db.hcl
## -----------------------------
service {
  name = "hashicups-db"
  id = "hashicups-db-1"
  tags = ["v1", "v2"]
  port = 5432

  check {
    id = "check-hashicups-db",
    name = "hashicups-db status check",
    service_id = "hashicups-db-1",
    tcp = "localhost:5432",
    interval = "5s",
    timeout = "5s"
  }
}
Once you have created the new service definition file, update the service in the Consul catalog.
Consul can automatically update some of its configuration by reloading the content of the configuration folder. To use this feature, move the svc-hashicups-db.hcl file into the Consul configuration directory (/etc/consul.d).
$ mv svc-hashicups-db.hcl /etc/consul.d/
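Optionally, you can verify that the configuration directory still parses cleanly before triggering the reload. consul validate is a standard Consul CLI command; you may need elevated permissions to read the directory.

$ consul validate /etc/consul.d/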
Set up the environment variables to connect with Consul.
$ export CONSUL_CONFIG_DIR="/etc/consul.d"; \
export CONSUL_HTTP_ADDR="localhost:8500"; \
export CONSUL_HTTP_TOKEN=`cat ${CONSUL_CONFIG_DIR}/agent-acl-tokens.hcl | grep agent | awk '{print $3}' | sed 's/"//g'`
Then, use the reload command to update the service definition.
$ consul reload
Configuration reload triggered
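Alternatively, the same operation is available through the HTTP API; the agent reload endpoint below is the standard interface behind the CLI command.

$ curl --silent --request PUT \
    --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" \
    http://localhost:8500/v1/agent/reload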
Query services by tags
After you have updated the database service definition, query it to verify the new tag.
Retrieve the tags associated with each service and verify the new v2 tag for the database service.
$ consul catalog services -tags
consul
hashicups-api         v1
hashicups-db          v1,v2
hashicups-frontend    v1
hashicups-nginx       v1
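Tags are also queryable over Consul DNS using the tag.name.service.consul form. This example assumes the local client agent's DNS interface answers on the default port 8600.

$ dig @localhost -p 8600 v2.hashicups-db.service.consul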
Next steps
In this tutorial, you deployed Consul clients on each of the HashiCups VMs. In addition, you configured Consul to perform health checks on the registered services and updated a service definition.
You now have a distributed system to monitor and resolve your services, all without changing your services' configuration or implementation. At this stage, you can use Consul to automatically configure and monitor your services. However, they still have the same security posture they had before you introduced Consul.
If you want to stop at this tutorial, you can destroy the infrastructure now.

From the ./self-managed/infrastructure/aws folder of the repository, use terraform to destroy the infrastructure.
$ terraform destroy --auto-approve
In the next tutorial, you will learn how to introduce zero trust security in your network by implementing Consul service mesh.
For more information about the topics covered in this tutorial, refer to the following resources: