Set up the cluster with Consul and Nomad
In this tutorial, you will create the required infrastructure and set up access to the CLI and UI of Consul and Nomad.
Infrastructure overview
The cluster consists of three server nodes, three private client nodes, and one publicly accessible client node. Each node runs both a Consul agent and a Nomad agent, and the agents run in either server or client mode depending on the node's role.
Nomad server nodes are responsible for accepting jobs, managing client nodes, and scheduling workloads. Consul server nodes are responsible for storing state information such as service addresses, health check results, and other service-specific configuration.
Nomad client nodes are the nodes where Nomad schedules workloads to run. The Nomad client registers with the Nomad servers, communicates health status to the servers, accepts task allocations, and updates the status of allocations. Consul client nodes report node and service health statuses to Consul servers.
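For reference, the server/client split comes down to a few agent configuration settings. The following is a minimal sketch of what those stanzas typically look like; the file names and bootstrap_expect values are illustrative and are not taken from this tutorial's actual configuration files.

# consul-agent.hcl (illustrative sketch, not the tutorial's file)
# Server mode: participates in Raft consensus and stores cluster state.
server           = true
bootstrap_expect = 3
datacenter       = "dc1"
# Client mode would instead set:
# server = false

# nomad-agent.hcl (illustrative sketch, not the tutorial's file)
# Server mode: accepts jobs and schedules workloads.
server {
  enabled          = true
  bootstrap_expect = 3
}
# Client mode: runs the scheduled workloads.
# client {
#   enabled = true
# }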
Prerequisites
Running this tutorial locally requires the following software and credentials; a quick way to verify them follows the list:
- Nomad CLI installed locally
- Consul CLI installed locally
- Packer CLI installed locally
- Terraform CLI installed locally
- AWS account with credentials environment variables set locally
- openssl and hey CLI tools installed locally
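As a quick sanity check before you begin, you can confirm the tools are installed and the AWS credentials are exported. This is a sketch; the AWS variable names are the standard ones read by Terraform's AWS provider.

$ nomad version
$ consul version
$ packer version
$ terraform version
$ openssl version
$ which hey
$ env | grep AWS_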
Create the cluster
The cluster creation process includes steps to build the machine images with Packer and then deploy the infrastructure with Terraform.
Make sure that you have cloned the tutorial's code repository locally and changed into the directory.
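If you have not cloned the repository yet, the commands follow this pattern; the URL and directory names below are placeholders, not the actual values from the tutorial series.

$ git clone <tutorial-repository-url>
$ cd <tutorial-repository-directory>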
Build the machine image
Change into the aws directory.
$ cd aws
Rename the example variables file.
$ cp variables.hcl.example variables.hcl
Open variables.hcl in your text editor and update the region variable to your preferred AWS region. In this example, the region is us-east-1. The remaining variables are for Terraform, and you will update them after building the AMI. Save the file.
variables.hcl
# Packer variables (all are required)
region = "us-east-1"

# ...
Initialize Packer to download the required plugins. This command returns no output when it finishes successfully.
$ packer init image.pkr.hcl
Build the image and provide the variables file with the -var-file flag.
$ packer build -var-file=variables.hcl image.pkr.hcl

# ...

Build 'amazon-ebs' finished after 14 minutes 32 seconds.

==> Wait completed after 14 minutes 32 seconds
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-east-1: ami-0445eeea5e1406960
The terminal outputs the ID of the new AMI.
Deploy the infrastructure
Open variables.hcl in your text editor and update the ami variable with the value output from the Packer build. In this example, the value is ami-0445eeea5e1406960. Save the file.
variables.hcl
# Packer variables (all are required)
region = "us-east-1"

# Terraform variables (all are required)
ami = "ami-0445eeea5e1406960"

# ...
The variables.hcl file also includes variables used to configure Consul.
Consul is configured with TLS encryption and trusts the certificate provided by the Consul servers. The Consul Terraform provider requires the CONSUL_TLS_SERVER_NAME environment variable to be set.
The Terraform code defaults the datacenter and domain variables in variables.hcl to dc1 and global, so CONSUL_TLS_SERVER_NAME will be consul.dc1.global.
variables.hcl
# ...

# These variables will default to the values shown
# and do not need to be updated unless you want to
# change them
# domain     = "global"
# datacenter = "dc1"
You can update these variables with other values. If you do, be sure to also update the CONSUL_TLS_SERVER_NAME environment variable to match, as shown in the pattern below.
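The server name follows the pattern consul.<datacenter>.<domain>, so with custom values the export would look like this; the angle-bracket values are placeholders for your own settings.

$ export CONSUL_TLS_SERVER_NAME="consul.<datacenter>.<domain>"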
Export the CONSUL_TLS_SERVER_NAME environment variable.
$ export CONSUL_TLS_SERVER_NAME="consul.dc1.global"
Initialize the Terraform configuration to download the necessary providers and modules.
$ terraform init

Initializing the backend...
Initializing modules...
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 5.13.0 for vpc...
- vpc in .terraform/modules/vpc

Initializing provider plugins...
- Finding hashicorp/tls versions matching "4.0.5"...
- Finding hashicorp/local versions matching "2.5.1"...
- Finding hashicorp/consul versions matching "2.21.0"...
- Finding hashicorp/nomad versions matching "2.3.1"...
- Finding hashicorp/aws versions matching ">= 5.46.0, ~> 5.46.0"...
- Finding hashicorp/random versions matching ">= 2.0.0"...

# ...

Terraform has been successfully initialized!
Provision the resources and provide the variables file with the -var-file flag. Respond yes to the prompt to confirm the operation. Terraform will output addresses for the Consul and Nomad UIs when it completes the process.
$ terraform apply -var-file=variables.hcl

# ...

Plan: 88 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + Configure-local-environment = "source ./datacenter.env"
  + Consul_UI                   = (known after apply)
  + Consul_UI_token             = (sensitive value)
  + Nomad_UI                    = (known after apply)
  + Nomad_UI_token              = (sensitive value)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

# ...

Apply complete! Resources: 85 added, 0 changed, 0 destroyed.

Outputs:

Configure-local-environment = "source ./datacenter.env"
Consul_UI = "https://18.116.52.247:8443"
Consul_UI_token = <sensitive>
Nomad_UI = "https://18.116.52.247:4646"
Nomad_UI_token = <sensitive>
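The two UI tokens are marked as sensitive, so Terraform hides them in the apply summary. If you need them, for example to log in to the UIs, you can read them with terraform output; the output names match those shown above.

$ terraform output -raw Consul_UI_token
$ terraform output -raw Nomad_UI_token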
Set up Consul and Nomad access
Set up access to Consul and Nomad from your local terminal using the output values from Terraform.
The Terraform code generates a datacenter.env environment file that contains all the necessary variables to connect to your Consul and Nomad instances using the CLI.
Source the datacenter.env file to set the Consul and Nomad environment variables. This command returns no output when it runs successfully.
$ source ./datacenter.env
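To confirm that the variables are set in your current shell, you can print a couple of them. This assumes datacenter.env exports the standard CONSUL_HTTP_ADDR and NOMAD_ADDR variables that the Consul and Nomad CLIs read.

$ echo $CONSUL_HTTP_ADDR
$ echo $NOMAD_ADDR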
Verify Consul members using the consul members command.
$ consul members

Node                    Address          Status  Type    Build   Protocol  DC   Partition  Segment
consul-server-0         10.0.4.203:8301  alive   server  1.19.0  2         dc1  default    <all>
consul-server-1         10.0.4.213:8301  alive   server  1.19.0  2         dc1  default    <all>
consul-server-2         10.0.4.70:8301   alive   server  1.19.0  2         dc1  default    <all>
consul-client-0         10.0.4.4:8301    alive   client  1.19.0  2         dc1  default    <default>
consul-client-1         10.0.4.82:8301   alive   client  1.19.0  2         dc1  default    <default>
consul-client-2         10.0.4.19:8301   alive   client  1.19.0  2         dc1  default    <default>
consul-public-client-0  10.0.4.22:8301   alive   client  1.19.0  2         dc1  default    <default>
Ensure connectivity to the Nomad cluster from your terminal.
$ nomad server members

Name                   Address     Port  Status  Leader  Raft Version  Build  Datacenter  Region
nomad-server-0.global  10.0.4.231  4648  alive   false   3             1.8.3  dc1         global
nomad-server-1.global  10.0.4.19   4648  alive   true    3             1.8.3  dc1         global
nomad-server-2.global  10.0.4.156  4648  alive   false   3             1.8.3  dc1         global
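As an optional extra check, nomad node status lists the Nomad client nodes along with their drain state and scheduling eligibility; the node IDs and names in your output will differ from anyone else's.

$ nomad node status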
Next steps
In this tutorial, you created a cluster running Consul and Nomad and set up CLI and UI access to Consul and Nomad.
In the next tutorial, you will deploy the initial containerized version of the HashiCups application.