Manage multi-hop sessions with HCP Boundary
HCP Boundary allows organizations to register their own self-managed workers. Self-managed workers can be deployed in private networks while still communicating with an upstream HCP Boundary cluster.
Starting with Boundary 0.12, workers can be chained together through reverse-proxy connections. Multi-hop refers to a deployment in which two or more workers are connected, creating multiple "hops" in the chain between a worker and a controller.
Note
Deploying self-managed workers with HCP Boundary requires the Boundary Enterprise binary for the Linux, macOS, Windows, or other system the worker is deployed on. The workers should also be up to date with the HCP control plane's version; otherwise, new features will not work as expected. The control plane version can be checked in the HCP Boundary portal. Multi-hop sessions require version 0.12.0 or greater of the binary.
HCP Boundary is an identity-aware proxy that sits between users and the infrastructure they want to connect to. The proxy has two components:
- A control plane that manages state around users under management, targets, and access policies.
- Worker nodes, assigned by the control plane once a user authenticates into HCP Boundary and selects a target.
Multi-hop introduces the concept of “upstream” and “downstream” workers. Viewing controllers as the “top” of a multi-hop chain, downstream workers are those below a worker in the chain, while upstreams are those above a worker in the chain. In the diagram below, worker2’s upstream is worker1, and its downstream is worker3.
A chain of workers can be deployed in scenarios where inbound network traffic is not allowed. A worker in a private network will send outbound communication to its upstream worker, and can create reverse proxies for session establishment.
Target worker filters can be used with workers to allow fine-grained control over which workers handle ingress and egress for session traffic to a target. Ingress worker filters determine which workers users connect with to initiate a session, and egress worker filters determine which workers are used to access targets.
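Worker filters are boolean expressions evaluated against each worker's tags and name. As a quick illustration (using placeholder tag and name values), a filter can match on a tag value or on a worker's name:

```
"<tag_value>" in "/tags/<tag_key>"
"/name" == "<worker_name>"
```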
The table below describes the features available in HCP Boundary vs Boundary Community Edition.
| | No filter | Ingress-only | Egress-only | Ingress + Egress |
|---|---|---|---|---|
| Community | Single-hop, any worker | X | Single-hop, worker selected with egress filter | X |
| HCP | Single-hop, directly connected worker (will be HCP-managed) | Single-hop, worker selected with ingress filter | Multi-hop*, client connects to HCP worker as ingress worker, egress worker connects to host | Multi-hop*, client connects to ingress worker, egress worker connects to host |
* If egress and ingress worker filters result in the same selected worker(s), or an egress filter results in a single-hop route, this is effectively single-hop.
Note
Ingress worker filters are not available in Boundary Community Edition.
This tutorial demonstrates the basics of deploying and managing workers in a multi-hop scenario using HCP Boundary.
Prerequisites
This tutorial assumes you have:
- Access to an HCP Boundary instance
- Completed the previous HCP Administration tutorials
- A Boundary binary greater than 0.12.0 in your `PATH`. This tutorial uses the 0.13.2 version of Boundary.
This tutorial provides two options for configuring the multi-hop worker scenario:
- Configuring a downstream worker using AWS
- Deploying a downstream worker locally as a proof-of-concept
Regardless of the method used, the Boundary Enterprise binary must be installed on the worker before it can be registered with HCP. If deploying a worker manually on AWS, you can follow this guide to create a publicly accessible Amazon EC2 instance to use for this tutorial.
Configure the downstream worker
To configure a downstream worker, the following details are required:
- HCP Cluster URL (Boundary address)
- Auth Method ID (from the Admin Console)
- Admin login name and password
Visit the Getting Started on HCP tutorial if you need to locate any of these values. For a comprehensive guide on deploying a worker, review the Self-Managed Worker Registration with HCP Boundary tutorial.
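For example, with placeholder values for the cluster URL and auth method ID, you can export these details and authenticate from the CLI:

```shell-session
$ export BOUNDARY_ADDR="https://<cluster-id>.boundary.hashicorp.cloud"
$ export BOUNDARY_AUTH_METHOD_ID="ampw_1234567890"
$ boundary authenticate password \
    -auth-method-id $BOUNDARY_AUTH_METHOD_ID \
    -login-name admin
```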
Select the AWS or Local Worker workflows to continue the tutorial setup.
This tutorial picks up where the Self-Managed Worker Registration with HCP Boundary tutorial left off. This workflow assumes you already have an Ubuntu target and a pre-configured worker registered with HCP Boundary.
This workflow requires a publicly accessible Ubuntu instance to be used as a downstream worker. This setup is similar to the process for defining a single worker in the Self-Managed Worker Registration with HCP Boundary tutorial, with some additional configuration in the worker config file.
You should already have an Ubuntu target accessible via a worker. This worker can be considered the "upstream" worker, which the downstream worker will proxy traffic through when managing sessions to the target.
This setup is designed to allow the target and egress worker to exist on their own private network, which the ingress worker will proxy the Boundary connection to. This allows the target to use a private IP address instead of the public address utilized in the previous tutorials. The extra steps of defining a private network are not taken in this tutorial.
Deploy an additional worker
The next step is to deploy a new egress worker in the downstream chain to the target. This new worker, `worker2`, will be downstream of `worker1` from the previous tutorial and will serve as the egress worker that establishes connections to targets. `worker1` will now serve as the ingress worker, upstream from `worker2`.
Note
You will need the public IP address of the upstream worker configured in the Self-Managed Worker Registration with HCP Boundary tutorial. Copy this value for use in the following steps.
Next, deploy an additional publicly accessible Ubuntu instance to be used as a downstream worker. You can follow this guide to create a publicly accessible Amazon EC2 instance to use for this tutorial.
Warning
For the purposes of this tutorial it is important that the security group policy for the AWS worker instance accepts incoming TCP connections on port 9202 to allow Boundary client connections. To learn more about creating this security group and attaching it to your instance, check the AWS EC2 security group documentation. The screenshot below shows an example of this security group policy.
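If you manage the security group from the AWS CLI instead of the console, a rule like the following opens port 9202 for inbound TCP traffic. The security group ID shown here is a placeholder:

```shell-session
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 9202 \
    --cidr 0.0.0.0/0
```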
Log in and download Boundary Enterprise
Log in to the Ubuntu instance that will be configured as the downstream worker.
For example, using SSH:
```shell-session
$ ssh ubuntu@198.51.100.1 -i /path/my-key-pair.pem
The authenticity of host 'ec2-198-51-100-1.compute-1.amazonaws.com (198-51-100-1)' can't be established.
ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY.
Are you sure you want to continue connecting (yes/no)? yes

ubuntu@ip-172-31-88-177:~
```
Note
The above example is for demonstrative purposes. You will need to supply your Ubuntu instance's username, public IP address, and private key to connect. If using AWS EC2, check this article to learn more about connecting to a Linux instance using SSH.
Create a new folder to store your Boundary config file. This tutorial creates the `boundary/` directory in the user's home directory to store the worker config. If you do not have permission to create this directory, create the folder elsewhere.
```shell-session
$ mkdir /home/ubuntu/boundary/ && cd /home/ubuntu/boundary/
```
Next, download and install the Boundary Enterprise binary.
Note
The binary version should match the version of the HCP control
plane. Check the control plane's version in the HCP Boundary portal, and
download the appropriate version using wget. The example below installs the
0.13.2 version of the Boundary Enterprise binary, versioned as 0.13.2+ent
.
Enter the following command to install the latest version of the Boundary Enterprise binary on Ubuntu.
```shell-session
$ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg ;\
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list ;\
sudo apt update && sudo apt install boundary-enterprise -y
```
Once installed, verify the version of the boundary binary.
```shell-session
$ boundary version

Version information:
  Build Date:          2023-06-07T16:41:10Z
  Git Revision:        b1f75f5c731c843f5c987feae310d86e635806c7
  Metadata:            ent
  Version Number:      0.13.2+ent
```
Ensure the Version Number matches the version of the HCP Boundary control plane. They should match in order to get the latest HCP Boundary features.
Write the worker config
Note
This workflow utilizes a worker-initiated authorization flow to register with the HCP controller.
Next, create a new file named `/home/ubuntu/boundary/downstream-worker.hcl`.
```shell-session
$ touch /home/ubuntu/boundary/downstream-worker.hcl
```
Open the file with a text editor, such as Vi.
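For example:

```shell-session
$ vi /home/ubuntu/boundary/downstream-worker.hcl
```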
Paste the following configuration into the worker config file:
/home/ubuntu/boundary/downstream-worker.hcl
```hcl
disable_mlock = true

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  public_addr = "<worker_public_addr>"
  initial_upstreams = ["<upstream_worker_public_addr>:9202"]
  auth_storage_path = "/home/ubuntu/boundary/worker2"
  tags {
    type = ["worker2", "downstream"]
  }
}
```
Update the following values in the `downstream-worker.hcl` file:
- `<worker_public_addr>` on line 9 should be replaced with the public IP address of this Ubuntu egress worker, such as `18.206.227.218`
- `<upstream_worker_public_addr>` on line 10 should be replaced with the public IP address of the original Ubuntu (ingress) worker, such as `107.22.128.152`
The `public_addr` should match the public IP or DNS name of this Ubuntu instance being configured as an egress worker.
You set the `initial_upstreams` when you connect workers together as part of a multi-hop chain. In this example, the upstream is the ingress worker that you created in the previous tutorial, which has access to the HCP Boundary cluster.

The `initial_upstreams` should match the public IP or DNS name of your original Ubuntu worker instance deployed in the previous tutorial, and include the TCP listening port of `worker1`. It serves as the ingress worker and is "upstream" from the downstream egress worker being configured here.
Note the `listener "tcp"` stanza:
```hcl
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}
```
The `address` port is set to `0.0.0.0:9202`. This port should already be configured by the AWS security group for this instance to accept inbound TCP connections. If a custom listener port is desired, it should be defined here.
Ensure that the `initial_upstreams` defined in the worker stanza include the TCP listening port of the ingress worker. For example:
```hcl
worker {
  public_addr = "18.206.227.218"
  initial_upstreams = ["107.22.128.152:9202"]
  auth_storage_path = "/home/ubuntu/boundary/worker2"
  tags {
    type = ["worker2", "downstream"]
  }
}
```
In the above example `initial_upstreams` is specified, which indicates the address or addresses a worker will use when initially connecting to Boundary.

The `hcp_boundary_cluster_id` should be omitted, since this worker will forward connections to an upstream worker, which in this case is the original `worker1` ingress worker. Do not configure the `hcp_boundary_cluster_id` and `initial_upstreams` in the same config file, as the HCP cluster ID will take precedence over the initial upstreams.
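For comparison, an ingress worker that connects directly to the HCP control plane (such as `worker1` from the previous tutorial) sets `hcp_boundary_cluster_id` instead of `initial_upstreams`. The following is a minimal sketch, with a placeholder cluster ID, address, and storage path:

```hcl
disable_mlock = true

# Cluster ID placeholder; connects this worker directly to the HCP control plane
hcp_boundary_cluster_id = "<cluster_id>"

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  public_addr = "<ingress_worker_public_addr>"
  auth_storage_path = "/home/ubuntu/boundary/worker1"
  tags {
    type = ["worker1", "upstream"]
  }
}
```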
To see all valid config options, refer to the worker configuration docs.
Save this file.
Start the downstream worker
With the worker config defined, start the worker server. Provide the full path to the worker config file (such as `/home/ubuntu/boundary/downstream-worker.hcl`).
```shell-session
$ boundary server -config="/home/ubuntu/boundary/downstream-worker.hcl"

==> Boundary server configuration:

                               Cgo: disabled
                        Listener 1: tcp (addr: "0.0.0.0:9202", max_request_duration: "1m30s", purpose: "proxy")
                         Log Level: info
                             Mlock: supported: true, enabled: false
                           Version: Boundary v0.13.2+ent
                       Version Sha: d8aaf3500f65fb7d605d27db232457fe3a26bf43
        Worker Auth Current Key Id: harddisk-stunt-serve-shininess-essay-courier-manger-deface
  Worker Auth Registration Request: GzusqckarbczHoLGQ4UA25uSRvw33DNChURoR2CF1BXRUm7frqLSncV64LeYWxQDCstE25Vj6QSBQ1aGh8BUo1dnz899rt3LgktzRGU4vWYcHmvPQKpsSUTJqA42nJxBfpxopKyCvzxZNxgbTSw5BNN9BUsnoy58niY5ui38NhKPKdmKjVDoU4TRVd4Bvti4F2H5C8pBB3qhY6qyeaSRoKjDcEzdTa7S3JicVzbtmfWnfyMLTJ21jRypH2S5haK4RBhFP319mw6DYNhVo7opBkBoW2FaaJbeowGj8b5wFX
          Worker Auth Storage Path: /home/ubuntu/boundary/worker2
          Worker Public Proxy Addr: 44.204.92.85:9202

==> Boundary server started! Log data will stream in below:
```
The downstream worker will start and begin attempting to connect to its upstream, worker1, which is also the ingress worker.
The worker also outputs its authorization request as Worker Auth Registration Request. This will also be saved to a file, `auth_request_token`, defined by the `auth_storage_path` in the worker config.
Note the Worker Auth Registration Request: value on line 12. This value can also be located in the `/home/ubuntu/boundary/worker2/auth_request_token` file. Copy this value.
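For example, you can print the token from the worker's auth storage directory:

```shell-session
$ cat /home/ubuntu/boundary/worker2/auth_request_token
```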
Exit the downstream Ubuntu worker.
Register the worker with HCP
HCP workers can be registered using the Boundary CLI or Admin Console Web UI.
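If you prefer the CLI, a worker-led registration can be completed with `boundary workers create worker-led` after authenticating. The following is a sketch, with the registration request token shown as a placeholder:

```shell-session
$ boundary workers create worker-led \
    -worker-generated-auth-token=<worker_auth_registration_request>
```

The steps below use the Admin Console Web UI instead.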
Authenticate to HCP Boundary as the admin user.
Log in to the HCP portal.
From the HCP Portal's Boundary page, click Open Admin UI - a new page will open.
Enter the admin username and password you created when you deployed the new instance and click Authenticate.
Once logged in, navigate to the Workers page.
From the previous Self-Managed Worker Registration with HCP Boundary tutorial there should already be a worker registered.
Click New.
The new worker page can be used to construct the contents of the `downstream-worker.hcl` file.
Do not fill in any of the worker fields.
Providing the following details will construct the worker config file contents for you:
- Boundary Cluster ID
- Worker Public Address
- Config file path
- Worker Tags
The instructions on this page provide details for installing the HCP `boundary` binary and deploying the constructed config file.
Because the worker has already been deployed, only the Worker Auth Registration Request key needs to be provided on this page.
Scroll down to the bottom of the New Worker page and paste the Worker Auth Registration Request key you copied earlier.
Click Register Worker.
Click Done and notice the new worker on the Workers page.
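You can also confirm the registration from the CLI by listing the registered workers:

```shell-session
$ boundary workers list
```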
Worker-aware targets
From the Manage Targets tutorial you should already have the `ubuntu-target` configured in Boundary.
Boundary uses worker tags that define key-value pairs targets can use to determine where they should route connections.
A simple tag was included in the `downstream-worker.hcl` file from before:
```hcl
worker {
  tags {
    type = ["worker2", "downstream"]
  }
}
```
This config creates the resulting tags on the worker:
```
Tags:
  Worker Configuration:
    type: ["worker2" "downstream"]
  Canonical:
    type: ["worker2" "downstream"]
```
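If you want to inspect a worker's tags from the CLI, you can read the worker resource. The worker ID shown here is a placeholder:

```shell-session
$ boundary workers read -id <worker_id>
```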
The `Tags` can be used to create a worker filter for the target.
Open the Boundary Admin Console UI and navigate to the Targets page within the IT_Support org.
Click on the `ubuntu-target`. Select the Workers tab.
Beside Ingress worker, click the Edit worker filter button.
In the filter form, copy the following filter to add an ingress worker filter that searches for workers with the `upstream` tag:
"upstream" in "/tags/type"
Click Save.
Beside Egress worker, click the Edit worker filter button.
In the filter form, copy the following filter to add an egress worker filter that searches for workers with the `downstream` tag:
"downstream" in "/tags/type"
Click Save.
Verify that a worker filter is applied for both the ingress and egress workers.
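Alternatively, the same filters can be applied from the CLI by updating the target. The sketch below assumes `ubuntu-target` is a TCP target and uses a placeholder target ID:

```shell-session
$ boundary targets update tcp \
    -id <target_id> \
    -ingress-worker-filter '"upstream" in "/tags/type"' \
    -egress-worker-filter '"downstream" in "/tags/type"'
```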
Note
The `type: "worker2"` tag could have also been used for the egress filter, or a filter that searches for the name of the worker, if assigned (such as `"/name" == "downstream-worker"`).
With the filters assigned, any connections to this target will be forced to proxy through the ingress worker, which will forward connections to the next downstream worker until it reaches the egress worker. In this example only one "hop" is made directly from the ingress worker to the egress worker.
Finally, establish a connection to the target. Enter your instance's login name after the `-l` option and the path to your instance's private key after the `-i` option.
Note
The Boundary Desktop App can also be used to establish a session with the target and manage sessions.
```shell-session
$ boundary connect ssh -target-id $TARGET_ID -- -l ubuntu -i /path/to/key.pem

Welcome to Ubuntu 22.04 LTS (GNU/Linux 5.15.0-1011-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Tue Sep 20 17:54:00 UTC 2022

  System load:  0.0               Processes:             98
  Usage of /:   22.7% of 7.58GB   Users logged in:       0
  Memory usage: 25%               IPv4 address for eth0: 172.31.93.237
  Swap usage:   0%

 * Ubuntu Pro delivers the most comprehensive open source security and
   compliance features.

   https://ubuntu.com/aws/pro

0 updates can be applied immediately.

The list of available updates is more than a week old.
To check for new updates run: sudo apt update

Last login: Tue Sep 20 17:41:48 2022 from 44.194.155.74
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@ip-172-31-93-237:~$
```
You can verify that the ingress worker handles the initial connection to the target using the CLI or Admin Console.
Navigate to the Workers page while the connection to the Ubuntu target is still open.
Notice the Session Count next to the worker configured as the ingress worker.
Sessions can be managed using the same methods discussed in the Manage Sessions tutorial.
When finished, the session can be terminated manually, or canceled via another authenticated Boundary command. Sessions can also be managed using the Admin Console UI.
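For example, from another authenticated terminal you could locate the session and cancel it with commands like the following. The session ID shown here is a placeholder:

```shell-session
$ boundary sessions list -recursive

$ boundary sessions cancel -id <session_id>
```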
Note
To cancel this session using the CLI, you will need to open a new terminal window and re-export the `BOUNDARY_ADDR` and `BOUNDARY_AUTH_METHOD_ID` environment variables. Then log back into Boundary using `boundary authenticate`.
Summary
The HCP Administration tutorial collection demonstrated the common HCP Boundary management workflows.
This tutorial demonstrated configuring multi-hop workers with HCP Boundary and discussed worker management.
To continue learning about Boundary, check out the Inject SSH credentials with HCP Boundary tutorial.