Deploy serverless applications with AWS Lambda and API Gateway
Serverless computing is a cloud computing model in which a cloud provider allocates compute resources on demand. This contrasts with traditional cloud computing where the user is responsible for directly managing virtual servers.
Most serverless applications use Functions as a Service (FaaS) to provide application logic, along with specialized services for additional capabilities such as routing HTTP requests, message queuing, and data storage.
In this tutorial, you will deploy a NodeJS function to AWS Lambda, and then expose that function to the Internet using Amazon API Gateway.
Prerequisites
You can complete this tutorial using the same workflow with either Terraform Community Edition or HCP Terraform. HCP Terraform is a platform that you can use to manage and execute your Terraform projects. It includes features like remote state and execution, structured plan output, workspace resource summaries, and more.
Select the Terraform Community Edition tab to complete this tutorial using Terraform Community Edition.
This tutorial assumes that you are familiar with the Terraform and HCP Terraform workflows. If you are new to Terraform, complete the Get Started collection first. If you are new to HCP Terraform, complete the HCP Terraform Get Started tutorials first.
For this tutorial, you will need:
- Terraform v1.2+ installed locally.
- An AWS account.
- An HCP Terraform account with HCP Terraform locally authenticated.
- An HCP Terraform variable set configured with your AWS credentials.
- The AWS CLI.
Warning: Some of the infrastructure in this tutorial does not qualify for the AWS free tier. Destroy the infrastructure at the end of the guide to avoid unnecessary charges. We are not responsible for any charges that you incur.
Clone example configuration
Clone the Learn Terraform Lambda and API Gateway GitHub repository for this tutorial.
$ git clone https://github.com/hashicorp-education/learn-terraform-lambda-api-gateway
Change to the repository directory.
$ cd learn-terraform-lambda-api-gateway
Review the configuration in main.tf. It defines the AWS provider you will use for this tutorial and an S3 bucket which will store your Lambda function.
Create infrastructure
Set the TF_CLOUD_ORGANIZATION environment variable to your HCP Terraform organization name. This will configure your HCP Terraform integration.
$ export TF_CLOUD_ORGANIZATION=
Initialize your configuration. Terraform will automatically create the learn-terraform-lambda-api-gateway workspace in your HCP Terraform organization.
$ terraform init

Initializing HCP Terraform...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/archive from the dependency lock file
- Using previously-installed hashicorp/aws v5.38.0
- Using previously-installed hashicorp/random v3.6.0
- Using previously-installed hashicorp/archive v2.4.2

HCP Terraform has been successfully initialized!

You may now begin working with HCP Terraform. Try running "terraform plan" to
see any changes that are required for your infrastructure.

If you ever set or change modules or Terraform Settings, run "terraform init"
again to reinitialize your working directory.
Note: This tutorial assumes that you are using a tutorial-specific HCP Terraform organization with a global variable set of your AWS credentials. Review the Create a Credential Variable Set tutorial for detailed guidance. If you are using a scoped variable set, assign it to your new workspace now.
Apply the configuration to create your S3 bucket. Respond to the confirmation prompt with a yes.
$ terraform apply
## ...

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

lambda_bucket_name = "learn-terraform-functions-formally-cheaply-frank-mullet"
Create and upload Lambda function archive
To deploy an AWS Lambda function, you must package it in an archive containing the function source code and any dependencies.
The way you build the function code and dependencies will depend on the language and frameworks you choose. In this tutorial, you will deploy the NodeJS function defined in the Git repository you cloned earlier.
Review the function code in hello-world/hello.js.
hello-world/hello.js
module.exports.handler = async (event) => {
  console.log('Event: ', event);

  let responseMessage = 'Hello, World!';

  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      message: responseMessage,
    }),
  }
}
This function takes an incoming event object from Lambda and logs it to the console. Then it returns an object which API Gateway will use to generate an HTTP response.
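If you want to preview this behavior before deploying anything, you can invoke the handler locally with Node.js. The snippet below is a minimal sketch; the test.js file name and the empty event object are illustrative and not part of the example repository.

test.js
// Minimal local check of the handler (illustrative; not part of the example repository).
const { handler } = require('./hello-world/hello');

// An empty event exercises the unconditional "Hello, World!" response.
handler({}).then((response) => {
  console.log(response.statusCode); // 200
  console.log(response.body);       // {"message":"Hello, World!"}
});

Run it from the repository root with node test.js and delete it before packaging the function, so it does not end up in the deployment archive.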
Add the following configuration to main.tf to package and copy this function to your S3 bucket.
main.tf
data "archive_file" "lambda_hello_world" {
  type = "zip"

  source_dir  = "${path.module}/hello-world"
  output_path = "${path.module}/hello-world.zip"
}

resource "aws_s3_object" "lambda_hello_world" {
  bucket = aws_s3_bucket.lambda_bucket.id

  key    = "hello-world.zip"
  source = data.archive_file.lambda_hello_world.output_path

  etag = filemd5(data.archive_file.lambda_hello_world.output_path)
}
This configuration uses the archive_file data source to generate a zip archive and an aws_s3_object resource to upload the archive to your S3 bucket.
Create the bucket object now. Respond to the confirmation prompt with a yes.
$ terraform apply
## ...

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

lambda_bucket_name = "learn-terraform-functions-formally-cheaply-frank-mullet"
Once Terraform deploys your function to S3, use the AWS CLI to inspect the contents of the S3 bucket.
$ aws s3 ls $(terraform output -raw lambda_bucket_name)
2021-07-08 13:49:46        353 hello-world.zip
Create the Lambda function
Add the following to main.tf to define your Lambda function and related resources.
main.tf
resource "aws_lambda_function" "hello_world" {
  function_name = "HelloWorld"

  s3_bucket = aws_s3_bucket.lambda_bucket.id
  s3_key    = aws_s3_object.lambda_hello_world.key

  runtime = "nodejs20.x"
  handler = "hello.handler"

  source_code_hash = data.archive_file.lambda_hello_world.output_base64sha256

  role = aws_iam_role.lambda_exec.arn
}

resource "aws_cloudwatch_log_group" "hello_world" {
  name = "/aws/lambda/${aws_lambda_function.hello_world.function_name}"

  retention_in_days = 30
}

resource "aws_iam_role" "lambda_exec" {
  name = "serverless_lambda"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Sid    = ""
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_policy" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
This configuration defines four resources:

- aws_lambda_function.hello_world configures the Lambda function to use the bucket object containing your function code. It also sets the runtime to NodeJS, and assigns the handler to the handler function defined in hello.js. The source_code_hash attribute will change whenever you update the code contained in the archive, which lets Lambda know that there is a new version of your code available. Finally, the resource specifies a role which grants the function permission to access AWS services and resources in your account.
- aws_cloudwatch_log_group.hello_world defines a log group to store log messages from your Lambda function for 30 days. By convention, Lambda stores logs in a group with the name /aws/lambda/<Function Name>.
- aws_iam_role.lambda_exec defines an IAM role that allows Lambda to access resources in your AWS account.
- aws_iam_role_policy_attachment.lambda_policy attaches a policy to the IAM role. The AWSLambdaBasicExecutionRole is an AWS managed policy that allows your Lambda function to write to CloudWatch logs.
Add the following to outputs.tf to create an output value for your Lambda function's name.
outputs.tf
output "function_name" {
  description = "Name of the Lambda function."

  value = aws_lambda_function.hello_world.function_name
}
Apply this configuration to create your Lambda function and associated resources. Respond to the confirmation prompt with a yes.
$ terraform apply
## ...

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

function_name = "HelloWorld"
lambda_bucket_name = "learn-terraform-functions-formally-cheaply-frank-mullet"
Once Terraform creates the function, invoke it using the AWS CLI.
$ aws lambda invoke --region=us-east-1 --function-name=$(terraform output -raw function_name) response.json
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
Check the contents of response.json to confirm that the function is working as expected.
$ cat response.json
{"statusCode":200,"headers":{"Content-Type":"application/json"},"body":"{\"message\":\"Hello, World!\"}"}
This response matches the object returned by the handler function in hello-world/hello.js.
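Because the proxy integration format wraps the response, the body field is itself a JSON-encoded string. If you only want the message, a quick Node.js one-liner like the following works; the command is an illustrative convenience, not part of the tutorial workflow.

$ node -e "const r = require('./response.json'); console.log(JSON.parse(r.body).message)"
Hello, World!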
You can review your function in the AWS Lambda Console.
Create an HTTP API with API Gateway
API Gateway is an AWS managed service that allows you to create and manage HTTP or WebSocket APIs. It supports integration with AWS Lambda functions, allowing you to implement an HTTP API using Lambda functions to handle and respond to HTTP requests.
Add the following to main.tf to configure an API Gateway.
main.tf
resource "aws_apigatewayv2_api" "lambda" {
  name          = "serverless_lambda_gw"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "lambda" {
  api_id = aws_apigatewayv2_api.lambda.id

  name        = "serverless_lambda_stage"
  auto_deploy = true

  access_log_settings {
    destination_arn = aws_cloudwatch_log_group.api_gw.arn

    format = jsonencode({
      requestId               = "$context.requestId"
      sourceIp                = "$context.identity.sourceIp"
      requestTime             = "$context.requestTime"
      protocol                = "$context.protocol"
      httpMethod              = "$context.httpMethod"
      resourcePath            = "$context.resourcePath"
      routeKey                = "$context.routeKey"
      status                  = "$context.status"
      responseLength          = "$context.responseLength"
      integrationErrorMessage = "$context.integrationErrorMessage"
    })
  }
}

resource "aws_apigatewayv2_integration" "hello_world" {
  api_id = aws_apigatewayv2_api.lambda.id

  integration_uri    = aws_lambda_function.hello_world.invoke_arn
  integration_type   = "AWS_PROXY"
  integration_method = "POST"
}

resource "aws_apigatewayv2_route" "hello_world" {
  api_id = aws_apigatewayv2_api.lambda.id

  route_key = "GET /hello"
  target    = "integrations/${aws_apigatewayv2_integration.hello_world.id}"
}

resource "aws_cloudwatch_log_group" "api_gw" {
  name = "/aws/api_gw/${aws_apigatewayv2_api.lambda.name}"

  retention_in_days = 30
}

resource "aws_lambda_permission" "api_gw" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.hello_world.function_name
  principal     = "apigateway.amazonaws.com"

  source_arn = "${aws_apigatewayv2_api.lambda.execution_arn}/*/*"
}
This configuration defines four API Gateway resources and two supplemental resources:

- aws_apigatewayv2_api.lambda defines a name for the API Gateway and sets its protocol to HTTP.
- aws_apigatewayv2_stage.lambda sets up application stages for the API Gateway, such as "Test", "Staging", and "Production". The example configuration defines a single stage, with access logging enabled.
- aws_apigatewayv2_integration.hello_world configures the API Gateway to use your Lambda function.
- aws_apigatewayv2_route.hello_world maps an HTTP request to a target, in this case your Lambda function. In the example configuration, the route_key matches any GET request matching the path /hello. A target matching integrations/<ID> maps to a Lambda integration with the given ID.
- aws_cloudwatch_log_group.api_gw defines a log group to store access logs for the aws_apigatewayv2_stage.lambda API Gateway stage.
- aws_lambda_permission.api_gw gives API Gateway permission to invoke your Lambda function.
The API Gateway stage will publish your API to a URL managed by AWS.
Add an output value for this URL to outputs.tf.
outputs.tf
output "base_url" {
  description = "Base URL for API Gateway stage."

  value = aws_apigatewayv2_stage.lambda.invoke_url
}
Apply your configuration to create the API Gateway and other resources. Respond to the confirmation prompt with a yes.
$ terraform apply
## ...

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

Outputs:

base_url = "https://mxg7cq38p4.execute-api.us-east-1.amazonaws.com/serverless_lambda_stage"
function_name = "HelloWorld"
lambda_bucket_name = "learn-terraform-functions-formally-cheaply-frank-mullet"
Now, send a request to API Gateway to invoke the Lambda function. The endpoint consists of the base_url output value and the /hello path, which you defined as the route_key.
$ curl "$(terraform output -raw base_url)/hello"
{"message":"Hello, World!"}
Update your Lambda function
When you call Lambda functions via API Gateway's proxy integration, API Gateway passes the request information to your function via the event object. You can use information about the request in your function code.
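The exact shape of the event depends on the API Gateway payload format version. The snippet below is a trimmed-down illustration of the fields this tutorial relies on, not the complete payload your function receives.

{
  "rawPath": "/hello",
  "queryStringParameters": {
    "Name": "Terraform"
  },
  "requestContext": {
    "http": {
      "method": "GET",
      "path": "/hello"
    }
  }
}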
Now, use an HTTP query parameter in your function.
In hello-world/hello.js, add an if statement to replace the responseMessage if the request includes a Name query parameter.
hello-world/hello.js
module.exports.handler = async (event) => {
  console.log('Event: ', event)

  let responseMessage = 'Hello, World!';

  if (event.queryStringParameters && event.queryStringParameters['Name']) {
    responseMessage = 'Hello, ' + event.queryStringParameters['Name'] + '!';
  }

  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      message: responseMessage,
    }),
  }
}
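Before deploying, you can exercise the new branch locally by invoking the handler with a stubbed event. This is a sketch, run from the repository root with Node.js; the stubbed event contains only the field the handler reads.

// Illustrative local check of the query parameter branch (not part of the example repository).
const { handler } = require('./hello-world/hello');

handler({ queryStringParameters: { Name: 'Terraform' } })
  .then((response) => console.log(response.body)); // {"message":"Hello, Terraform!"}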
Apply this change now.
Since your source code changed, the computed etag and source_code_hash values have changed as well. Terraform will update your S3 bucket object and Lambda function.

Respond to the confirmation prompt with a yes.
$ terraform apply
## ...

Terraform will perform the following actions:

  # aws_lambda_function.hello_world will be updated in-place
  ~ resource "aws_lambda_function" "hello_world" {
        id               = "HelloWorld"
      ~ last_modified    = "2021-07-12T15:00:40.113+0000" -> (known after apply)
      ~ source_code_hash = "ifMwKWStaDMUDQ3gh68yJzsWNPRfXHfpwMMDJcE1ymA=" -> "1esYQSK1oTfV84+KmDSwhVTBAy8eX6F6uBKLvNsf8AY="
        tags             = {}
        # (18 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # aws_s3_object.lambda_hello_world will be updated in-place
  ~ resource "aws_s3_object" "lambda_hello_world" {
      ~ etag       = "ba1ce6b2aa28971920a6c2b8272fe7c6" -> "adb572ecc1b4f3eda7f497aad0bec527"
        id         = "hello-world.zip"
        tags       = {}
      + version_id = (known after apply)
        # (10 unchanged attributes hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_object.lambda_hello_world: Modifying... [id=hello-world.zip]
aws_s3_object.lambda_hello_world: Modifications complete after 0s [id=hello-world.zip]
aws_lambda_function.hello_world: Modifying... [id=HelloWorld]
aws_lambda_function.hello_world: Still modifying... [id=HelloWorld, 10s elapsed]
aws_lambda_function.hello_world: Modifications complete after 17s [id=HelloWorld]

Apply complete! Resources: 0 added, 2 changed, 0 destroyed.

Outputs:

base_url = "https://iz85oarz9l.execute-api.us-east-1.amazonaws.com/serverless_lambda_stage"
function_name = "HelloWorld"
lambda_bucket_name = "learn-terraform-functions-quietly-severely-crucial-gnu"
Now, send another request to your function, including the Name query parameter.
$ curl "$(terraform output -raw base_url)/hello?Name=Terraform"
{"message":"Hello, Terraform!"}
Before cleaning up your infrastructure, you can visit the AWS Lambda Console for your function to review the infrastructure you created in this tutorial.
Clean up your infrastructure
Before moving on, clean up the infrastructure you created by running the terraform destroy command. Remember to confirm the operation with a yes.
$ terraform destroy
##...

Plan: 0 to add, 0 to change, 14 to destroy.

Changes to Outputs:
  - base_url           = "https://1q6qs02fjc.execute-api.us-east-1.amazonaws.com/serverless_lambda_stage" -> null
  - function_name      = "HelloWorld-rs" -> null
  - lambda_bucket_name = "learn-terraform-functions-reasonably-highly-firm-honeybee" -> null

Do you really want to destroy all resources in workspace "learn-terraform-lambda-api-gateway"?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

##...

Apply complete! Resources: 0 added, 0 changed, 14 destroyed.
If you used HCP Terraform for this tutorial, after destroying your resources, delete the learn-terraform-lambda-api-gateway workspace from your HCP Terraform organization.
Next steps
In this tutorial, you created and updated an AWS Lambda function with an API Gateway integration. These components are essential parts of most serverless applications.
Review the following resources to learn more about Terraform and AWS:
- The Terraform Registry includes modules for Lambda and API Gateway, which support serverless development.
- Learn how to use the Cloud Control Provider to manage more of your AWS resources with Terraform.
- Create and use Terraform modules to organize your configuration.
- Use the Cloud Development Kit (CDK) for Terraform to deploy multiple Lambda functions with TypeScript.