# Lambda — Terraform Example
## What this example does

This example provisions:
- An S3 bucket (versioning + SSE-S3 enabled)
- An S3 Files file system and mount targets in each provided subnet
- An S3 Files access point at `/lambda` (UID/GID 1000, required for Lambda)
- IAM roles (file system role + Lambda execution role with VPC access)
- Security groups (compute → mount target on NFS port 2049)
- A CloudWatch log group for the Lambda function
- A Python 3.14 Lambda function that lists `/mnt/s3files`
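As a hedged sketch of how these pieces connect (resource names here are illustrative, not copied from this example's `main.tf`, and the access point is assumed to be EFS-compatible, since Lambda's `file_system_config` accepts an EFS access point ARN):

```hcl
# Sketch only: names are assumptions, not the example's actual main.tf.
resource "aws_lambda_function" "demo" {
  function_name = "s3files-demo"
  runtime       = "python3.14"
  handler       = "index.handler"
  role          = aws_iam_role.lambda_exec.arn # assumed role name
  filename      = data.archive_file.lambda_zip.output_path

  # Lambda ENIs live in the same subnets as the mount targets.
  vpc_config {
    subnet_ids         = var.subnet_ids
    security_group_ids = [aws_security_group.lambda.id] # assumed SG name
  }

  # Mount the access point at the path the function lists.
  file_system_config {
    arn              = aws_efs_access_point.lambda.arn # assumed name
    local_mount_path = "/mnt/s3files"
  }
}
```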
1. **Confirm prerequisites.**

   | Requirement | Minimum version |
   | --- | --- |
   | Terraform | 1.5 |
   | AWS provider | 6.40 |
   | archive provider | 2.4 |
   | AWS CLI | 2.34.26 |

   Confirm the CLI version:

   ```sh
   aws --version
   ```

   The subnets you provide must have outbound internet access (via a NAT Gateway) or VPC interface endpoints so that Lambda can reach the Lambda service, CloudWatch Logs, and S3. Private subnets with a NAT Gateway are the most common setup.
2. **Clone the example.**

   ```sh
   git clone https://github.com/jajera/terraform-aws-s3-files.git
   cd terraform-aws-s3-files/examples/lambda
   ```

   Directory layout:

   ```
   examples/lambda/
   ├── main.tf
   ├── terraform.tfvars.example
   └── terraform.tfvars (you create this)
   ```

   The Lambda function code is generated inline by Terraform using the `archive` provider, so no separate function file is needed.
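   A sketch of that inline-packaging pattern (illustrative only; the example's real `main.tf` may embed different handler code):

   ```hcl
   # Illustrative only: the handler source is embedded directly in
   # Terraform, so no separate function file is checked in.
   data "archive_file" "lambda_zip" {
     type        = "zip"
     output_path = "${path.module}/lambda.zip"

     source {
       filename = "index.py"
       content  = <<-PY
         import os
         def handler(event, context):
             return {"statusCode": 200,
                     "body": str(sorted(os.listdir("/mnt/s3files")))}
       PY
     }
   }
   ```

   The function resource then references `data.archive_file.lambda_zip.output_path` (and typically `output_base64sha256` as the source code hash) so the package is rebuilt whenever the inline source changes.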
3. **Create `terraform.tfvars`.**

   ```sh
   cp terraform.tfvars.example terraform.tfvars
   ```

   Edit `terraform.tfvars`:

   ```hcl
   vpc_id     = "vpc-0123456789abcdef0"
   subnet_ids = ["subnet-aaaaaaaaaaaaaaaaa", "subnet-bbbbbbbbbbbbbbbbb"]

   # Optional overrides (defaults shown):
   # aws_region           = "ap-southeast-6"
   # log_retention_days   = 7
   # bucket_name          = null   # auto-generated: s3files-demo-<random>
   # bucket_force_destroy = true
   ```

   | Variable | Default | Required | Description |
   | --- | --- | --- | --- |
   | `vpc_id` | — | yes | VPC for mount targets and Lambda ENIs |
   | `subnet_ids` | — | yes | Subnets for mount targets and Lambda (must have NAT or VPC endpoints) |
   | `aws_region` | `ap-southeast-6` | no | AWS region |
   | `log_retention_days` | `7` | no | CloudWatch log retention period in days |
   | `bucket_name` | auto-generated | no | Leave `null` to use `s3files-demo-<random>` |
   | `bucket_force_destroy` | `true` | no | Allow destroy even when the bucket contains objects |
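   On the Terraform side, the optional variables above map to declarations along these lines (a sketch; the example's actual `main.tf` defines them authoritatively):

   ```hcl
   # Illustrative declarations; defaults mirror the table above.
   variable "aws_region" {
     type    = string
     default = "ap-southeast-6"
   }

   variable "log_retention_days" {
     type        = number
     default     = 7
     description = "CloudWatch log retention period in days"
   }

   variable "bucket_name" {
     type    = string
     default = null # null => auto-generated s3files-demo-<random>
   }

   variable "bucket_force_destroy" {
     type    = bool
     default = true
   }
   ```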
4. **Initialise and apply.**

   ```sh
   terraform init
   terraform plan
   terraform apply
   ```

   When the apply finishes, note the outputs:

   ```sh
   terraform output
   ```

   Key outputs:

   | Output | Description |
   | --- | --- |
   | `lambda_function_name` | Lambda function name |
   | `file_system_id` | S3 Files file system ID |
   | `access_point_arn` | ARN of the S3 Files access point |
   | `bucket_name` | Backing S3 bucket name |
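   The output names in the table are real; as a sketch, the corresponding `output` blocks would look roughly like this (the referenced resource names are assumptions, not the example's actual ones):

   ```hcl
   # Illustrative output blocks; resource names are assumed.
   output "lambda_function_name" {
     value = aws_lambda_function.demo.function_name
   }

   output "bucket_name" {
     value = aws_s3_bucket.backing.bucket
   }
   ```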
5. **Invoke the function.**

   ```sh
   aws lambda invoke \
     --function-name "$(terraform output -raw lambda_function_name)" \
     --region ap-southeast-6 \
     --cli-binary-format raw-in-base64-out \
     --payload '{}' \
     response.json \
     && cat response.json
   ```

   (`--cli-binary-format raw-in-base64-out` tells AWS CLI v2 to treat the `--payload` value as raw JSON rather than base64.)
6. **Verify the response.**

   Expected response body:

   ```json
   {"statusCode": 200, "body": "[]"}
   ```

   The empty list `[]` is correct: the file system is empty at this point. Write a file via the S3 CLI and invoke again to see it appear:

   ```sh
   aws s3 cp /dev/stdin \
     "s3://$(terraform output -raw bucket_name)/lambda/hello.txt" \
     --content-type text/plain \
     --region ap-southeast-6 <<< "hello from terraform lambda"

   aws lambda invoke \
     --function-name "$(terraform output -raw lambda_function_name)" \
     --region ap-southeast-6 \
     --cli-binary-format raw-in-base64-out \
     --payload '{}' \
     response.json \
     && cat response.json
   ```

   The response body should now list `['hello.txt']`.
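   The inline handler itself is generated by Terraform and not shown in this guide; a minimal Python reconstruction consistent with the observed responses might look like this (the extra `root` parameter is added here only so the sketch can be exercised locally):

   ```python
   import os

   # Hypothetical reconstruction of the inline handler embedded in main.tf;
   # the real code is generated by Terraform and not shown in this guide.
   MOUNT_PATH = "/mnt/s3files"

   def handler(event, context, root=MOUNT_PATH):
       # Return the directory listing as a string body, matching the observed
       # responses {"statusCode": 200, "body": "[]"} and
       # {"statusCode": 200, "body": "['hello.txt']"}.
       return {"statusCode": 200, "body": str(sorted(os.listdir(root)))}
   ```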
7. **Tear down.**

   ```sh
   terraform destroy
   ```