
Lambda — Terraform Example

This example provisions:

  • An S3 bucket (versioning + SSE-S3 enabled)
  • An S3 Files file system and mount targets in each provided subnet
  • An S3 Files access point at /lambda (UID/GID 1000, required for Lambda)
  • IAM roles (file system role + Lambda execution role with VPC access)
  • Security groups (compute → mount target on NFS port 2049)
  • A CloudWatch log group for the Lambda function
  • A Python 3.14 Lambda function that lists /mnt/s3files
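  The wiring between these pieces follows Lambda's standard file-system attachment pattern. A minimal sketch, assuming the access point ARN is exposed as a module output (the resource label, module name, and output name here are illustrative, not the example's actual identifiers):

  ```hcl
  # Hypothetical sketch: attach the function to the VPC and mount the
  # access point at /mnt/s3files. Role, runtime, and code packaging omitted.
  resource "aws_lambda_function" "demo" {
    # ... function_name, runtime, handler, role, code packaging ...

    vpc_config {
      subnet_ids         = var.subnet_ids
      security_group_ids = [aws_security_group.lambda.id]
    }

    file_system_config {
      arn              = module.s3_files.access_point_arn # illustrative output name
      local_mount_path = "/mnt/s3files"                   # must begin with /mnt/
    }
  }
  ```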
  1. Confirm prerequisites.

    Requirement        Minimum version
    Terraform          1.5
    AWS provider       6.40
    archive provider   2.4
    AWS CLI            2.34.26

    Confirm the CLI version:

    aws --version

    The subnets you provide must have outbound internet access (via a NAT gateway) or VPC interface endpoints so the function can reach the Lambda service, CloudWatch Logs, and S3. Private subnets behind a NAT gateway are the most common setup.

  2. Clone the example.

    git clone https://github.com/jajera/terraform-aws-s3-files.git
    cd terraform-aws-s3-files/examples/lambda
    • Directory: examples/lambda/
      • main.tf
      • terraform.tfvars.example
      • terraform.tfvars (you create this)

    The Lambda function code is generated inline by Terraform using the archive provider — no separate function file is needed.
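    Inline generation with the archive provider typically looks like the sketch below. This is a hypothetical reconstruction, not the example's actual main.tf: the resource labels and the handler body are illustrative, and only the code-packaging attributes are shown.

    ```hcl
    # Hypothetical sketch: zip an inline Python handler with the archive
    # provider, then point the function at the resulting artifact.
    data "archive_file" "lambda" {
      type        = "zip"
      output_path = "${path.module}/lambda.zip"

      source {
        filename = "index.py"
        content  = <<-EOT
          import os

          def handler(event, context):
              # List whatever is mounted at the access point
              return {"statusCode": 200, "body": str(sorted(os.listdir("/mnt/s3files")))}
        EOT
      }
    }

    resource "aws_lambda_function" "demo" {
      # ... role, vpc_config, file_system_config omitted ...
      function_name    = "s3files-demo"
      runtime          = "python3.14" # per this example's runtime
      handler          = "index.handler"
      filename         = data.archive_file.lambda.output_path
      source_code_hash = data.archive_file.lambda.output_base64sha256
    }
    ```

    The source_code_hash tied to the archive means Terraform redeploys the function whenever the inline content changes.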

  3. Create terraform.tfvars.

    cp terraform.tfvars.example terraform.tfvars

    Edit terraform.tfvars:

    vpc_id = "vpc-0123456789abcdef0"
    subnet_ids = ["subnet-aaaaaaaaaaaaaaaaa", "subnet-bbbbbbbbbbbbbbbbb"]
    # Optional overrides (defaults shown):
    # aws_region = "ap-southeast-6"
    # log_retention_days = 7
    # bucket_name = null # auto-generated: s3files-demo-<random>
    # bucket_force_destroy = true
    Variable               Default          Required  Description
    vpc_id                 (none)           yes       VPC for mount targets and Lambda ENIs
    subnet_ids             (none)           yes       Subnets for mount targets and Lambda (must have NAT or VPC endpoints)
    aws_region             ap-southeast-6   no        AWS region
    log_retention_days     7                no        CloudWatch log retention period in days
    bucket_name            auto-generated   no        Leave null to use s3files-demo-<random>
    bucket_force_destroy   true             no        Allow destroy even when bucket contains objects
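    These map to variable declarations along the following lines (a sketch; types are assumed from the defaults, and descriptions are abbreviated):

    ```hcl
    variable "vpc_id" {
      type        = string
      description = "VPC for mount targets and Lambda ENIs"
    }

    variable "subnet_ids" {
      type        = list(string)
      description = "Subnets for mount targets and Lambda"
    }

    variable "aws_region" {
      type    = string
      default = "ap-southeast-6"
    }

    variable "log_retention_days" {
      type    = number
      default = 7
    }

    variable "bucket_name" {
      type    = string
      default = null # null triggers the auto-generated name
    }

    variable "bucket_force_destroy" {
      type    = bool
      default = true
    }
    ```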
  4. Initialise and apply.

    terraform init
    terraform plan
    terraform apply

    When apply finishes, note the outputs:

    terraform output

    Key outputs:

    Output                 Description
    lambda_function_name   Lambda function name
    file_system_id         S3 Files file system ID
    access_point_arn       ARN of the S3 Files access point
    bucket_name            Backing S3 bucket name
  5. Invoke the function.

    aws lambda invoke \
    --function-name "$(terraform output -raw lambda_function_name)" \
    --region ap-southeast-6 \
    --payload '{}' \
    response.json \
    && cat response.json
  6. Verify the response.

    Expected response body:

    {"statusCode": 200, "body": "[]"}

    The empty list [] is correct: the file system is empty at this point. Write a file with the AWS CLI and invoke the function again to see it appear:

    aws s3 cp /dev/stdin \
    "s3://$(terraform output -raw bucket_name)/lambda/hello.txt" \
    --content-type text/plain \
    --region ap-southeast-6 <<< "hello from terraform lambda"
    aws lambda invoke \
    --function-name "$(terraform output -raw lambda_function_name)" \
    --region ap-southeast-6 \
    --payload '{}' \
    response.json \
    && cat response.json

    The response body should now list ['hello.txt'].

  7. Tear down.

    terraform destroy