Introduction
While AWS Lambda expanded its local /tmp capacity to 10GB, persistent storage shared across invocations remained a challenge until the release of Amazon S3 Files. By mounting S3 as an NFS v4.1 volume, Lambda functions can now read from and write to a shared, high-performance namespace at S3 scale.
This manual documents the engineering required to bridge the serverless runtime with S3 object storage via Access Points.
Architecture: The Serverless Mount Lifecycle
Lambda requires an intermediary Access Point to handle the POSIX identity translation and provide a consistent mount entry point.
```mermaid
sequenceDiagram
    participant L as AWS Lambda (VPC)
    participant AP as S3 Files Access Point
    participant FS as S3 Files Engine
    participant S3 as S3 Bucket
    L->>AP: NFS Handshake (Port 2049)
    Note over AP: Apply POSIX Mapping (1000:1000)
    AP->>FS: Metadata Authorization
    FS->>S3: Stream Object Data
    S3-->>L: Persistent File Access
```

Phase 1: Networking & Access Point Setup
1. Mandatory VPC Configuration
Lambda cannot access S3 Files over the public internet. The function must be VPC-attached.
- Subnets: Must contain active S3 Files Mount Targets.
- Routing: S3 Gateway Endpoints do not support NFS traffic. Ensure your subnets have routes to the Mount Target IPs and that Security Groups allow outbound TCP 2049.
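A quick way to validate this networking path is a plain TCP probe against a mount target on port 2049, run from a diagnostic function inside the same VPC and subnets. A minimal sketch (the mount target IP is a placeholder you would substitute for your own):

```python
import socket

def check_nfs_reachable(host: str, port: int = 2049, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection completes the TCP handshake, which is enough to
        # prove Security Groups and route tables permit NFS traffic.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check_nfs_reachable("10.0.1.25")  # hypothetical mount target IP
```

If this returns False from inside the VPC, the usual culprits are a missing outbound TCP 2049 rule on the function's Security Group or a subnet without a route to the mount target.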
2. The Access Point: POSIX Identity Mapping
Lambda runs under a restricted execution user. To avoid Permission Denied errors, the Access Point must map all incoming traffic to a specific POSIX identity.
| Configuration | Value | Purpose |
|---|---|---|
| User ID (UID) | 1000 | Matches the serverless execution context |
| Group ID (GID) | 1000 | Matches the serverless execution context |
| Secondary GIDs | (Optional) | For shared file permissions |
| Root Directory | e.g. /lambda | Isolates the function's namespace |
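To confirm which POSIX identity your code actually runs under (and therefore what the Access Point mapping must produce), you can log the runtime's IDs from inside a handler. A small standard-library sketch:

```python
import os

def runtime_identity() -> dict:
    """Report the POSIX identity of the current execution context."""
    return {
        "uid": os.getuid(),          # real user ID of the runtime process
        "gid": os.getgid(),          # real group ID
        "euid": os.geteuid(),        # effective user ID used for access checks
        "supplementary_gids": os.getgroups(),
    }
```

Logging this once per cold start makes Permission Denied errors much easier to diagnose: compare the reported IDs against the mapping configured on the Access Point.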
Phase 2: Lambda Function Configuration
1. IAM Role: The Execution Policy
The Lambda execution role requires permissions to mount the filesystem and handle network interface creation.
Managed Policies Required:
- AmazonS3FilesClientFullAccess
- AWSLambdaVPCAccessExecutionRole
Inline Policy (S3 Data Access):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:GetBucketVersioning", "s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::your-bucket", "arn:aws:s3:::your-bucket/*"]
    }
  ]
}
```

2. Mount Configuration
In the Lambda console (or IaC), define the FileSystem Association.
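With EFS, Lambda expresses this association as a `FileSystemConfigs` entry (an Access Point ARN plus a `LocalMountPath` that must live under `/mnt/`); assuming S3 Files follows the same shape, a small helper can validate the entry before it reaches your IaC or an `update_function_configuration` call. The ARN below is purely illustrative:

```python
def filesystem_config(access_point_arn: str, local_mount_path: str) -> dict:
    """Build a FileSystemConfigs-style entry, enforcing Lambda's /mnt/ path rule."""
    if not local_mount_path.startswith("/mnt/"):
        raise ValueError("Lambda requires LocalMountPath to live under /mnt/")
    return {"Arn": access_point_arn, "LocalMountPath": local_mount_path}

# Example (hypothetical ARN):
# filesystem_config("arn:aws:example:accesspoint/demo", "/mnt/s3data")
```

Catching a bad mount path at build time is cheaper than discovering it as a deployment-time validation error.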
[!CAUTION] Provisioning Lag & Cold Starts: New S3 Files mount targets require 2–3 minutes to become active. Attempting an invocation before this window will result in a connection timeout. Additionally, initial cold-start invocations can experience 10–30 seconds of latency solely for the NFS v4.1 handshake.
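Because of that provisioning lag, a guard that polls until the mount point is live, rather than failing on the first invocation, can smooth rollouts. A standard-library sketch (the NFS handshake itself is handled by the platform; this only waits for the path to appear as an active mount):

```python
import os
import time

def wait_for_mount(path: str, timeout: float = 180.0, interval: float = 5.0) -> bool:
    """Poll until `path` is an active mount point, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.ismount(path):
            return True
        time.sleep(interval)
    # One final check so a slow last poll cycle doesn't cause a false negative
    return os.path.ismount(path)
```

Invoking this at the top of the handler (with a timeout well under the function's own) converts a hard connection timeout into an actionable error path.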
Phase 3: Defensive Coding — The Atomic Write Pattern
Writing to a network-mounted volume in a serverless context requires Atomic Renames to prevent partial file corruption during function timeouts.
Python 3.14 Implementation:
```python
import json
import os
import uuid

def lambda_handler(event, context):
    # This path must match your CloudFormation/Terraform mount path
    MOUNT_ROOT = "/mnt/s3data"

    # Validation step: the mount must be present as a directory
    if not os.path.isdir(MOUNT_ROOT):
        raise RuntimeError(f"Storage path {MOUNT_ROOT} is not available")

    # 1. Staging Write (unique file per request)
    tmp_filename = f".pend_{uuid.uuid4()}.json"
    tmp_path = os.path.join(MOUNT_ROOT, tmp_filename)
    final_path = os.path.join(MOUNT_ROOT, f"output_{context.aws_request_id}.json")

    with open(tmp_path, "w") as f:
        json.dump(event, f)
        f.flush()
        os.fsync(f.fileno())  # Ensure data is on the gateway before the commit

    # 2. Atomic Rename (commit operation)
    # The S3 Files engine handles this as a metadata-only atomic op
    os.rename(tmp_path, final_path)

    return {
        "statusCode": 200,
        "committed_path": final_path,
    }
```

Phase 4: Verification Checklist
Ensure your serverless storage is correctly provisioned:
- Invoke Check: Run the function. The response MUST show the `committed_path`.
- Persistence Test: From your workstation, run `aws s3 ls s3://BUCKET/lambda/` to see the generated JSON files.
- Log Audit: Check CloudWatch Logs for any `nfs.mount.timeout` errors, which usually indicate a Security Group misconfiguration.
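One more verification worth automating: the atomic-write pattern from Phase 3 leaves `.pend_*` staging files behind only if a function dies mid-write, so any consumer of the mount should skip them. A minimal read-side sketch:

```python
import os

def list_committed(root: str) -> list:
    """Return committed output files, ignoring in-flight .pend_ staging files."""
    return sorted(
        name for name in os.listdir(root)
        if name.endswith(".json") and not name.startswith(".pend_")
    )
```

A lingering `.pend_` file is also a useful signal in its own right: it marks an invocation that timed out between the staging write and the rename.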
In Part 3, we automate this entire ecosystem using a Modular Terraform Framework, including the ECS Fargate workaround for non-native volume configurations.