AWS Serverless Architecture — InfraTales Hub
Serverless on AWS sounds simple until you run it in production. Cold starts, concurrency limits, DLQ handling, EventBridge delivery guarantees — none of this is in the getting-started guide.
This hub covers the serverless patterns that hold at scale.
Serverless Architecture Posts
- Serverless Event Pipeline: Complete Architecture Guide
- Serverless Event Pipeline Part 2: Scaling and Throughput
- Serverless Event Pipeline Part 3: Error Handling and Retry
- Serverless Event Pipeline Part 4: Observability and Tracing
- Serverless Event Pipeline Part 5: Production Hardening
- Real-Time Location Tracking: IoT Core, Kinesis, Lambda
- Serverless IoT Analytics Pipeline
- Real-Time CDC Pipeline with DynamoDB Streams
The Lambda Realities Nobody Writes About
Cold starts are not random
Cold starts happen when Lambda has to create a new execution environment. The main variables: memory size (more memory also means proportionally more CPU, so faster init), runtime (Python and Node.js start fastest, Java slowest), package size (larger deployment packages take longer to load), and VPC attachment (historically this added seconds of ENI setup; the 2019 Hyperplane ENI change mostly fixed it, though VPC-attached functions can still start slightly slower than non-VPC ones).
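You can observe this yourself: module-level code runs once per execution environment, so it is exactly the init work that cold starts pay for. A minimal sketch (the flag-flip pattern, not AWS-specific API):

```python
import time

# Module-level code runs once per execution environment, so anything
# here (imports, clients, config loading) is the "init" cost that a
# cold start pays and warm invocations skip.
INIT_STARTED = time.monotonic()
IS_COLD = True  # True only until the first invocation in this environment


def handler(event, context):
    global IS_COLD
    cold, IS_COLD = IS_COLD, False
    return {
        "cold_start": cold,
        # Time since the environment initialized; only meaningful
        # as an init-cost signal on the cold invocation.
        "since_init_s": round(time.monotonic() - INIT_STARTED, 3),
    }
```

The first call in a fresh environment reports `cold_start: True`; every later call in that environment reports `False`. Logging this field is a cheap way to measure your real cold-start rate.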
Concurrency is account-wide
The default Lambda concurrency limit (1,000 in most regions) is shared by every function in the account. One runaway function can consume the entire unreserved pool during a spike and throttle everything else. Set reserved concurrency on critical functions to guarantee them capacity, and on risky ones to cap them.
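Reserving concurrency is one boto3 call, `put_function_concurrency`. A sketch (the function name is hypothetical; the client is passed in so the logic is easy to stub):

```python
def reserve_concurrency(lambda_client, function_name, reserved):
    """Cap and simultaneously guarantee a function's concurrency so one
    runaway function cannot drain the shared account-wide pool.

    `lambda_client` is a boto3 Lambda client, injected so this can be
    exercised without AWS credentials.
    """
    return lambda_client.put_function_concurrency(
        FunctionName=function_name,
        ReservedConcurrentExecutions=reserved,
    )


if __name__ == "__main__":
    import boto3  # imported here so the module loads without boto3 installed

    # "ingest-orders" is a placeholder function name.
    reserve_concurrency(boto3.client("lambda"), "ingest-orders", 100)
```

Note the trade-off: reserved concurrency is subtracted from the unreserved pool, so reserving 100 for one function leaves 900 for everything else under the default limit.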
SQS batch size affects throughput and cost
Lambda processes SQS messages in batches. Larger batches mean fewer invocations and lower cost, but by default they also mean all-or-nothing retry semantics: a single bad message in a batch of 10 causes all 10 to be retried.
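The fix is Lambda's partial batch response: enable `ReportBatchItemFailures` on the event source mapping and return only the failed message IDs, so the good 9 are deleted and only the bad 1 is retried. A sketch, with hypothetical business logic:

```python
import json


def process(body):
    # Hypothetical business logic: reject records without an "id" field.
    record = json.loads(body)
    if "id" not in record:
        raise ValueError("missing id")


def handler(event, context):
    """SQS batch handler that reports partial failures.

    Requires ReportBatchItemFailures enabled on the event source
    mapping; only the message IDs listed in batchItemFailures are
    returned to the queue for retry.
    """
    failures = []
    for record in event["Records"]:
        try:
            process(record["body"])
        except Exception:
            # Report this one message as failed instead of raising,
            # which would send the whole batch back to the queue.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

Without the event source mapping setting, the return value is ignored and you are back to all-or-nothing, so configure both sides.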
EventBridge is not SQS
EventBridge delivers events at-least-once, with no visible queue depth and no backpressure. SQS gives you both. For anything that needs flow control, route EventBridge into an SQS queue and let Lambda consume from the queue.
Need serverless architecture help? Work with Rahul