Building Production-Ready Multi-Environment VPC Infrastructure with AWS CDK

CDK TypeScript stack deploying isolated VPCs for dev (10.0.0.0/16), staging (10.1.0.0/16), and prod (10.2.0.0/16). Creates public subnets, IGW routing, and S3/DynamoDB endpoints with full L1 control and deterministic, conflict-free network infrastructure for AWS.

1. The Problem

Every cloud infrastructure starts with networking. You need VPCs, subnets, routing, and endpoints before deploying anything else.

The challenges are real:

  • CDK's default VPC construct auto-assigns CIDR blocks unless you specify them explicitly
  • Multiple environments (dev, staging, prod) risk IP collisions
  • CloudFormation is verbose and error-prone for networking
  • Subnets often end up in wrong availability zones
  • VPC endpoints save money but are easy to forget

Most teams rush through VPC setup. They accept defaults, deploy to one AZ, and regret it later when traffic costs spike or an AZ goes down.

This project solves that. It's a production-ready, multi-environment VPC infrastructure built with AWS CDK that guarantees:

  • Predictable CIDR blocks per environment
  • Multi-AZ deployment for high availability
  • VPC endpoints to reduce data transfer costs
  • Type-safe infrastructure code
  • Comprehensive test coverage

2. What We Built

A TypeScript CDK stack that deploys isolated VPC infrastructure across three environments.

Each environment gets:

  • One VPC with environment-specific CIDR ranges
  • Two public subnets spanning different availability zones
  • Internet Gateway with proper routing
  • Gateway VPC endpoints for S3 and DynamoDB
  • Comprehensive tagging for resource management

The infrastructure is deterministic. Dev always gets 10.0.0.0/16. Staging gets 10.1.0.0/16. Production gets 10.2.0.0/16.

No surprises. No CIDR conflicts. Just working infrastructure.

The stack uses AWS CDK's L1 (CloudFormation) constructs for subnets, routing, and endpoints. This gives us complete control over every resource property. Higher-level L2 constructs are convenient but often make assumptions we don't want.


3. Architecture Overview

Core Components

VPC
The foundation. Each VPC has DNS support and DNS hostnames enabled. This matters for service discovery and private hosted zones later.

Public Subnets
Two subnets across us-east-1a and us-east-1b. Both get public IPs on launch. They share a single route table for consistency.

Internet Gateway
Attached to the VPC and referenced in the public route table. All outbound traffic from public subnets goes through the IGW.

VPC Endpoints
Gateway endpoints for S3 and DynamoDB. These route traffic through AWS's private network instead of the internet. No NAT gateway charges. No data transfer fees.

Network Architecture


CIDR Strategy

Environment   VPC CIDR      Subnet 1      Subnet 2
dev           10.0.0.0/16   10.0.1.0/24   10.0.2.0/24
staging       10.1.0.0/16   10.1.1.0/24   10.1.2.0/24
prod          10.2.0.0/16   10.2.1.0/24   10.2.2.0/24

This isolation prevents VPC peering conflicts and makes cross-account networking predictable.

Design Tradeoffs

Why public subnets only?
This is foundational infrastructure. Private subnets need NAT gateways, which cost $32/month each. Add them when you actually need them.

Why L1 constructs?
CDK's L2 VPC construct auto-assigns CIDR blocks. You can override them, but the API is clunky. L1 (CfnSubnet, CfnVpc) gives us exact control.

Why shared route table?
Both subnets have identical routing needs. One route table is simpler to manage and update.

Why Gateway endpoints?
Interface endpoints cost money ($7-$10/month each). Gateway endpoints for S3 and DynamoDB are free and reduce data transfer costs.


4. How It Works

Stack Initialization

The entry point is bin/tap.ts:

const environmentSuffix = process.env.ENVIRONMENT_SUFFIX || 
                         app.node.tryGetContext('environmentSuffix') || 
                         'dev';

new TapStack(app, `TapStack${environmentSuffix}`, {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: 'us-east-1',
  },
  environmentSuffix: environmentSuffix,
});

It reads the environment from:

  1. Environment variable ENVIRONMENT_SUFFIX
  2. CDK context parameter environmentSuffix
  3. Falls back to dev

This pattern works for local deployments and CI/CD pipelines.

CIDR Block Assignment

The stack has a getCidrRanges() method:

private getCidrRanges(environmentSuffix: string) {
  let baseCidr = '10.0.0.0/16';
  let subnet1Cidr = '10.0.1.0/24';
  let subnet2Cidr = '10.0.2.0/24';

  if (environmentSuffix === 'staging') {
    baseCidr = '10.1.0.0/16';
    subnet1Cidr = '10.1.1.0/24';
    subnet2Cidr = '10.1.2.0/24';
  } else if (environmentSuffix === 'prod') {
    baseCidr = '10.2.0.0/16';
    subnet1Cidr = '10.2.1.0/24';
    subnet2Cidr = '10.2.2.0/24';
  }

  return { vpcCidr: baseCidr, subnet1Cidr, subnet2Cidr };
}

Simple if-else logic. No clever abstractions. Any unknown environment gets dev ranges.

VPC Creation

const vpc = new ec2.Vpc(this, 'VPC', {
  ipAddresses: ec2.IpAddresses.cidr(cidrRanges.vpcCidr),
  availabilityZones: ['us-east-1a', 'us-east-1b'],
  subnetConfiguration: [],
  enableDnsHostnames: true,
  enableDnsSupport: true,
  natGateways: 0,
});

We pass an empty subnetConfiguration because we're creating subnets manually. CDK's automatic subnet creation doesn't guarantee CIDR blocks.

DNS settings are critical. Without enableDnsHostnames, you can't use Route53 private hosted zones. Many hours have been lost debugging this.

Manual Subnet Creation

const publicSubnet1 = new ec2.CfnSubnet(this, 'PublicSubnet1', {
  availabilityZone: 'us-east-1a',
  vpcId: vpc.vpcId,
  cidrBlock: cidrRanges.subnet1Cidr,
  mapPublicIpOnLaunch: true,
  tags: [
    { key: 'Name', value: `${environmentSuffix}-PublicSubnet-1` },
    { key: 'Environment', value: environmentSuffix },
  ],
});

We use CfnSubnet (L1) instead of Subnet (L2). This guarantees:

  • Exact CIDR block
  • Specific availability zone
  • Public IP assignment
  • Consistent tagging

Same pattern for subnet 2 in us-east-1b.
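
Concretely, that "same pattern" is a sketch mirroring subnet 1, with only the AZ, CIDR, and Name tag changed (the variable name matches the route table association shown later):

```typescript
// Mirrors PublicSubnet1; only the AZ, CIDR block, and Name tag differ.
const publicSubnet2 = new ec2.CfnSubnet(this, 'PublicSubnet2', {
  availabilityZone: 'us-east-1b',
  vpcId: vpc.vpcId,
  cidrBlock: cidrRanges.subnet2Cidr,
  mapPublicIpOnLaunch: true,
  tags: [
    { key: 'Name', value: `${environmentSuffix}-PublicSubnet-2` },
    { key: 'Environment', value: environmentSuffix },
  ],
});
```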

Internet Gateway Setup

const internetGateway = new ec2.CfnInternetGateway(this, 'InternetGateway', {
  tags: [
    { key: 'Name', value: `${environmentSuffix}-IGW-Main` },
    { key: 'Environment', value: environmentSuffix },
  ],
});

const igwAttachment = new ec2.CfnVPCGatewayAttachment(this, 'IGWAttachment', {
  vpcId: vpc.vpcId,
  internetGatewayId: internetGateway.ref,
});

Creating the IGW is one step. Attaching it to the VPC is another. Many people forget the attachment and wonder why their instances can't reach the internet.

Routing Configuration

const publicRouteTable = new ec2.CfnRouteTable(this, 'PublicRouteTable', {
  vpcId: vpc.vpcId,
  tags: [
    { key: 'Name', value: `${environmentSuffix}-PublicRouteTable` },
    { key: 'Environment', value: environmentSuffix },
  ],
});

const defaultRoute = new ec2.CfnRoute(this, 'DefaultRoute', {
  routeTableId: publicRouteTable.ref,
  destinationCidrBlock: '0.0.0.0/0',
  gatewayId: internetGateway.ref,
});

defaultRoute.addDependency(igwAttachment);

The default route sends all internet-bound traffic to the IGW. The explicit dependency ensures the IGW is attached before the route is created.

Then we associate both subnets:

new ec2.CfnSubnetRouteTableAssociation(this, 'RouteTableAssociation1', {
  subnetId: publicSubnet1.ref,
  routeTableId: publicRouteTable.ref,
});

new ec2.CfnSubnetRouteTableAssociation(this, 'RouteTableAssociation2', {
  subnetId: publicSubnet2.ref,
  routeTableId: publicRouteTable.ref,
});

Both subnets share the same route table. Routing behavior is identical across AZs.

VPC Endpoints

const s3VpcEndpoint = new ec2.CfnVPCEndpoint(this, 'S3Endpoint', {
  serviceName: 'com.amazonaws.us-east-1.s3',
  vpcId: vpc.vpcId,
  vpcEndpointType: 'Gateway',
  routeTableIds: [publicRouteTable.ref],
  tags: [
    { key: 'Name', value: `${environmentSuffix}-S3-VPCEndpoint` },
    { key: 'Environment', value: environmentSuffix },
  ],
});

Gateway endpoints automatically add routes to the route table. Traffic to S3 stays inside AWS's network. No internet gateway. No NAT gateway. No data transfer charges.

Same pattern for DynamoDB endpoint.
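
That "same pattern" for DynamoDB, sketched under the assumption it mirrors the S3 endpoint exactly (the variable name matches the dependency wiring shown below):

```typescript
// Mirrors the S3 endpoint; only the service name and tags change.
const dynamoDBVpcEndpoint = new ec2.CfnVPCEndpoint(this, 'DynamoDBEndpoint', {
  serviceName: 'com.amazonaws.us-east-1.dynamodb',
  vpcId: vpc.vpcId,
  vpcEndpointType: 'Gateway',
  routeTableIds: [publicRouteTable.ref],
  tags: [
    { key: 'Name', value: `${environmentSuffix}-DynamoDB-VPCEndpoint` },
    { key: 'Environment', value: environmentSuffix },
  ],
});
```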

Resource Dependencies

defaultRoute.addDependency(igwAttachment);
routeTableAssociation1.addDependency(publicRouteTable);
routeTableAssociation2.addDependency(publicRouteTable);
s3VpcEndpoint.addDependency(publicRouteTable);
dynamoDBVpcEndpoint.addDependency(publicRouteTable);

CloudFormation creates resources in parallel. Dependencies enforce ordering. Routes need the IGW attached first. Associations need the route table to exist. Endpoints need the route table created.

Without these dependencies, deployments fail randomly.

Outputs

new cdk.CfnOutput(this, 'VpcId', {
  value: vpc.vpcId,
  description: 'VPC ID',
  exportName: `${environmentSuffix}-VPC-ID`,
});

Outputs are exported with environment prefixes. This lets other stacks import them:

const vpcId = Fn.importValue('dev-VPC-ID');

We export:

  • VPC ID and CIDR
  • Both subnet IDs and AZs
  • Internet Gateway ID
  • Both VPC endpoint IDs

5. Build It Yourself

Prerequisites

You need:

  • AWS CLI configured with credentials
  • Node.js 18 or later
  • AWS CDK CLI installed globally

Install CDK:

npm install -g aws-cdk

Verify:

cdk --version
# Should show 2.x.x

Configure AWS credentials:

aws configure
# Enter your Access Key ID, Secret Access Key, and default region

Test credentials:

aws sts get-caller-identity
# Should show your account ID and user/role

Clone and Install

git clone https://github.com/rahulladumor/realtime-data-pipeline-lakehouse.git
cd realtime-data-pipeline-lakehouse
npm install

The install pulls in CDK libraries and testing dependencies:

  • aws-cdk-lib - Core CDK framework
  • constructs - CDK construct base classes
  • @aws-sdk/client-ec2 - AWS SDK for integration tests
  • TypeScript and Jest for testing

Bootstrap CDK

First deployment to an AWS account needs bootstrapping:

cdk bootstrap aws://ACCOUNT-ID/us-east-1

Replace ACCOUNT-ID with your AWS account number (from aws sts get-caller-identity).

This creates an S3 bucket and IAM roles for CDK deployments. You only do this once per account/region combination.

Deploy Development Environment

cdk deploy --context environmentSuffix=dev

CDK will show a change set. Review it. Type y to confirm.

Deployment takes 2-3 minutes. You'll see:

✅  TapStackdev

Outputs:
dev-VPC-ID = vpc-abc123def456
dev-VPC-CIDR = 10.0.0.0/16
dev-PublicSubnet-1-ID = subnet-111222333
dev-PublicSubnet-2-ID = subnet-444555666
...

Save these outputs. You'll need them for deploying applications.

Deploy Staging and Production

Same pattern:

cdk deploy --context environmentSuffix=staging
cdk deploy --context environmentSuffix=prod

All three environments can coexist in the same account. Their CIDR blocks don't overlap.

Verify Deployment

Check the VPC:

aws ec2 describe-vpcs --filters "Name=tag:Environment,Values=dev"

List subnets:

aws ec2 describe-subnets --filters "Name=tag:Environment,Values=dev"

Check VPC endpoints:

aws ec2 describe-vpc-endpoints --filters "Name=tag:Environment,Values=dev"

All resources should have consistent environment tags.

Common Mistakes

Forgot to bootstrap
Error: Policy contains a statement with one or more invalid principals
Solution: Run cdk bootstrap

Wrong region
Error: VPC not found
Solution: Add --region us-east-1 to AWS CLI commands or check your default region

CIDR conflicts
If you already have a VPC with 10.0.0.0/16, the deployment fails.
Solution: Change CIDR ranges in getCidrRanges() or use different environments

Missing permissions
Your IAM user needs ec2:* and cloudformation:* permissions.
Solution: Attach PowerUserAccess or create a custom policy


6. Key Code Sections

Stack Entry Point

File: bin/tap.ts

#!/usr/bin/env node
import * as cdk from 'aws-cdk-lib';
import 'source-map-support/register';
import { TapStack } from '../lib/tap-stack';

const app = new cdk.App();

const environmentSuffix =
  process.env.ENVIRONMENT_SUFFIX ||
  app.node.tryGetContext('environmentSuffix') ||
  'dev';

new TapStack(app, `TapStack${environmentSuffix}`, {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: 'us-east-1',
  },
  environmentSuffix: environmentSuffix,
});

This is the entrypoint. It reads environment config and creates the stack.

The #!/usr/bin/env node shebang makes it executable. The source-map-support import gives better stack traces when TypeScript errors occur.

Environment suffix comes from:

  1. ENVIRONMENT_SUFFIX env var (CI/CD)
  2. CDK context (command line)
  3. Default to dev (local development)

The region is hardcoded to us-east-1. Change it if you're deploying elsewhere.

CIDR Logic

File: lib/tap-stack.ts (lines 10-31)

private getCidrRanges(environmentSuffix: string) {
  let baseCidr = '10.0.0.0/16';
  let subnet1Cidr = '10.0.1.0/24';
  let subnet2Cidr = '10.0.2.0/24';

  if (environmentSuffix === 'staging') {
    baseCidr = '10.1.0.0/16';
    subnet1Cidr = '10.1.1.0/24';
    subnet2Cidr = '10.1.2.0/24';
  } else if (environmentSuffix === 'prod') {
    baseCidr = '10.2.0.0/16';
    subnet1Cidr = '10.2.1.0/24';
    subnet2Cidr = '10.2.2.0/24';
  }

  return {
    vpcCidr: baseCidr,
    subnet1Cidr,
    subnet2Cidr,
  };
}

Simple branching logic. Each environment gets unique CIDR blocks.

This prevents IP conflicts during VPC peering or VPN connections. If you need to peer dev and staging VPCs, their CIDR blocks can't overlap.

Unknown environments default to dev ranges. This is safer than crashing.
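
The no-overlap guarantee is easy to verify mechanically. A small standalone check (plain TypeScript, no CDK; the function names are ours) can assert that two CIDR blocks don't intersect:

```typescript
// Parse an IPv4 CIDR like '10.1.0.0/16' into a numeric range [start, end].
function cidrRange(cidr: string): [number, number] {
  const [ip, prefixStr] = cidr.split('/');
  const prefix = parseInt(prefixStr, 10);
  const addr = ip
    .split('.')
    .reduce((acc, octet) => acc * 256 + parseInt(octet, 10), 0);
  const size = 2 ** (32 - prefix);
  const start = Math.floor(addr / size) * size; // align to the block boundary
  return [start, start + size - 1];
}

// Two CIDR blocks overlap if their numeric ranges intersect.
function cidrsOverlap(a: string, b: string): boolean {
  const [aStart, aEnd] = cidrRange(a);
  const [bStart, bEnd] = cidrRange(b);
  return aStart <= bEnd && bStart <= aEnd;
}

console.log(cidrsOverlap('10.0.0.0/16', '10.1.0.0/16')); // false: dev vs staging
console.log(cidrsOverlap('10.0.0.0/16', '10.0.1.0/24')); // true: a VPC contains its subnet
```

Dropping a check like this into the unit test suite catches a fat-fingered CIDR before it ever reaches CloudFormation.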

Subnet Creation with Explicit CIDR

File: lib/tap-stack.ts (lines 76-92)

const publicSubnet1 = new ec2.CfnSubnet(this, 'PublicSubnet1', {
  availabilityZone: 'us-east-1a',
  vpcId: vpc.vpcId,
  cidrBlock: cidrRanges.subnet1Cidr,
  mapPublicIpOnLaunch: true,
  tags: [
    {
      key: 'Name',
      value: `${environmentSuffix}-PublicSubnet-1`,
    },
    {
      key: 'Environment',
      value: environmentSuffix,
    },
  ],
});

Using CfnSubnet directly gives us:

  • Exact CIDR block (10.0.1.0/24 for dev)
  • Specific availability zone (us-east-1a)
  • Public IP on launch (critical for public subnets)
  • Consistent naming and tagging

The L2 Subnet construct would auto-assign CIDR blocks. We want deterministic addresses.

VPC Endpoint Configuration

File: lib/tap-stack.ts (lines 164-179)

const s3VpcEndpoint = new ec2.CfnVPCEndpoint(this, 'S3Endpoint', {
  serviceName: 'com.amazonaws.us-east-1.s3',
  vpcId: vpc.vpcId,
  vpcEndpointType: 'Gateway',
  routeTableIds: [publicRouteTable.ref],
  tags: [
    {
      key: 'Name',
      value: `${environmentSuffix}-S3-VPCEndpoint`,
    },
    {
      key: 'Environment',
      value: environmentSuffix,
    },
  ],
});

Gateway endpoints are free. They add routes to your route table automatically.

When code calls S3 APIs from inside the VPC, traffic goes through the endpoint instead of the internet gateway. This:

  • Reduces data transfer costs
  • Improves latency
  • Keeps traffic inside AWS network

Same pattern for DynamoDB endpoint.

Resource Dependencies

File: lib/tap-stack.ts (lines 155-205)

defaultRoute.addDependency(igwAttachment);
routeTableAssociation1.addDependency(publicRouteTable);
routeTableAssociation2.addDependency(publicRouteTable);
s3VpcEndpoint.addDependency(publicRouteTable);
dynamoDBVpcEndpoint.addDependency(publicRouteTable);

These dependencies are critical. Without them, CloudFormation might:

  • Create a route before attaching the IGW (fails)
  • Associate subnets before the route table exists (fails)
  • Create endpoints before route tables (fails)

CloudFormation parallelizes everything. Dependencies force correct ordering.


7. Running in Production

Logging and Monitoring

The VPC itself doesn't generate logs. Enable VPC Flow Logs to monitor traffic:

aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-abc123 \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name /aws/vpc/flowlogs

Flow logs show:

  • Source and destination IPs
  • Ports and protocols
  • Accepted/rejected traffic
  • Bytes transferred

Use them to:

  • Debug connectivity issues
  • Detect security threats
  • Optimize network architecture
  • Track data transfer costs

Flow logs cost about $0.50 per GB ingested. For a small VPC, expect $5-20/month.
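
If you'd rather keep flow logs in the stack than in a CLI runbook, CDK's L2 FlowLog construct is a reasonable sketch — it provisions the delivery IAM role for you (the log group name and retention here are our choices, not part of this project):

```typescript
import * as logs from 'aws-cdk-lib/aws-logs';

// Send all VPC traffic records to a CloudWatch Logs group.
// The FlowLog construct creates the delivery IAM role automatically.
const flowLogGroup = new logs.LogGroup(this, 'VpcFlowLogs', {
  retention: logs.RetentionDays.ONE_MONTH, // cap storage costs
});

new ec2.FlowLog(this, 'FlowLog', {
  resourceType: ec2.FlowLogResourceType.fromVpc(vpc),
  destination: ec2.FlowLogDestination.toCloudWatchLogs(flowLogGroup),
  trafficType: ec2.FlowLogTrafficType.ALL,
});
```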

Deployment Verification

After deployment, verify:

VPC exists:

aws ec2 describe-vpcs --vpc-ids $(aws cloudformation describe-stacks \
  --stack-name TapStackdev \
  --query "Stacks[0].Outputs[?OutputKey=='VpcId'].OutputValue" --output text)

Subnets span multiple AZs:

aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-abc123"

Look for two subnets in different availability zones.

Route table has default route:

aws ec2 describe-route-tables --filters "Name=vpc-id,Values=vpc-abc123"

Should show a 0.0.0.0/0 route to the IGW.

VPC endpoints are active:

aws ec2 describe-vpc-endpoints --filters "Name=vpc-id,Values=vpc-abc123"

Check that S3 and DynamoDB endpoints have state available.

Common Runtime Issues

Instances can't reach internet
Check:

  • Subnet has route to IGW
  • Instance security group allows outbound traffic
  • Instance is in a public subnet (not private)
  • Subnet has MapPublicIpOnLaunch enabled

VPC endpoint not working
Check:

  • Endpoint is in available state
  • Endpoint is associated with correct route table
  • S3 bucket policy allows VPC endpoint access
  • Security groups allow traffic (for interface endpoints)

AZ failure doesn't failover
If you deploy all instances to one subnet, an AZ outage takes everything down. Solution: Spread instances across both subnets using Auto Scaling Groups or ECS services.

Debugging

Check stack status:

aws cloudformation describe-stacks --stack-name TapStackdev

View stack events:

aws cloudformation describe-stack-events --stack-name TapStackdev --max-items 20

Shows recent changes and failures.

Test VPC endpoint:

# From an EC2 instance in the VPC
aws s3 ls --region us-east-1

A successful call alone doesn't prove the endpoint was used. Check the route table for the endpoint's prefix-list route to confirm.

Verify connectivity:

Deploy a test EC2 instance and run:

ping 8.8.8.8  # Should work (internet via IGW)
curl -I https://s3.amazonaws.com  # Should work (S3 via endpoint)

8. Cost Analysis

Real Cost Expectations

Development Environment

  • VPC: Free
  • Subnets: Free
  • Internet Gateway: Free (pay for data transfer)
  • VPC Endpoints (Gateway): Free
  • Data Transfer OUT: $0.09/GB after first 100GB/month
  • Expected Monthly Cost: $0-5 (depends on data transfer)

Staging Environment

  • Same as dev
  • More data transfer from testing
  • Expected Monthly Cost: $5-20

Production Environment

  • VPC infrastructure: Free
  • Data transfer: Depends on traffic volume
  • Expected Monthly Cost: $20-200+ (mostly data transfer)

Cost Breakdown

Component                    Dev    Staging  Prod
VPC                          $0     $0       $0
Subnets                      $0     $0       $0
Internet Gateway             $0     $0       $0
VPC Endpoints (Gateway)      $0     $0       $0
Data Transfer OUT (100GB)    $0     $0       $0
Data Transfer OUT (1TB)      $90    $90      $90
VPC Flow Logs (optional)     $5     $5       $10
Total (no traffic)           $0     $0       $0
Total (moderate traffic)     $10    $15      $50

Most Expensive Component

Data Transfer OUT is the killer.

Traffic between AWS services in the same region is free. Traffic to the internet costs $0.09/GB.

AWS prices internet egress in tiers:

  • First 10TB/month: $0.09/GB
  • Next 40TB: $0.085/GB
  • Over 150TB: $0.070/GB

At 1TB/month you sit entirely in the first tier: roughly $90.

For high-traffic applications, data transfer can exceed compute costs.

Optimization Options

Use VPC Endpoints
Already implemented for S3 and DynamoDB. Add endpoints for:

  • AWS Systems Manager (free)
  • CloudWatch Logs (free)
  • ECR (saves $$ on image pulls)

CloudFront for Static Content
Serve static files through CloudFront. The first 1TB/month of CloudFront egress is free, and paid tiers start at $0.085/GB (cheaper than EC2 data transfer).

Compress Data
gzip responses before sending. 70% reduction in transfer costs.
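
The claimed savings are easy to sanity-check with Node's built-in zlib (the 70% figure depends on how repetitive your payloads are — JSON and HTML usually compress at least that well):

```typescript
import { gzipSync, gunzipSync } from 'node:zlib';

// A typical JSON-ish payload: repetitive keys compress very well.
const payload = JSON.stringify(
  Array.from({ length: 500 }, (_, i) => ({ id: i, status: 'ok', region: 'us-east-1' }))
);

const compressed = gzipSync(payload);
const ratio = 1 - compressed.length / Buffer.byteLength(payload);

console.log(`original:   ${Buffer.byteLength(payload)} bytes`);
console.log(`compressed: ${compressed.length} bytes`);
console.log(`saved:      ${(ratio * 100).toFixed(0)}%`);
```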

Regional Architecture
Keep compute and storage in the same region. Cross-region transfer costs $0.02/GB.

Monitor Transfer
Enable VPC Flow Logs and analyze traffic patterns. Find unexpected data transfers.

What's NOT Included

This infrastructure doesn't include:

  • NAT Gateways ($32/month each + data transfer)
  • Application Load Balancers ($16/month + LCU charges)
  • EC2 instances
  • RDS databases
  • Lambda functions
  • Any actual compute resources

Those add up fast. A production app might cost:

  • 2 NAT Gateways: $64/month
  • 1 ALB: $25/month
  • EC2 Auto Scaling: $100-500/month
  • RDS (db.t3.medium): $60/month
  • CloudWatch/monitoring: $20/month
  • Total: $270-670/month before application traffic

Cost Monitoring

Tag everything with Environment tag:

tags: [{ key: 'Environment', value: environmentSuffix }]

Then use Cost Explorer to filter by tag. Track each environment's spend separately.

Set up billing alerts:

aws budgets create-budget \
  --account-id 123456789012 \
  --budget file://budget.json \
  --notifications-with-subscribers file://notifications.json

Get email alerts when costs exceed thresholds.


9. Final Thoughts

What This Project Is

A solid foundation for AWS networking. It handles the boring-but-critical stuff:

  • Deterministic IP addressing
  • Multi-AZ deployment
  • Environment isolation
  • Cost-efficient endpoint configuration

Use it as a base. Add private subnets, NAT gateways, and app infrastructure on top.

Limitations

No Private Subnets
Production apps need private subnets for databases and backend services. This stack only creates public subnets.

Why? Keep it simple. Add private subnets when you actually need them.

Hardcoded Region
Everything deploys to us-east-1. Multi-region is possible but adds complexity.

No Network ACLs
We rely on security groups for access control. Network ACLs provide defense-in-depth but are stateless and painful to manage.

Limited VPC Endpoints
Only S3 and DynamoDB have endpoints. Add more for services you use heavily (ECR, CloudWatch, etc).

Lessons Learned

TypeScript > CloudFormation
Writing VPC config in TypeScript is far better than YAML. Type checking catches errors before deployment.

L1 Constructs for Networking
High-level CDK constructs make assumptions about CIDR blocks and subnet placement. L1 constructs give you control.

Explicit Dependencies Matter
CloudFormation parallelizes aggressively. Dependencies prevent race conditions.

Testing Infrastructure Is Essential
Unit tests catch template errors. Integration tests verify actual AWS resources. Both are necessary.
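
As a sketch of the unit-test side, a Jest test using aws-cdk-lib/assertions can pin the deterministic values described above (the import path and prop shape are assumed from the entrypoint shown earlier):

```typescript
import { App } from 'aws-cdk-lib';
import { Template } from 'aws-cdk-lib/assertions';
import { TapStack } from '../lib/tap-stack';

test('dev stack gets the dev CIDR and both subnets', () => {
  const app = new App();
  const stack = new TapStack(app, 'TapStackdev', { environmentSuffix: 'dev' });
  const template = Template.fromStack(stack);

  // Deterministic CIDR for dev.
  template.hasResourceProperties('AWS::EC2::VPC', { CidrBlock: '10.0.0.0/16' });

  // Two public subnets, with subnet 1 in the expected AZ.
  template.resourceCountIs('AWS::EC2::Subnet', 2);
  template.hasResourceProperties('AWS::EC2::Subnet', {
    AvailabilityZone: 'us-east-1a',
    CidrBlock: '10.0.1.0/24',
    MapPublicIpOnLaunch: true,
  });

  // Both gateway endpoints exist.
  template.resourceCountIs('AWS::EC2::VPCEndpoint', 2);
});
```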

Tagging From Day One
Consistent tagging makes cost allocation and resource management possible. Do it from the start.

When NOT to Use This

Single Environment
If you only need one environment, the multi-environment CIDR logic is overkill. Just hardcode your CIDR blocks.

Auto-Scaling Requirements
This creates fixed subnet counts. If you need many subnets or dynamic subnet creation, use L2 constructs or Terraform modules.

Complex Network Topologies
Hub-and-spoke architectures, transit gateways, or VPN connections need more sophisticated designs.

Multi-Region Active-Active
This deploys to one region. Global applications need cross-region VPCs, which adds significant complexity.

Future Improvements

Private Subnets with NAT Gateways
Add private subnet creation with optional NAT gateway deployment (off by default to save costs).

Network ACLs
Implement baseline NACL rules for defense-in-depth security.

More VPC Endpoints
Add conditional endpoints for ECR, CloudWatch, Systems Manager based on configuration.

IPv6 Support
AWS provides free IPv6. Dual-stack VPCs are the future.

VPC Peering Helper
Automated VPC peering setup between environments.

Transit Gateway Integration
For organizations with many VPCs, transit gateway is better than mesh peering.


Conclusion

Infrastructure as code is powerful when it's deterministic and well-tested.

This project proves you can build production-ready VPC infrastructure with:

  • Predictable IP addressing
  • Multi-environment isolation
  • High availability across AZs
  • Cost optimization through VPC endpoints
  • Comprehensive test coverage

The foundation is solid. Build your applications on top of it.

All code is open source under MIT license. Use it, modify it, improve it.


Questions or Issues?
Open an issue on GitHub or reach out:

Built with ❤️ by Rahul Ladumor

Subscribe to new posts