Managed Infrastructure for ~1000 Concurrent Users
A plain HTML + JS infographic. Fully managed services only: click-ops, no IaC.
AWS
Scenario: 1000 concurrent users × 30 min/day → assume 0.2–0.4 req/s/user during peak hours ⇒ **~200–400 RPS** to the API (design for 600–800 RPS of headroom). Static assets are CDN-cached.
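The scenario numbers can be reproduced with back-of-envelope arithmetic; the per-user rates come from the assumption above, and the ~1.5–2× headroom multiplier is an illustrative choice that lands on the stated 600–800 RPS design target.

```python
# Back-of-envelope peak load for the scenario above.
users = 1000                              # concurrent users at peak
low_rate, high_rate = 0.2, 0.4            # assumed req/s per active user

peak_low, peak_high = users * low_rate, users * high_rate
print(peak_low, peak_high)                # 200.0 400.0  -> ~200-400 RPS

# Design for ~1.5-2x the upper estimate as burst/deploy headroom.
design_low, design_high = 1.5 * peak_high, 2.0 * peak_high
print(design_low, design_high)            # 600.0 800.0
```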
Legend: Primary path · Recommended · Edge cache
Edge
CloudFront (CDN)
TLS, caching, WAF, geo-routing; cache static assets plus cacheable GET APIs.
Immutable assets cached long-lived; API GET responses cached 30–120s; WAF/DDoS protection at the edge.
Static hosting
S3 Static Website
SPA/assets from object storage behind CDN.
Routing
API Gateway (HTTP)
Authn/authz, rate limiting (e.g., 800 RPS steady-state, 1600 burst), canary deploys.
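API Gateway throttling follows a token-bucket model, which is what the "800 RPS, burst 1600" figures describe. A minimal, self-contained sketch of that model using those same numbers (the class itself is illustrative, not API Gateway's implementation):

```python
class TokenBucket:
    """Token-bucket throttle: refills at `rate` tokens/s, capped at `burst`."""
    def __init__(self, rate: float, burst: float, now: float = 0.0):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, now

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, never exceeding capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=800, burst=1600)
# A cold bucket absorbs a burst of up to 1600 requests at t=0...
granted = sum(bucket.allow(0.0) for _ in range(2000))
print(granted)                                        # 1600
# ...then sustains ~800 requests/s once tokens refill.
print(sum(bucket.allow(1.0) for _ in range(2000)))    # 800
```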
Compute
Lambda (1024–1536MB, PC=20–40)
Provisioned Concurrency 20–40 to avoid cold starts under burst.
Reserved concurrency ≈ 600–800 (headroom above peak RPS).
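The concurrency figures can be sanity-checked with Little's law (needed concurrency ≈ arrival rate × mean duration). The ~100–150 ms average handler duration used below is an assumption, not stated in the card:

```python
# Little's law: concurrent executions ~= RPS x mean duration (seconds).
def lambda_concurrency(rps: float, avg_duration_s: float) -> float:
    return rps * avg_duration_s

# Assumed ~100 ms average handler duration (illustrative).
print(lambda_concurrency(200, 0.10))   # 20.0 -> lower bound, matches PC=20
print(lambda_concurrency(400, 0.10))   # 40.0 -> upper bound, matches PC=40

# Reserved concurrency of 600-800 then tolerates slow requests
# (~1.5 s at 400 RPS) without starving the rest of the account.
print(lambda_concurrency(400, 1.5))    # 600.0
```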
Database
Aurora Serverless v2 (Postgres)
Capacity range: 2–16 ACU (auto-scaling); tune the max to the workload.
Use RDS Proxy for connection pooling; add 1–2 read replicas or a reader endpoint for heavy reads.
Files
S3 (presigned uploads)
Serve via CDN; offload binary traffic from API.
Identity
Cognito (Hosted UI)
MFA/flows managed; JWTs for APIs.
Recommended
ElastiCache Redis (small–medium)
Cache hot reads (item lists, profiles) with a 60–300s TTL; use as a session store if needed.
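The pattern behind this card is cache-aside with a TTL: check the cache, fall back to the database on a miss, and store the result with an expiry. A minimal sketch in which an in-memory dict stands in for Redis and `load_profile` is a hypothetical DB loader:

```python
import time

_cache: dict[str, tuple[float, object]] = {}   # key -> (expires_at, value)

def cache_aside(key: str, loader, ttl_s: float = 120.0, now=time.monotonic):
    """Return the cached value if fresh; otherwise load, store with TTL, return."""
    t = now()
    hit = _cache.get(key)
    if hit and hit[0] > t:
        return hit[1]                          # cache hit, still fresh
    value = loader()                           # miss or expired: go to the DB
    _cache[key] = (t + ttl_s, value)
    return value

calls = 0
def load_profile():                            # hypothetical DB loader
    global calls
    calls += 1
    return {"id": 42, "name": "Ada"}

a = cache_aside("profile:42", load_profile, ttl_s=300)  # miss -> loads from "DB"
b = cache_aside("profile:42", load_profile, ttl_s=300)  # hit  -> served from cache
print(calls)  # 1
```

With Redis the dict becomes `SET key value EX ttl` / `GET key`, but the control flow is identical.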
Recommended
SQS + Lambda (workers)
Async emails, webhooks, image processing; retries and backoff handled outside the request path.
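For the retry/backoff behavior, a common choice is capped exponential backoff with full jitter between worker attempts (SQS redrive policies and DLQs then cap total attempts). A sketch of the delay schedule; the base/cap values are illustrative:

```python
import random

def backoff_delays(base_s: float = 1.0, cap_s: float = 60.0, attempts: int = 6):
    """Exponential backoff with full jitter: delay n ~ U(0, min(cap, base * 2^n))."""
    for attempt in range(attempts):
        ceiling = min(cap_s, base_s * 2 ** attempt)
        yield random.uniform(0, ceiling)

delays = list(backoff_delays())
print(len(delays))  # 6
# Each delay is bounded by min(60, 2^n): ceilings of 1, 2, 4, 8, 16, 32 s.
```

Jitter spreads retries out so a burst of failures does not hammer the downstream service in lockstep.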
Ops
CloudWatch + X-Ray
Alarms on p95 latency, 5xx, throttles; traces for hotspots.
Security
WAF + Secrets Manager
Rate-limit abusive IPs; rotate secrets in Secrets Manager; use Parameter Store for non-secret config.