Automated rollback for a failing ECS Fargate deployment without custom scripting.
→Enable the ECS deployment circuit breaker with rollback on the ECS service.
Why: Native ECS feature that automatically rolls back if new tasks fail to stabilize. Least operational overhead compared to custom CodeBuild polling or complex CodeDeploy setups.
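A minimal CloudFormation sketch of the service-level setting (cluster, task definition, and subnet IDs are placeholders):

```yaml
# CloudFormation fragment: Fargate service with the deployment circuit breaker.
Resources:
  WebService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: my-cluster
      TaskDefinition: !Ref WebTaskDef   # assumed to be defined elsewhere
      DesiredCount: 3
      LaunchType: FARGATE
      DeploymentConfiguration:
        DeploymentCircuitBreaker:
          Enable: true     # stop a deployment whose new tasks never stabilize
          Rollback: true   # automatically roll back to the last good deployment
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets:
            - subnet-aaaa1111
```

The same pair of flags can be set on an existing service with `aws ecs update-service --deployment-configuration`.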
Deploy to a primary region, validate with automated tests, then deploy to other regions in parallel.
→Use a single CodePipeline with sequential stages: (1) Deploy Region A, (2) a CodeBuild test stage that runs validation, (3) a parallel deploy stage for Regions B & C.
Why: CodeBuild acts as an automated, programmatic gate. A single pipeline is simpler than orchestrating multiple pipelines with Step Functions.
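A sketch of the stage layout (CloudFormation fragment; source/build stages, roles, artifact stores, and action `Configuration` blocks omitted; regions and names are placeholders). Actions in the same stage with the same `RunOrder` execute in parallel:

```yaml
# CodePipeline stage layout: sequential stages, parallel actions in the last one.
Stages:
  - Name: Deploy-Primary
    Actions:
      - Name: DeployRegionA
        ActionTypeId: {Category: Deploy, Owner: AWS, Provider: ECS, Version: '1'}
        Region: us-east-1
        RunOrder: 1
  - Name: Validate
    Actions:
      - Name: RunTests          # CodeBuild project running the validation suite
        ActionTypeId: {Category: Test, Owner: AWS, Provider: CodeBuild, Version: '1'}
        RunOrder: 1
  - Name: Deploy-Secondary
    Actions:
      - Name: DeployRegionB
        ActionTypeId: {Category: Deploy, Owner: AWS, Provider: ECS, Version: '1'}
        Region: eu-west-1
        RunOrder: 1             # same RunOrder => B and C deploy in parallel
      - Name: DeployRegionC
        ActionTypeId: {Category: Deploy, Owner: AWS, Provider: ECS, Version: '1'}
        Region: ap-southeast-2
        RunOrder: 1
```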
A long-running validation script in a CodeDeploy lifecycle hook exceeds the hook's timeout, failing the deployment before validation completes.
→Increase the `timeout` property for the specific lifecycle hook script in the `appspec.yml` file.
Why: The timeout is configured per hook script in the AppSpec file (up to 3600 seconds), not at the deployment group level. Raising it gives the validation script enough time to finish.
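An `appspec.yml` fragment (EC2/on-premises deployment type; script path is a placeholder) showing where the per-script timeout lives:

```yaml
# appspec.yml fragment: the timeout is set per script, per lifecycle hook.
version: 0.0
os: linux
hooks:
  ValidateService:
    - location: scripts/validate.sh
      timeout: 1800      # seconds; raise for long-running validation (max 3600)
      runas: root
```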
Accelerate slow CodeBuild Docker image builds caused by re-downloading dependencies and image layers on every run.
→In the CodeBuild project config, set the cache type to `LOCAL` with both `LOCAL_DOCKER_LAYER_CACHE` and `LOCAL_CUSTOM_CACHE` modes, and list dependency directories (e.g., `.m2`, `node_modules`) under `cache.paths` in the buildspec.
Why: Addresses both sources of slowness directly. Docker layer caching reuses unchanged image layers; the custom cache reuses downloaded application dependencies. (A project's cache type is either `LOCAL` or `S3`, not both, so the two local modes are combined here.)
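One way to wire this up (note a CodeBuild project's cache type is `LOCAL` or `S3`, not both; `LOCAL_CUSTOM_CACHE` caches whatever the buildspec lists under `cache.paths`; paths are examples):

```yaml
# CloudFormation fragment: CodeBuild project cache using local modes.
Cache:
  Type: LOCAL
  Modes:
    - LOCAL_DOCKER_LAYER_CACHE   # reuse unchanged Docker image layers
    - LOCAL_CUSTOM_CACHE         # cache the paths listed in the buildspec
---
# buildspec.yml fragment: dependency directories to cache between builds.
cache:
  paths:
    - '/root/.m2/**/*'
    - 'node_modules/**/*'
```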
Implement a canary deployment for a Lambda function with automated, metric-driven rollback.
→Use AWS SAM with `DeploymentPreference` (e.g., type `Canary10Percent5Minutes`). Add a CloudWatch alarm on the `Errors` metric as a rollback trigger.
Why: SAM natively integrates with CodeDeploy for Lambda, automating alias traffic shifting, monitoring, and rollback without custom scripts.
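A SAM template sketch (function and alarm names are placeholders; `AutoPublishAlias` is required for traffic shifting):

```yaml
# SAM template fragment: canary deployment with alarm-triggered rollback.
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/
      AutoPublishAlias: live            # publishes a version and alias to shift
      DeploymentPreference:
        Type: Canary10Percent5Minutes   # 10% of traffic for 5 min, then 100%
        Alarms:
          - !Ref FunctionErrorsAlarm    # breach triggers automatic rollback
  FunctionErrorsAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: AWS/Lambda
      MetricName: Errors
      Dimensions:
        - Name: FunctionName
          Value: !Ref ApiFunction
      Statistic: Sum
      Period: 60
      EvaluationPeriods: 1
      Threshold: 1
      ComparisonOperator: GreaterThanOrEqualToThreshold
```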
Configure IAM for a CodePipeline in Account A to deploy resources into Account B.
→Pipeline role (Account A) assumes an action role (Account B). The action role in B trusts the pipeline role and has deploy permissions. The S3 artifact bucket and KMS key in A must have resource policies granting access to the action role in B.
Why: This is the standard, secure cross-account access pattern: role assumption for actions, resource-based policies for data access.
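A sketch of the Account B side (account IDs and role names are placeholders; the artifact bucket policy and KMS key policy in Account A, not shown, must also grant this role access):

```yaml
# CloudFormation fragment for Account B: the action role the pipeline assumes.
CrossAccountActionRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: PipelineActionRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            AWS: arn:aws:iam::111111111111:role/PipelineRole  # Account A pipeline role
          Action: sts:AssumeRole
    Policies:
      - PolicyName: DeployPermissions
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - cloudformation:*   # scope down to the actions you need
              Resource: '*'
```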
Implement a GitOps workflow for EKS where the cluster state is automatically and continuously reconciled with a Git repository.
→Deploy a GitOps controller (e.g., Flux, ArgoCD) in the EKS cluster. Configure it to monitor the Git repository and apply/reconcile changes.
Why: This is the standard "pull-based" GitOps pattern. The in-cluster controller handles continuous reconciliation and drift detection, which is the core principle of GitOps.
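With Argo CD as the example controller, the repository-to-cluster binding is a single `Application` manifest (repo URL, paths, and names are placeholders):

```yaml
# Argo CD Application: watch a Git path, keep the cluster reconciled to it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/k8s-manifests.git
    targetRevision: main
    path: apps/web
  destination:
    server: https://kubernetes.default.svc
    namespace: web
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift in the cluster
```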
Allow a CodeBuild project in a central tooling account to deploy Kubernetes manifests to EKS clusters in separate workload accounts.
→In each workload account, create a cross-account IAM role trusted by the CodeBuild role. Map this new role to a Kubernetes RBAC group in the EKS cluster's `aws-auth` ConfigMap. The CodeBuild script assumes the role before running `kubectl`.
Why: This is the standard, secure pattern for cross-account EKS access. It follows least privilege by creating a dedicated, trusted role for this purpose.
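The mapping step in each workload cluster looks like this (role ARN and group name are placeholders; the `deployers` group must be bound to a Kubernetes `Role`/`RoleBinding` granting the needed verbs):

```yaml
# aws-auth ConfigMap fragment: map the assumed IAM role to an RBAC group.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::222222222222:role/ToolingCodeBuildDeployRole
      username: codebuild-deploy
      groups:
        - deployers
```

The CodeBuild script then runs `aws sts assume-role` against this role, updates its kubeconfig, and applies the manifests with `kubectl`.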
Perform a complex RDS PostgreSQL or MySQL schema migration with zero or near-zero downtime.
→Use the Amazon RDS Blue/Green Deployments feature. Create a synchronized staging (green) environment, apply schema changes to it, and then switch over to promote it to production.
Why: This is the purpose-built, managed service for safe, zero-downtime RDS updates. It handles cloning, synchronization, and a fast (< 1 min) switchover with built-in guardrails.
Deploy a new version of a single-page application (SPA) to S3/CloudFront and ensure users receive the new version immediately with minimal cache invalidation costs.
→Use content-based hashing for asset filenames (e.g., `app.a1b2c3d4.js`). After deploying new assets, invalidate only the `index.html` file in the CloudFront distribution.
Why: Hashed filenames are unique, so CloudFront treats them as new objects and fetches them from the origin, bypassing the cache. Only the single entry point file (`index.html`) needs invalidation, which is significantly cheaper than a wildcard (`/*`) invalidation.
Implement a CI/CD pipeline for an AWS CDK application that automatically updates itself when the pipeline's own definition changes.
→Use the CDK Pipelines construct (`pipelines.CodePipeline`). This construct creates a pipeline that includes a `SelfMutate` stage by default.
Why: CDK Pipelines is a high-level construct purpose-built for this pattern. The `SelfMutate` stage ensures the pipeline always reflects the latest definition from code before deploying application changes.
Deploy a new application version that requires a backward-compatible database schema change (e.g., adding new columns) with zero downtime.
→Implement an expand-and-contract (or parallel change) pattern. First, deploy the additive, backward-compatible database schema changes. Second, deploy the new application version that uses the new schema. Both old and new application versions can coexist with the updated database.
Why: This pattern decouples the database and application deployments, ensuring the database state is always compatible with both the old and new application versions, thus enabling zero-downtime rollouts.
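The "expand" step might look like this as a Liquibase changelog (tool choice, table, and column names are illustrative; any migration tool that applies purely additive changes fits the same pattern):

```yaml
# Liquibase changelog fragment: additive, backward-compatible change only.
databaseChangeLog:
  - changeSet:
      id: 42-add-users-email
      author: deploy-pipeline
      changes:
        - addColumn:
            tableName: users
            columns:
              - column:
                  name: email
                  type: varchar(255)
                  constraints:
                    nullable: true   # nullable so the old app version keeps working
```

The "contract" step (dropping old columns) runs only after the old application version is fully retired.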
Gradually roll out a new feature to specific user segments and measure the impact on business metrics (e.g., conversion rate) using A/B testing.
→Use Amazon CloudWatch Evidently. Create a feature with multiple variations, a launch to control the rollout percentage, and an experiment to measure the statistical impact on defined metrics.
Why: Evidently is a purpose-built service for feature flagging and A/B experimentation, providing not just the rollout mechanism but also the statistical analysis engine to measure impact.
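A minimal sketch of the project and feature in CloudFormation (names and variation values are placeholders; the launch and experiment resources, `AWS::Evidently::Launch` and `AWS::Evidently::Experiment`, are configured similarly and omitted here):

```yaml
# CloudFormation fragment: Evidently project with a two-variation feature.
Resources:
  ExperimentProject:
    Type: AWS::Evidently::Project
    Properties:
      Name: checkout-experiments
  NewCheckoutFeature:
    Type: AWS::Evidently::Feature
    Properties:
      Project: !Ref ExperimentProject
      Name: new-checkout-flow
      Variations:
        - VariationName: control
          BooleanValue: false
        - VariationName: treatment
          BooleanValue: true
```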