Model a complex workflow with parallel stages and dependencies between stages.
→Use YAML multi-stage pipelines. Use the `dependsOn` keyword for stage dependencies and configure parallel jobs within stages.
Why: YAML provides the most flexible, code-based approach for complex orchestration, superior to classic pipelines or chaining separate pipelines.
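A minimal sketch of such a stage graph (stage and job names are illustrative): `TestA` and `TestB` fan out in parallel because both depend only on `Build`, and `Deploy` fans back in by depending on both.

```yaml
stages:
- stage: Build
  jobs:
  - job: Compile
    steps:
    - script: ./build.sh
- stage: TestA
  dependsOn: Build
  jobs:
  - job: Unit
    steps:
    - script: ./test-unit.sh
- stage: TestB
  dependsOn: Build            # same dependency as TestA, so both run in parallel
  jobs:
  - job: Integration
    steps:
    - script: ./test-integration.sh
- stage: Deploy
  dependsOn: [TestA, TestB]   # waits for both test stages to succeed
```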
Implement zero-downtime, low-risk deployment for a web app with instant rollback capability.
→Use Azure App Service deployment slots. Deploy to a staging (green) slot, validate, then perform a slot swap with production (blue).
Why: A slot swap is an atomic, near-instantaneous operation that redirects traffic. Rollback is as simple as swapping back.
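A hedged sketch of the deploy-then-swap flow; the app name, resource group, service connection, and package path are placeholders, and the staging slot is assumed to already exist:

```yaml
steps:
- task: AzureWebApp@1
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder
    appName: 'my-webapp'
    deployToSlotOrASE: true
    resourceGroupName: 'my-rg'
    slotName: 'staging'
    package: '$(Pipeline.Workspace)/drop/app.zip'
# ... run smoke tests against the staging slot here ...
- task: AzureAppServiceManage@0
  inputs:
    azureSubscription: 'my-service-connection'
    action: 'Swap Slots'
    webAppName: 'my-webapp'
    resourceGroupName: 'my-rg'
    sourceSlot: 'staging'     # swaps with production by default
```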
Minimize pipeline duplication for numerous microservices that share common build/deploy steps but require specific customizations.
→Create YAML templates in a central repository. In each service-specific pipeline, use the `extends` keyword and pass parameters for customization.
Why: `extends` promotes DRY principles and enforces standards while allowing flexibility through parameters. It is more powerful than task groups because a template can govern the structure of an entire pipeline, not just a reusable sequence of steps.
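A sketch of a service-specific pipeline consuming a shared template; the template repository, template filename, and parameter names are illustrative:

```yaml
# azure-pipelines.yml in one microservice's repository
resources:
  repositories:
  - repository: templates            # local alias for the template repo
    type: git
    name: Platform/pipeline-templates

extends:
  template: standard-service.yml@templates
  parameters:
    serviceName: orders-api          # per-service customization
    runIntegrationTests: true
```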
Restrict a pipeline stage (e.g., production deployment) to only run on merges to a specific branch (e.g., main).
→Use a `condition` on the stage or job. E.g., `condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))`.
Why: PR validation builds use a different source branch reference (e.g., `refs/pull/...`), so this condition correctly prevents deployment during the PR lifecycle.
Deploy applications from Azure DevOps to on-premises servers behind a corporate firewall.
→Install self-hosted agents on the on-premises servers. Register them to an agent pool in Azure DevOps.
Why: Self-hosted agents initiate outbound communication to Azure DevOps, so no inbound firewall rules are needed. They can access local network resources for deployment.
Require multi-person approval for production deployments and restrict them to specific maintenance windows.
→Define an Azure DevOps Environment for production. Configure approvals with required approvers. Add a "Business Hours" check as a gate to enforce the time window.
Why: Environments centralize deployment controls. Approvals and gates provide robust, automated policy enforcement before a stage runs.
Control feature exposure to users without redeploying the application, with near-real-time updates.
→Use Azure App Configuration for feature management. Instrument the application to read flags and enable its dynamic refresh capabilities.
Why: Decouples feature releases from deployments. App Configuration provides a centralized UI and SDKs for dynamic updates, avoiding application restarts.
Manage Kubernetes cluster state declaratively, where Git is the single source of truth and changes are automatically applied.
→Deploy a GitOps agent like Flux or ArgoCD to the AKS cluster. Configure the agent to monitor a Git repository containing Kubernetes manifests and automatically synchronize the cluster state.
Why: This pull-based model enables continuous reconciliation and drift detection, which is core to GitOps. It is more robust than push-based `kubectl` pipelines.
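For Flux, the pull configuration is itself declared as Kubernetes manifests. A minimal sketch, assuming the Flux controllers are already installed on the cluster (repository URL and path are placeholders):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 1m                 # how often to poll Git
  url: https://github.com/example/cluster-config
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 5m                 # reconciliation loop for drift detection
  sourceRef:
    kind: GitRepository
    name: cluster-config
  path: ./clusters/prod
  prune: true                  # delete cluster resources removed from Git
```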
Manage Terraform state for team collaboration, ensuring security and preventing concurrent modifications.
→Configure the Terraform backend to use an Azure Storage Account. This provides remote state storage, with state locking handled via Azure Blob lease.
Why: Prevents state file corruption from simultaneous `apply` operations and keeps sensitive state data out of source control.
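In a pipeline, this typically looks like an init step that supplies the backend details (the Terraform code declares an empty `backend "azurerm" {}` block). All resource names below are placeholders:

```yaml
- script: |
    terraform init \
      -backend-config="resource_group_name=rg-tfstate" \
      -backend-config="storage_account_name=sttfstate" \
      -backend-config="container_name=tfstate" \
      -backend-config="key=myapp.terraform.tfstate"
  displayName: 'Terraform init (azurerm remote backend)'
```

Blob leases then lock the state automatically during `plan`/`apply`, so a second concurrent run fails fast instead of corrupting state.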
In a monorepo, trigger an application's CI pipeline only when files in its specific directory (or a shared directory) are changed.
→In the pipeline's YAML, use the `trigger.paths.include` filter to specify the relevant directories, e.g., `include: ['apps/frontend', 'apps/shared']` (paths match by prefix, so no leading slash or wildcard is needed for a whole directory).
Why: This avoids unnecessary builds for unrelated code changes, saving CI time and compute resources.
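A minimal trigger block for one service in the monorepo (directory names are illustrative):

```yaml
trigger:
  branches:
    include:
    - main
  paths:
    include:
    - apps/frontend     # this service's own code
    - apps/shared       # shared code it depends on
```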
Optimize a test stage with both fast (unit) and slow (integration) tests for quicker feedback.
→Run unit tests and integration tests in parallel jobs within the same stage.
Why: Parallel execution provides unit test results much faster while slower tests run concurrently. Total stage duration is determined by the longest job, not the sum.
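Jobs within a stage run in parallel by default (given available agents), so the split can be as simple as this sketch (test paths are placeholders):

```yaml
stages:
- stage: Test
  jobs:
  - job: UnitTests            # fast feedback, finishes first
    steps:
    - script: dotnet test tests/Unit
  - job: IntegrationTests     # slower, runs concurrently
    steps:
    - script: dotnet test tests/Integration
```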
Automatically version a library package based on commit history to clearly communicate the impact of changes (breaking, feature, fix).
→Integrate a tool like GitVersion into the CI pipeline. It analyzes commit messages, branches, and tags to automatically calculate a SemVer (Major.Minor.Patch) version.
Why: SemVer provides meaningful versioning that consumers can rely on for dependency management, unlike build numbers or commit hashes.
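A sketch using the GitVersion tasks from the GitTools marketplace extension (assumes the extension is installed in the organization):

```yaml
steps:
- task: gitversion/setup@0
  inputs:
    versionSpec: '5.x'        # GitVersion tool version to install
- task: gitversion/execute@0  # calculates SemVer from branches/tags/commits
- script: echo "Calculated version: $(GitVersion.SemVer)"
```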
Deploy an application to multiple geographic regions one by one, with validation after each regional deployment.
→Use a multi-stage YAML pipeline with sequential stages, one for each region, using `dependsOn` to enforce order. Use environment gates between stages for validation.
Why: This ring-based deployment model contains the blast radius of a bad deployment to a single region, allowing for rollback before impacting all users.
Configure a pipeline to support a trunk-based development model, ensuring the main branch is always deployable.
→Configure a CI trigger on the `main` branch. Enforce PRs with a build validation policy that runs fast, comprehensive tests. Integrate rapid notifications (e.g., to Teams/Slack) for build breaks.
Why: Immediate feedback is critical in trunk-based development. This combination prevents broken code from merging and ensures fast remediation when issues occur.
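The CI trigger itself is a small YAML block; note that in Azure Repos the PR build validation is configured as a branch policy on `main`, not in the YAML:

```yaml
trigger:
  branches:
    include:
    - main      # every merge to trunk triggers CI immediately
```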
Pass large artifacts (e.g., ML models, >5GB) between pipeline stages efficiently.
→Upload the large artifact to Azure Blob Storage in the producer stage. Pass the blob URI to the consumer stage as an output variable.
Why: Azure Blob Storage is more cost-effective and performant than built-in pipeline artifacts for multi-gigabyte files.
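A hedged sketch of the hand-off, assuming a storage account reachable via a service connection (all names are placeholders). The producer publishes the blob URI as a stage output variable that the consumer reads via `stageDependencies`:

```yaml
stages:
- stage: Train
  jobs:
  - job: Upload
    steps:
    - task: AzureCLI@2
      name: setUri                        # step name used in the output reference
      inputs:
        azureSubscription: 'my-service-connection'
        scriptType: bash
        scriptLocation: inlineScript
        inlineScript: |
          az storage blob upload --account-name mymodels --container-name models \
            --name model.bin --file "$(Build.ArtifactStagingDirectory)/model.bin" --auth-mode login
          echo "##vso[task.setvariable variable=modelUri;isOutput=true]https://mymodels.blob.core.windows.net/models/model.bin"
- stage: Deploy
  dependsOn: Train
  variables:
    modelUri: $[ stageDependencies.Train.Upload.outputs['setUri.modelUri'] ]
  jobs:
  - job: Consume
    steps:
    - script: echo "Downloading model from $(modelUri)"
```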
Reduce build times by avoiding re-downloading dependencies (e.g., NuGet, npm) on every run.
→Use the `Cache@2` task. Define a key based on the package lock file (e.g., `packages.lock.json`). The task will store and restore the dependency folder.
Why: Can save several minutes per build by restoring from a fast, local cache instead of fetching from external repositories.
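A NuGet-flavored sketch; the same pattern applies to npm with `package-lock.json` and `~/.npm`:

```yaml
variables:
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

steps:
- task: Cache@2
  inputs:
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'   # cache invalidates when lock file changes
    restoreKeys: |
      nuget | "$(Agent.OS)"
    path: $(NUGET_PACKAGES)
  displayName: Cache NuGet packages
- script: dotnet restore --locked-mode
```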
Build or deploy the same code against multiple targets (e.g., different OSs, regions) in parallel.
→Use a `strategy: matrix` in the YAML pipeline job. Define variables for each combination, which will generate a job for each matrix entry.
Why: A matrix strategy keeps the pipeline definition DRY, creating multiple job variations from a single definition and running them in parallel.
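A minimal cross-OS matrix; each entry becomes its own parallel job with the listed variables set:

```yaml
jobs:
- job: Build
  strategy:
    matrix:
      linux:
        imageName: 'ubuntu-latest'
      windows:
        imageName: 'windows-latest'
      mac:
        imageName: 'macOS-latest'
  pool:
    vmImage: $(imageName)
  steps:
  - script: echo "Building on $(imageName)"
```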
Implement a canary deployment on AKS that automatically shifts traffic and promotes or rolls back based on real-time metrics.
→Use a progressive delivery controller like Flagger, integrated with a service mesh (e.g., Istio) and a metrics provider (e.g., Prometheus).
Why: Flagger automates the entire canary analysis process, providing safer and more reliable progressive delivery than manual scripts.
An application pipeline needs to trigger when code changes in its own repository OR in a separate, shared library repository.
→In the application's YAML, define the shared library under `resources.repositories` and configure a `trigger` block on that resource.
Why: Creates a declarative dependency between repositories, ensuring the application is always rebuilt with the latest shared components.
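A sketch of the resource trigger; the repository alias and project/repo names are placeholders:

```yaml
resources:
  repositories:
  - repository: sharedLib            # local alias
    type: git
    name: MyProject/shared-library   # project/repo in the same organization
    trigger:
      branches:
        include:
        - main                       # rebuild this app when the library's main changes

steps:
- checkout: self
- checkout: sharedLib                # optionally check out the library sources too
```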
A pipeline needs to create temporary infrastructure for testing and ensure it's destroyed afterward, even if tests fail.
→Use a multi-stage pipeline with separate apply and destroy stages for IaC (Terraform/Bicep). Configure the destroy stage with `condition: always()`.
Why: The `always()` condition guarantees the cleanup stage runs regardless of the success or failure of previous stages, preventing orphaned resources.
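The stage skeleton for this pattern; the destroy stage must depend on every stage it cleans up after so it runs last:

```yaml
stages:
- stage: Provision
  jobs:
  - job: Apply
    steps:
    - script: terraform apply -auto-approve
- stage: Test
  dependsOn: Provision
  jobs:
  - job: IntegrationTests
    steps:
    - script: ./run-tests.sh
- stage: Destroy
  dependsOn: [Provision, Test]
  condition: always()          # runs even if Provision or Test failed
  jobs:
  - job: Teardown
    steps:
    - script: terraform destroy -auto-approve
```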
Prevent a production deployment from proceeding unless there is an approved change request in an ITSM tool like ServiceNow.
→Configure an Environment gate that invokes the "Query ServiceNow" gate to check the status of the change request.
Why: Automates integration with enterprise change management processes, ensuring compliance without manual hand-offs.
Provide a pool of self-hosted build agents that scales dynamically with demand to reduce queue times and control costs.
→Configure an Azure DevOps agent pool using an Azure Virtual Machine Scale Set (VMSS), set to automatically scale based on the number of pending jobs.
Why: VMSS agents combine the customization of self-hosted agents with the elasticity of cloud-hosted agents, optimizing performance and cost.
Deploy database schema changes in a way that prevents data loss and supports rollbacks.
→Use a migration tool (e.g., Flyway, DbUp). Implement the expand/contract pattern for schema changes to maintain backward compatibility.
Why: Migration tools provide versioning and control. The expand/contract pattern decouples application and database rollbacks, enabling safer deployments.
Self-hosted agents are running out of disk space from accumulated build artifacts.
→In the pipeline YAML, at the job level, configure `workspace: clean: all`.
Why: This preventive pipeline configuration addresses the root cause (stale workspace data accumulating between runs) without manual cleanup or ongoing agent maintenance.
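The setting in context, at the job level:

```yaml
jobs:
- job: Build
  workspace:
    clean: all       # wipe sources, artifacts, and outputs before each run
  steps:
  - script: ./build.sh
```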
Integration tests require an isolated database instance for each pipeline run.
→Define a container resource (e.g., SQL Server, Postgres) as a service in the pipeline YAML. The test job can then connect to this ephemeral service.
Why: Provides fast, isolated, and automatically cleaned-up dependencies for tests, preventing test interference and simplifying setup.
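A Postgres-flavored sketch (image tag and credentials are placeholders); the container starts before the job and is torn down with it, and its mapped port is reachable from the job on localhost:

```yaml
resources:
  containers:
  - container: pg                    # local alias
    image: postgres:16
    env:
      POSTGRES_PASSWORD: example    # test-only credential
    ports:
    - 5432:5432

jobs:
- job: IntegrationTests
  services:
    postgres: pg                     # attach the service container to this job
  steps:
  - script: ./run-db-tests.sh --host localhost --port 5432
```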
Improve reliability and performance of package restoration from public repositories (e.g., npmjs, nuget.org).
→In Azure Artifacts, create a feed and configure upstream sources pointing to the public repositories. Have clients consume packages from the Azure Artifacts feed.
Why: The feed caches packages from upstream sources, protecting against public repository outages and speeding up restores for frequently used packages.
Deploy a Helm chart to multiple environments (dev, prod) with different configuration values.
→Use separate `values-<env>.yaml` files for each environment. In the `HelmDeploy` task, use the `valueFile` input to specify the appropriate file and `overrideValues` to inject dynamic values like image tags.
Why: This pattern separates static environment configuration from dynamic pipeline variables, keeping deployments clean and maintainable.
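A hedged sketch of the task configuration; the connection details, chart path, and `envName` variable are placeholders supplied per environment:

```yaml
- task: HelmDeploy@0
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscription: 'my-service-connection'
    azureResourceGroup: 'my-rg'
    kubernetesCluster: 'my-aks'
    command: 'upgrade'
    chartType: 'FilePath'
    chartPath: 'charts/myapp'
    releaseName: 'myapp'
    valueFile: 'charts/myapp/values-$(envName).yaml'   # static per-env config
    overrideValues: 'image.tag=$(Build.BuildId)'       # dynamic pipeline value
    install: true
```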