Design a scalable network for an enterprise with centralized connectivity (ExpressRoute/VPN), shared services, and workload isolation.
→A hub-and-spoke topology. The hub VNet contains the gateway, Azure Firewall, and other shared services. Spoke VNets contain application workloads and are peered to the hub.
Why: This is the standard, recommended enterprise pattern. It centralizes security and connectivity, reducing cost and complexity, while spokes provide strong workload isolation.
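A key subtlety of hub-and-spoke is that VNet peering is non-transitive: two spokes peered to the same hub cannot talk to each other unless traffic is routed through an NVA (e.g., the hub firewall) via UDRs. A minimal sketch of that rule, with illustrative VNet names:

```python
# Minimal model of hub-and-spoke reachability. VNet peering is
# non-transitive: two spokes peered to the same hub cannot reach
# each other unless traffic is forwarded by an NVA in the hub.
# Names (hub, spoke1, spoke2) are illustrative.

peerings = {("hub", "spoke1"), ("hub", "spoke2")}  # bidirectional links

def directly_peered(a: str, b: str) -> bool:
    return (a, b) in peerings or (b, a) in peerings

def reachable(src: str, dst: str, udr_via_hub: bool = False) -> bool:
    if directly_peered(src, dst):
        return True
    # A UDR (0.0.0.0/0 -> hub firewall) lets the hub NVA forward
    # spoke-to-spoke traffic, working around non-transitivity.
    if udr_via_hub and directly_peered(src, "hub") and directly_peered("hub", dst):
        return True
    return False

print(reachable("spoke1", "spoke2"))                    # False: peering is non-transitive
print(reachable("spoke1", "spoke2", udr_via_hub=True))  # True: forwarded by the hub firewall
```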
A global web application needs Layer 7 load balancing, a Web Application Firewall (WAF), SSL offloading, and URL-based routing.
→Azure Front Door Premium. (Standard supports WAF custom rules only; managed WAF rule sets require Premium.)
Why: Front Door is a modern cloud CDN and global load balancer that integrates these capabilities into a single service, providing better performance and simpler management than combining Traffic Manager with regional Application Gateways.
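URL-based routing in Front Door evaluates path patterns and sends the request to the matching backend (origin group), preferring the most specific match. A sketch of that decision, with made-up pool names:

```python
# Sketch of URL-based routing as a Front Door route evaluates it:
# the most specific matching path pattern wins. Backend pool names
# are illustrative.

routes = {
    "/api/*":    "api-backend-pool",
    "/images/*": "storage-backend-pool",
    "/*":        "web-backend-pool",
}

def match_route(path: str) -> str:
    # Prefer the most specific (longest) pattern whose prefix matches.
    candidates = [p for p in routes if path.startswith(p.rstrip("*"))]
    return routes[max(candidates, key=len)]

print(match_route("/api/orders/42"))  # api-backend-pool
print(match_route("/index.html"))     # web-backend-pool
```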
Design a production-grade AKS cluster for multiple teams with varying workload types (CPU, GPU, memory-intensive).
→Use a dedicated system node pool and multiple user node pools with different VM SKUs (e.g., F-series for CPU, E-series for memory, N-series for GPU). Use the cluster autoscaler and enable the Standard/Premium tier for the uptime SLA.
Why: Multiple node pools allow matching the right hardware to the right workload for performance and cost-efficiency. Separating system pods improves stability. The Standard or Premium tier is required for a financially backed uptime SLA.
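Workloads are steered to the right pool with node selectors and taints/tolerations: the GPU pool is tainted so ordinary pods never land on expensive N-series nodes. A toy version of that scheduling match (pool names and taint keys are illustrative):

```python
# Toy scheduler match: a pod lands on a pool only if its nodeSelector
# labels match and it tolerates all of the pool's taints. Pool names,
# labels, and taint keys are illustrative.

pools = {
    "system": {"labels": {"mode": "system"},     "taints": {"CriticalAddonsOnly"}},
    "cpu":    {"labels": {"workload": "cpu"},    "taints": set()},        # F-series
    "memory": {"labels": {"workload": "memory"}, "taints": set()},        # E-series
    "gpu":    {"labels": {"workload": "gpu"},    "taints": {"sku=gpu"}},  # N-series
}

def eligible_pools(node_selector: dict, tolerations: set) -> list:
    out = []
    for name, pool in pools.items():
        if all(pool["labels"].get(k) == v for k, v in node_selector.items()) \
           and pool["taints"] <= tolerations:
            out.append(name)
    return out

# A training job that selects GPU nodes and tolerates the GPU taint:
print(eligible_pools({"workload": "gpu"}, {"sku=gpu"}))  # ['gpu']
# An ordinary pod with no tolerations never lands on tainted pools:
print(eligible_pools({}, set()))  # ['cpu', 'memory']
```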
An event-driven serverless workflow requires execution times longer than the 10-minute limit of the Functions Consumption plan.
→Use Azure Functions on a Premium plan or an App Service plan, or use Azure Durable Functions for orchestration.
Why: The Premium plan avoids cold starts and raises the timeout (30 minutes by default; 60 minutes is guaranteed, and longer runs are configurable). Durable Functions are ideal for orchestrating long-running, stateful workflows that may involve human interaction or long waits.
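Durable Functions orchestrators are written as generators that yield activity calls; the framework checkpoints state at each yield and replays from history, so the workflow can run far beyond any single-execution timeout. A conceptual sketch in plain Python (not the real azure-functions-durable SDK; activity names are made up):

```python
# Conceptual sketch of a Durable Functions style orchestration, modeled
# as a plain Python generator plus a tiny driver standing in for the
# durable task framework. Activity names and results are illustrative.

def orchestrator():
    # Each yield hands an activity call to the framework, which
    # checkpoints state before/after, allowing arbitrarily long waits.
    items = yield ("activity", "fetch_work_items")
    results = []
    for item in items:
        results.append((yield ("activity", f"process:{item}")))
    return results

def run(orch_fn):
    """Tiny replay driver: executes activities and feeds results back."""
    gen = orch_fn()
    value = None
    try:
        while True:
            kind, name = gen.send(value)
            if name == "fetch_work_items":
                value = ["a", "b"]                 # fake activity result
            else:
                value = name.split(":")[1].upper() # fake processing
    except StopIteration as done:
        return done.value

print(run(orchestrator))  # ['A', 'B']
```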
Choose a messaging service for a fan-out event notification system versus a reliable, ordered command processing system.
→Use Azure Event Grid for fan-out, reactive eventing. Use Azure Service Bus Queues (with sessions for ordering) for reliable, transactional command processing.
Why: Event Grid is a lightweight, push-based event routing service optimized for reactive programming. Service Bus is a robust message broker with features like FIFO (sessions), dead-lettering, and transactions for enterprise messaging.
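The semantic difference is easy to see side by side: fan-out delivers every event to every subscriber, while Service Bus sessions guarantee per-key FIFO to a single consumer. A toy contrast (handler and session names are illustrative):

```python
from collections import defaultdict

# Toy contrast of the two delivery models. Handlers and session IDs
# are illustrative.

# Event Grid style: every subscriber receives every event (push fan-out).
def fan_out(event, subscribers):
    return [handler(event) for handler in subscribers]

print(fan_out("blob-created", [lambda e: f"resize:{e}", lambda e: f"index:{e}"]))

# Service Bus style: sessions give per-key FIFO; all messages for one
# session are processed in order by a single consumer.
queue = [("order-1", "create"), ("order-2", "create"),
         ("order-1", "pay"), ("order-1", "ship")]

sessions = defaultdict(list)
for session_id, command in queue:
    sessions[session_id].append(command)  # order within a session is preserved

print(sessions["order-1"])  # ['create', 'pay', 'ship']
```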
Expose an API running on a private VNet to external partners securely, with policies for rate limiting and authentication.
→Deploy Azure API Management (APIM) in internal VNet mode, fronted by an Azure Application Gateway with WAF for public ingress.
Why: This pattern provides defense-in-depth. APIM in the VNet can access the private backend. The App Gateway terminates SSL, inspects traffic with WAF, and forwards it to the private APIM instance. APIM policies handle auth, rate limits, etc.
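APIM's `rate-limit-by-key` policy throttles calls per subscription key over a renewal period. A simplified fixed-window version of that behavior (APIM's internal implementation is not public; this is illustrative only):

```python
# Sketch of the throttling an APIM rate-limit policy enforces, as a
# fixed-window counter per subscription key. Illustrative only; not
# APIM's actual implementation.

class FixedWindowLimiter:
    def __init__(self, calls: int, renewal_period: float):
        self.calls = calls
        self.period = renewal_period
        self.windows = {}  # key -> (window_start, count)

    def allow(self, key: str, now: float) -> bool:
        start, count = self.windows.get(key, (now, 0))
        if now - start >= self.period:   # window expired: reset the counter
            start, count = now, 0
        if count >= self.calls:
            return False                 # would surface as 429 Too Many Requests
        self.windows[key] = (start, count + 1)
        return True

limiter = FixedWindowLimiter(calls=3, renewal_period=60.0)
results = [limiter.allow("partner-key", now=0.0) for _ in range(4)]
print(results)  # [True, True, True, False]
print(limiter.allow("partner-key", now=61.0))  # True: new window
```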
Connect hundreds of branch offices and VNets globally with automated, any-to-any connectivity.
→Azure Virtual WAN.
Why: Virtual WAN is the managed Microsoft solution for large-scale, global transit networking. It automates complex routing and provides a unified hub for connecting VPN, ExpressRoute, and VNet spokes.
Run a large-scale, parallel batch job (e.g., CFD simulation) that requires thousands of cores and low-latency MPI communication.
→Azure Batch with a pool of InfiniBand-enabled VMs (e.g., HB-series) using Spot (formerly low-priority) pricing.
Why: Azure Batch is a job scheduler designed for HPC. InfiniBand-enabled VMs provide the high-throughput, low-latency RDMA networking required for MPI. Spot VMs drastically reduce cost for fault-tolerant workloads.
An application in a VNet needs to access PaaS services (SQL, Storage) without traffic traversing the public internet.
→Create private endpoints for the PaaS services. This gives the service a private IP address within your VNet.
Why: Private Endpoints are the most secure method for private PaaS connectivity. They keep traffic on the Microsoft backbone and allow you to disable the PaaS service's public endpoint entirely.
Host a modern single-page application (SPA) with a serverless API backend, CI/CD integration, and a custom domain.
→Azure Static Web Apps.
Why: This is a purpose-built, streamlined service for this exact pattern. It combines static content hosting, integrated Azure Functions for the API, GitHub/Azure DevOps integration, and managed custom domains with free SSL certificates.
Manage and apply governance (Azure Policy) to servers running on-premises and in other clouds (e.g., AWS) from Azure.
→Install the Azure Arc agent on the non-Azure servers to project them as Azure Arc-enabled servers.
Why: Azure Arc extends the Azure control plane to any infrastructure. Once a server is Arc-enabled, it can be managed with Azure Policy, Monitor, Defender for Cloud, etc., just like a native Azure VM.
Incrementally migrate functionality from a legacy monolithic application to new microservices without a "big bang" cutover.
→Apply the Strangler Fig pattern using a reverse proxy like Azure API Management or Application Gateway.
Why: The reverse proxy intercepts calls to the monolith and selectively routes traffic for specific features to the new microservices. Over time, the proxy "strangles" the monolith by redirecting more and more traffic until the old system can be retired.
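The strangling itself is just a growing routing list in the proxy: migrated path prefixes go to the new services, and everything else falls through to the monolith. A sketch of that decision (paths and hostnames are illustrative):

```python
# Sketch of the routing decision a Strangler Fig proxy (APIM or
# Application Gateway path rules) makes. Paths and hostnames are
# illustrative.

migrated_prefixes = ["/orders", "/inventory"]  # grows as migration proceeds

def route(path: str) -> str:
    for prefix in migrated_prefixes:
        if path.startswith(prefix):
            return f"https://microservices.internal{path}"
    return f"https://legacy-monolith.internal{path}"  # everything else falls through

print(route("/orders/42"))   # microservices backend
print(route("/billing/7"))   # still served by the monolith

# "Strangling" = migrating until nothing falls through to the monolith:
migrated_prefixes.append("/billing")
print(route("/billing/7"))   # now routed to the microservices backend
```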
VMs are in a VNet with forced tunneling (all internet traffic routed on-prem), but they cannot access Azure PaaS services.
→Forced tunneling breaks direct access to Azure public endpoints. Use service endpoints or private endpoints for PaaS access. Alternatively, add UDRs for specific Azure service tags with a next hop of "Internet" to bypass the tunnel.
Why: PaaS services have public endpoints. Forced tunneling sends that traffic on-prem. You must create an exception path, either by making the PaaS service private (endpoints) or by creating specific route exceptions (UDRs with service tags).
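The UDR exception works because Azure route selection uses longest-prefix match: a specific service-tag route beats the 0.0.0.0/0 forced tunnel. A sketch using the stdlib `ipaddress` module (the Storage prefix shown is illustrative, standing in for a Storage service-tag UDR):

```python
import ipaddress

# Sketch of Azure route selection: longest-prefix match over the
# effective route table. The Storage prefix below is illustrative,
# standing in for a UDR with the Storage service tag.

routes = [
    ("0.0.0.0/0",    "VirtualNetworkGateway"),  # forced tunnel to on-prem
    ("20.60.0.0/16", "Internet"),               # UDR exception for Storage
]

def next_hop(dest_ip: str) -> str:
    ip = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(p), hop) for p, hop in routes
               if ip in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # most specific wins

print(next_hop("20.60.1.5"))  # Internet: bypasses the tunnel to reach Storage
print(next_hop("8.8.8.8"))    # VirtualNetworkGateway: still forced on-prem
```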
A hub-spoke network needs to resolve on-premises DNS names from Azure, and Azure private DNS zones from on-premises.
→Deploy Azure DNS Private Resolver in the hub VNet. Configure an inbound endpoint for on-prem to resolve Azure DNS, and an outbound endpoint with forwarding rulesets to resolve on-prem DNS from Azure.
Why: This is the modern, PaaS solution for hybrid DNS resolution, replacing the need to manage custom DNS server VMs. It integrates natively with private DNS zones and on-premises DNS forwarders.
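Conceptually, the outbound endpoint's forwarding ruleset is a suffix-to-server map: queries matching a rule's domain suffix are forwarded to the on-prem DNS servers, and everything else resolves in Azure (including private DNS zones). A sketch of that lookup, with illustrative domains and IPs:

```python
# Sketch of the outbound-endpoint decision: a forwarding ruleset maps
# DNS suffixes to on-prem DNS servers; anything else resolves in Azure
# (private DNS zones or public DNS). Domains and IPs are illustrative.

forwarding_rules = {
    "corp.example.com.": "10.0.0.4",  # on-prem DNS server
}

def resolve_target(fqdn: str) -> str:
    fqdn = fqdn if fqdn.endswith(".") else fqdn + "."
    for suffix, server in forwarding_rules.items():
        if fqdn.endswith(suffix):
            return f"forward to {server}"
    return "resolve in Azure (private DNS zones / public DNS)"

print(resolve_target("fileserver.corp.example.com"))
print(resolve_target("mydb.privatelink.database.windows.net"))
```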
Multiple VNets need a predictable, static public IP for all outbound traffic for whitelisting by external services.
→In a hub-spoke topology, route all outbound traffic (0.0.0.0/0) from spokes through an Azure Firewall or NAT Gateway in the hub VNet.
Why: Centralizing egress in the hub ensures all outbound traffic uses the hub firewall/NAT Gateway's public IPs, simplifying management and external whitelisting. NAT Gateway is simpler for pure SNAT, while Firewall adds security inspection.
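The effect on the external partner is that every flow, regardless of originating spoke, is SNATed to the hub's public IP. A toy view (addresses are illustrative):

```python
# Toy SNAT view of centralized egress: whichever spoke a flow starts
# from, the partner sees only the hub's public IP. Addresses are
# illustrative.

HUB_EGRESS_IP = "20.1.2.3"  # NAT Gateway / firewall public IP

def snat(flow: dict) -> dict:
    # Spokes default-route (0.0.0.0/0) to the hub, which rewrites the
    # source address to its own public IP before egress.
    return {**flow, "src": HUB_EGRESS_IP}

flows = [
    {"src": "10.1.0.4", "dst": "partner.example.com"},  # spoke 1 VM
    {"src": "10.2.0.9", "dst": "partner.example.com"},  # spoke 2 VM
]
print({snat(f)["src"] for f in flows})  # {'20.1.2.3'}: one IP to whitelist
```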
Process highly sensitive data in a way that it is encrypted even while in use in memory, protecting it from the cloud operator.
→Use Azure Confidential Computing VMs (e.g., DCsv3-series with Intel SGX enclaves, or DCasv5/ECasv5-series with AMD SEV-SNP full-VM memory encryption) to run code in a hardware-based Trusted Execution Environment (TEE).
Why: Confidential Computing addresses the "data-in-use" pillar of security, which traditional encryption-at-rest and in-transit do not. It provides verifiable, hardware-level isolation.
A SaaS provider needs to expose their service, running in their VNet, to a customer in the customer's VNet, entirely over the Azure private network.
→The provider creates an Azure Private Link Service on their Standard Load Balancer. The customer creates a Private Endpoint in their VNet that connects to the service.
Why: Private Link is the definitive pattern for secure, private, cross-tenant service exposure. It avoids public internet exposure, IP overlap issues, and complex VNet peering configurations.