A global application requires low latency and high availability for users worldwide.
→Use a global external Application Load Balancer (formerly Global HTTP(S) Load Balancer) with multi-region backends (MIGs/GKE), Cloud CDN for static content, and Cloud Armor for DDoS protection.
Why: The Global Load Balancer provides a single anycast IP that routes users to the nearest healthy backend. CDN caches content at the edge, reducing origin load and latency.
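A minimal gcloud sketch of the backend side of this design (service and policy names are placeholders; the URL map, forwarding rule, and backends are omitted):

```shell
# Global backend service on the external Application Load Balancer,
# with Cloud CDN enabled for edge caching
gcloud compute backend-services create web-backend \
  --global --load-balancing-scheme=EXTERNAL_MANAGED \
  --protocol=HTTP --enable-cdn

# Attach a Cloud Armor security policy for DDoS/WAF protection
gcloud compute security-policies create edge-policy
gcloud compute backend-services update web-backend \
  --global --security-policy=edge-policy
```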
Ingest and process high-throughput, real-time data from IoT devices for immediate analysis.
→Use Pub/Sub for scalable message ingestion, a Dataflow streaming pipeline for real-time processing and anomaly detection, and write results to BigQuery for analytics.
Why: This is the canonical serverless pattern for real-time data. Pub/Sub decouples ingestion, Dataflow handles complex processing with auto-scaling, and BigQuery supports streaming inserts for real-time analytics.
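A minimal sketch using the Google-provided streaming template (project, topic, and table names are placeholders; custom anomaly detection would require a custom Beam pipeline instead of the template):

```shell
# Ingestion topic and destination dataset
gcloud pubsub topics create iot-events
bq mk --dataset my_project:iot_analytics

# Launch the provided Pub/Sub-to-BigQuery streaming Dataflow template
gcloud dataflow jobs run iot-pipeline \
  --gcs-location=gs://dataflow-templates/latest/PubSub_to_BigQuery \
  --region=us-central1 \
  --parameters=inputTopic=projects/my_project/topics/iot-events,outputTableSpec=my_project:iot_analytics.events
```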
A gaming backend needs to store player state and leaderboards with sub-millisecond read latency and high throughput.
→Use Cloud Bigtable for game state/leaderboards and Memorystore (Redis) for session caching.
Why: Bigtable provides single-digit-millisecond latency at high read/write throughput, well suited to wide-column and time-series workloads like player state. Memorystore (Redis) offers sub-millisecond latency for session state and hot leaderboard reads.
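A minimal provisioning sketch for this pairing (instance names, zone, and sizing are placeholders):

```shell
# Bigtable instance for player state and leaderboards
gcloud bigtable instances create game-state \
  --display-name="Game state" \
  --cluster-config=id=game-c1,zone=us-central1-b,nodes=3

# Memorystore (Redis) instance for session caching, HA tier
gcloud redis instances create session-cache \
  --size=5 --region=us-central1 --tier=standard_ha
```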
A globally distributed application requires a database with strong transactional consistency and horizontal scalability.
→Use Cloud Spanner with a multi-region configuration.
Why: Spanner is the only Google Cloud database that provides globally distributed, strongly consistent transactions with SQL semantics and horizontal scalability. Cloud SQL would require manual sharding at this scale.
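A minimal sketch (instance name and node count are placeholders; nam3 is one of the North America multi-region configurations):

```shell
# Multi-region Spanner instance with synchronous replication
gcloud spanner instances create global-db \
  --config=nam3 --nodes=3 \
  --description="Globally consistent OLTP"
```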
Migrating a demanding, on-premises Oracle or PostgreSQL database requiring high availability, performance, and minimal refactoring.
→Use AlloyDB for PostgreSQL.
Why: AlloyDB is a fully managed, PostgreSQL-compatible database with a 99.99% availability SLA and significantly higher throughput than standard PostgreSQL. PostgreSQL workloads migrate with minimal refactoring; Oracle workloads must first be converted to PostgreSQL (e.g., with Database Migration Service), after which AlloyDB is the recommended landing target.
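A minimal provisioning sketch (cluster name, network path, password, and sizing are placeholders):

```shell
# AlloyDB cluster and a primary instance in it
gcloud alloydb clusters create pg-cluster \
  --region=us-central1 --password=CHANGE_ME \
  --network=projects/my_project/global/networks/default
gcloud alloydb instances create pg-primary \
  --cluster=pg-cluster --region=us-central1 \
  --instance-type=PRIMARY --cpu-count=8
```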
Connecting an on-premises data center to GCP with consistent, low-latency (<10ms) and high-bandwidth (10+ Gbps) requirements.
→Use Dedicated Interconnect with redundant connections.
Why: Dedicated Interconnect provides a private, high-bandwidth, low-latency physical connection. Cloud VPN runs over the public internet and cannot guarantee latency or bandwidth SLAs at this level.
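A minimal sketch of the Cloud-side configuration, assuming the physical interconnect is already provisioned (names, region, and ASN are placeholders; repeat the attachment in a second edge availability domain for redundancy):

```shell
# Cloud Router to run BGP with the on-premises router
gcloud compute routers create onprem-router \
  --network=default --region=us-central1 --asn=65001

# VLAN attachment on the existing Dedicated Interconnect
gcloud compute interconnects attachments dedicated create onprem-attach-1 \
  --interconnect=my-interconnect --router=onprem-router \
  --region=us-central1
```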
Designing a network for multiple teams/projects requiring centralized network management but decentralized project ownership.
→Implement a hub-and-spoke model using a Shared VPC. The central network team manages the host project, and application teams use service projects.
Why: Shared VPC allows centralized control over networking resources (subnets, firewalls) while delegating resource management in service projects. This is more scalable and secure than VPC peering.
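A minimal sketch of the Shared VPC wiring (project IDs are placeholders; subnet-level IAM grants for the app teams are omitted):

```shell
# Designate the network team's project as the Shared VPC host
gcloud compute shared-vpc enable network-host-project

# Attach an application team's project as a service project
gcloud compute shared-vpc associated-projects add app-team-project \
  --host-project=network-host-project
```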
Managing Kubernetes clusters consistently across Google Cloud, AWS, Azure, and on-premises environments.
→Use Anthos to provide a unified control plane for multi-cloud and hybrid cluster management, policy enforcement, and observability.
Why: Anthos extends GKE to other environments, enabling consistent operations and GitOps-based configuration management (Config Management) across your entire fleet.
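A minimal sketch of registering a GKE cluster into the fleet (cluster location/name are placeholders; on-premises or attached clusters register with a kubeconfig instead):

```shell
# Register a GKE cluster as a fleet membership for unified management
gcloud container fleet memberships register gke-us \
  --gke-cluster=us-central1/prod-us \
  --enable-workload-identity
```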
A data science team needs to train complex ML models with GPU acceleration without managing infrastructure.
→Use Vertex AI Training with custom containers and Vertex AI Experiments for tracking model iterations.
Why: Vertex AI provides a fully managed training service that handles infrastructure provisioning, scaling, and GPU management. It integrates with experiments to track and compare model performance.
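A minimal sketch of submitting a GPU-accelerated custom job (region, image URI, and machine/accelerator choices are placeholders):

```shell
# Managed custom training job with a T4 GPU and a custom container
gcloud ai custom-jobs create \
  --region=us-central1 --display-name=train-model \
  --worker-pool-spec=machine-type=n1-standard-8,replica-count=1,accelerator-type=NVIDIA_TESLA_T4,accelerator-count=1,container-image-uri=us-docker.pkg.dev/my_project/training/trainer:latest
```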
Serving a large ML model with low latency and high availability, capable of auto-scaling.
→Use Vertex AI Prediction with a custom container, deployed to a managed endpoint with autoscaling enabled.
Why: Vertex AI Prediction is optimized for low-latency model serving. It handles autoscaling, traffic splitting (for A/B testing), and infrastructure management, abstracting complexity from developers.
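A minimal serving sketch (ENDPOINT_ID and MODEL_ID are placeholders returned by the create/upload steps):

```shell
# Managed endpoint, then deploy the model with autoscaling bounds
gcloud ai endpoints create --region=us-central1 --display-name=serve-model
gcloud ai endpoints deploy-model ENDPOINT_ID \
  --region=us-central1 --model=MODEL_ID \
  --display-name=model-v1 \
  --machine-type=n1-standard-4 \
  --min-replica-count=2 --max-replica-count=10 \
  --traffic-split=0=100
```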
A Cloud Function or Cloud Run service needs to securely connect to a Cloud SQL instance with a private IP.
→Configure a Serverless VPC Access Connector to bridge the serverless environment with your VPC.
Why: The connector routes egress traffic from the serverless environment through connector instances inside your VPC, letting serverless services reach internal resources by private IP without exposing them to the internet.
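A minimal sketch for the Cloud Run case (connector name, /28 range, and image are placeholders; the range must not overlap existing subnets):

```shell
# Connector inside the VPC, then attach it to the Cloud Run service
gcloud compute networks vpc-access connectors create sql-connector \
  --region=us-central1 --network=default --range=10.8.0.0/28
gcloud run deploy my-service \
  --image=us-docker.pkg.dev/my_project/app/api:latest \
  --region=us-central1 --vpc-connector=sql-connector
```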