Plan IP addressing for large-scale or hybrid cloud deployments.
→Use custom-mode VPCs. Allocate non-overlapping RFC 1918 CIDR blocks (e.g., from 172.16.0.0/12) to avoid conflicts with on-prem ranges (often within 10.0.0.0/8). Consider RFC 6598 shared address space (100.64.0.0/10) for GKE Pod secondary ranges to conserve RFC 1918 space.
Why: Avoids IP conflicts with on-prem for future hybrid connectivity and provides full control over address space, which is essential for scale and avoiding costly re-IPing.
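A minimal sketch of this layout with the google-cloud-compute Python client; the project ID, region, names, and CIDRs are placeholder assumptions:

```python
from google.cloud import compute_v1

PROJECT, REGION = "my-project", "us-central1"  # hypothetical

# Custom mode: no auto-created subnets, so the address plan stays under our control.
network = compute_v1.Network(
    name="prod-vpc",
    auto_create_subnetworks=False,
    routing_config=compute_v1.NetworkRoutingConfig(routing_mode="GLOBAL"),
)
compute_v1.NetworksClient().insert(project=PROJECT, network_resource=network).result()

# Carve the subnet from 172.16.0.0/12 so it cannot collide with on-prem 10.0.0.0/8.
subnet = compute_v1.Subnetwork(
    name="prod-subnet",
    network=f"projects/{PROJECT}/global/networks/prod-vpc",
    ip_cidr_range="172.16.0.0/20",
)
compute_v1.SubnetworksClient().insert(
    project=PROJECT, region=REGION, subnetwork_resource=subnet
).result()
```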
Provide network isolation for multiple tenants/environments (dev, prod) while centralizing network management and shared services.
→Use Shared VPC. The host project contains the VPC, subnets, firewalls, and interconnects. Tenants/environments are service projects attached to the host project.
Why: Centralizes network administration in the host project while delegating resource management to service projects. More scalable and governable than VPC peering for many projects within an organization.
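A hedged sketch of the two Shared VPC calls via the same client; both project IDs are placeholders:

```python
from google.cloud import compute_v1

HOST, SERVICE = "net-host-project", "dev-service-project"  # hypothetical IDs
projects = compute_v1.ProjectsClient()

# Designate the host project, which owns the VPC, subnets, and firewall rules.
projects.enable_xpn_host(project=HOST).result()

# Attach a service project; its teams consume host subnets but manage their own resources.
req = compute_v1.ProjectsEnableXpnResourceRequest(
    xpn_resource=compute_v1.XpnResourceId(id=SERVICE, type_="PROJECT")
)
projects.enable_xpn_resource(
    project=HOST, projects_enable_xpn_resource_request_resource=req
).result()
```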
Plan IP addressing for large GKE clusters using VPC-native networking.
→In a custom-mode VPC, plan three CIDR ranges: a primary range for nodes and two secondary ranges, one for Pods and one for Services. For later growth, add Pod ranges with discontiguous multi-Pod CIDR instead of re-IPing.
Why: VPC-native networking requires dedicated, non-overlapping secondary ranges for pods and services. Proper sizing prevents IP exhaustion, a common and disruptive issue in large clusters.
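A sketch of the three-range subnet, sized for roughly 1,000 nodes (GKE assigns each node a /24 of Pod IPs by default); names and CIDRs are assumptions:

```python
from google.cloud import compute_v1

PROJECT, REGION = "my-project", "us-central1"  # hypothetical

# Primary range for nodes plus two named secondary ranges for Pods and Services.
subnet = compute_v1.Subnetwork(
    name="gke-subnet",
    network=f"projects/{PROJECT}/global/networks/prod-vpc",
    ip_cidr_range="172.16.16.0/22",  # nodes
    secondary_ip_ranges=[
        # RFC 6598 space for Pods, per the addressing plan above.
        compute_v1.SubnetworkSecondaryRange(range_name="pods", ip_cidr_range="100.64.0.0/14"),
        compute_v1.SubnetworkSecondaryRange(range_name="services", ip_cidr_range="100.68.0.0/20"),
    ],
)
compute_v1.SubnetworksClient().insert(
    project=PROJECT, region=REGION, subnetwork_resource=subnet
).result()
```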
VMs with no external IPs need to access Google Cloud APIs (e.g., Cloud Storage, BigQuery).
→Enable Private Google Access on the subnet. Optionally, configure a private DNS zone that resolves `*.googleapis.com` to `restricted.googleapis.com` (199.36.153.4/30) to enforce VPC Service Controls (VPC-SC).
Why: Routes traffic to Google APIs over Google's internal network without requiring public IPs on VMs. Using `restricted.googleapis.com` adds a layer of data exfiltration protection.
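Enabling Private Google Access on an existing subnet, sketched with the same client (the subnet name is a placeholder; the restricted-VIP DNS zone is omitted here):

```python
from google.cloud import compute_v1

PROJECT, REGION, SUBNET = "my-project", "us-central1", "prod-subnet"  # hypothetical

# Flip Private Google Access on; VMs with only internal IPs can then reach Google APIs.
req = compute_v1.SubnetworksSetPrivateIpGoogleAccessRequest(private_ip_google_access=True)
compute_v1.SubnetworksClient().set_private_ip_google_access(
    project=PROJECT, region=REGION, subnetwork=SUBNET,
    subnetworks_set_private_ip_google_access_request_resource=req,
).result()
```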
Provide private access to a service in your VPC for consumers (partners, other business units) whose VPCs have overlapping IP ranges.
→Publish the service (via an Internal Load Balancer) using a Private Service Connect (PSC) service attachment. Consumers create a PSC endpoint in their VPC with an IP from their own range.
Why: PSC decouples producer and consumer networks, using NAT to handle overlapping IPs. It provides secure, service-level access, not full network connectivity like VPC peering.
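A producer-side sketch of the PSC service attachment; the ILB forwarding rule, the NAT subnet (purpose=PRIVATE_SERVICE_CONNECT), and the consumer project are assumed to already exist:

```python
from google.cloud import compute_v1

PROJECT, REGION = "producer-project", "us-central1"  # hypothetical

attachment = compute_v1.ServiceAttachment(
    name="billing-api-attachment",
    # The internal load balancer's forwarding rule being published.
    target_service=f"projects/{PROJECT}/regions/{REGION}/forwardingRules/billing-ilb",
    connection_preference="ACCEPT_MANUAL",
    # NAT subnet that hides producer addressing from consumers.
    nat_subnets=[f"projects/{PROJECT}/regions/{REGION}/subnetworks/psc-nat-subnet"],
    consumer_accept_lists=[
        compute_v1.ServiceAttachmentConsumerProjectLimit(
            project_id_or_num="consumer-project", connection_limit=10
        )
    ],
)
compute_v1.ServiceAttachmentsClient().insert(
    project=PROJECT, region=REGION, service_attachment_resource=attachment
).result()
```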
Connect a large number (50+) of VPCs and/or on-prem sites in a hub-and-spoke topology for centralized management and connectivity.
→Use Network Connectivity Center. Configure the hub and attach VPCs as VPC spokes and on-prem connections (VPN/Interconnect) as hybrid spokes.
Why: NCC is Google's managed solution for large-scale hub-and-spoke topologies, simplifying route management and scaling beyond the default limit of 25 peering connections per VPC network.
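A sketch assuming the google-cloud-network-connectivity client; hub/spoke IDs are placeholders, and field names should be verified against the installed library version:

```python
from google.cloud import networkconnectivity_v1 as ncc

PROJECT = "net-host-project"  # hypothetical
client = ncc.HubServiceClient()
parent = f"projects/{PROJECT}/locations/global"

# One hub anchors the whole topology.
hub = client.create_hub(parent=parent, hub_id="corp-hub", hub=ncc.Hub()).result()

# Attach a VPC as a spoke; hybrid spokes would instead reference VPN tunnels
# or Interconnect VLAN attachments.
spoke = ncc.Spoke(
    hub=hub.name,
    linked_vpc_network=ncc.LinkedVpcNetwork(
        uri=f"projects/{PROJECT}/global/networks/prod-vpc"
    ),
)
client.create_spoke(parent=parent, spoke_id="prod-vpc-spoke", spoke=spoke).result()
```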
Deploy a GKE cluster where nodes and the control plane have no public IP addresses for enhanced security.
→Create a Private GKE cluster. This assigns only internal IPs to nodes and creates a private endpoint for the control plane. Configure authorized networks to restrict control plane access.
Why: A private cluster removes the control plane and nodes from the public internet, significantly reducing the attack surface. All management and workload traffic remains on the private network.
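A sketch with the google-cloud-container client, reusing the subnet and secondary-range names assumed earlier; the authorized CIDR is a placeholder:

```python
from google.cloud import container_v1

PROJECT, LOCATION = "my-project", "us-central1"  # hypothetical

cluster = container_v1.Cluster(
    name="private-cluster",
    network="prod-vpc",
    subnetwork="gke-subnet",
    node_pools=[container_v1.NodePool(name="default-pool", initial_node_count=3)],
    # VPC-native: Pods and Services draw from the named secondary ranges.
    ip_allocation_policy=container_v1.IPAllocationPolicy(
        use_ip_aliases=True,
        cluster_secondary_range_name="pods",
        services_secondary_range_name="services",
    ),
    # No public IPs on nodes and no public control-plane endpoint.
    private_cluster_config=container_v1.PrivateClusterConfig(
        enable_private_nodes=True,
        enable_private_endpoint=True,
        master_ipv4_cidr_block="172.16.31.0/28",
    ),
    # Only these source CIDRs may reach the control plane.
    master_authorized_networks_config=container_v1.MasterAuthorizedNetworksConfig(
        enabled=True,
        cidr_blocks=[container_v1.MasterAuthorizedNetworksConfig.CidrBlock(
            display_name="on-prem", cidr_block="10.0.0.0/8"
        )],
    ),
)
container_v1.ClusterManagerClient().create_cluster(
    parent=f"projects/{PROJECT}/locations/{LOCATION}", cluster=cluster
)
```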
Serverless workloads (Cloud Run, Cloud Functions) need to access resources (e.g., Cloud SQL, Memorystore) inside a VPC.
→Create a Serverless VPC Access connector in the target VPC. Configure the serverless service to use this connector for egress traffic.
Why: The connector acts as a proxy, allowing serverless services (which run in a Google-managed environment) to send traffic into a customer-managed VPC using internal IPs.
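A sketch assuming the google-cloud-vpc-access client; the /28 must be an unused range inside the target VPC, and all names are placeholders:

```python
from google.cloud import vpcaccess_v1

PROJECT, REGION = "my-project", "us-central1"  # hypothetical

# The connector gets its own dedicated, unused /28 in the VPC.
connector = vpcaccess_v1.Connector(
    network="prod-vpc",
    ip_cidr_range="172.16.32.0/28",
    min_instances=2,
    max_instances=4,
)
vpcaccess_v1.VpcAccessServiceClient().create_connector(
    parent=f"projects/{PROJECT}/locations/{REGION}",
    connector_id="run-to-vpc",
    connector=connector,
).result()
# The Cloud Run service then routes egress through it,
# e.g. `gcloud run deploy ... --vpc-connector run-to-vpc`.
```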
An application (e.g., HPC, financial trading) requires the absolute lowest network latency between a group of VMs.
→Create a compact placement policy and apply it to the VMs. Pair it with machine types that support per-VM Tier_1 networking for higher bandwidth.
Why: Collocating VMs on the same or adjacent racks minimizes network hops and physical distance, significantly reducing latency compared to standard VM placement.
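A sketch of the placement policy and the VM fields that reference it; machine type and counts are assumptions, and disks/NICs are omitted:

```python
from google.cloud import compute_v1

PROJECT, REGION = "my-project", "us-central1"  # hypothetical

# Compact placement: ask the scheduler to pack the group physically close together.
policy = compute_v1.ResourcePolicy(
    name="hpc-compact",
    group_placement_policy=compute_v1.ResourcePolicyGroupPlacementPolicy(
        collocation="COLLOCATED", vm_count=8
    ),
)
compute_v1.ResourcePoliciesClient().insert(
    project=PROJECT, region=REGION, resource_policy_resource=policy
).result()

# Each VM in the group references the policy and requests Tier_1 egress bandwidth.
instance = compute_v1.Instance(
    name="hpc-node-0",
    machine_type=f"zones/{REGION}-a/machineTypes/c2-standard-60",
    resource_policies=[
        f"projects/{PROJECT}/regions/{REGION}/resourcePolicies/hpc-compact"
    ],
    network_performance_config=compute_v1.NetworkPerformanceConfig(
        total_egress_bandwidth_tier="TIER_1"
    ),
    # Disks and network interfaces omitted; required before inserting the instance.
)
```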
Implement a zero-trust security model for microservices, requiring strong identity, encrypted communication (mTLS), and fine-grained authorization.
→Deploy Anthos Service Mesh. Enable automatic mTLS for all service-to-service communication. Use `AuthorizationPolicy` resources to define allowed communication.
Why: A service mesh decouples security from the underlying network, providing workload identity, transparent mTLS, and L7 authorization, which are core tenets of a zero-trust architecture.
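A sketch applying the two mesh policies with the kubernetes Python client; the namespaces, labels, and service-account principal are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Enforce mTLS for every workload in the namespace (reject plaintext).
peer_auth = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "PeerAuthentication",
    "metadata": {"name": "strict-mtls", "namespace": "backend"},
    "spec": {"mtls": {"mode": "STRICT"}},
}
api.create_namespaced_custom_object(
    group="security.istio.io", version="v1beta1",
    namespace="backend", plural="peerauthentications", body=peer_auth,
)

# Only the frontend service account may call the backend workloads.
authz = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "AuthorizationPolicy",
    "metadata": {"name": "allow-frontend-only", "namespace": "backend"},
    "spec": {
        "selector": {"matchLabels": {"app": "backend"}},
        "action": "ALLOW",
        "rules": [{"from": [{"source": {
            "principals": ["cluster.local/ns/frontend/sa/frontend-sa"]
        }}]}],
    },
}
api.create_namespaced_custom_object(
    group="security.istio.io", version="v1beta1",
    namespace="backend", plural="authorizationpolicies", body=authz,
)
```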