

Multi‑Cloud Private Egress Without Breakage
Deterministic outbound paths for Kubernetes and serverless across AWS/GCP/Azure using route controls, symmetric inspection, and policy-as-code guardrails.
Introduction
Multi-cloud “private egress” breaks at the edges: pods find a path around NAT policy, serverless silently uses provider-managed egress, and asymmetric routing causes stateful inspection to fail. The fix is not a single control; it’s a deterministic outbound path enforced at multiple layers: workload policy, node routing, and cloud route/firewall tables. When every default route is forced through approved NAT/proxy/inspection constructs and symmetry is verified continuously, outbound governance becomes enforceable rather than aspirational. This post gives a reference architecture and concrete guardrails you can execute quickly across AWS/GCP/Azure.
Quick Take
- Deterministic egress requires three locks: Kubernetes policy, node/route enforcement, and cloud firewall/route-table invariants.
- Symmetric routing is non-negotiable for stateful inspection; treat route tables as part of the security boundary.
- Serverless egress must be explicitly anchored (VPC/VNet/VPC Connector + private NAT/proxy path) or it will escape via provider-managed egress.
- Flow logs + targeted traceroute/conntrack validation catch “looks private but isn’t” failures before production.
- Policy-as-code should deny direct 0.0.0.0/0 to public gateways and require default routes to approved egress constructs.
Deterministic Egress Reference Architecture (AWS/GCP/Azure)
What “deterministic egress” means in practice
Deterministic egress means every outbound connection from a workload resolves to a single, controlled path that you can reason about and enforce:
- A single (or small set of) approved egress points per environment.
- Explicit next-hops for default routes.
- No alternate internet paths (public IP assignment, implicit provider egress, or bypass routes).
- Symmetry preserved across inspection/NAT so return traffic traverses the same stateful devices.
Cloud primitives to anchor egress
The exact constructs differ, but the pattern is the same:
- AWS: Transit Gateway (hub), NAT Gateway or egress firewall/proxy VPC (spoke), Route Tables, VPC Flow Logs.
- GCP: Cloud Router, Cloud NAT (or proxy), VPC Serverless Connector, VPC Flow Logs.
- Azure: Virtual WAN/Virtual Network Gateway (hub), NAT Gateway or Azure Firewall/proxy, UDR Route Tables, NSG Flow Logs.
- Create an “Egress Hub” per cloud with:
  - Central NAT/proxy/inspection.
  - Dedicated route tables that make the hub the default next-hop.
- Force all workload networks (Kubernetes node subnets, serverless connectors, app subnets) to use the hub as their default route.
- Ensure only the hub has a path to the public internet (if internet egress is allowed at all).
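As a sketch of the route-table invariant, a minimal Terraform fragment for AWS (resource names are hypothetical) gives the workload subnet a single default route to the Transit Gateway fronting the egress hub, and no route to an Internet Gateway:

```hcl
# Hypothetical AWS example: the workload subnet's only default route
# goes to the Transit Gateway that fronts the egress hub.
resource "aws_route_table" "workload_private" {
  vpc_id = aws_vpc.workload.id

  tags = {
    Name = "workload-private-rt"
    # Tag consumed by policy-as-code checks on approved default routes.
    egress-class = "hub-only"
  }
}

resource "aws_route" "default_to_hub" {
  route_table_id         = aws_route_table.workload_private.id
  destination_cidr_block = "0.0.0.0/0"
  transit_gateway_id     = aws_ec2_transit_gateway.egress_hub.id
}
```

Because the table contains no `gateway_id` route to an `igw-*` target, a bypass would require an explicit (and policy-visible) change to this file.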
Avoiding the two common breakpoints
1) “Default route exists, but workloads still bypass.” Causes include public IPs on nodes, direct IGW routes on subnets, or serverless using managed egress.
2) “Traffic reaches inspection, but responses never return.” Causes include asymmetric routes between hub/spokes, multiple NAT points, or per-AZ route inconsistencies.
Kubernetes: Enforce Egress at Multiple Layers (Policy + Routing + Cloud)
Layer 1: Kubernetes egress policy (deny-by-default)
Start with a namespace-scoped deny-egress baseline, then allow only private RFC1918 ranges and specific required destinations.
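One way to express this baseline is a namespaced Calico NetworkPolicy (illustrative; the namespace, DNS allowance, and CIDRs are placeholders to adapt to your cluster):

```yaml
# Illustrative Calico policy: allow egress to DNS and RFC1918, deny everything else.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: restrict-egress-to-private
  namespace: payments        # hypothetical namespace
spec:
  selector: all()
  types:
    - Egress
  egress:
    # Allow DNS so in-cluster resolution keeps working.
    - action: Allow
      protocol: UDP
      destination:
        ports: [53]
    # Allow private ranges (cluster, VPC, on-prem).
    - action: Allow
      destination:
        nets:
          - 10.0.0.0/8
          - 172.16.0.0/12
          - 192.168.0.0/16
    # Everything else (i.e., the public internet) is denied.
    - action: Deny
```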
This blocks direct internet egress from pods while still allowing access to internal services. You then add explicit allows to your egress proxy/NAT VIPs, private endpoints, or approved external IPs.
Layer 2: Node/subnet routing invariants
NetworkPolicy can prevent pod egress, but it does not fix bypass at the node/subnet layer. Enforce:
- No public IPs on Kubernetes nodes.
- Subnet route tables where the default route points to the egress hub (or is absent, if you require fully private).
- No direct routes from workload subnets to public gateways.
Validate these invariants directly against the cloud APIs rather than assuming them from diagrams.
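Two AWS-flavored spot checks (illustrative; the cluster tag and subnet ID are placeholders, and equivalent gcloud/az queries follow the same pattern):

```shell
# Check that no Kubernetes node carries a public IP.
aws ec2 describe-instances \
  --filters "Name=tag:kubernetes.io/cluster/my-cluster,Values=owned" \
  --query 'Reservations[].Instances[].{Id:InstanceId,PublicIp:PublicIpAddress}' \
  --output table

# Check that the node subnet's route table has no route to an Internet Gateway.
# A compliant subnet returns empty route lists.
aws ec2 describe-route-tables \
  --filters "Name=association.subnet-id,Values=subnet-0123456789abcdef0" \
  --query "RouteTables[].Routes[?starts_with(GatewayId || '', 'igw-')]"
```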
Layer 3: Cloud firewall + route table enforcement
Use cloud-native firewall constructs to ensure the network fabric itself blocks bypass even if a team misconfigures Kubernetes policy:
- AWS: Security Groups/NACLs plus route tables for subnets that must not reach an Internet Gateway.
- GCP: VPC firewall rules + routes; keep default route control tight.
- Azure: NSG + UDR to force next-hop.
Serverless: Eliminate Provider-Managed Egress and Pin to Private Paths
The serverless gotcha: “private” config that still exits publicly
Many serverless services will use provider-managed egress unless you explicitly attach them to a private network and give them a private default route to your egress hub. What “pinned” serverless egress looks like:
- AWS: Attach functions/services to a VPC subnet whose default route points to the egress hub (via Transit Gateway or hub VPC routes). Ensure the subnet does not have a route to an Internet Gateway.
- GCP: Use a Serverless VPC Access Connector so egress goes into your VPC; enforce default routing to Cloud NAT/proxy in the hub.
- Azure: Use VNet Integration for App Service/Functions and UDRs to Azure Firewall/NAT Gateway.
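For the AWS case, the pinning can be sketched in Terraform (resource names are hypothetical): the function is attached only to private subnets whose route tables send 0.0.0.0/0 to the hub, never to an Internet Gateway:

```hcl
# Hypothetical AWS example: Lambda attached to private subnets only,
# so all egress follows the subnets' hub-bound default route.
resource "aws_lambda_function" "worker" {
  function_name = "egress-pinned-worker"
  role          = aws_iam_role.worker.arn
  runtime       = "python3.12"
  handler       = "app.handler"
  filename      = "worker.zip"

  vpc_config {
    # These subnets must default-route to the egress hub, not an IGW.
    subnet_ids         = [aws_subnet.private_a.id, aws_subnet.private_b.id]
    security_group_ids = [aws_security_group.worker_egress.id]
  }
}
```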
Practical validation: prove the source IP and path
Your proof should be operational, not diagram-based:
- Confirm the observed egress IP matches the hub NAT/proxy.
- Confirm flow logs show traffic traversing the hub subnet/interface.
Validate the external IP observed from inside a Kubernetes pod, using a test endpoint your egress policy allows.
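A one-shot probe from a throwaway pod (illustrative; checkip.amazonaws.com stands in for whatever IP-echo endpoint you allow):

```shell
# Launch a disposable pod and print the public IP the outside world sees.
kubectl run egress-probe --rm -i --restart=Never \
  --image=curlimages/curl -- \
  curl -s https://checkip.amazonaws.com
```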
If “private egress” is correctly enforced, the returned IP should always match your approved egress IP set.
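The observed IP can then be checked against the approved egress ranges automatically; a minimal helper, assuming hypothetical NAT CIDRs:

```python
import ipaddress

# Hypothetical approved egress ranges; replace with your hub NAT/proxy CIDRs.
APPROVED_EGRESS_CIDRS = [
    ipaddress.ip_network("203.0.113.0/28"),
    ipaddress.ip_network("198.51.100.64/28"),
]

def egress_is_approved(observed_ip: str) -> bool:
    """True when the probe's observed public IP falls inside an approved range."""
    ip = ipaddress.ip_address(observed_ip.strip())
    return any(ip in net for net in APPROVED_EGRESS_CIDRS)

print(egress_is_approved("203.0.113.5"))   # True: inside the hub NAT range
print(egress_is_approved("8.8.8.8"))       # False: a bypass path
```

Wiring this into a scheduled job per cluster/environment turns the check into continuous verification rather than a one-off audit.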
Symmetric Routing Through Inspection: Route Tables, Flow Logs, and Traceroute
Route table hardening patterns
To preserve symmetry:
- Use a single NAT/inspection tier per traffic class (don’t mix per-subnet NAT with central firewall NAT).
- Make the hub the only place where routes to public internet exist.
- Ensure spoke-to-spoke and spoke-to-internet have unambiguous next-hops.
- Keep route tables consistent across AZs to avoid “AZ A works, AZ B breaks.”
Verify symmetry with flow logs and targeted probes
Use cloud flow logging aggressively:
- AWS: VPC Flow Logs on subnets/ENIs in both hub and spoke.
- GCP: VPC Flow Logs on relevant subnets.
- Azure: NSG Flow Logs (and if used, firewall logs) on hub/spoke.
Then verify with controlled probes run from a workload subnet.
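Two probes that surface both failure modes (illustrative; run from a pod or VM in the workload subnet, and substitute an endpoint your policy allows for example.com):

```shell
# Path check: every hop before the first public hop should be hub infrastructure.
traceroute -n -T -p 443 example.com

# Stateful check: repeated TCP connects. Intermittent failures, combined with
# one-directional entries in the hub flow logs, point to asymmetric return paths.
for i in $(seq 1 20); do
  curl -s -o /dev/null -w '%{http_code}\n' --max-time 5 https://example.com
done
```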
Interpretation:
- If traceroute shows unexpected public hops before the hub, you have a bypass route.
- If TCP connect intermittently fails and flow logs show only one direction, you have asymmetry.
Policy as Code: Prevent Drift (Terraform + Sentinel/OPA/Conftest)
Enforce invariants at plan time
The fastest way to stop “private egress” from regressing is to deny unsafe routes and require explicit, named egress constructs. Guardrails to encode:
- Deny any route from a workload subnet whose destination is 0.0.0.0/0 and whose next-hop is a public gateway.
- Require default routes from workload subnets to target approved egress next-hops (TGW attachment, firewall, NAT/proxy).
- Deny public IP assignment on nodes and serverless connectors where prohibited.
A Conftest/OPA policy evaluated against the Terraform plan JSON can deny IGW default routes at plan time.
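An illustrative Rego policy for Conftest, written against AWS resource types (adjust the resource types and next-hop fields per provider):

```rego
package terraform.egress

# Deny any aws_route whose destination is 0.0.0.0/0 and whose
# next-hop is an Internet Gateway (gateway_id starting with "igw-").
deny[msg] {
    rc := input.resource_changes[_]
    rc.type == "aws_route"
    after := rc.change.after
    after.destination_cidr_block == "0.0.0.0/0"
    startswith(after.gateway_id, "igw-")
    msg := sprintf("%s: default route to an Internet Gateway is not allowed", [rc.address])
}
```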
Run the check in CI so any plan that introduces an unsafe route fails the pipeline before apply.
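A typical CI invocation might look like this (file and directory paths are placeholders):

```shell
# Render the plan as JSON, then evaluate it against the policy directory.
terraform plan -out=tfplan.binary
terraform show -json tfplan.binary > tfplan.json
conftest test tfplan.json --policy policy/
```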
Make “approved egress” explicit and auditable
Standardize naming and tagging for:
- Egress hubs.
- NAT/proxy/inspection instances.
- Route tables that are allowed to contain default routes.
Then write policies that reference these identifiers, not ad-hoc IDs. This is how you keep the model enforceable as environments scale.
Checklist
- [ ] Define an “egress hub” per cloud with a single approved NAT/proxy/inspection tier per environment.
- [ ] Ensure Kubernetes nodes have no public IPs and node subnets lack direct routes to public gateways.
- [ ] Apply deny-by-default egress policy in Kubernetes (e.g., Calico) and explicitly allow only required destinations.
- [ ] Lock down creation of public load balancers and public endpoints via RBAC/IAM.
- [ ] Attach serverless workloads to private networks (VPC/VNet/VPC Connector) and pin default routing to the egress hub.
- [ ] Standardize route tables across AZs/regions to avoid inconsistent next-hops.
- [ ] Enable and retain flow logs (VPC Flow Logs, NSG Flow Logs) for hub and spoke networks.
- [ ] Validate deterministic egress IPs from pods and serverless workloads using repeatable probes.
- [ ] Validate symmetric routing by correlating a single connection across spoke and hub flow logs.
- [ ] Add policy-as-code checks (OPA/Conftest or Sentinel) to deny direct 0.0.0.0/0 routes to public gateways.
FAQ
How do we prevent “one namespace” from reintroducing internet egress?
Start with deny-by-default egress and treat exceptions as code: namespace policies are reviewed, versioned, and constrained to approved proxy/NAT destinations. Backstop it with node/subnet route table controls so even misconfigured policies can’t create a direct path to the internet.
What’s the fastest way to detect asymmetric routing in production?
Pick a single failing flow (same src/dst/port/protocol), then correlate it across hub and spoke flow logs. If you see only one direction at the hub or the return path hits a different NAT/inspection tier, you have symmetry drift in route tables or multiple competing egress constructs.
Can we keep private egress while still allowing specific SaaS endpoints?
Yes—route all outbound traffic through the approved egress point, then allowlist destinations at the proxy/firewall layer (FQDN/IP as supported) and in Kubernetes egress policy where feasible. The key is that the route decision stays deterministic; only the inspection policy varies by destination.
Article written by Yassine Hadji
Cybersecurity Expert at Skynet Consulting
Citation
© 2026 Skynet Consulting. Please cite the source if you reuse excerpts.