

Multi-Cloud Egress You Can Actually Prove
Deterministic multi-cloud egress with Terraform, policy gates, and flow-log evidence you can defend under DORA/NIS2.
Introduction
Multi-cloud egress is a control plane problem, not a diagram problem: routes drift, exceptions accumulate, and “temporary” paths become permanent. Under DORA/NIS2 expectations, you need demonstrable boundary control, rapid containment, and evidence that only approved egress exists—across AWS, Azure, and GCP. The fastest path is standardized execution: enforce deterministic routing via Terraform, block non-compliant diffs at review/apply time, and continuously validate with flow logs and config snapshots. Done correctly, you can prove allowed egress in hours, then keep it locked.
Quick Take
- Deterministic egress starts with opinionated Terraform modules that pin default routes and centralize NAT/firewall.
- Treat “shadow egress” as a build-time defect: fail plans with OPA/Conftest policies before merge/apply.
- Flow logs are your continuous assertion layer: detect new egress points and unexpected destinations/ports.
- Evidence is a pipeline artifact: store policy decisions, route snapshots, and query outputs per environment.
- Standardized execution reduces variance: same controls, same proofs, across AWS/Azure/GCP.
Define a deterministic egress architecture (per cloud) in Terraform
The goal is not “no internet.” The goal is: every path to the internet (and to external suppliers) is explicitly approved, routed through controlled enforcement points, and cannot be bypassed via ad-hoc gateways, peering, VPN, or service endpoints.
Define “controlled egress” as a first-class resource
Create a consistent tagging/labeling contract used across clouds and tooling:
- Controlled egress components (NAT, firewall, route tables, subnets) carry a marker.
- Any default route to the internet must be owned by the module and must reference the controlled component.
- Apply tags such as egress=controlled, owner=network-platform, change_control=pipeline and require them in policy gates.
This turns “intent” into something testable.
AWS: force 0.0.0.0/0 through centralized NAT + egress VPC
Patterns vary (centralized egress VPC, shared services, inspection VPC), but the invariant is: workloads do not attach arbitrary Internet Gateway paths.
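As an illustrative Terraform sketch that pins the default route to a known NAT gateway (resource names such as controlled_egress are placeholders, not a prescribed layout):

```hcl
# Illustrative sketch; names are placeholders for your own module layout.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.workload.id

  tags = {
    egress         = "controlled"
    owner          = "network-platform"
    change_control = "pipeline"
  }
}

# The only default route: via the centrally managed NAT gateway.
resource "aws_route" "default_via_nat" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.controlled_egress.id
}
```

Because the module owns both the route table and its only 0.0.0.0/0 route, any competing default route shows up as a diff the policy gate can reject.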
Azure: UDRs to Azure Firewall and Private Link for PaaS
In Azure, default routing is often where drift hides: a subnet without a UDR, or a route exception that sends traffic to the internet.
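A hedged Terraform sketch of a subnet UDR that forces the default route through Azure Firewall (resource names and the hub firewall reference are assumptions about your topology):

```hcl
# Illustrative sketch; names are placeholders.
resource "azurerm_route_table" "workload" {
  name                = "rt-workload-egress"
  location            = azurerm_resource_group.net.location
  resource_group_name = azurerm_resource_group.net.name

  # Default route points at the hub firewall's private IP, not the Internet.
  route {
    name                   = "default-via-firewall"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = azurerm_firewall.hub.ip_configuration[0].private_ip_address
  }

  tags = {
    egress = "controlled"
  }
}

resource "azurerm_subnet_route_table_association" "workload" {
  subnet_id      = azurerm_subnet.workload.id
  route_table_id = azurerm_route_table.workload.id
}
```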
GCP: Cloud NAT + hierarchical firewall with pinned defaults
In GCP, you typically control internet egress by forcing private subnets to egress via Cloud NAT attached to a Cloud Router, then constraining outbound behavior with firewall policy.
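A minimal Cloud NAT sketch (region and names are placeholders); NAT logging is enabled so translations feed the evidence pipeline:

```hcl
# Illustrative sketch; names and region are placeholders.
resource "google_compute_router" "egress" {
  name    = "rtr-egress"
  region  = "europe-west1"
  network = google_compute_network.workload.id
}

resource "google_compute_router_nat" "controlled" {
  name                               = "nat-controlled-egress"
  router                             = google_compute_router.egress.name
  region                             = google_compute_router.egress.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"

  # Log every NAT translation so flow-log queries have full coverage.
  log_config {
    enable = true
    filter = "ALL"
  }
}
```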
Stop shadow egress at review/apply time with policy gates
Deterministic Terraform is necessary but not sufficient. You also need negative controls: explicit denials for patterns that create bypass paths.
Gate on Terraform plan output (not just code)
Policy should evaluate the terraform show -json plan.out output because it captures module expansions and the real resource graph.
Generate a plan JSON artifact from every change and run Conftest against it.
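Both steps are standard Terraform and Conftest CLI invocations; the policy/ directory is an assumed repository layout:

```shell
# Produce a machine-readable plan for policy evaluation
terraform plan -out=plan.out
terraform show -json plan.out > plan.json

# Evaluate the plan JSON against the egress policy bundle (path is illustrative)
conftest test plan.json --policy policy/
```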
OPA/Conftest: deny AWS default routes to IGW outside the approved module
Policy intent:
- Deny any aws_route where destination_cidr_block == "0.0.0.0/0" and gateway_id starts with "igw-", unless it is created by the approved egress module.
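A sketch of that rule for Conftest (the module.controlled_egress address prefix is an assumption; note that gateway_id can be unknown at plan time for gateways created in the same plan):

```rego
# Illustrative Conftest policy; "main" is Conftest's default namespace.
package main

deny[msg] {
  rc := input.resource_changes[_]
  rc.type == "aws_route"
  rc.change.after.destination_cidr_block == "0.0.0.0/0"
  startswith(rc.change.after.gateway_id, "igw-")
  # Exemption prefix for the approved egress module (assumed name).
  not startswith(rc.address, "module.controlled_egress.")
  msg := sprintf("unapproved IGW default route: %s", [rc.address])
}
```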
OPA/Conftest: deny Azure routes that send traffic directly to Internet
Policy intent:
- Deny any azurerm_route_table route where next_hop_type == "Internet", except in explicitly approved public tiers.
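A corresponding Conftest sketch (the module.public_tier exemption prefix is an assumption about how you name approved public tiers):

```rego
# Illustrative Conftest policy for Azure route tables.
package main

deny[msg] {
  rc := input.resource_changes[_]
  rc.type == "azurerm_route_table"
  route := rc.change.after.route[_]
  route.next_hop_type == "Internet"
  # Exemption prefix for explicitly approved public tiers (assumed name).
  not startswith(rc.address, "module.public_tier.")
  msg := sprintf("%s routes traffic directly to the Internet", [rc.address])
}
```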
Policy gates as evidence artifacts
A gate should emit (and store): the evaluated plan hash, the policy bundle version, and pass/fail decisions with denied resources.
Prove egress continuously with flow logs and targeted queries
Once deterministic routing and gates are in place, you still need runtime verification. Reality diverges due to new peering/VPN paths, service endpoint edge cases, or newly provisioned egress components outside the controlled stack.
AWS: VPC Flow Logs + CloudTrail for new egress points and destinations
Enable AWS VPC Flow Logs on relevant VPCs/subnets and centralize them into Amazon S3 or CloudWatch Logs. Use AWS CloudTrail to correlate route/gateway/attachment changes.
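An Athena-style sketch that enumerates destination IP/port pairs over the last 24 hours; the table and column names follow one common flow-log DDL and will differ with your delivery format:

```sql
-- Athena-style sketch; schema varies by delivery format and table DDL.
SELECT destinationaddress, destinationport, count(*) AS flows
FROM vpc_flow_logs
WHERE action = 'ACCEPT'
  AND from_unixtime(starttime) > current_timestamp - interval '1' day
  -- Crude RFC1918 filter to keep only external destinations
  AND NOT regexp_like(destinationaddress,
        '^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.)')
GROUP BY destinationaddress, destinationport
ORDER BY flows DESC;
```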
Azure: NSG Flow Logs + Log Analytics for outbound-to-public anomalies
With Azure NSG Flow Logs flowing into Azure Log Analytics, query outbound flows to public IPs outside an allowlist.
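A KQL sketch under the Traffic Analytics schema (table and column names vary by workspace setup; the allowlist IPs are placeholders):

```kusto
// Illustrative KQL; AzureNetworkAnalytics_CL schema depends on your setup.
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog" and FlowDirection_s == "O"
| where FlowType_s == "ExternalPublic"
| where DestIP_s !in ("203.0.113.10", "203.0.113.11")  // hypothetical allowlist
| summarize Flows = count() by DestIP_s, DestPort_d
| order by Flows desc
```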
GCP: VPC Flow Logs into BigQuery for first-seen external egress
Export GCP VPC Flow Logs to BigQuery and query for new external destinations.
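A BigQuery sketch for first-seen destinations (project/dataset names are placeholders; fields follow the standard VPC Flow Logs log export; you would additionally filter out internal ranges):

```sql
-- Illustrative BigQuery query; table name is a placeholder.
SELECT
  jsonPayload.connection.dest_ip AS dest_ip,
  MIN(timestamp) AS first_seen
FROM `my-project.flow_logs.compute_googleapis_com_vpc_flows_*`
WHERE jsonPayload.reporter = 'SRC'
GROUP BY dest_ip
HAVING first_seen > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
ORDER BY first_seen DESC;
```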
Operationalize evidence and containment for DORA/NIS2-style scrutiny
This is about building controls that stand up to technical scrutiny and support rapid containment—without hand-waving.
Build an evidence pipeline
Minimum viable evidence set per environment:
- Terraform plan/apply records for controlled egress modules.
- Policy gate outputs (policy version, plan hash, deny messages).
- Scheduled route table/UDR snapshots and diffs.
- Flow-log query outputs for first-seen destinations and anomalous egress.
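To keep these artifacts uniform and linkable, each gate run can emit a small record keyed by the plan hash. This Python sketch shows one possible record shape; the field names are illustrative, not a standard:

```python
import hashlib
import datetime

def gate_record(plan_json: bytes, policy_version: str, denies: list[str]) -> dict:
    """Build one evidence record per policy-gate run (shape is illustrative)."""
    return {
        # Hash ties the decision to the exact plan that was evaluated.
        "plan_sha256": hashlib.sha256(plan_json).hexdigest(),
        "policy_version": policy_version,
        "result": "fail" if denies else "pass",
        "denied_resources": denies,
        "evaluated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

record = gate_record(b'{"resource_changes": []}', "egress-policy-v3", [])
print(record["result"])  # -> pass
```

Stored as an immutable pipeline artifact, a record like this lets you answer "which policy version approved this plan, and when" without re-running anything.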
Make containment a tested, deterministic mode
Containment should be a controlled execution path, not a scramble. Common patterns:
- Switch to a restricted allowlist at the centralized egress enforcement point.
- Temporarily block all but critical supplier endpoints.
- Freeze route changes via stricter policy gates during an incident window.
Run containment through the same gates as any other change.
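An illustrative containment run; the egress_mode variable and the incident policy path are assumptions about how your modules and policy bundles are organized:

```shell
# Plan the restricted-egress mode (variable name is hypothetical)
terraform plan -var="egress_mode=contained" -out=contain.out
terraform show -json contain.out > contain.json

# Same gate, stricter incident-window policy bundle (path is illustrative)
conftest test contain.json --policy policy/incident/

# Apply only after the gate passes; the artifacts above become the evidence
terraform apply contain.out
```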
Checklist
- [ ] Centralize egress per cloud (NAT/firewall) and document approved egress entry/exit points.
- [ ] Pin default routes (0.0.0.0/0) in Terraform modules and mark resources as controlled.
- [ ] Enforce account/subscription/project boundaries so workloads can’t create independent gateways.
- [ ] Export Terraform plan JSON and run Conftest policy tests on every change.
- [ ] Deny bypass patterns (IGW default routes, Azure nextHopType=Internet, unmanaged peering/VPN routes).
- [ ] Enable AWS VPC Flow Logs, Azure NSG Flow Logs, and GCP VPC Flow Logs with centralized retention.
- [ ] Build first-seen destination queries and alert on new external IP/port pairs.
- [ ] Snapshot and diff route tables/UDRs; alert on unauthorized changes.
- [ ] Store gate decisions, plan hashes, and query outputs as immutable pipeline artifacts.
- [ ] Implement and rehearse an egress containment mode with a controlled rollout path.
FAQ
Will this reduce engineering velocity?
If egress is enforced through standardized Terraform modules and policy gates, most teams stop thinking about routing day-to-day. Friction only appears when someone tries to introduce a new egress path or destination—exactly when you want intent capture and traceability.
How do we handle suppliers with changing IP ranges?
Centralize egress through a controlled enforcement point and manage supplier access with a small number of mechanisms (FQDN-aware controls where supported, controlled proxies, or curated IP sets when unavoidable). Backstop with flow-log first-seen detection so changes surface immediately.
What is the fastest way to prove egress this week?
Pick one production-critical workload per cloud: deploy the controlled egress module, enable flow logs, and add a minimal policy bundle that blocks obvious bypass routes. Replicate the same execution pattern across environments to eliminate variance and produce consistent evidence.
Article written by Yassine Hadji
Cybersecurity Expert at Skynet Consulting
Citation
© 2026 Skynet Consulting. Please cite the source if you reuse excerpts.
Need help securing your infrastructure?
Discover our managed services and let our experts protect your organization.
Contact Us