

SOC 2 Type II Evidence Without Screenshots
Replace ad-hoc screenshots with reproducible, verifiable evidence bundles tied to control IDs across the full SOC 2 Type II review period.
Introduction
SOC 2 Type II audits stall when evidence can’t be reproduced across the full review period—especially in IaC-driven environments where “what existed” matters as much as “what exists.” Screenshots and one-off exports are brittle, hard to validate, and easy to challenge because they don’t prove continuous operation. The fastest path is to treat evidence as verifiable artifacts: deterministic Terraform plan/apply outputs, immutable audit-log digests from AWS CloudTrail, and signed build/deploy attestations from your CI/CD system. This post lays out a control-to-artifact blueprint, export commands, and an auditor re-run playbook that yields an auditor-ready package with minimal back-and-forth.
Quick Take
- Evidence should be reproducible: if an auditor can’t re-run it, it’s weak.
- Map each control to a minimal set of artifacts (Git metadata, CI logs, IaC plan JSON, audit-log query outputs).
- Prefer immutable sources: append-only logs, digest files, and signature verification.
- Package evidence as a deterministic bundle: fixed paths, hashes, and a single index mapping files to control IDs.
- Give auditors a re-run playbook: read-only steps to reproduce, validate chains, and detect tampering.
Evidence Architecture: From “Screenshots” to Deterministic Artifacts
What “reproducible evidence” means in practice
Reproducible evidence has three properties:
1) Deterministic generation: the same inputs yield the same outputs (or explainable diffs).
2) Verifiability: artifacts can be hashed and signature-verified.
3) Traceability: each artifact is tied to a control ID, a time window, and the environment/account.
A practical evidence bundle for Type II should include:
- IaC proofs: Terraform plan/apply logs, terraform show -json output, provider lockfiles, and state backend metadata.
- Audit-log proofs: query outputs and digest validation for AWS CloudTrail (or equivalent cloud audit logs).
- Delivery proofs: CI job logs plus signed build/deploy attestations bound to commit SHAs and environments.
Standardized evidence bundle layout (auditor-friendly)
Use a fixed folder structure so the package is navigable and scriptable:
CODEBLOCK0
The key is that every file is:
- named with a date range or run date
- tied to an environment/account
- referenced in evidence-index.csv
Control-to-Artifact Mapping Blueprint (CC6/CC7/CC8)
Minimal artifacts that carry maximum audit weight
The fastest way to avoid over-collection is to map controls to the fewest artifacts that can be independently verified. Below is a pragmatic blueprint (adapt control IDs to your own control set):
CC6 (Logical Access)
Target: prove access is authorized, enforced, and reviewable. Recommended minimal artifacts:
- Git merge metadata for access policy changes (PR title, approvers, commit SHA).
- CI job logs showing policy tests and deployment.
- Terraform plan JSON proving enforcement (SSO-only, MFA, role boundaries).
- Cloud audit-log query output showing successful/failed auth patterns and admin actions.
Example: capture Git merge metadata (provider-agnostic):
CODEBLOCK1
CC7 (System Operations / Detection)
Target: prove you can detect and investigate security-relevant events. Recommended minimal artifacts:
- Immutable audit-log configuration evidence (retention, write-once controls where available).
- Period-wide queries for auth failures, privilege escalations, and key config changes.
- Digest verification output showing logs weren’t altered.
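The digest-chain idea behind that last artifact can be illustrated with a toy model. This is not CloudTrail's actual digest format (real digest files are signed objects with their own schema); it is a minimal sketch of why an append-only chain makes tampering detectable:

```python
import hashlib
import json

def digest(record: dict) -> str:
    # Canonical JSON so the same logical record always hashes the same way.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def build_chain(log_batches):
    # Append-only chain: each entry embeds the previous entry's digest.
    chain, prev = [], ""
    for batch in log_batches:
        entry = {"events": batch, "previous_digest": prev}
        prev = digest(entry)
        chain.append({"entry": entry, "digest": prev})
    return chain

def verify_chain(chain) -> bool:
    # Re-derive every digest; altering any batch breaks a link.
    prev = ""
    for link in chain:
        if link["entry"]["previous_digest"] != prev:
            return False
        if digest(link["entry"]) != link["digest"]:
            return False
        prev = link["digest"]
    return True

chain = build_chain([["login-ok"], ["login-fail", "role-change"]])
assert verify_chain(chain)                 # intact chain validates
chain[0]["entry"]["events"] = ["tampered"]
assert not verify_chain(chain)             # any edit is detected
```

The evidence artifact is the verification output itself: the command you ran and the captured result showing the chain held for the period.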
CC8 (Change Management)
Target: prove changes are controlled, reviewed, and deployed consistently. Recommended minimal artifacts:
- PR approvals + branch protection evidence (exported settings or IaC config).
- CI pipeline run record (job IDs, timestamps, result, artifact digests).
- Build/deploy attestation tied to commit SHA and environment.
- Terraform plan/apply artifacts for infrastructure changes.
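The binding in the attestation artifact can be checked mechanically. A minimal sketch, assuming a generic attestation record (the field names here are illustrative, not a specific attestation format):

```python
# Fields an attestation must bind for it to count as deploy evidence.
# (Field names are illustrative, not a specific attestation schema.)
REQUIRED_BINDINGS = {"commit_sha", "artifact_digest", "environment", "pipeline_run_id"}

def attestation_binds(attestation: dict, deployment: dict) -> bool:
    # True only if every bound field is present and matches the deployment record.
    return all(
        attestation.get(field) is not None
        and attestation.get(field) == deployment.get(field)
        for field in REQUIRED_BINDINGS
    )

att = {"commit_sha": "abc123", "artifact_digest": "sha256:f00d",
       "environment": "prod", "pipeline_run_id": "run-42"}
assert attestation_binds(att, dict(att))
assert not attestation_binds(att, {**att, "environment": "stage"})
```

A partial match (right commit, wrong environment) should fail closed, as above: an attestation that binds fewer fields is weaker evidence.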
Implementation Snippets + Export Commands (IaC + Audit Logs + Integrity)
Enforce access controls in Terraform (illustrative patterns)
The exact resources vary, but the evidence pattern is consistent: policy-as-code + plan JSON + apply logs.
Example: enforce tag-based ownership and session controls (generic Terraform HCL pattern):
CODEBLOCK2
Capture the high-value artifacts:
CODEBLOCK3
Commit the provider lockfile (.terraform.lock.hcl) to avoid drift from provider changes.
Query and export audit logs (AWS + Azure examples)
For AWS CloudTrail Lake, store the query text and export the output:
CODEBLOCK4
For Azure Monitor Activity Log export (subscription scope example):
CODEBLOCK5
Hash and sign the evidence bundle (tamper detection)
Use a manifest and sign it. This is simple, fast, and high impact.
CODEBLOCK6
Auditor Re-Run Playbook: Deterministic Verification Without Elevated Access
Reproduce Terraform plans in read-only mode
Provide auditors (or internal audit) with a read-only process that produces the same terraform show -json output, given the same commit and lockfile.
Include these steps in rerun-playbook.md:
CODEBLOCK7
Guidance to reduce false diffs:
- Use the same workspace/environment variables.
- Ensure the same backend and state snapshot are referenced (document backend config, not secrets).
- If drift exists, treat it as a finding: the point is to detect it, not hide it.
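The comparison step can be sketched as a normalize-then-diff check; the volatile keys listed here are illustrative examples, not an exhaustive Terraform plan schema:

```python
import json

# Keys that commonly differ between runs without a real change
# (illustrative examples, not an exhaustive Terraform schema).
VOLATILE_KEYS = {"timestamp", "terraform_version", "serial"}

def normalize(plan_json: str) -> str:
    # Drop volatile top-level keys and re-serialize canonically.
    data = json.loads(plan_json)
    stable = {k: v for k, v in data.items() if k not in VOLATILE_KEYS}
    return json.dumps(stable, sort_keys=True)

def drift(original: str, rerun: str) -> bool:
    # True when a re-run plan still differs after normalization: a finding.
    return normalize(original) != normalize(rerun)

a = '{"resource_changes": [], "timestamp": "2026-01-01T00:00:00Z"}'
b = '{"resource_changes": [], "timestamp": "2026-03-01T00:00:00Z"}'
assert not drift(a, b)   # only volatile fields differ: no drift
```

Anything that survives normalization is a real difference and belongs in the exception log, not in the diff noise.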
Validate CloudTrail digest chains and query provenance
Your goal is to show two things:
- the query ran against the right data store and time window
- the log integrity mechanism was validated for the period
Preserve, at minimum:
- the Event Data Store ARN
- the query statement text (version-controlled)
- the query run timestamp and QueryId
- digest verification output (command + captured stdout)
Even when the digest verification mechanism differs by service, preserve the pattern: “command used + output captured + hashed + signed.”
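That pattern, minus the signing step (which the manifest/GPG step above covers), can be sketched as a small capture helper. The command shown is a stand-in, not a real audit-log query:

```python
import hashlib
import json
import subprocess

def capture(command):
    # Run a read-only command; keep the command, its stdout, and a digest
    # of the output so the record can later be verified and signed.
    result = subprocess.run(command, capture_output=True, text=True, check=True)
    return {
        "command": " ".join(command),
        "stdout": result.stdout,
        "sha256": hashlib.sha256(result.stdout.encode()).hexdigest(),
    }

# Stand-in for a real audit-log query command (e.g., an AWS CLI call).
record = capture(["echo", "query-output"])
print(json.dumps(record, indent=2))
```

In practice the resulting record is written into the evidence tree and covered by the signed manifest, so the output and the command that produced it travel together.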
Validate build/deploy attestations and link them to changes
Attestations should bind:
- commit SHA
- build artifact digest (e.g., container image digest)
- environment (dev/stage/prod)
- pipeline run ID
Example attestation verification workflow (generic):
CODEBLOCK8
Normalize JSON output (e.g., jq -S) before hashing to avoid formatting differences producing different hashes.
Skynet Execution Model: Zero-Trace Evidence Bundles in Hours
What “standardized execution” changes for audits
Speed comes from eliminating bespoke evidence collection and making the output format identical across accounts/environments:
- fixed evidence structure
- fixed control-to-artifact mapping template
- fixed export commands and query library
- fixed integrity manifest + signature step
Skynet’s execution engine produces a “zero-trace” bundle: proofs are derived from existing systems-of-record (IaC repos, audit logs, CI/CD) without requiring fragile screenshot workflows. The deliverable is an auditor-ready package with a single index mapping every artifact to a control ID and a re-run playbook auditors can execute.
The single index file auditors actually use
Create an evidence-index.csv with columns like:
- control_id
- artifact_path
- artifact_type
- period_start
- period_end
- source_system
- generation_command
- hash (sha256)
This converts “a folder of files” into a navigable control proof map.
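Generating that index can be automated so the hash column doubles as a tamper check. A minimal sketch, assuming artifacts live under a single evidence root (paths, control IDs, and metadata below are illustrative):

```python
import csv
import hashlib
import io
import tempfile
from pathlib import Path

COLUMNS = ["control_id", "artifact_path", "artifact_type", "period_start",
           "period_end", "source_system", "generation_command", "hash"]

def build_index(root: Path, entries: list) -> str:
    # Render evidence-index.csv; each entry supplies every column except
    # `hash`, which is derived from the file contents at write time.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=COLUMNS)
    writer.writeheader()
    for entry in sorted(entries, key=lambda e: e["artifact_path"]):
        content = (root / entry["artifact_path"]).read_bytes()
        writer.writerow({**entry, "hash": hashlib.sha256(content).hexdigest()})
    return out.getvalue()

# Demo with a throwaway artifact (all values are illustrative).
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "plan.json").write_text('{"resource_changes": []}')
    index = build_index(root, [{
        "control_id": "CC8.1",
        "artifact_path": "plan.json",
        "artifact_type": "terraform-plan",
        "period_start": "2026-01-01",
        "period_end": "2026-06-30",
        "source_system": "terraform",
        "generation_command": "terraform show -json plan.out",
    }])
    print(index.splitlines()[0])  # the CSV header row
```

Writing the index from the same script that hashes the files keeps the hash column honest: a stale or edited artifact no longer matches its row.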
Checklist
- [ ] Define the audit period window and standardize timestamps to UTC in all artifacts.
- [ ] Create a control-to-artifact map for CC6/CC7/CC8 with minimal, high-signal artifacts.
- [ ] Store all query texts (CloudTrail Lake/Azure Activity) as version-controlled files.
- [ ] Generate Terraform plan files and terraform show -json outputs for each sampled date/environment.
- [ ] Capture CI job logs and merge metadata for every infrastructure/security policy change.
- [ ] Export audit-log query results for the full period (monthly/weekly slices as needed).
- [ ] Validate and capture audit-log integrity outputs (digest/immutability verification) for the same period.
- [ ] Normalize and hash key JSON outputs (e.g., jq -S) to avoid formatting drift.
- [ ] Build manifest.sha256 for the entire evidence tree and sign it with GPG.
- [ ] Produce evidence-index.csv mapping every file to control IDs, time window, and generation commands.
- [ ] Include an auditor re-run playbook with read-only reproduction steps and expected outputs.
FAQ
How do we handle infrastructure drift during the Type II period?
Don’t hide drift—surface it. Use period-based plan artifacts and audit-log queries to show what changed, when it changed, and whether the change followed your approval and deployment process. If drift appears in a read-only re-run, document it as an exception with an associated remediation ticket and link that record in the evidence index.
What if auditors ask for “screenshots anyway”?
Offer reproducible exports plus a re-run playbook. When a request is purely about readability, generate a human-readable render from the same source artifact (for example, a summarized view of plan JSON or a filtered audit-log JSON). The key is that any readable view must be derived from the hashed/signed canonical artifacts.
How much evidence is “enough” without over-collecting?
Enough evidence is the smallest set that proves operation across the period and can be independently validated. Start with control-to-artifact mapping, then collect only artifacts that (a) are immutable or integrity-checked, (b) tie directly to change and access events, and (c) can be re-generated or re-queried deterministically. If a file can’t be explained, re-run, and mapped to a control ID, it’s usually noise.
Article written by Yassine Hadji
Cybersecurity Expert at Skynet Consulting
Citation
© 2026 Skynet Consulting. Please cite the source if you reuse excerpts.
Need help securing your infrastructure?
Discover our managed services and let our experts protect your organization.
Contact Us