What Counts as Evidence in CMMC? Real Examples for L1 and L2

By PRAETORSEC | Aug 31, 2025 | Evidence & Documentation


  • Evidence is the backbone of CMMC. Policies and claims aren’t enough—assessors want objective proof that each requirement is implemented and operating as intended (think configs, logs, demos, and records).
  • Your core documents, the SSP and POA&M, anchor your evidence story; note that Level 1 does not allow POA&Ms to achieve status (per the rule).
  • Expect assessors to examine, interview, and test: documentation + artifacts + live or screen‑share demonstrations.
  • Build a living evidence register mapped to L1/L2 practices and 800‑171A objectives; automate collection where possible.
  • Examples below show what “good evidence” looks like for common L1 and L2 controls.

    Why evidence matters in CMMC

    CMMC isn’t a paperwork exercise; it’s proof. To be scored MET, you must demonstrate the practice exists and functions—not just that a policy says it should. The CMMC Assessment Guides (especially Level 2) stress producing sufficient evidence per requirement and that final status hinges on verified implementation.

    “Objective evidence” is key: it’s observable, testable, and verifiable—for example, a GPO export showing your lockout policy, a SIEM report of log reviews, or a live demo of MFA enforcement. Contrast that with subjective evidence (e.g., “we train users annually”) that lacks proof.

    Assessments use NIST SP 800‑171A’s approach—Examine, Interview, Test—to validate that what’s written in your SSP reflects reality in your environment.

    Bottom line: If an assessor can’t see it, test it, or trace it—assume it doesn’t count.


    What assessors expect to see

    At a practical level, assessors look for three categories of proof:

    1. Documentation — policies, procedures, plans, SSP, POA&M
    2. Artifacts — configs, logs, screenshots, training records, tickets
    3. Demonstrations — screen‑share or on‑site observation of controls “in action”

    This aligns with established evidence collection approaches and the 800‑171A methods (Examine / Interview / Test). Veteran assessors also emphasize live demonstration and staff interviews to reach higher confidence, not just static screenshots.

    Pro‑tip: Pre‑identify who owns each control and can demonstrate it live (e.g., “Bill shows AD role mappings for AC controls”).

    Core evidence documents you must have (SSP, POA&M, policies)

    System Security Plan (SSP).

    Your SSP is the single source of truth describing your system boundary, roles, data flows, and how each practice is implemented. It should reference—not replicate—detailed procedures and technical configurations.

    Plan of Action & Milestones (POA&M).

    Your POA&M tracks gaps, risks, milestones, owners, and due dates. Note: For Level 1 self‑assessments, a POA&M is not permitted to achieve status; for other situations, POA&Ms are limited to select requirements with conditions defined in the rule. DoD’s official CMMC resources link all current scoping and assessment guidance.

    Policies & procedures.

    Assessors expect written policies and supporting procedures for families like Access Control, Incident Response, Risk Assessment, and more—plus evidence you actually follow them.

    Real evidence examples for Level 1 (L1)

    L1 covers 15 practices focused on FCI protection (FAR 52.204‑21). Evidence typically shows identity controls, media protection, and boundary protections are in place. Below are example artifacts and demos assessors commonly accept.

    • AC.L1‑3.1.1 – Limit system access to authorized users/devices.
      Examine: Active Directory (AD) user list; device inventory mapped to scope.
      Test: Screen‑share showing an unauthorized login attempt is denied; show device join approvals.
    • AC.L1‑3.1.2 – Limit users to permitted transactions/functions.
      Examine: AD groups and role definitions.
      Test: A non‑privileged account attempting a privileged function fails; screenshot of RBAC enforcement.
    • AC.L1‑3.1.20 / 3.1.22 – Control external connections & publicly posted info.
      Examine: List of external systems and approvals; monthly reviews of public sites to confirm no FCI exposure; ticket or sign‑off records.
    • IA.L1‑3.5.1 / 3.5.2 – Identify/authenticate users and devices.
      Examine: GPO requiring passwords/credentials; device join policy.
      Test: Connection attempt from an unknown device is blocked.
    • MP.L1‑3.8.3 – Sanitize/destroy media before disposal/reuse.
      Examine: Media sanitization policy; chain‑of‑custody records; vendor certificate of destruction.
      Test: Demonstrate the sanitization workflow.
    • PE.L1‑3.10.x – Physical access limits, visitor escort/logs.
      Examine: Badge logs, visitor logbook, camera footage (as applicable).
      Test: Show badge entitlements revoked upon termination.
    • SC.L1‑3.13.1 – Monitor/control external communications.
      Examine: Network diagram; firewall ACLs limiting inbound/outbound traffic; ISP edge configs.
      Test: Show a blocked port/deny rule in action.

    These are illustrative examples; always map evidence to your scoped systems and procedures.

    Real evidence examples for Level 2 (L2)

    L2 aligns with NIST SP 800‑171’s 110 requirements for CUI and is assessed against 171A objectives (assessors will often request proof down to the objective level). Expect a mix of documentation, artifacts, and demonstrations across domains.

    Below are practical, assessor‑friendly evidence patterns (choose those that fit your environment).

    Access Control (AC)

    • Least privilege (AC.L2‑3.1.5).
      Artifacts: RBAC matrix; AD group membership exports; privileged access request/approval tickets.
      Demos: Show a standard user cannot execute admin functions; PAM workflow for elevation.
    • Session lock/termination (AC.L2‑3.1.10 / 3.1.11).
      Artifacts: GPO settings for lock after X minutes; device baseline.
      Demos: Screen‑share showing lock triggers and termination at configured thresholds.
    • Control CUI flow (AC.L2‑3.1.3).
      Artifacts: Data‑flow diagrams; VLAN/segment list; firewall rules limiting CUI egress; DLP policies.
      Demos: Show policy blocking unapproved destinations for CUI data.

    Identification & Authentication (IA)

    • MFA enforcement (IA.L2‑3.5.3).
      Artifacts: IdP policy exports; conditional access rules; logs showing MFA challenges.
      Demos: Live sign‑in flow triggering MFA on privileged roles/remote access.
    • Password complexity & reuse (IA.L2‑3.5.7 / 3.5.8).
      Artifacts: GPO exports with complexity/age/reuse; password vault policy for service accounts.
      Demos: Attempt to set a non‑compliant password fails.
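
    To make the “non‑compliant password fails” demo concrete, here is a minimal Python sketch of the kind of complexity and reuse check a GPO enforces. The thresholds (14‑character minimum, 3‑of‑4 character classes, hash‑based history) are illustrative assumptions, not values mandated by CMMC or NIST SP 800‑171; mirror your actual GPO export.

```python
import hashlib
import re

# Illustrative thresholds only: mirror these to your actual GPO export.
# They are assumptions for this sketch, not values mandated by the CMMC rule.
MIN_LENGTH = 14
CLASSES = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]

def violations(candidate: str, prior_hashes: set[str]) -> list[str]:
    """Return policy violations for a candidate password (empty list = compliant)."""
    found = []
    if len(candidate) < MIN_LENGTH:
        found.append(f"shorter than {MIN_LENGTH} characters")
    if sum(bool(re.search(c, candidate)) for c in CLASSES) < 3:
        found.append("fewer than 3 of 4 character classes")
    if hashlib.sha256(candidate.encode()).hexdigest() in prior_hashes:
        found.append("reuses a password from history")
    return found

if __name__ == "__main__":
    history = {hashlib.sha256(b"Summer2024!Summer2024!").hexdigest()}
    for pw in ("short1!", "Summer2024!Summer2024!", "Correct-Horse-Battery-9"):
        found = violations(pw, history)
        print(pw, "->", "; ".join(found) if found else "compliant")
```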

    Audit & Accountability (AU)

    • System auditing enabled (AU.L2‑3.3.1) + log review (AU.L2‑3.3.3).
      Artifacts: SIEM dashboards; log retention configs (e.g., 90‑day online, 6‑month archive); tickets evidencing weekly/monthly review.
      Demos: Show a query trace from event to review note; alerting on audit failures (AU.L2‑3.3.4).
    • Time synchronization (AU.L2‑3.3.7).
      Artifacts: NTP configuration; authoritative time source settings.
      Demos: Show consistent timestamps across systems.
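
    A lightweight way to capture the time‑sync artifact is to compare each host’s clock against an authoritative NTP source. The sketch below assumes the third‑party ntplib package (pip install ntplib); the source host and the 0.5‑second tolerance are placeholders, so use the time source and threshold named in your SSP.

```python
# Requires the third-party "ntplib" package (pip install ntplib).
import ntplib

AUTHORITATIVE_SOURCE = "time.nist.gov"  # placeholder: use the source in your SSP
MAX_OFFSET_SECONDS = 0.5                # placeholder tolerance, not a mandated value

def clock_offset() -> float:
    """Return this host's clock offset (in seconds) versus the NTP source."""
    response = ntplib.NTPClient().request(AUTHORITATIVE_SOURCE, version=3)
    return response.offset

if __name__ == "__main__":
    offset = clock_offset()
    status = "OK" if abs(offset) <= MAX_OFFSET_SECONDS else "DRIFT"
    print(f"{AUTHORITATIVE_SOURCE}: offset={offset:+.3f}s [{status}]")
```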

    Configuration Management (CM)

    • Baseline configurations (CM.L2‑3.4.1) & change control (CM.L2‑3.4.3).
      Artifacts: Secure baseline documents; configuration drift reports; CAB tickets; code repo approvals for IaC.
      Demos: Walk through a recent change with impact analysis (CM.L2‑3.4.4).

    Incident Response (IR)

    • Plan, reporting, and testing (IR.L2‑3.6.1/2/3).
      Artifacts: IR plan; after‑action reports; evidence of annual tabletop or functional tests; incident tickets with timelines.
      Demos: Show the reporting channel (e.g., hotline/email) and escalation evidence.

    Risk Assessment (RA)

    • Periodic risk assessments & vuln management (RA.L2‑3.11.x).
      Artifacts: Risk register; scan results; remediation SLAs; exception approvals.
      Demos: Show vulnerability lifecycle from detection → remediation verification.

    System & Communications Protection (SC)

    • Boundary protection & encryption in transit (SC.L2‑3.13.1 / 3.13.8).
      Artifacts: Firewall/IDS/IPS configs; TLS settings; VPN configs; split‑tunneling disabled for CUI.
      Demos: Packet capture or policy test proving TLS and blocked insecure protocols.
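
    One repeatable way to produce the encryption‑in‑transit artifact is a script that records the negotiated TLS version and cipher for each in‑scope endpoint, refusing anything below TLS 1.2 so a downgrade surfaces as an error. A minimal Python sketch; the target host is a placeholder for your own scoped endpoints.

```python
import socket
import ssl

# Placeholder target: list the CUI-relevant endpoints from your own scope.
TARGETS = [("example.com", 443)]

def negotiated_tls(host: str, port: int) -> tuple[str, str]:
    """Connect and report the negotiated TLS version and cipher suite."""
    context = ssl.create_default_context()
    # Refuse anything older than TLS 1.2 so a downgrade shows up as an error.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()[0]

if __name__ == "__main__":
    for host, port in TARGETS:
        version, cipher = negotiated_tls(host, port)
        print(f"{host}:{port} -> {version}, {cipher}")
```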

    System & Information Integrity (SI)

    • Patch & anti‑malware (SI.L2‑3.14.1/2/4/5).
      Artifacts: Patch compliance dashboards; AV/EDR policy exports; detection/quarantine logs.
      Demos: Show blocked malware event and alert routing to SOC.

    Assessor tip: Strong evidence threads tie policy → procedure → control owner → artifact → demonstration. Interview readiness matters as much as artifacts.

    Common evidence mistakes to avoid

    1. Screenshots only. Static images can be spoofed or out‑of‑date; pair them with exports, logs, and live demos.
    2. No timestamps or context. Evidence without dates, system names, or scope labels invites doubt.
    3. Stale documents. Old policies or baselines create credibility gaps—assessors will notice.
    4. Scattered storage. Evidence spread across email/drive folders slows assessments and increases errors—centralize it.
    5. Policy–practice mismatch. SSP says one thing; systems do another—ensure alignment with live demos.
    6. Misusing POA&Ms at L1. You can’t rely on a POA&M to achieve Level‑1 status; rules strictly limit POA&M usage.

    Best practices: Build a high‑confidence evidence register

    Think of your evidence register as a catalog that maps each required practice (and, for L2, each assessment objective) to specific proof, owners, and refresh cycles.

    Structure the register by control → objective → proof.

    • Columns to include: Control ID, Objective ID (for L2), Evidence Type (doc/artifact/demo), System/Tool, Owner, Location/Link, Last Updated, Next Refresh, Cross‑reference (reused evidence).
    • This mirrors the Examine/Interview/Test paradigm, helping you show depth.
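
    If you maintain the register in code or export it from a GRC tool, the columns above map naturally to a small schema. A minimal Python sketch; field names follow the column list, and the sample values are hypothetical.

```python
import csv
from dataclasses import asdict, dataclass, fields

@dataclass
class EvidenceEntry:
    """One row of the evidence register; fields mirror the columns above."""
    control_id: str       # e.g., "IA.L2-3.5.3"
    objective_id: str     # 800-171A objective (L2); leave blank for L1
    evidence_type: str    # "doc" | "artifact" | "demo"
    system_tool: str
    owner: str
    location_link: str
    last_updated: str     # ISO date, e.g., "2025-08-01"
    next_refresh: str
    cross_reference: str  # other controls reusing this evidence

def write_register(path: str, entries: list[EvidenceEntry]) -> None:
    """Write the register as CSV with a header row."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[x.name for x in fields(EvidenceEntry)])
        writer.writeheader()
        writer.writerows(asdict(e) for e in entries)

if __name__ == "__main__":
    write_register("evidence_register.csv", [
        EvidenceEntry("IA.L2-3.5.3", "3.5.3[a]", "artifact", "IdP policy export",
                      "j.doe", "https://grc.example/mfa-export",
                      "2025-08-01", "2025-11-01", "AC.L2-3.1.12"),
    ])
```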

    Centralize and secure storage.

    • Use a single, access‑controlled repository (DMS/GRC platform) with versioning and immutable audit trails.
    • Ensure least‑privilege access—auditors get a read‑only “data room” view.

    Automate where possible.

    • Pull evidence directly from IdP, EDR, SIEM, MDM, and ticketing systems; schedule exports and snapshots.
    • Automation keeps artifacts current and reduces last‑minute scrambles.
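
    As one collection pattern, the sketch below pulls a JSON export from a hypothetical IdP endpoint, saves it with a UTC timestamp in the filename, and appends a SHA‑256 to a manifest so artifacts stay tamper‑evident. The URL, token, and folder layout are placeholders; substitute your tool’s real export API.

```python
import hashlib
import json
import urllib.request
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical placeholders: point these at your IdP/SIEM/MDM export API.
EXPORT_URL = "https://idp.example.com/api/policies/mfa/export"
API_TOKEN = "REPLACE_ME"
OUT_DIR = Path("evidence/ia-3.5.3")

def snapshot() -> None:
    """Pull an export, timestamp it, and record its SHA-256 in a manifest."""
    req = urllib.request.Request(
        EXPORT_URL, headers={"Authorization": f"Bearer {API_TOKEN}"})
    body = urllib.request.urlopen(req, timeout=30).read()
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    artifact = OUT_DIR / f"mfa-policy-{stamp}.json"
    artifact.write_bytes(body)
    record = {"file": artifact.name, "collected_utc": stamp,
              "sha256": hashlib.sha256(body).hexdigest()}
    with open(OUT_DIR / "manifest.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    snapshot()
```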

    Assign clear ownership.

    • Map each control to a named individual (primary) and a backup; include who can demonstrate it live.

    Review on a cadence.

    • Quarterly evidence reviews (or after significant change) to refresh artifacts and update links/owners.
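
    The cadence check itself can be automated: scan the register for rows whose Next Refresh date has passed. A short sketch against the CSV layout assumed in the earlier register example.

```python
import csv
from datetime import date

def overdue_evidence(path: str = "evidence_register.csv") -> list[dict]:
    """Return register rows whose next_refresh date has already passed."""
    today = date.today().isoformat()  # ISO date strings compare chronologically
    with open(path, newline="") as f:
        return [row for row in csv.DictReader(f) if row["next_refresh"] < today]

if __name__ == "__main__":
    for row in overdue_evidence():
        print(f"STALE: {row['control_id']} owned by {row['owner']} "
              f"(was due {row['next_refresh']})")
```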

    Reuse evidence thoughtfully.

    • Many artifacts (e.g., MFA policy exports) support multiple objectives—cross‑reference them in the register.

    Resource: Community and official templates for SSP/POA&M and related documentation are widely available (DoD’s CMMC resources page is a good starting point); they’re a time‑saver when starting your register.

    How to stay assessment‑ready, year‑round

    1) Treat your SSP as a living document.

    Update it when systems, boundaries, or processes change; assessors will compare it to reality.

    2) Run internal mini‑assessments.

    Dry‑run the Examine/Interview/Test flow for a control family each month; log findings and update your POA&M (where permissible).

    3) Use dashboards and alerts.

    Real‑time views of log coverage, MFA enrollment, patch compliance, and access changes provide continuous assurance and early warning.

    4) Prep your demonstration playbook.

    For each family, list who will demo what, on which system, and with which test cases (e.g., “show deny on outbound port 25 from CUI enclave”).
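
    Test cases like the port‑25 example can be scripted so the demo is repeatable and its output saved as an artifact. A minimal sketch; the target host is a placeholder, and a refused, unreachable, or timed‑out connection is treated as proof the deny rule fired.

```python
import socket

# Placeholder target: any external host you expect the firewall to block.
TARGET = ("smtp-test.example.com", 25)

def outbound_blocked(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if the outbound connection is denied (refused or timed out)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # connection succeeded: the deny rule did NOT fire
    except OSError:       # refused, unreachable, or timed out -> blocked
        return True

if __name__ == "__main__":
    host, port = TARGET
    verdict = "DENIED (expected)" if outbound_blocked(host, port) else "ALLOWED (finding!)"
    print(f"outbound {host}:{port} -> {verdict}")
```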

    5) Keep POA&Ms clean and compliant.

    Where allowed, keep entries tight: requirement, risk, corrective action, milestones, owner, dates, and closure evidence—consistent with the CMMC rule and guidance.

    FAQ (People Also Ask)

    What is considered “objective evidence” in CMMC?
    Objective evidence is verifiable proof (e.g., configs, logs, demos, records) that a control exists and works. It’s validated through Examine/Interview/Test per NIST SP 800‑171A methods used in CMMC assessments.

    Can screenshots alone satisfy evidence requirements?
    Rarely. Screenshots help, but assessors favor live demonstrations and system exports tied to scope and timestamps. Over‑reliance on screenshots can lower assessor confidence.

    How often should evidence be updated?
    Quarterly reviews are a practical baseline, with updates after material changes (e.g., tool swaps, major policy updates). Automations can keep high‑volatility evidence current (e.g., log coverage).

    Do I need a POA&M for Level 1?
    No. Level‑1 self‑assessments cannot rely on a POA&M to achieve status. POA&M use is limited and defined by regulation; consult the rule and current DoD guidance.

    What are the must‑have documents for evidence?
    An up‑to‑date SSP, appropriate POA&M (permitted cases), and policies/procedures for key families (AC, IR, RA, etc.), plus artifacts and demos mapped in an evidence register.

    Conclusion

    CMMC evidence isn’t just paperwork—it’s proof. Whether you’re self-attesting at Level 1 or preparing for a Level 2 assessment, your ability to produce objective, mapped, and demonstrable evidence is what earns a MET score. Build your register, automate where possible, and rehearse your demos.
