What CMMC Assessors Look for Beyond Written Policies

January 09, 2026 · 6 min read

Introduction: Why Policies Alone Are Not Enough

For many defense contractors, CMMC preparation begins and ends with documentation. Policies are drafted, procedures are approved, and compliance binders grow thick with written artifacts intended to demonstrate alignment with requirements.

Yet time and again, organizations entering CMMC Level 2 assessments discover that documentation alone does not equal readiness.

This disconnect exists because CMMC assessments are not designed to validate intent. They are designed to validate execution.

The Department of Defense defines a CMMC assessment as the evaluation of security controls to determine whether those controls are implemented correctly, operating as intended, and producing the desired outcome¹. Written policies help explain how controls should work, but assessors are tasked with confirming how those controls actually function in day-to-day operations.

Understanding this distinction is critical. It explains why many late-stage assessment gaps appear even in organizations that have invested significant time and resources into documentation.


The Legal and Regulatory Basis for Assessment Validation

CMMC is not a best-practice framework or a voluntary standard. It is codified in federal regulation.

Under 32 CFR Part 170, published in the Federal Register, the CMMC Program establishes formal assessment requirements for defense contractors handling Controlled Unclassified Information (CUI)². This regulation clearly defines assessment as an evaluation of control implementation and effectiveness, not merely documentation presence.

The regulation reinforces that:

  • Controls must be implemented

  • Controls must operate as intended

  • Controls must produce measurable outcomes

This regulatory language forms the foundation of assessor behavior and assessment expectations.


The Role of the CMMC Level 2 Assessment Guide

The CMMC Level 2 Assessment Guide, published by the DoD Chief Information Officer, provides the operational framework assessors follow during evaluations¹.

The guide makes clear that assessors validate compliance using multiple methods, including:

  • Examination of evidence

  • Interviews with personnel

  • Observation of system behavior

This multi-method approach ensures assessors are not relying solely on written artifacts. Instead, they are triangulating documentation, evidence, and human behavior to determine whether cybersecurity practices are institutionalized.


Documentation: Necessary, but Not Proof

Documentation plays a critical role in CMMC assessments, but it serves as a reference point, not final validation.

Policies and procedures are expected to:

  • Define responsibilities

  • Describe workflows

  • Establish expectations for security behavior

However, documentation is static by nature. Cybersecurity operations are not.

NIST SP 800-171, which forms the technical basis for CMMC Level 2, requires organizations to implement and maintain security requirements³. The emphasis is on operational execution, not just written intent.

Assessors therefore treat documentation as a starting point for validation, not the conclusion.


Evidence: The Backbone of CMMC Assessment

Evidence is where CMMC assessments are won or lost.

The Assessment Guide emphasizes that assessors examine artifacts to confirm that controls are implemented and operating correctly¹. Evidence must demonstrate that security practices are not theoretical but actively enforced.

Examples of Evidence Assessors Validate

Evidence may include:

  • Access control lists and role assignments

  • System configuration records

  • Audit and security logs

  • Incident response records

  • Training completion records

  • Change management documentation

Critically, assessors are not looking for a single screenshot or a one-time artifact. They are looking for traceable, repeatable evidence that shows controls operating over time.


Traceability Matters More Than Volume

One of the most common readiness gaps is evidence that exists but cannot be clearly tied to a specific requirement.

Assessors expect evidence to:

  • Map directly to a control

  • Show how the control is executed

  • Demonstrate continuity, not a snapshot

This aligns with the assessment methodology defined in NIST SP 800-171A, which establishes that assessors examine artifacts in context, not in isolation⁴.

Evidence without context, ownership, or explanation often raises more questions than it answers.
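As a simple illustration (the requirement numbers below follow NIST SP 800-171 numbering, but the artifact names, owners, and coverage dates are hypothetical), a lightweight traceability register might look like this:

```python
# Illustrative evidence traceability register. Control numbers follow
# NIST SP 800-171; artifact names, owners, and dates are hypothetical.
evidence_register = {
    "3.1.1": {  # limit system access to authorized users
        "artifacts": [
            "ad_group_export_2025-11.csv",
            "quarterly_access_review_2025-Q4.pdf",
        ],
        "owner": "IT Operations Manager",
        "coverage": ("2025-09-01", "2025-11-30"),
    },
    "3.5.3": {  # multifactor authentication
        "artifacts": [
            "mfa_enrollment_report_2025-11.csv",
            "idp_conditional_access_export_2025-11.json",
        ],
        "owner": "Identity Administrator",
        "coverage": ("2025-09-01", "2025-11-30"),
    },
}


def unowned_or_empty(register: dict) -> list[str]:
    """Return control IDs missing an owner or any supporting artifacts."""
    return [
        control
        for control, entry in register.items()
        if not entry["owner"] or not entry["artifacts"]
    ]


print(unowned_or_empty(evidence_register))  # [] when every control is covered
```

Whether this lives in code or a spreadsheet, the point is the same: every artifact should answer which control it supports, who owns it, and what period it covers.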


Interviews: Validating Human Implementation

CMMC assessments formally include interviews as part of control validation¹.

Assessors conduct interviews to:

  • Confirm staff understand applicable policies

  • Validate that processes are followed consistently

  • Identify discrepancies between documentation and reality

Interviews are not adversarial. They are designed to confirm that cybersecurity practices are understood and repeatable.

If two individuals describe the same process differently, assessors may conclude that the control is not consistently implemented, even if documentation exists.

This is one of the clearest examples of why documentation alone is insufficient.


Configuration Validation: Does the System Match the Policy?

Written policies often describe desired security states. Assessors validate whether systems actually reflect those states.

For example:

  • Access control policies should align with actual permissions

  • MFA requirements should be enforced in practice

  • Encryption policies should be reflected in system settings

NIST SP 800-171 requires that safeguards be implemented and maintained to protect CUI³. Assessors may examine system configurations or supporting evidence to verify alignment with documented requirements.

A policy stating “least privilege” is meaningless if user permissions are overly permissive in practice.
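Here is a minimal sketch of that kind of check, assuming a documented list of approved role assignments and an export of actual group memberships; the file names and CSV layout are placeholders, not a prescribed format:

```python
# Minimal drift check: compare documented role assignments against an export
# of actual group memberships. File names and CSV layout are assumptions.
import csv


def load_members(path: str) -> dict[str, set[str]]:
    """Read a CSV with 'group' and 'user' columns into {group: {users}}."""
    members: dict[str, set[str]] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            members.setdefault(row["group"], set()).add(row["user"])
    return members


documented = load_members("approved_role_assignments.csv")  # what the policy says
actual = load_members("ad_group_export.csv")                # what the system shows

for group, users in actual.items():
    extra = users - documented.get(group, set())
    if extra:
        # Accounts holding access the policy never granted: least-privilege drift
        print(f"{group}: unapproved members {sorted(extra)}")
```

The same pattern applies to MFA enrollment or encryption settings: export the actual state, compare it against the documented expectation, and retain the output as evidence.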


Consistency Over Time: A Core Assessment Expectation

CMMC assessments are not point-in-time evaluations.

The CMMC rule published in the Federal Register explicitly states that assessments evaluate whether controls are operating as intended². This inherently requires evidence of consistent execution over time.

Assessors often request:

  • Logs covering weeks or months

  • Records of recurring reviews

  • Evidence from multiple operational cycles

One-time artifacts rarely satisfy this requirement.

Organizations that prepare only for the assessment window often struggle to demonstrate continuity.
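As a rough sketch, assuming log artifacts carry an ISO date in their file names, a simple script can confirm whether collected evidence spans the review period rather than a single collection point:

```python
# Sketch: confirm that collected log evidence spans the period under review,
# not just a single point in time. The date-in-filename format is an assumption.
import re
from datetime import date, timedelta

artifact_names = [
    "audit_log_2025-09-15.json",
    "audit_log_2025-10-15.json",
    "audit_log_2025-11-14.json",
]

dates = sorted(
    date.fromisoformat(m.group())
    for name in artifact_names
    if (m := re.search(r"\d{4}-\d{2}-\d{2}", name))
)

span = dates[-1] - dates[0]
if span < timedelta(days=60):
    print(f"Evidence covers only {span.days} days -- may read as a snapshot.")
else:
    print(f"Evidence spans {span.days} days across {len(dates)} collection points.")
```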


Ownership and Accountability: Who Is Responsible?

Assessors frequently ask a simple but revealing question:

“Who owns this control?”

The CMMC Level 2 Assessment Guide emphasizes accountability as a key component of implementation¹.

Clear ownership demonstrates that:

  • Controls are actively managed

  • Responsibilities are understood

  • Failures will be addressed

Ambiguous ownership often leads to inconsistent execution and weak evidence, both of which increase assessment risk.


Process Execution Under Real Conditions

Assessors evaluate whether processes function under normal operating conditions, not just ideal scenarios.

This includes:

  • Incident response execution

  • Access provisioning and removal

  • Change management

  • Log review and alert handling

NIST guidance emphasizes that controls must be implemented and maintained, not merely described³. Processes that only work during audits or tabletop exercises frequently fail assessment scrutiny.


Alignment Across Documentation, Evidence, and Behavior

Successful assessments demonstrate alignment across three dimensions:

  1. Documentation explains expectations

  2. Evidence proves execution

  3. Interviews confirm understanding

Assessors cross-check these elements. When one contradicts another, gaps emerge.

This integrated validation approach is consistent with both the CMMC Assessment Guide¹ and NIST SP 800-171A⁴.


Common Gaps Identified During Assessments

Based on official assessment methodology, frequent gaps include:

  • Policies that do not reflect current operations

  • Evidence that cannot be traced to specific controls

  • Staff uncertainty during interviews

  • Inconsistent execution across teams

  • Lack of historical records

These gaps usually result from prioritizing documentation over operational validation.


What This Means for CMMC Readiness

True readiness requires more than completed policies.

Organizations preparing for CMMC Level 2 assessments should prioritize:

  • Evidence traceability

  • Staff awareness

  • Ownership clarity

  • Operational consistency

  • Ongoing monitoring

These priorities align directly with assessor expectations established by DoD and NIST guidance.


CMMC Is a Program, Not a Project

The DoD designed CMMC to improve long-term cybersecurity maturity across the defense industrial base.

CMMC is not a one-time certification. It is a sustained compliance posture intended to protect CUI over time⁵.

Organizations that treat compliance as an ongoing program consistently perform better in assessments and maintain stronger security outcomes.


Free CMMC Pre-Assessment Readiness Checklist

If you want to validate whether your implementation, evidence, and operational practices align with what CMMC assessors actually evaluate, we created a practical resource to help.

Download the CMMC Pre-Assessment Readiness Checklist

This checklist helps you:

  • Identify implementation gaps

  • Validate evidence readiness

  • Confirm ownership and accountability

  • Strengthen your assessment posture

Assessors will look beyond policies. Be ready.
