Reviewing Results and Responses
What it is
After an assessment run completes, each requirement receives a status, confidence score, citations, and AI reasoning. This guide explains how to interpret these results and take appropriate action.
When to use
Use this guide when:
- Reviewing completed assessment results
- Understanding why a requirement was marked PARTIAL or MISSING
- Evaluating the quality of AI assessments
- Deciding what additional evidence to provide
Do not use when:
- Uploading evidence (see Evidence Submission)
- Configuring the platform (see Admin Workflow)
- Running assessments (see Portal User Workflow)
Prerequisites
Before starting, ensure you have:
- A completed assessment run (status: COMPLETED)
- Access to the portal interface
- Understanding of the requirements being assessed
Step-by-step
Step 1: Locate completed runs
- Navigate to the Portal (served at /cm in this repo)
- Look for the run status indicator in the toolbar
- Verify the most recent run shows COMPLETED
For Admins, navigate to the Admin Console (served at /admin in this repo) → Run History to see all runs.
Step 2: Understand assessment statuses
Each requirement receives one of these statuses:
| Status | Meaning | Typical action |
|---|---|---|
| COMPLETE | Evidence fully satisfies the requirement | No action needed |
| PARTIAL | Evidence partially addresses the requirement | Review gaps, consider additional evidence |
| MISSING | No relevant evidence found | Upload evidence or document the gap |
| FAILED | Assessment could not be completed | Check for system errors, retry |
Step 3: Review individual assessments
For each requirement:
- Click the requirement in the left panel
- View the assessment details:
- Status: COMPLETE, PARTIAL, MISSING, or FAILED (see Step 2)
- Confidence: 0.0 to 1.0 score indicating AI certainty
- Citations: Links to specific evidence passages
- Reasoning: AI explanation of the assessment
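If you consume results programmatically, it helps to picture the shape of an assessment record. The sketch below is hypothetical: the field names are illustrative, not the platform's actual schema (consult the API Reference for that).

```typescript
// Hypothetical shape of a single assessment result.
// Field names are illustrative; see the API Reference for the real schema.
type AssessmentStatus = "COMPLETE" | "PARTIAL" | "MISSING" | "FAILED";

interface Citation {
  documentName: string; // e.g. "Access-Control-Policy.pdf"
  location: string;     // page or passage reference, e.g. "pages 1-3"
}

interface Assessment {
  requirementId: string;
  status: AssessmentStatus;
  confidence: number;    // 0.0 to 1.0; AI certainty, not accuracy
  citations: Citation[]; // most relevant passages (may be a sample)
  reasoning: string;     // AI explanation of the assessment logic
}
```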
Step 4: Interpret confidence scores
🧩 The following ranges are guidelines only. Actual thresholds may vary by deployment and AI configuration.
| Score range | Interpretation |
|---|---|
| 0.90 - 1.00 | High confidence—assessment is reliable |
| 0.70 - 0.89 | Moderate confidence—review citations to verify |
| 0.50 - 0.69 | Low confidence—significant uncertainty, check carefully |
| Below 0.50 | Very low confidence—likely insufficient evidence |
TBD: Verify status thresholds in server/lib/assessmentPipeline.ts
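If you want to apply these bands in a script or dashboard, a small helper might look like the following. The cutoffs copy the guideline table above; they are assumptions, not the pipeline's actual thresholds.

```typescript
// Buckets a 0.0-1.0 confidence score into the guideline bands above.
// Cutoffs mirror the table and may not match your deployment's thresholds.
function interpretConfidence(score: number): string {
  if (score >= 0.9) return "High: assessment is reliable";
  if (score >= 0.7) return "Moderate: review citations to verify";
  if (score >= 0.5) return "Low: significant uncertainty, check carefully";
  return "Very low: likely insufficient evidence";
}

console.log(interpretConfidence(0.62)); // "Low: significant uncertainty, check carefully"
```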
Step 5: Examine citations
Citations point to specific passages in your evidence:
- Click a citation to view the source document
- Verify the passage actually supports the requirement
- Note if the citation is:
- Direct: Explicitly addresses the requirement
- Indirect: Related but not explicit
- Tangential: Only loosely connected
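When triaging many citations, recording this classification alongside each one keeps the review auditable. A minimal sketch, reusing the hypothetical Citation shape from Step 3:

```typescript
// Hypothetical reviewer annotation; not part of the platform's data model.
type CitationRelevance = "direct" | "indirect" | "tangential";

interface CitationReview {
  citation: Citation;           // Citation from the Step 3 sketch
  relevance: CitationRelevance;
  note?: string;                // e.g. "mentions reviews but not the schedule"
}
```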
Step 6: Read the AI reasoning
The reasoning explains the AI's assessment logic:
- Look for what evidence was considered
- Identify what aspects were satisfied
- Note what was missing or incomplete
- Check for terminology or context issues
Step 7: Review generated tasks
For PARTIAL and MISSING requirements:
- Check the Tasks panel for generated remediation items
- Each task describes what's needed to achieve compliance
- Use tasks as a checklist for evidence collection
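Conceptually, each generated task ties a remediation description to a requirement and the gap that triggered it. A hypothetical shape, for illustration only:

```typescript
// Hypothetical remediation task shape; the real schema may differ.
interface RemediationTask {
  requirementId: string;
  description: string;                // what's needed to achieve compliance
  triggeredBy: "PARTIAL" | "MISSING"; // the gap that generated the task
  done: boolean;
}
```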
Step 8: Take action on gaps
Based on your review:
| Finding | Action |
|---|---|
| Evidence exists but wasn't found | Check document processing; re-upload if needed |
| Evidence is insufficient | Gather additional documentation |
| Evidence uses different terminology | Consider adding a mapping document |
| Requirement doesn't apply | Note as "Not Applicable" for submission |
Example
Scenario: Reviewing assessment results for access management requirements.
Input:
- Completed run with 10 access management requirements
- 3 uploaded evidence documents
Review process:
- Requirement: "Documented access control policy"
  - Status: COMPLETE
  - Confidence: 0.94
  - Citation: Access-Control-Policy.pdf, pages 1-3
  - Reasoning: "Document explicitly states access control principles and procedures..."
  - Action: None needed
- Requirement: "Privileged access management procedures"
  - Status: PARTIAL
  - Confidence: 0.62
  - Citation: Access-Control-Policy.pdf, page 8
  - Reasoning: "Document mentions privileged access but lacks specific procedures for..."
  - Action: Upload dedicated privileged access procedure document
- Requirement: "Quarterly access review schedule"
  - Status: MISSING
  - Confidence: 0.15
  - Citation: None
  - Reasoning: "No evidence of scheduled access reviews found in uploaded documents..."
  - Action: Upload access review schedule or records
Result:
- 7 requirements COMPLETE—no action
- 2 requirements PARTIAL—additional detail needed
- 1 requirement MISSING—new evidence required
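Using the hypothetical Assessment shape from Step 3, a summary like the one above could be computed in a few lines (a sketch, assuming results are available as an array):

```typescript
// Tallies assessments per status, e.g. { COMPLETE: 7, PARTIAL: 2, MISSING: 1, FAILED: 0 }.
function tallyStatuses(assessments: Assessment[]): Record<AssessmentStatus, number> {
  const counts = { COMPLETE: 0, PARTIAL: 0, MISSING: 0, FAILED: 0 };
  for (const a of assessments) {
    counts[a.status] += 1;
  }
  return counts;
}
```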
Troubleshooting
All requirements show MISSING
Symptom: Every requirement has MISSING status.
Causes and fixes:
- No documents uploaded: Check document list
- Documents not READY: Verify processing completed
- Wrong pack/tenant: Confirm you're in the correct context
- Requirements don't match evidence: Review if documents are relevant
Confidence seems too low for good evidence
Symptom: COMPLETE status but very low confidence.
Causes and fixes:
- Evidence is indirect: AI found relevant content but it's not explicit
- Multiple documents overlap: Confidence may be distributed
- Terminology mismatch: Different words for same concepts
Citations point to wrong passages
Symptom: Cited text doesn't seem relevant.
Causes and fixes:
- Semantic similarity matched tangentially: Review full document context
- Chunk boundaries split relevant content: Important text may be in adjacent chunks
- Vector search limitation: Report for AI tuning
Reasoning contradicts status
Symptom: Status says COMPLETE but reasoning describes gaps.
Causes and fixes:
- Threshold edge case: Score was just above the COMPLETE threshold
- Reasoning generation error: Report for review
- Multi-aspect requirement: Some aspects complete, others not
Tasks not generated for MISSING requirements
Symptom: No remediation tasks appear.
Causes and fixes:
- Task generation is async: Wait and refresh
- Task generation failed: Check run status for errors
- Tasks were generated but not visible: Scroll or expand the task panel
Cannot see assessment details
Symptom: Clicking requirement shows no details.
Causes and fixes:
- Run not completed: Wait for COMPLETED status
- Assessment failed for this requirement: Check for FAILED status
- UI not refreshed: Hard refresh the browser
Export doesn't include assessments
Symptom: Exported report has blank assessment sections.
Causes and fixes:
- Run has no assessments: Check that requirements were evaluated
- Export format issue: Try a different format
- Incomplete run: Ensure run status is COMPLETED
Assessment results changed after re-run
Symptom: New run shows different results than previous.
Causes and fixes:
- New documents added: Assessments include all current evidence
- Configuration changed: Different engine settings
- LLM non-determinism: Some variation is expected
Gotchas and edge cases
- Assessments are immutable: Once a run completes, its assessments don't change. New evidence requires a new run.
- Confidence is not accuracy: A 0.95 confidence means the AI is certain about its assessment, not that the assessment is necessarily correct.
- Citations are samples: The AI may have considered more evidence than what appears in citations. Citations show the most relevant passages.
- PARTIAL is subjective: The boundary between PARTIAL and COMPLETE depends on threshold settings. A 0.71 confidence might be PARTIAL in one configuration and COMPLETE in another.
- FAILED is different from MISSING: FAILED means the AI couldn't assess (error), while MISSING means no evidence was found.
- Multi-requirement aggregation: If a single document satisfies multiple requirements, each requirement has its own assessment with potentially different citations to the same document.
- Task suggestions are AI-generated: Tasks provide guidance but may not capture all nuances. Use professional judgment.
Related links
- Portal User Workflow - Complete submission process
- Evidence Submission and Mapping - Improving evidence quality
- Evidence, Requirements, and Assessments - Data model reference
- Runs, Snapshots, and Replay - Understanding run lifecycle
- API Reference - Programmatic access to results
- Admin Workflow - Platform monitoring