Introduction

The C_AI Platform is an evidence-to-requirements assessment system that helps organizations demonstrate compliance with structured checklists. The platform uses AI-powered analysis to compare uploaded evidence documents against requirements, identify coverage gaps, and generate actionable remediation tasks.

Users upload evidence documents (policies, procedures, technical specifications), and the platform automatically maps this evidence to specific requirements. An AI assessment engine evaluates coverage, identifies gaps, and produces structured outputs.
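
To make the workflow concrete, the sketch below shows roughly what a single per-requirement result might look like in TypeScript. The field names are illustrative assumptions, not the platform's actual schema (the real definitions live in shared/schema.ts); the status values match those documented in the glossary below.

```ts
// Illustrative only: a simplified view of what the assessment engine
// produces for one requirement. Field names are assumptions.
interface AssessmentResultSketch {
  requirementId: string;                                   // which checklist item was evaluated
  status: "COMPLETE" | "PARTIAL" | "MISSING" | "FAILED";   // documented statuses
  citedEvidence: string[];                                 // evidence documents the engine relied on
  gapSummary?: string;                                     // present when coverage is incomplete
  remediationTask?: string;                                // suggested action to close the gap
}
```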

Use cases include: regulatory readiness, subcontractor compliance verification, travel documentation, project audits, and any scenario where evidence must be matched against a defined set of requirements.

Who uses this platform

The platform serves two primary user types:

Role | Description | Interface
Portal User | Submits evidence documents, reviews assessment results, and tracks remediation tasks. Typically a compliance officer, project manager, or subject matter expert. | Portal
Admin | Configures tenants, manages regulatory corpus, imports criteria, monitors system health, and reviews audit logs. Typically a platform operator or engineer. | Admin Console
Current repo routes

In this repository, Portal is served at /cm and Admin Console at /admin. See Roles and Tenancy for details on role types and legacy terminology mapping.

Legacy internal identifiers

Some deployments still use legacy internal identifiers. When they appear in this documentation, they are shown in backticks (e.g., `WALLY_ENVIRONMENT`, `wally-backend`) and are not user-facing names.

How this documentation is organized

Section | What you'll find
Start Here | Platform overview and quickstart guide to complete your first assessment
Guides | Step-by-step instructions for common workflows (uploading evidence, running assessments, reviewing results)
Concepts | Explanations of platform architecture, terminology, and data models
Reference | API documentation and technical specifications
Style | Documentation standards for contributors

5-minute glossary

Before diving in, familiarize yourself with these core concepts:

Tenant

An isolated organization within the platform. All data (documents, runs, assessments) belongs to exactly one tenant. Tenants cannot see each other's data.

(Verified: shared/schema.ts — tenantId on all core tables)

Example: A subcontractor organization submitting compliance evidence is one tenant. A different organization using the same platform is a separate tenant.
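
As a rough illustration of tenant isolation, every core record carries a tenantId and reads are scoped to it. The snippet below is a hedged sketch with invented names, not the platform's actual query layer.

```ts
// Sketch of tenant scoping: every core record carries a tenantId,
// and reads always filter on it. Interface and function names are hypothetical.
interface TenantScoped {
  tenantId: string;
}

interface EvidenceDocument extends TenantScoped {
  id: string;
  filename: string;
}

// A tenant can only ever see its own rows.
function forTenant<T extends TenantScoped>(rows: T[], tenantId: string): T[] {
  return rows.filter((row) => row.tenantId === tenantId);
}
```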

Pack

A named collection of requirements that defines what evidence must be submitted. A pack groups related criteria and requirement items into a coherent compliance framework.

Example: A "Project Compliance Pack" containing 20 criteria and 70 individual requirement items.

Requirement

An individual checklist item that must be satisfied. Requirements are grouped under criteria (categories) within a pack.

Example: "Provide documented access control procedures" is one requirement within an "Access Management" criterion.

Evidence

A document uploaded to demonstrate compliance with requirements. Supported formats: PDF, DOCX, XLSX, CSV, PNG, JPG, ZIP, TXT. The platform extracts text, chunks it for search, and indexes it for AI assessment.

(Verified: shared/schema.ts:172 — ALLOWED_FILE_TYPES)

Example: An "Access Control Policy.pdf" uploaded to satisfy access management requirements.

Run

A single execution of the assessment engine against all requirements in a pack. A run evaluates uploaded evidence against each requirement and produces assessments.

Example: After uploading 10 evidence documents, you start a run. The system evaluates all requirements and produces results.
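
Conceptually, a run is one record tying a pack to a batch of assessments. The sketch below uses invented field names to show that relationship; it is not the real schema.

```ts
// Hypothetical shape of a run record: one execution of the assessment
// engine against every requirement in a pack.
interface Run {
  id: string;
  tenantId: string;
  packId: string;               // the pack whose requirements are evaluated
  startedAt: Date;
  completedAt?: Date;           // unset while the run is still in progress
}
```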

Assessment

The result of evaluating one requirement during a run. Each assessment has a status: COMPLETE, PARTIAL, MISSING, or FAILED.

(Verified: shared/schema.ts:318-322 — AssessmentStatus enum)

Example: Requirement "Documented access control procedures" → Assessment status: COMPLETE.
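
The status values below are the ones verified in shared/schema.ts; the surrounding record shape is an illustrative assumption showing how an assessment links a run to a single requirement.

```ts
// Statuses as documented above (AssessmentStatus in shared/schema.ts).
type AssessmentStatus = "COMPLETE" | "PARTIAL" | "MISSING" | "FAILED";

// Hypothetical record linking a run to one requirement's result.
interface Assessment {
  runId: string;
  requirementId: string;
  status: AssessmentStatus;
  rationale?: string;           // why the engine chose this status
}
```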

Snapshot

A frozen record of a completed run's configuration and results. Snapshots enable deterministic replay—you can reproduce the exact same results by re-running with the same inputs and configuration.

Example: Run #15 completed and captured a snapshot. Months later, you can replay it to verify the original results.
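
Deterministic replay only needs the run's frozen inputs and configuration. This is a hedged sketch of what a snapshot might capture; the actual structure is defined by the platform.

```ts
// Illustrative snapshot: enough frozen state to reproduce a run's results.
// Field names are assumptions, not the actual schema.
interface Snapshot {
  runId: string;
  packVersion: string;                     // requirements as they existed at run time
  evidenceDocumentIds: string[];           // the exact evidence set that was evaluated
  engineConfig: Record<string, unknown>;   // engine settings used for assessment
  results: { requirementId: string; status: string }[];
}

// Replaying means re-running with the same inputs and configuration,
// then comparing the fresh results against the stored ones.
```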

Next steps

  1. Follow the Quickstart to complete your first end-to-end assessment
  2. Read the Portal User Workflow guide for detailed submission instructions
  3. Explore Runs, Snapshots, and Replay to understand the assessment lifecycle
  4. Check the API Reference for programmatic access