Approve what goes into AI - before you ship
Enkrypt AI Data Risk Audit produces a forwardable sign‑off packet for Security and Legal: what data your AI system uses, what risk it carries, and what’s approved for AI.

What you get
Sign‑off Packet
Executive summary, reviewer checklist, go/no‑go decisions, and open risks

Data Bill of Materials
Every dataset/source feeding training, fine‑tuning, or RAG indexing

Risk Register
Ranked issues with severity, due date, and remediation guidance

Approved / Blocked Registry
Explicit permissioning by intended use (TRAIN vs RAG)
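
As a rough illustration of what the exports contain, a single Approved / Blocked Registry entry could look like the sketch below; the field names are hypothetical, not the actual export schema.

# Illustrative only: hypothetical shape of one Approved / Blocked Registry entry
registry_entry = {
    "source": "support_tickets_2024",            # dataset/source from the Data Bill of Materials
    "intended_use": "RAG",                       # TRAIN or RAG
    "decision": "BLOCKED",                       # go/no-go outcome from the reviewer checklist
    "open_risks": ["PII", "cross-tenant data mixing"],  # carried into the Risk Register
    "severity": "HIGH",                          # ranking used for remediation
    "remediation_due": "2025-09-30",             # due date for the ranked issue
    "reviewed_by": "security-lead",              # versioned approval with reviewer + timestamp
    "reviewed_at": "2025-09-01T14:05:00Z",
}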

Coverage across the data surfaces you actually ship
Security & privacy
- PII/PHI/PCI and high-risk identifiers
- Cross-tenant and customer data mixing risk
- Hidden prompt-injection payloads and data poisoning

Governance & controls
- Conflicts with your AI governance policy (restricted sources/types)
- Evidence readiness: what you can prove to security, legal, and auditors

How it runs
Point the audit at the datasets and sources feeding your AI system. It evaluates them for the intended use (TRAIN vs RAG) against your governance policy and returns the sign‑off packet, DBoM, registries, and risk register described above.

Where it fits
Before training or fine-tuning
Before building or re-indexing a RAG corpus
Before introducing new connectors or regions

What’s different with Enkrypt AI
Enterprise-ready
Role-based review workflow (Security, Legal, Product, ML)
Versioned approvals with reviewer + timestamp (audit trail)
Export-first artifacts (PDF/CSV/JSON) for internal sharing and governance
Get started with the API in minutes
pip install enkryptai-sdk

from enkryptai_sdk import data_risk_client

FOLDER_PATH = "Your enterprise data here"  # replace with the path to the folder you want audited
data_risk_client.scan(FOLDER_PATH)         # kicks off the data risk scan of that folder


Integrations
Ticketing
Jira / ServiceNow for remediation and approvals

Exports
CSV/JSON/PDF for the DBoM, registries, risk register, and sign‑off packet

Release gating
Trigger a re‑audit on dataset or connector changes
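
A minimal release-gating sketch is below: run a scan in CI when datasets or connectors change and fail the build if anything is blocked. It assumes a hypothetical blocked_sources field on the scan result; check the SDK reference for the actual return shape.

# Release-gating sketch (assumption: scan() returns an object exposing a
# hypothetical blocked_sources list; adapt to the real SDK response).
import sys
from enkryptai_sdk import data_risk_client

result = data_risk_client.scan("path/to/changed/datasets")

blocked = getattr(result, "blocked_sources", []) or []
if blocked:
    # Block the release until Security/Legal sign off or the data is remediated.
    print(f"Data Risk Audit blocked {len(blocked)} source(s); see the sign-off packet.")
    sys.exit(1)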

Frequently Asked Questions
Is this data discovery?
No. Data Risk Audit evaluates the datasets you bring for a specific AI system and intended use. It produces approvals/blocks and a sign‑off packet—not an enterprise-wide inventory.
How is this different from Compliance?
Compliance focuses on ongoing controls and evidence across the AI system. Data Risk Audit focuses specifically on data inputs—what’s allowed for TRAIN/RAG/EVAL—before you ship.
How is this different from Red Teaming?
Red Teaming tests model/agent behavior under attack. Data Risk Audit validates that underlying data sources won’t introduce privacy, IP, or governance risk.
How often should we run this?
Run before launch, and re-run whenever datasets, connectors, regions/tenants, or intended use changes.
