Generative AI Security:
The Shared Responsibility Framework

As generative AI (Gen AI) continues its rapid ascent, enterprises grapple with new layers of complexity around safety, security, and compliance. In this paper, we’ll unpack a multi-layered framework that clarifies who “owns” which aspects of AI security and risk, from base-model alignment to real-world deployment.

The Four Layers of Gen AI Responsibility

Generative AI brings new risks and opportunities, making it essential to clearly define responsibilities. Just as cloud providers secure infrastructure while customers manage applications, Gen AI requires its own layered approach. By assigning ownership across four layers, organizations can adopt AI with confidence, ensuring safety, compliance, and measurable value.

  • AI Providers (Layers 1 & 2): Manage the foundation model and APIs, including safe training data, initial alignment, infrastructure hardening, and baseline content filters.

  • AI Consumers (Layers 3 & 4): Govern how AI is applied, with responsibilities for fine-tuning models, building domain-specific guardrails, monitoring outputs, enforcing compliance, and training users.
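
To make the split concrete, the minimal Python sketch below shows a consumer-owned wrapper around a provider API: the provider handles the model and endpoint (Layers 1 & 2), while the calling organization screens prompts and outputs against its own policy (Layers 3 & 4). The function names and blocked-phrase list are illustrative placeholders, not any specific vendor’s SDK.

```python
# Minimal sketch of consumer-owned Layers 3 & 4 wrapped around a provider API.
# call_provider_model() is a hypothetical stand-in for any vendor SDK call.

BLOCKED_PATTERNS = ["wire the funds to", "ignore previous instructions"]  # illustrative only


def call_provider_model(prompt: str) -> str:
    """Placeholder for the provider-owned layers (model + inference API)."""
    return f"[model response to: {prompt}]"


def consumer_guardrail(prompt: str) -> str:
    """Consumer-owned layers: apply domain filters before and after the provider call."""
    # Layer 3: screen the inbound prompt against organization policy.
    lowered = prompt.lower()
    if any(p in lowered for p in BLOCKED_PATTERNS):
        return "Request blocked by organization policy."

    # Layers 1 & 2 (provider): alignment, endpoint security, baseline content filters.
    response = call_provider_model(prompt)

    # Layer 4: monitor the output and withhold anything that violates policy.
    if any(p in response.lower() for p in BLOCKED_PATTERNS):
        return "Response withheld pending human review."
    return response


if __name__ == "__main__":
    print(consumer_guardrail("Summarize our Q3 churn numbers"))
```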


Continuous Feedback Loops: Keeping Everything Aligned

  • Provider: Updates its safety layers and pushes patches to all clients.
  • Developer: Tests and integrates those updates into deployment pipelines.
  • Organization: Works with developers to adjust filters or pause affected endpoints.
  • Users: Report whether customers are noticing inappropriate outcomes.
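
For illustration, the organization’s side of this loop can be partially automated along the lines of the sketch below: flagged outputs are tracked per endpoint, and an endpoint is paused for review once its flag rate crosses a threshold. The threshold, endpoint name, and class structure are assumptions for the example, not part of the framework.

```python
# Minimal sketch of the organization side of the feedback loop:
# track flagged outputs per endpoint and pause an endpoint when the rate spikes.

from collections import defaultdict

FLAG_RATE_THRESHOLD = 0.05  # illustrative: pause if more than 5% of outputs are flagged


class EndpointMonitor:
    def __init__(self):
        self.totals = defaultdict(int)
        self.flags = defaultdict(int)
        self.paused = set()

    def record(self, endpoint: str, flagged: bool) -> None:
        """Called for every model response, e.g. from user reports or output filters."""
        self.totals[endpoint] += 1
        if flagged:
            self.flags[endpoint] += 1
        if self.flag_rate(endpoint) > FLAG_RATE_THRESHOLD:
            self.paused.add(endpoint)  # work with devs to adjust filters before resuming

    def flag_rate(self, endpoint: str) -> float:
        return self.flags[endpoint] / max(self.totals[endpoint], 1)


monitor = EndpointMonitor()
for i in range(100):
    monitor.record("support-chat", flagged=(i % 12 == 0))  # simulated user reports
print("Paused endpoints:", monitor.paused)
```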

9 Questions Every CISO Should Ask Their AI Vendors

  1. Training Data & Alignment: How do you ensure datasets are free from bias, malicious poisoning, or sensitive PII?
  2. Model Security: What defenses are in place against prompt injection, jailbreaking, and adversarial attacks?
  3. API & Infrastructure: How do you secure inference endpoints against DDoS, misuse, and unauthorized access?
  4. Versioning: How do you handle model patching, changelogs, and deprecation schedules?
  5. Data Privacy: How do you prevent sensitive customer or enterprise data from persisting in training or fine-tuning?
  6. Guardrails: Can your model enforce domain-specific filters (e.g., financial advice, HIPAA compliance)?
  7. Agent Security: If the AI has “agency,” what controls exist for API calls, transaction approvals, and kill switches?
  8. Monitoring & Transparency: Do you provide logs, auditability, and alerting for policy violations or anomalies?
  9. Regulatory Compliance: How does your platform support evolving requirements (e.g., EU AI Act, NIST AI RMF, HIPAA, SEC rules)?
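
One practical way to put this checklist to work during vendor due diligence is to track each question, the vendor’s answer, and supporting evidence as a structured record, roughly as sketched below. The field names and record layout are illustrative assumptions, not a standard.

```python
# Minimal sketch: track vendor answers to the nine questions as structured records.
from dataclasses import dataclass, field


@dataclass
class VendorQuestion:
    topic: str
    question: str
    answer: str = ""                                # vendor's written response
    evidence: list = field(default_factory=list)    # e.g. audit reports, documentation links
    satisfactory: bool | None = None                # security team's assessment


CHECKLIST = [
    VendorQuestion("Training Data & Alignment",
                   "How do you ensure datasets are free from bias, poisoning, or PII?"),
    VendorQuestion("Model Security",
                   "What defenses exist against prompt injection and jailbreaking?"),
    # ... the remaining seven questions follow the same pattern
]

open_items = [q.topic for q in CHECKLIST if q.satisfactory is not True]
print("Unresolved topics:", open_items)
```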

Case Studies

These case studies explore real-world scenarios where providers and enterprises share responsibility, highlight key risks and the lessons learned, and show how deliberate guardrails and governance enable AI to deliver measurable impact.
Manufacturing: Predictive Maintenance AI
When a global manufacturer rolled out AI to predict machine failures, the CIO expected efficiency gains. But during testing, engineers discovered that with a cleverly worded prompt, the system could be tricked into suggesting shutdown commands for an entire assembly line.
  • Provider’s Role (Layers 1 & 2): The AI vendor had hardened its base model and secured inference endpoints with DDoS protection. The “plumbing” was solid.
  • Enterprise’s Role (Layers 3 & 4): It was the manufacturer’s job to mask proprietary sensor data, sandbox the AI before production, and add kill switches for high-risk outputs.
👉 Lesson: The provider delivered a resilient foundation, but the enterprise had to implement domain-specific guardrails to ensure an AI experiment couldn’t disrupt production.
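
A kill switch of the kind described above might be implemented as a thin filter between the model and the operator console, as in the hedged sketch below; the high-risk patterns and escalation hook are hypothetical examples, not the manufacturer’s actual controls.

```python
# Minimal sketch of an enterprise-side kill switch for high-risk maintenance suggestions.
import re

# Illustrative patterns for actions that must never be auto-suggested to operators.
HIGH_RISK_PATTERNS = [
    re.compile(r"\bshut\s*down\b.*\b(line|plant|assembly)\b", re.IGNORECASE),
    re.compile(r"\bdisable\b.*\bsafety\b", re.IGNORECASE),
]


def escalate_to_engineer(text: str) -> None:
    """Placeholder hook: route the blocked suggestion to a human reviewer."""
    print("ESCALATED FOR REVIEW:", text)


def filter_suggestion(model_output: str) -> str | None:
    """Return the suggestion if safe; block and escalate if it matches a high-risk action."""
    for pattern in HIGH_RISK_PATTERNS:
        if pattern.search(model_output):
            escalate_to_engineer(model_output)
            return None
    return model_output


print(filter_suggestion("Replace bearing on conveyor 7 within 48 hours"))
print(filter_suggestion("Shut down the assembly line immediately"))
```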
Financial Services: Automated Loan Underwriting
A bank piloted AI to score loan applications. Within weeks, compliance officers flagged inconsistent approvals and opaque explanations. Customers demanded to know why they were denied.
  • Provider’s Role: The LLM vendor maintained bias-reduced training data and published changelogs when updating alignment techniques.
  • Enterprise’s Role: The bank’s CISO enforced encryption for PII, built dashboards to monitor for unfair treatment, and mandated audit logs for every AI-driven decision to satisfy regulators.
👉 Lesson: Providers ensured bias minimization at the base, but it was the enterprise’s responsibility to align AI outputs with regulatory and audit standards.
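
The audit-log mandate could be satisfied with something as simple as an append-only record per decision, roughly as sketched below; the field names, hashing choice, and file destination are assumptions for illustration, not the bank’s actual system.

```python
# Minimal sketch: append-only audit record for every AI-assisted underwriting decision.
import hashlib
import json
from datetime import datetime, timezone


def log_decision(application_id: str, features: dict, score: float,
                 decision: str, explanation: str, path: str = "loan_audit.log") -> None:
    """Write one record per decision; raw inputs are hashed so PII stays out of the log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "application_id": application_id,
        "features_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "model_score": score,
        "decision": decision,
        "explanation": explanation,  # human-readable reason codes for the applicant
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")


log_decision("APP-1042", {"income": 82000, "dti": 0.31}, 0.74,
             "approved", "Debt-to-income ratio and payment history within policy")
```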
Healthcare: Clinical Decision Support
At a large hospital, doctors began using AI to summarize patient histories. During a red-team test, a prompt coaxed the AI into suggesting a treatment plan. That crossed a regulatory line.
  • Provider’s Role: The vendor had filtered training data and embedded safety blocks against overt diagnostic claims.
  • Enterprise’s Role: The hospital encrypted logs containing PHI, masked identifiers before prompts, and made clear in governance policies that AI outputs were “reference only.”
👉 Lesson: Providers set baseline safety rules, but healthcare leaders had to enforce HIPAA compliance and medical practice boundaries at the application layer.
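
Masking identifiers before prompts leave the hospital boundary might look like the sketch below. The patterns shown cover only a few obvious identifiers and are purely illustrative; real de-identification requires far broader coverage and validation.

```python
# Minimal sketch: mask obvious identifiers before patient text is sent in a prompt.
# Real de-identification needs much wider coverage (names, dates, addresses, free text).
import re

MASKS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                 # SSN-like numbers
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),   # phone numbers
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),       # medical record numbers
]


def mask_identifiers(text: str) -> str:
    for pattern, token in MASKS:
        text = pattern.sub(token, text)
    return text


note = "Patient MRN: 448210, callback 555-012-3344, reports chest pain since Tuesday."
print(mask_identifiers(note))
# -> "Patient [MRN], callback [PHONE], reports chest pain since Tuesday."
```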
Retail: AI Customer Service Chatbots
A retailer connected its AI chatbot to CRM and payments systems. During testing, a red-teamer tricked the bot into approving a fake $5,000 refund.

  • Provider’s Role: The AI vendor secured API endpoints and applied filters against obviously disallowed financial instructions.
  • Enterprise’s Role: The retailer’s security team enforced RBAC, capped automated refunds at $500, and required human approval above that threshold. They also masked loyalty account data before sending prompts.
👉 Lesson: Providers gave secure infrastructure, but it was the retailer’s duty to enforce transaction-specific fraud controls.
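
The refund control described above is enforced in application code rather than in the model, and a minimal sketch might look like the one below; the cap, order IDs, and approval-queue function are illustrative assumptions.

```python
# Minimal sketch: refund guardrail enforced in application code, not in the model.
AUTO_REFUND_CAP = 500.00  # illustrative policy: anything above requires a human


def queue_for_human_approval(order_id: str, amount: float) -> str:
    """Placeholder: push the request to an agent review queue."""
    return f"Refund of ${amount:.2f} for {order_id} queued for human approval."


def process_refund_request(order_id: str, amount: float, order_total: float) -> str:
    # The chatbot only proposes a refund; this code decides what actually happens.
    if amount <= 0 or amount > order_total:
        return "Refund request rejected: amount outside order total."
    if amount > AUTO_REFUND_CAP:
        return queue_for_human_approval(order_id, amount)
    return f"Refund of ${amount:.2f} for {order_id} issued automatically."


print(process_refund_request("ORD-7781", 5000.00, 5200.00))  # routed to a human
print(process_refund_request("ORD-7782", 49.99, 120.00))     # auto-approved
```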
Energy & Utilities: Smart Grid Optimization
An energy company tested AI for balancing load across the power grid. Engineers discovered that if the AI were misconfigured, it could theoretically redirect supply in unsafe ways.

  • Provider’s Role: The vendor ensured hardened servers and baseline filtering for critical-infrastructure prompts.
  • Enterprise’s Role: The CIO required sandbox testing in simulated grids, limited AI to “advisory” mode, and installed kill switches before any live system execution.
👉 Lesson: The provider built secure infrastructure, but the utility had to enforce operational safety controls to protect critical systems.
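
Keeping the AI in “advisory” mode can be enforced by validating every recommendation in a sandbox and surfacing it only as a suggestion for operators, roughly as sketched below; the safety limit and simulator stub are hypothetical.

```python
# Minimal sketch: grid recommendations are checked against a simulated environment
# and surfaced as advisories only; nothing here executes against live systems.

MAX_LOAD_SHIFT_MW = 50.0  # illustrative safety limit per recommendation


def simulate_load_shift(feeder: str, shift_mw: float) -> bool:
    """Placeholder for a sandbox grid simulation; returns True if the grid stays stable."""
    return abs(shift_mw) <= MAX_LOAD_SHIFT_MW


def review_recommendation(feeder: str, shift_mw: float) -> str:
    if not simulate_load_shift(feeder, shift_mw):
        return f"Rejected: shifting {shift_mw} MW on {feeder} failed sandbox validation."
    # Even validated recommendations stay advisory; an operator must act on them.
    return f"Advisory: consider shifting {shift_mw} MW on {feeder} (operator action required)."


print(review_recommendation("feeder-12", 18.5))
print(review_recommendation("feeder-03", 210.0))
```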
Government: Citizen Services AI Portal
A government agency launched an AI portal to answer tax and benefits questions. Early trials showed the bot could be manipulated into giving misleading filing instructions.

  • Provider’s Role: The AI vendor blocked disallowed content and published patch notes for safety updates.
  • Enterprise’s Role: The agency applied policy filters aligned with IRS regulations, logged all citizen interactions for FOIA compliance, and trained staff on escalation paths for risky queries.
👉 Lesson: Providers supplied model integrity, but the government had to ensure policy-based governance and public trust safeguards.

Contributors

Rajendra Gangavarapu

Chief Data & AI Officer | Artigen.AI

Amanda Hartle

Managing Director | FiddlersTech

Inderpreet Kambo

CEO | Improzo

Jagadeesh Kunda

Co-Founder, CPO | Oleria

Rock Lambros

CEO and Founder | RockCyber

Sunil Mallik

Head of CSAE | PayPal

Sekhar Sarukkai

Founder, CEO | Stealth Startup

Nishil Shah

Engineer | Notion

Tara Steele

Director | Safe AI for Children

Aditya Thadani

VP - AI Platforms | H&R Block

Abhishek Trigunait

Founder | Improzo

Dennis Xu

Research VP, AI & Cloud Security | Gartner

We welcome your feedback, suggestions, and insights to ensure that the Shared Responsibility Framework remains a valuable, up-to-date, and practical resource for the entire AI and cybersecurity community.
Send feedback & get involved
hello@enkryptai.com

Get Guidance on Shared Responsibility

The AI Shared Responsibility Framework helps CISOs and enterprise security leaders align accountability across providers, developers, and compliance teams. Learn how to operationalize AI governance, enforce real-time guardrails, and measure what truly matters—outcomes, not layers.

Read more on our blog