Top 5 AI Security Trends Discussed at the Confidential Computing Summit 2024

The two-day conference featured discussions with some of the smartest minds in confidential computing and privacy-preserving generative AI.

In June, I had the pleasure of both attending and speaking at the 2024 Confidential Computing Summit in San Francisco. My talk, “Strategies for Effectively Deploying Trustworthy Generative AI Solutions,” was somewhat broad for this audience, since confidential computing addresses only one aspect of LLM security: deployment.

Nonetheless, I received great feedback from audience members, who told me they were impressed by the comprehensiveness of the Enkrypt AI platform. They appreciated its benchmarked and dynamic Red Teaming, Alignment, Guardrails, and continuous Monitoring capabilities, all of which can run simultaneously in the platform.

We are proud to have built a product that can (among other things):

  1. Detect both security risks (jailbreaks, malware, data leakage) and model risks (toxicity, bias, and hallucinations), and 
  2. Evaluate AI systems against operational and reputational risks throughout development and deployment.

The rest of the conference was filled with presentations from industry luminaries representing Microsoft, Nvidia, Google, and others. 

Here are the top trends I came away with after digesting the jam-packed content:

  1. Internal threat actors are on the rise, so protecting LLM intellectual property is becoming critical, and in some cases a matter of national security. Jason Clinton, CISO at Anthropic, made essentially this point in his presentation.

  2. Confidential computing technology for generative AI is not yet mature; confidential GPUs are still about a year away.

  3. Despite the technology's infancy, use cases for confidential computing are starting to pick up steam. One example is porting existing workloads (AI training, data processing) into confidential computing environments. 

  4. Challenges abound at the CPU-GPU communication level when it comes to confidentiality.

  5. There is an obvious need for responsible and secure generative AI. Threat actors know AI applications are currently an easy and profitable target to exploit. 

We look forward to attending next year’s event, as interest will only grow in this industry. 

by

Prashanth Harshangi

CTO at Enkrypt AI

About Enkrypt AI

Enkrypt AI protects enterprises against generative AI risks with a robust platform that detects threats, remediates vulnerabilities, and provides continuous monitoring. Its unique approach ensures your AI applications are safe, secure, and trustworthy, enabling organizations to accelerate AI adoption securely while retaining competitive advantage and minimizing brand damage. 

June 13, 2024
