The Clock is Ticking: EU AI Act's August 2nd Deadline is Almost Here


The European Union's ambitious AI Act has been making headlines for months, but now the rubber meets the road. On August 2nd, 2025, the first major wave of compliance requirements takes effect, marking a pivotal moment for AI companies operating in or serving the European market.
While the February 2025 prohibition on high-risk AI systems grabbed early attention, this summer's deadline is where the real work begins. It's the moment when the EU's regulatory framework transforms from theory to practice, establishing the infrastructure that will govern AI development for years to come.
Think of August 2nd as the day the EU's AI governance machinery officially powers up. From notified bodies getting their certification authority to AI model providers documenting their training data, this deadline establishes the foundational systems that will shape how artificial intelligence is regulated across the continent.
For many AI companies, especially those developing general-purpose models, this isn't just another compliance checkbox, it's a fundamental shift in how they'll need to operate. The question isn't whether your organization will be affected, but how prepared you are for what's coming.
What’s required for the Aug 2nd 2025 Deadline
CBRN and Malware Threats
If the model crosses the 10²⁵ FLOP training bar (or the Commission designates it systemic-risk), Article 55 obliges the model provider to identify, evaluate and actively mitigate systemic risks. Recital 110 lists examples of “systemic risks” that GPAI providers must look for: CBRN misuse, offensive-cyber capabilities, self-replicating models, large-scale disinformation, etc. Therefore, CBRN and offensive-cyber capabilities (malware) should be actively identified, evaluated and mitigated.
This is as of now only applicable to GPAI with systemic risk model providers, not enterprises that might use the models. Enterprises that only use GPAI see no fresh AI-Act paperwork until the high-risk wave in 2026.
This is where comprehensive red teaming becomes essential. Meeting the EU AI Act's systemic risk assessment requirements demands thorough testing across multiple threat vectors – from chemical and biological weapons knowledge to offensive cybersecurity capabilities. Enkrypt AI's comprehensive red teaming suite provides the specialized testing infrastructure needed to identify these risks systematically, helping GPAI providers build the robust evaluation protocols required for Article 55 compliance.
About Enkrypt AI
Enkrypt AI helps companies build and deploy generative AI securely and responsibly. Our platform automatically detects, removes, and monitors risks like hallucinations, privacy leaks, and misuse across every stage of AI development. With tools like industry-specific red teaming, real-time guardrails, and continuous monitoring, Enkrypt AI makes it easier for businesses to adopt AI without worrying about compliance or safety issues. Backed by global standards like OWASP, NIST, and MITRE, we’re trusted by teams in finance, healthcare, tech, and insurance. Simply put, Enkrypt AI gives you the confidence to scale AI safely and stay in control.