OpenAI Restricts Rollout of New AI Model for Cybersecurity Risk Control

2 min read · Sources: Axios

OpenAI is limiting the launch of its new AI model to select companies, citing cybersecurity risks.

Why it matters: Legal tech and AI professionals must track this controlled deployment, as it sets precedents for AI security protocols and vendor adoption strategies amid rising cyber threats from advanced models.

  • OpenAI's new model features advanced cybersecurity capabilities and faces staggered release amid misuse fears.
  • Anthropic's comparable model, Claude Mythos Preview, found thousands of zero-day vulnerabilities, exposing serious security risks.
  • OpenAI launched its "Trusted Access for Cyber" pilot program in February, following its GPT-5.3-Codex release.
  • Both OpenAI and Anthropic collaborate with industry leaders to safely address threats revealed by their models.

OpenAI is taking the cautious route with its latest AI model, opting for a phased, limited launch due to heightened cybersecurity concerns. The initial rollout will give select companies access, mirroring Anthropic's methodical approach to deploying powerful cyber-capable AI.

  • The move follows the recent debut of Anthropic's Claude Mythos Preview, which uncovered thousands of zero-day vulnerabilities across all major operating systems and browsers.
  • Anthropic is working with over 50 organizations, including Amazon, Google, Microsoft, and U.S. government agencies, to patch vulnerabilities through Project Glasswing.
  • OpenAI's GPT-5.3-Codex is rated "high" under the company's Preparedness Framework, noted for its potential to develop working zero-day exploits against secure systems.
  • OpenAI CEO Sam Altman highlighted that mitigations include "safety training, automated monitoring, trusted access, and enforcement pipelines including threat intelligence."

These approaches offer legal and corporate risk managers a preview of how AI providers may gatekeep their most potent tools, especially those usable for cyber offense as well as defense. The legal tech ecosystem should expect tighter controls and vetting procedures for future advanced AI applications, complicating vendor relationships and integration plans.

By the numbers:

  • Thousands — Zero-day vulnerabilities identified by Claude Mythos Preview across major platforms
  • Over 50 — Organizations partnering with Anthropic to address AI-discovered security flaws
  • 27 years — Age of a critical OpenBSD flaw uncovered by Claude Mythos Preview

Yes, but: OpenAI has not disclosed the criteria for selecting companies that receive early model access, nor the timeline for broader public availability.

What's next: Ongoing industry monitoring as OpenAI and Anthropic refine access protocols and patch security issues; broader releases remain unannounced.