Anthropic Faces Backlash Over Claude AI Performance Decline
Anthropic downgraded the default effort level of its AI model Claude, prompting user backlash.
Why it matters: Anthropic’s Claude is widely used in legal tech. Declines in AI reliability and transparency risk undermining adoption and trust, especially for critical legal workflows.
- Users say Claude now fails more often at following instructions and handling complex tasks.
- Anthropic lowered Claude's default 'effort' setting from high to medium without notifying users.
- GitHub issues citing Claude quality concerns jumped 3.5× since January 2026.
- Anthropic denies intentionally degrading performance to manage demand or conserve resources.
Anthropic, a major player in artificial intelligence for legal applications, is under intense scrutiny as users report a significant drop in the quality of its Claude AI model. User complaints center on Claude's increasing failure to follow instructions, a tendency to take inappropriate shortcuts, and more frequent errors on complex tasks.
- Developers and power users have expressed frustration, with AMD's Stella Laurenzo stating, "Claude has regressed to the point [that] it cannot be trusted to perform complex engineering."
- Anthropic reduced Claude’s default 'effort' level to 'medium'—a move designed to conserve computing resources by processing fewer tokens per task. This change was made without explicit communication to users, fueling perceptions of declining quality and eroding trust.
- The company’s leadership has pushed back against claims of deliberate performance degradation, with Claude Code lead Boris Cherny stating, “This is false.”
- Still, Anthropic acknowledges some issues: "We’re aware that some Claude Code users are experiencing slower response times, and we’re working to resolve these issues," said a company representative.
The controversy highlights the risk of depending on advanced AI tools when providers make changes behind the scenes, especially in mission-critical legal and compliance settings. With quality complaints up 3.5× since January and a 48-minute outage in April, legal professionals are left questioning Claude's reliability as a day-to-day tool.
By the numbers:
- 3.5× — Increase in GitHub issues citing Claude quality concerns since January 2026
- $30 billion — Anthropic's annualized recurring revenue
- 48 minutes — Claude's outage duration on April 13, 2026
Yes, but: Anthropic asserts the quality decline was not intentional and is working to address response time issues.