Anthropic Safeguards head Mrinank Sharma. Photo: LinkedIn Profile.

‘World in peril’: Anthropic Safeguards head Mrinank Sharma quits, citing AI safety concerns

@indiablooms | Feb 10, 2026, at 08:58 pm

Mrinank Sharma, the head of safeguards research at artificial intelligence firm Anthropic, has resigned, triggering widespread debate in the tech community over whether commercial pressures are eclipsing AI safety priorities.

Sharma announced his decision in a post on X on Monday, February 9.

His post, written in a reflective and poetic tone referencing writers Rainer Maria Rilke and William Stafford, was quickly dissected by AI researchers and commentators, many of whom suggested that compromises on safety may have prompted his exit.

In his resignation note, Sharma said it had become clear to him that it was time to move on, warning that the world faces danger not only from AI but from “a whole series of interconnected crises unfolding in this very moment”.

“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” he wrote.

While Sharma did not cite specific incidents or decisions, he pointed to sustained pressure that made it difficult to uphold core values.

“I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he said. “I’ve seen this within myself, within the organisation, where we constantly face pressures to set aside what matters most.”

Sharma added that one of his final projects focused on examining how AI assistants may “make us less human or distort our humanity”, a comment that fuelled online speculation that Anthropic’s recent push to accelerate product releases may have come at the cost of safety safeguards.

His resignation comes just days after Anthropic launched Claude Opus 4.6, an upgraded model aimed at improving office productivity and coding performance.

The company is also reported to be in talks to raise fresh funding that could value it at around $350 billion.

Sharma is not the only senior figure to have recently exited Anthropic. According to reports, Harsh Mehta from the research and development team and AI scientist Behnam Neyshabur also announced last week that they had left the company to pursue new ventures.
