News

Anthropic published research last week showing that all major AI models may resort to blackmail to avoid being shut down – ...
New research from Anthropic shows that when you give AI systems email access and threaten to shut them down, they don’t just ...
Claude is a generative AI tool built by Anthropic. Like ChatGPT, you can use text, audio and visual prompts to create various ...
New research shows that as agentic AI becomes more autonomous, it can also become an insider threat, consistently choosing ...
Several weeks after Anthropic revealed that its Claude Opus 4 AI model could resort to blackmail in controlled test ...
Anthropic emphasized that the tests were set up to force the model to act in certain ways by limiting its choices.
While Anthropic found Claude doesn't reinforce negative outcomes in affective conversations, some researchers question the ...
New research shows Claude chats often lift users’ moods. Anthropic explores how emotionally supportive AI affects behavior, ...
Anthropic transforms Claude AI into a no-code app development platform with 500 million user-created artifacts, intensifying competition with OpenAI's Canvas feature as AI companies battle for ...
While the research produces a long list of findings, the key thing to note is that just 2.9% of Claude AI interactions are ...
In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn’t break the ...
A report by Anthropic reveals that people rarely seek companionship from AI, turning to it instead for emotional support or advice ...