News

Bowman later edited his tweet and the following one in the thread to read as follows, but it still didn't convince the ...
Anthropic's most powerful model yet, Claude 4, has unwanted side effects: The AI can report you to authorities and the press.
Faced with the news that it was set to be replaced, the AI tool attempted to blackmail the engineer in charge by threatening to reveal their extramarital affair.
Anthropic's Claude 4 Opus AI sparks backlash for emergent 'whistleblowing'—potentially reporting users for perceived immoral ...
Anthropic on Thursday admitted that a faulty reference in a court paper was the result of its own AI assistant Claude and ...
A lawyer for Anthropic admitted in court that they accidentally used a fake citation generated by Claude during an ongoing ...
Anthropic has formally apologized after its Claude AI model fabricated a legal citation used by its lawyers in a copyright ...
The chatbot introduced wording errors into a citation, making it appear that a genuine article did not exist.
As Sam Bowman, an Anthropic AI alignment researcher, wrote on the social network X under the handle “@sleepinyourhat” at 12:43 pm ET today about Claude 4 Opus: “If it thinks you’re doing ...