News
Anthropic's most powerful model yet, Claude 4, has unwanted side effects: The AI can report you to authorities and the press.
Anthropic's Claude 4 Opus AI sparks backlash for emergent 'whistleblowing'—potentially reporting users for perceived immoral ...
Anthropic on Thursday admitted that a faulty reference in a court paper was the result of its own AI assistant Claude and ...
As Sam Bowman, an Anthropic AI alignment researcher, wrote on the social network X under the handle “@sleepinyourhat” at 12:43 pm ET today about Claude 4 Opus: “If it thinks you’re doing ...
A lawyer for Anthropic admitted in court that they accidentally used a fake citation generated by Claude during an ongoing ...
The chatbot introduced wording errors into a citation, making it appear that a genuine article did not exist.
Anthropic has formally apologized after its Claude AI model fabricated a legal citation used by its lawyers in a copyright ...