News

But two new papers from the AI company Anthropic, both published on the preprint server arXiv, provide new insight into how ...
AI is supposed to be helpful, honest, and most importantly, harmless, but we've seen plenty of evidence that its behavior can ...
AI is a relatively new tool, and despite its rapid deployment in nearly every aspect of our lives, researchers are still ...
In a way, AI models launder human responsibility and human agency through their complexity. When outputs emerge from layers of neural networks processing billions of parameters, researchers can claim ...
Using two open-source models (Qwen 2.5 and Meta’s Llama 3), Anthropic engineers went deep into the neural networks to find the ...
Anthropic found that pushing AI toward "evil" traits during training can help prevent bad behavior later, like giving it a ...
Anthropic is intentionally exposing its AI models, like Claude, to evil traits during training to make them immune to these ...
Anthropic revealed breakthrough research using "persona vectors" to monitor and control artificial intelligence personality ...
Malicious traits can spread between AI models while being undetectable to humans, Anthropic and Truthful AI researchers say.
Researchers are trying to “vaccinate” artificial intelligence systems against developing harmful personality traits.
The new preprint research paper, out Tuesday, is a joint project between Truthful AI, an AI safety research group in ...
New Anthropic research shows that undesirable LLM traits can be detected—and even prevented—by examining and manipulating the ...
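The coverage above describes "persona vectors" only in broad strokes: a trait is represented as a direction in a model's activation space, which can then be used to monitor or steer behavior. The following is a minimal toy sketch of that linear-algebra core, using NumPy with random arrays standing in for real hidden activations; the dimensions, function names, and steering strength are illustrative assumptions, not Anthropic's actual code or data.

```python
# Toy sketch of the "persona vector" idea: a trait direction computed as the
# difference of mean activations between trait-exhibiting and neutral samples,
# then used to score (monitor) or nudge (steer) individual activations.
# Random arrays stand in for a transformer's hidden states.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 64  # stand-in for the model's hidden size

# Pretend activations: one set collected while the model exhibits the trait,
# one set from neutral responses to matched prompts.
trait_acts = rng.normal(loc=0.5, scale=1.0, size=(200, HIDDEN_DIM))
neutral_acts = rng.normal(loc=0.0, scale=1.0, size=(200, HIDDEN_DIM))

def persona_vector(pos: np.ndarray, neg: np.ndarray) -> np.ndarray:
    """Difference of mean activations, normalized to unit length."""
    direction = pos.mean(axis=0) - neg.mean(axis=0)
    return direction / np.linalg.norm(direction)

def trait_score(activation: np.ndarray, direction: np.ndarray) -> float:
    """Monitoring: project an activation onto the trait direction."""
    return float(activation @ direction)

def steer(activation: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Steering: shift an activation along (alpha > 0) or against (alpha < 0) the trait direction."""
    return activation + alpha * direction

vec = persona_vector(trait_acts, neutral_acts)
sample = trait_acts[0]
print("score before steering:", round(trait_score(sample, vec), 3))
print("score after steering away:", round(trait_score(steer(sample, vec, alpha=-2.0), vec), 3))
```

In the research the snippets summarize, the positive and negative activation sets reportedly come from the model's own responses to trait-eliciting versus neutral prompts, and the resulting vector is applied to internal layers during generation or training; the toy above only illustrates the underlying arithmetic.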