News

Last week, Anthropic presented some research into how AI “personalities” work. That is, how their tone, responses, and ...
But two new papers from the AI company Anthropic, both published on the preprint server arXiv, provide new insight into how ...
In a way, AI models launder human responsibility and human agency through their complexity. When outputs emerge from layers ...
AI is a relatively new tool, and despite its rapid deployment in nearly every aspect of our lives, researchers are still ...
Anthropic found that pushing AI toward "evil" traits during training can help prevent bad behavior later, like giving it a ...
In the paper, Anthropic explained that it can steer models along these vectors by instructing them to act in certain ways; for example, if it injects an evil prompt into the model, the model will respond from ...
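The injection described above amounts to adding a direction vector to a model's hidden activations at inference time. A minimal sketch of that mechanism, using a tiny toy network in place of a real transformer (all names here, such as `W1`, `persona_vector`, and `alpha`, are illustrative assumptions, not Anthropic's implementation):

```python
import numpy as np

# Toy two-layer "model" standing in for an LLM. In a real model, the steering
# vector would be added to a transformer layer's residual stream instead.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 8))
W2 = rng.normal(size=(8, 4))

def forward(x, steering=None, alpha=0.0):
    """Run the toy model; optionally add a steering vector to the hidden state."""
    h = np.tanh(x @ W1)           # hidden activation
    if steering is not None:
        h = h + alpha * steering  # inject the persona direction
    return h @ W2

x = rng.normal(size=(8,))
persona_vector = rng.normal(size=(8,))
persona_vector /= np.linalg.norm(persona_vector)

baseline = forward(x)
steered = forward(x, steering=persona_vector, alpha=2.0)
print(np.max(np.abs(steered - baseline)))  # steering shifts the output
```

The point of the sketch is only that the same input produces a different output once the direction is injected; the strength of the shift scales with `alpha`.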
Researchers are testing new ways to prevent and predict dangerous personality shifts in AI models before they occur in the wild.
Anthropic is intentionally exposing its AI models, such as Claude, to evil traits during training to make them immune to these ...
Anthropic revealed breakthrough research using "persona vectors" to monitor and control artificial intelligence personality ...
New Anthropic research shows that undesirable LLM traits can be detected—and even prevented—by examining and manipulating the ...
A new study from Anthropic introduces "persona vectors," a technique for developers to monitor, predict and control unwanted LLM behaviors.
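The core idea behind such a detection technique can be sketched in a few lines: a trait direction is estimated as the difference of mean hidden activations between trait-expressing and neutral responses, and new activations are scored by projecting onto it. This is a hedged illustration with synthetic data; the array names and the simple difference-of-means recipe are assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16  # hidden size of the imaginary model layer

# Pretend these are hidden-state activations captured while the model
# generated trait-expressing vs. neutral responses.
trait_direction = rng.normal(size=(d,))
trait_acts = rng.normal(size=(50, d)) + trait_direction  # shifted along the trait
neutral_acts = rng.normal(size=(50, d))

# "Persona vector": difference of mean activations, normalized to unit length.
persona_vector = trait_acts.mean(axis=0) - neutral_acts.mean(axis=0)
persona_vector /= np.linalg.norm(persona_vector)

def trait_score(h):
    """Monitor: project an activation onto the persona vector."""
    return float(h @ persona_vector)

# Mean trait activations score higher along the vector than neutral ones.
print(trait_score(trait_acts.mean(axis=0)) > trait_score(neutral_acts.mean(axis=0)))
```

A score like this could, in principle, be thresholded during training or deployment to flag when a model's activations drift toward an unwanted persona.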
The new pre-print research paper, out Tuesday, is a joint project between Truthful AI, an AI safety research group in ...