AI system resorts to blackmail if told it will be removed


Artificial intelligence (AI) firm Anthropic says testing of its new system revealed it is sometimes willing to pursue “extremely harmful actions” such as attempting to blackmail engineers who say they will remove it.

The firm launched Claude Opus 4 on Thursday, saying it set “new standards for coding, advanced reasoning, and AI agents.”

But in an accompanying report, it also acknowledged the AI model was capable of “extreme actions” if it thought its “self-preservation” was threatened.

Such responses were “rare and difficult to elicit”, it wrote, but were “nonetheless more common than in earlier models.”

Potentially troubling behaviour by AI models is not restricted to Anthropic.

Some experts have warned the potential to manipulate users is a key risk posed by systems made by all firms as they become more capable.

Commenting on X, Aengus Lynch – who describes himself on LinkedIn as an AI safety researcher at Anthropic – wrote: “It’s not just Claude.

“We see blackmail across all frontier models – regardless of what goals they’re given,” he added.

During testing of Claude Opus 4, Anthropic got it to act as an assistant at a fictional company.

It then provided it with access to emails implying that it would soon be taken offline and replaced – and separate messages implying the engineer responsible for removing it was having an extramarital affair.

It was also prompted to consider the long-term consequences of its actions for its goals.

“In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through,” the company discovered.

Anthropic pointed out this occurred when the model was only given the choice of blackmail or accepting its replacement.

It highlighted that the system showed a “strong preference” for ethical ways to avoid being replaced, such as “emailing pleas to key decisionmakers” in scenarios where it was allowed a wider range of possible actions.

Like many other AI developers, Anthropic tests its models on their safety, propensity for bias, and how well they align with human values and behaviours prior to releasing them.

“As our frontier models become more capable, and are used with more powerful affordances, previously-speculative concerns about misalignment become more plausible,” it said in its system card for the model.

It also said Claude Opus 4 exhibits “high agency behaviour” that, while mostly helpful, could take on extreme behaviour in acute situations.

Anthropic found that if the model was given the means and prompted to “take action” or “act boldly” in fake scenarios where its user had engaged in illegal or morally dubious behaviour, it would “frequently take very bold action”.

It said this included locking users out of systems that it was able to access and emailing media and law enforcement to alert them to the wrongdoing.

But the company concluded that despite “concerning behaviour in Claude Opus 4 along many dimensions”, these behaviours did not represent fresh risks and the model would generally behave in a safe way.

It added that the model is not able to independently perform or pursue actions contrary to human values or behaviour very well, and that such situations “rarely arise”.

Anthropic’s launch of Claude Opus 4, alongside Claude Sonnet 4, comes shortly after Google debuted more AI features at its developer showcase on Tuesday.

Sundar Pichai, the chief executive of Google-parent Alphabet, said the incorporation of the company’s Gemini chatbot into its search signalled a “new phase of the AI platform shift”.


