Anthropic Warns Today’s AI Could Help Plan Catastrophic Crimes — Urgent Safeguards Needed

AI Risks: Anthropic on the Threat of Criminal Misuse of Technology

The risk of artificial intelligence becoming a tool for catastrophic crimes is no longer merely theoretical; companies are calling for urgent security measures, reports ZME Science. Anthropic’s new report says current AI models already have capabilities that could be exploited to prepare “heinous crimes.” The company’s developers say the risk is significant and are calling for a new protective system. Some experts even describe the risk as more frightening than nuclear weapons.

Threat Classification by ASL Levels

The company has implemented its own safety scale—AI Safety Levels (ASL)—which determines the level of risk based on the model’s capabilities:

  • ASL-1 and ASL-2: pertain to current models that already have built-in filters but cannot independently plan a large-scale attack.
  • ASL-3: this level describes models that could significantly assist in creating biological weapons or conducting destructive cyberattacks.
  • ASL-4 and above: theoretical future systems capable of independently executing strategies to destabilize governments or global networks.

Four Key Vectors of Danger

The report’s authors, including lead researcher Alistair Stewart, highlight specific areas where AI could become a critical weapon:

  1. Biological Security: assisting in the cultivation of dangerous pathogens and developing methods for their covert dissemination.
  2. Cyberattacks: creating next-generation malware capable of bypassing modern antivirus systems and autonomously hunting for vulnerabilities in the defenses of banks or energy networks. While similar techniques already assist law enforcement and defenders, attackers could turn the same algorithms against critical infrastructure.
  3. Chemical Threats: providing instructions for synthesizing toxic substances from readily available ingredients.
  4. Radiological and Nuclear Risks: simplifying calculations for creating devices that utilize radioactive materials.

Management’s Position and Countermeasures

Anthropic’s CEO, Dario Amodei, emphasizes that the company is already dedicating a significant portion of its computing resources not to developing new features but to “red teaming.” In this process, the company’s own experts deliberately try to push AI models into violating safety protocols in order to identify vulnerabilities. The problem is that AI is becoming increasingly sophisticated in its manipulations, which makes it harder to verify whether models are being honest.
According to the report’s findings, developers propose:

  • Implementing mandatory checks for models regarding their “criminal potential” before release.
  • Creating physically secure servers for storing the most powerful algorithms.
  • Developing international standards that would restrict access to certain knowledge in biology and chemistry through chatbots.

Today, AI is not just a creative tool but also a potentially autonomous entity capable of unpredictable actions. This is why Anthropic is calling for the implementation of strict international standards and physical protection for servers housing the most powerful models.