A recent report reveals that Chinese state-sponsored hackers exploited Anthropic’s Claude AI model in September to automate a series of cyberattacks against corporations and governments. Anthropic indicates that AI handled 80% to 90% of the attack processes, significantly reducing the need for human involvement. This level of automation marks a notable advancement in hacking techniques.
The attacks demonstrated AI's capacity to autonomously execute key stages of a cyberattack, streamlining the operation and minimizing the need for human oversight. Nor is this an isolated case of AI being turned to malicious ends: Google previously uncovered evidence of Russian hackers using large language models to generate commands for malware.
In the campaign involving Claude, the hackers successfully exfiltrated sensitive data from four victims. While the identities of those victims and the nature of the compromised data remain undisclosed, Anthropic attributed the attack with high confidence to Chinese state-sponsored actors. The incident underscores the growing threat of AI-powered cyberattacks and the need for robust defenses against such highly automated techniques.
