Categories USA News

Researchers find just 250 malicious documents can leave LLMs vulnerable to backdoors

Artificial intelligence companies have been working at breakneck speeds to develop the best and most powerful tools, but that rapid development hasn’t always been coupled with clear understandings of AI’s limitations or weaknesses. Today, Anthropic released a report on how attackers can influence the development of a large language model.

The study centered on a type of attack called poisoning, where an LLM is pretrained on malicious content intended to make it learn dangerous or unwanted behaviors. The key finding from this study is that a bad actor doesn’t need to control a percentage of the pretraining materials to get the LLM to be poisoned. Instead, the researchers found that a small and fairly constant number of malicious documents can poison an LLM, regardless of the size of the model or its training materials. The study was able to successfully backdoor LLMs based on using only 250 malicious documents in the pretraining data set, a much smaller number than expected for models ranging from 600 million to 13 billion parameters. 

“We’re sharing these findings to show that data-poisoning attacks might be more practical than believed, and to encourage further research on data poisoning and potential defenses against it,” the company said. Anthropic collaborated with the UK AI Security Institute and the Alan Turing Institute on the research.

This article originally appeared on Engadget at https://www.engadget.com/researchers-find-just-250-malicious-documents-can-leave-llms-vulnerable-to-backdoors-191112960.html?src=rss

More From Author

You May Also Like

Meet Aardvark, OpenAI’s security agent for code analysis and patching

OpenAI has introduced Aardvark, a GPT-5-powered autonomous security researcher agent now available in private beta.…

Why IT leaders should pay attention to Canva’s ‘imagination era’ strategy

The rise of AI marks a critical shift away from decades defined by information-chasing and…

Meta researchers open the LLM black box to repair flawed AI reasoning

Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that…