AI threats on the rise
According to one estimate, generative AI (GenAI) could add the equivalent of $2.6 trillion to $4.4 trillion annually to the global economy. But as more organizations build out AI infrastructure and embed the technology into business-critical processes, they also expose themselves in new ways to sensitive data compromise, extortion, and sabotage.
We’ve highlighted this in the past, noting countless vulnerabilities and misconfigurations in AI components such as vector stores, LLM-hosting platforms, and other open source software. Among other things, organizations fear that threat actors could steal training data for profit, poison it to compromise an LLM’s output and integrity, or steal the models themselves.
In developing AML.CS0028, we uncovered disturbing trends:
- Over 8,000 exposed container registries were found online—double the number observed in 2023.
- 70% of these registries allowed push (write) permissions, meaning attackers could inject malicious AI models.
- Within these registries, 1,453 AI models were identified, many in Open Neural Network Exchange (ONNX) format, with vulnerabilities that could be exploited.
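Anonymous push access is straightforward to detect: the Docker Registry HTTP API v2 answers `GET /v2/` with 200 when unauthenticated access is allowed, and a `POST /v2/<repo>/blobs/uploads/` that returns 202 Accepted indicates an attacker could open an upload session and inject a malicious image or model layer. The sketch below, intended for checking registries you own, illustrates this two-probe logic; the registry URL and repository name are illustrative assumptions.

```python
# Hedged sketch: probe a Docker Registry v2 endpoint for anonymous
# read/write access, using only the standard library.
import urllib.error
import urllib.request
from typing import Optional


def classify_access(v2_status: int, upload_status: Optional[int]) -> str:
    """Map the HTTP status codes from the two probes to an access level."""
    if v2_status == 401:
        return "authentication required"
    if v2_status != 200:
        return "unreachable or non-standard"
    if upload_status == 202:  # 202 Accepted: an upload session was opened
        return "anonymous push (write) access"
    return "anonymous pull (read) access"


def probe_registry(base: str, repo: str = "probe/test") -> str:
    """Issue GET /v2/ and, if readable, POST /v2/<repo>/blobs/uploads/."""
    def status(url: str, method: str = "GET") -> int:
        req = urllib.request.Request(url, method=method)
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.status
        except urllib.error.HTTPError as err:
            return err.code

    v2 = status(f"{base}/v2/")
    upload = None
    if v2 == 200:
        upload = status(f"{base}/v2/{repo}/blobs/uploads/", method="POST")
    return classify_access(v2, upload)
```

A registry that reports "anonymous push (write) access" is in the dangerous 70% described above: anyone on the internet can overwrite or add images, including trojanized AI models.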
This sharp growth reflects a…