State-Sponsored APT Groups Were Found Using Gen AI LLMs

  • This week, Microsoft and OpenAI disclosed that nation-state threat groups are actively leveraging large language models to automate malicious operations, translate technical papers, generate convincing phishing content, and research vulnerabilities.
  • The two companies identified five Advanced Persistent Threat groups and terminated the associated accounts.

It’s official: state-affiliated threat actors from North Korea, China, Iran, and Russia are using large language models (LLMs) maliciously. Consequently, OpenAI has terminated accounts associated with these Advanced Persistent Threat (APT) groups.

The discovery is hardly surprising, given the usefulness of generative AI, which appeals to anyone keen on sharpening their skills, whether those skills are put to good or malicious use. The five APT groups, which OpenAI believes may have access to advanced technology, substantial financial resources, and skilled personnel, were using the company’s generative AI tools to query open-source information, translate content, find coding errors, and run basic coding tasks.

In a separate post, Microsoft said that the state-sponsored threat actors sought to improve software scripts, malware, and…