Microsoft exposes Iranian, North Korean, Russian, and Chinese use of AI in cyberattacks

Cybersecurity headlines are once again buzzing with the latest announcement from Microsoft. The company says it has detected and disrupted early attempts by US adversaries to use generative artificial intelligence (AI) to mount offensive cyber operations. In particular, Microsoft highlighted the misuse of AI by Iran's Revolutionary Guard and North Korea's Kimsuky cyberespionage group, and said it worked with its business partner OpenAI to disrupt these threats.

This development is cause for concern: Microsoft points to a growing trend of US rivals leveraging large language models to expand their ability to breach networks and conduct influence operations. These adversaries are using AI to enhance malicious social engineering, enabling more sophisticated deepfakes and voice cloning, and Microsoft has urged the public to stay vigilant about the mounting threats.

According to Microsoft, Iran's Revolutionary Guard has used large language models to assist with social engineering, troubleshoot software errors, and study how intruders might evade detection in a compromised network. North Korea's Kimsuky group has used the models to research foreign think tanks that study the country and to generate content for spear-phishing campaigns.

Russia's GRU military intelligence unit known as Fancy Bear has used the models to research satellite and radar technologies relevant to the war in Ukraine. Among Chinese cyberespionage groups, Aquatic Panda has interacted with the models in ways that suggest limited exploration of how large language models can augment its technical operations, while Maverick Panda has evaluated their effectiveness as a source of information on potentially sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs. Microsoft says it has disabled the accounts and assets of these groups.

It is worth noting that this development is not entirely unexpected. Last year, the director of the US Cybersecurity and Infrastructure Security Agency (CISA) told Congress that AI was one of the two most significant threats facing the US, emphasizing that AI must be built with security in mind. Some critics contend that the rush to release ChatGPT and subsequent large language models was irresponsible, arguing that security was treated as an afterthought.

Microsoft's exposure of these groups' attempts to exploit AI technology it helped develop is a significant step toward mitigating potential harm. However, cybersecurity experts agree that this is only the beginning of adversaries leveraging AI for offensive cyber operations, and that defenders must prepare for more sophisticated threats.

Finally, Microsoft's announcement underscores the critical importance of responsible development and use of AI, with security prioritized to combat potential threats and protect global digital ecosystems. Guarding against these evolving threats will require extensive collaboration between industry giants like Microsoft and OpenAI and government agencies, both to implement robust security protocols and to educate the public on best practices for staying safe online.