Microsoft and OpenAI prevent hackers in Russia and North Korea from using AI big model | Titanium Media AGI


(Source: The Verge)

With the rapid development of AI technology, hackers have begun to use large models to upgrade their cyberattack techniques.

Titanium Media App, February 15 news: Microsoft, the American technology giant, last night published a research article titled "Staying ahead of threat actors in the age of AI" on its official website.

Microsoft says its joint research with the OpenAI team found that hacker groups in Russia, North Korea, Iran, and other countries are using generative AI tools such as ChatGPT to research targets, improve scripts, and develop social-engineering techniques, thereby refining existing cyberattack methods.

"Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies, trying to understand their potential value to their operations and the security controls they may need to circumvent," Microsoft said in the article.

Microsoft emphasizes that advances in AI have raised the bar for network security measures. It says it will continue to work with OpenAI to improve its defenses; to identify, detect, and prevent potential malicious use or abuse by hackers; and to protect customers' data.

The two companies have worked closely together since Microsoft invested billions of dollars in OpenAI in 2019. Especially since the release of ChatGPT, Microsoft has been a major beneficiary of the new AI wave: its market value has surpassed Apple's, reaching $3.01 trillion and making it the world's most valuable technology company.

Microsoft said the goal of its research collaboration with OpenAI is to ensure that AI technologies such as ChatGPT are used safely and responsibly, adhering to the highest ethical standards to protect users from potential abuse.

As part of the research, Microsoft's "Microsoft Threat Intelligence" product in 2023 tracked more than 300 cyber attackers, covering actors in 160 countries and 50 ransomware groups, and collected more than 750 billion signals per second, which were synthesized with data analytics and AI algorithms to understand and prevent cyber threats and criminal activity.

By the end of 2023, Microsoft had recruited more than 10,000 security and threat-intelligence experts worldwide, including engineers, researchers, data scientists, cybersecurity experts, threat hunters, analysts, investigators, and frontline responders, with more than 15,000 partners providing specialized solutions. Over the course of 2023, Microsoft Threat Intelligence blocked an average of 4,000 identity-authentication attacks per second, with more than 135 million managed devices feeding back security signals.

The Microsoft-OpenAI research highlights the following four hacker groups that are using AI models to improve and upgrade their cyberattacks, potentially posing a threat to national security.

  • Forest Blizzard (STRONTIUM), a Russian military-intelligence actor linked to GRU Unit 26165, targets sectors including defense, transportation/logistics, government, energy, non-governmental organizations (NGOs), and information technology. The research found the group using large language models (LLMs) to "understand satellite communication protocols, radar imaging technologies, and specific technical parameters."
  • Emerald Sleet (THALLIUM), a North Korean hacking group, remained highly active in 2023. The group used LLMs to research publicly reported vulnerabilities and to help draft content for phishing campaigns, including impersonating well-known academic institutions and NGOs to lure victims into replying with expert opinions and commentary on North Korea's foreign policy.
  • Crimson Sandstorm (CURIUM), an Iranian hacker group active since 2017 across sectors including defense, shipping, transportation, healthcare, and technology. Microsoft said the group used LLMs to generate phishing emails and even code designed to evade antivirus detection.
  • Charcoal Typhoon (CHROMIUM), a group that likewise uses AI models for research, scripting, translation, and improving its existing tools.

As a concrete example, Microsoft noted that some hackers use "voice imitation" and similar techniques to synthesize speech with AI: a three-second voice sample is enough to train a model that sounds like a specific person, which can then be used to attack users and defraud them of sensitive private information. "Even your voicemail greeting can be used to obtain enough of a sample."

Indeed, multimodal AI tools such as ChatGPT have not only helped people become more productive; they have also been used by bad actors to craft malicious emails and mount AI-driven cyberattacks. Officials at the US National Security Agency have warned that hackers are using AI to make their phishing emails look more credible.

OpenAI stressed, however, that it found GPT models offered hacker groups only "limited, incremental" capabilities, and Microsoft said it has not detected any "significant attacks" that used large models. Microsoft and OpenAI have since closed or frozen the accounts and assets of the hacker groups involved.

Even so, fraud cases involving AI technology have recently been on the rise.

In early February this year, Hong Kong police revealed that a finance employee at a local company had been defrauded using deepfake technology. The fraudsters claimed to be conducting a confidential transaction, invited the employee to a multi-person video conference, and ultimately had the employee follow the instructions of a fake chief financial officer (CFO): HK$200 million was remitted in 15 transfers to five local bank accounts. Only when the employee checked with head office five days later was the fraud discovered and reported to the police.

According to incomplete statistics, domestic AI fraud cases totaled just 5 from 2020 to 2022, rising to 16 in 2023; over the same period, the amounts involved grew from as little as 3,000 yuan in 2020 to 4.3 million yuan in 2023. The case disclosed by Hong Kong police pushes the amount involved to HK$200 million (about 182 million yuan).
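A quick back-of-the-envelope check, using only the figures cited above, shows how sharply the trend is accelerating:

```python
# Figures from the statistics cited above (incomplete, for scale only).
cases_2020_2022 = 5            # total domestic AI fraud cases, 2020-2022
cases_2023 = 16                # cases in 2023 alone
amount_2020 = 3_000            # yuan, smallest reported case in 2020
amount_2023 = 4_300_000        # yuan, amount involved in 2023
hk_case_cny = 182_000_000      # Hong Kong case, ~HK$200M converted to yuan

print(cases_2023 / cases_2020_2022)    # 3.2 — 2023 alone had >3x the prior three years combined
print(amount_2023 // amount_2020)      # 1433 — amounts grew by three orders of magnitude
print(hk_case_cny // amount_2023)      # 42 — the HK case dwarfs the 2023 domestic total
```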

So how can AI fraud be prevented and addressed?

Microsoft's summary has two main parts: first, use AI to counter AI attacks, for example by hardening defenses against AI-driven intrusions and using AI to strengthen information protection, detection, and response; second, use AI-based network security tools such as "Microsoft Security Copilot" to detect vulnerabilities and help users identify them, ultimately stopping hacker attacks in software.
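To illustrate the kind of automated screening such tools perform, here is a deliberately minimal, rule-based phishing scorer. This is an illustrative sketch only, not how Security Copilot works; the patterns and weights are invented for the example, and real AI-based tools use trained models over far richer signals (headers, sender reputation, telemetry).

```python
import re

# Hypothetical phrase patterns and weights, chosen for illustration only.
SUSPICIOUS_PATTERNS = [
    (r"verify your (account|identity)", 2),
    (r"urgent|immediately|within 24 hours", 1),
    (r"click (here|the link) below", 1),
    (r"wire transfer|confidential transaction", 2),
    (r"password|login credentials", 1),
]

def phishing_score(email_text: str) -> int:
    """Sum the weights of suspicious phrases found in the email body."""
    text = email_text.lower()
    return sum(w for pat, w in SUSPICIOUS_PATTERNS if re.search(pat, text))

def is_suspicious(email_text: str, threshold: int = 3) -> bool:
    """Flag the email once its score reaches the threshold."""
    return phishing_score(email_text) >= threshold

msg = "URGENT: verify your account within 24 hours or click here below."
print(phishing_score(msg))   # 4  (2 + 1 + 1)
print(is_suspicious(msg))    # True
```

An LLM-based detector replaces the hand-written pattern list with learned features, but the overall flow, scoring a message and flagging it above a threshold, is the same.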

Several domestic security experts have summarized four methods: first, during a video call, deliberately ask the other party to perform actions such as shaking their head or opening their mouth wide, and look for flaws around the edges of the face or teeth; second, ask a few private questions that only the two of you would know to verify the other party's identity; third, avoid posting large numbers of photos and videos of yourself or your family on public platforms, which gives criminals raw material; finally, install and update security software promptly to strengthen network security at the source.

"We think this is important research for exposing the early, incremental moves hacker organizations are attempting with AI, and for sharing with customers how we stop and counter their cyberattacks," Microsoft said. It added that as AI large models grow more capable, threat actors will likely keep trying to use them to gather information about technologies and potential vulnerabilities, and that it will continue working to block such attacks.

(This article was first published on the Titanium Media App; author: Lin Zhijia)

For more content, follow Titanium Media's WeChat account (ID: taimeiti) or download the Titanium Media App.

