Google reported on Monday, May 11, that a group of cybercriminals had attempted a large-scale attack.
According to the tech giant, the cybercriminals apparently used artificial intelligence (AI) to detect a previously unknown flaw.
"We have a high probability that the attacker used an AI model to facilitate the discovery and exploitation of this vulnerability," the company wrote in a report.
The company did not specify when the thwarted attack occurred, whom it targeted, or which AI model the attackers used. However, it added that the model was not its own chatbot, Gemini.
The report comes as cybersecurity experts have warned that AI models could be used to facilitate cyberattacks.
What is known about the cybercriminals who used AI to attempt an attack, according to Google:
The cybercriminals identified by Google focused on exploiting what are known as "zero-day vulnerabilities": security flaws that are unknown to the software's developers and therefore unpatched.
The zero-day vulnerability was detected by Google's Threat Intelligence Group in recent months and exploited by "well-known cybercrime actors" using a Python script, according to the tech giant.
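According to Google, the flaw the attackers scripted against allowed them to bypass two-factor authentication in the affected tool. As background on what a time-based second factor actually checks, here is a minimal, illustrative sketch of the standard TOTP computation from RFC 6238. This is a generic example only; it is not the mechanism of the unnamed tool, which Google did not describe.

```python
# Illustrative only: a standard RFC 6238 TOTP computation, showing what a
# time-based one-time password (the common "second factor") consists of.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, t=None, step=30, digits=6):
    """Compute a time-based one-time password from a base32-encoded secret."""
    key = base64.b32decode(secret_b32)
    # Moving factor: number of `step`-second intervals since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at an offset taken
    # from the low nibble of the last MAC byte, mask the sign bit.
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)


# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s, 8 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # -> 94287082
```

A server verifies a submitted code by recomputing it from the shared secret for the current time window; bypassing two-factor authentication means defeating or sidestepping exactly this kind of check.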
That vulnerability allowed the hackers to bypass two-factor authentication in "a popular open-source web system administration tool." Google did not reveal the name of that tool, but said it notified the software maker quickly enough for a patch to be released before the attack caused damage. The company also did not reveal the name of the hacking group.

John Hultquist, chief analyst of Google's Threat Intelligence Group, said in remarks reported by the New York Times that the attackers' use of AI to find a zero-day vulnerability "is a preview of what's to come." "We believe this is just the tip of the iceberg. This problem is likely much larger; this is just the first tangible evidence we can observe," he added.

Rob Joyce, former director of cybersecurity at the U.S. National Security Agency, who reviewed the findings before their publication, said it can be difficult to determine whether computer code was written by a human or a machine, because "AI-generated code doesn't reveal itself." However, he added that the clues Google gathered in this case were convincing. They included excessive explanatory text and other peculiarities that human programmers would not necessarily include, Joyce said.

Hultquist said Google had other indications that reinforced its conclusion that the malicious code was written by AI, but he declined to reveal them. In the long term, the analyst said, AI could strengthen cybersecurity through the creation of secure, open-source code. "State-of-the-art models will allow us to create the most secure code we have ever created. That represents an absolute triumph for cybersecurity. The challenge is that we have only just begun this process and we have to deal with a world of existing code," Hultquist said.
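Joyce's mention of "excessive explanatory text" hints at one kind of crude, measurable signal an analyst might compute. The sketch below is a hypothetical illustration of such a signal, a comment-to-line ratio for Python source; it is not Google's actual detection method, which the company did not disclose.

```python
# Hypothetical illustration: one crude signal of unusually heavy annotation
# in Python source. NOT Google's detection method, which was not disclosed.
def comment_density(source):
    """Fraction of non-blank lines that are full-line comments."""
    lines = [line.strip() for line in source.splitlines() if line.strip()]
    if not lines:
        return 0.0
    comments = sum(1 for line in lines if line.startswith("#"))
    return comments / len(lines)


sparse = "x = 1\ny = x + 1\nprint(y)\n"
verbose = (
    "# assign 1 to x\nx = 1\n"
    "# add 1 to x\ny = x + 1\n"
    "# print the result\nprint(y)\n"
)
print(comment_density(sparse))   # -> 0.0
print(comment_density(verbose))  # -> 0.5
```

A single metric like this proves nothing on its own; Joyce's point was that a cluster of such peculiarities, taken together, was convincing.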