The Week in Security: AI hallucinations linked to software supply chain risk, CI fix deemed critical
Welcome to the latest edition of The Week in Security, which brings you the newest headlines from around the world and from our team, across the full stack of security: application security, cybersecurity, and beyond. This week: ChatGPT hallucinations pose a software supply chain threat. Also: a congressionally mandated group of experts believes the White House needs to step up its game in securing critical infrastructure.
Researchers from Vulcan Cyber's Voyager18 research team have discovered that threat actors can exploit ChatGPT's false recommendations to spread malicious code via developers who rely on the tool. The discovery poses a serious risk to software supply chains, because malicious code and trojans could be unwittingly inserted by developers into applications and open source code repositories. Repositories such as npm and PyPI have already seen a major uptick in attacks on their platforms, and this new ChatGPT-related risk is likely to make the problem worse.
The researchers believe that threat actors can leverage what are known as "AI package hallucinations" to get ChatGPT to recommend packages that are actually malicious. A hallucination in this context is an AI response that is insufficient, biased, or outright false, such as a confident recommendation of a software package that does not exist. Hallucinations occur because ChatGPT draws on sources from across the internet, including incorrect and malicious ones, to generate responses for users.
Attackers can use this to their advantage by publishing a malicious package under a name that ChatGPT has hallucinated. The attacker's hope is that ChatGPT will then treat the now-real malicious package as a source and recommend it to developers who use the tool. Developers who inadvertently download these malicious packages on ChatGPT's advice may use them in both private projects and open source repositories, putting a vast number of downstream software supply chains at risk.
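One practical defense is to vet any AI-suggested package name against the registry before installing it. The sketch below, in Python, queries PyPI's public JSON API to check whether a package exists at all and how long ago it was first published, since a name that resolves to nothing, or to a brand-new upload, fits the hallucination pattern described above. The 90-day threshold and the use of the `requests` library are illustrative assumptions on our part, not guidance from the Voyager18 researchers.

```python
"""Minimal sketch: vet an AI-suggested package name against PyPI before
installing it. Illustrative only; the 90-day "too new" threshold is an
assumption of ours, not guidance from the Voyager18 researchers."""

from datetime import datetime, timezone

import requests  # third-party HTTP client, assumed to be installed


def vet_pypi_package(name: str, min_age_days: int = 90) -> str:
    """Return a rough verdict for a package name suggested by an AI tool."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        # The name resolves to nothing: a hallucinated package an attacker
        # could still register, so never pip-install it blindly.
        return "NOT FOUND: possible hallucination, do not install"
    resp.raise_for_status()

    # Collect the upload timestamps of every released file.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in resp.json()["releases"].values()
        for f in files
    ]
    if not uploads:
        return "EXISTS but has no released files: treat as suspicious"

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < min_age_days:
        return f"EXISTS but first published only {age_days} days ago: review it"
    return f"EXISTS, first published {age_days} days ago"


if __name__ == "__main__":
    # Hypothetical names for demonstration purposes only.
    for pkg in ("requests", "surely-not-a-real-package-name-123"):
        print(pkg, "->", vet_pypi_package(pkg))
```

An analogous check is possible against npm's registry. The broader point: a package's existence and age are cheap signals to gate an AI recommendation on before it reaches your dependency file.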
Since its release in November 2022, ChatGPT has become a popular tool for developers and threat actors alike. That has forced the cybersecurity community to reckon with how AI could drastically change the way we secure our systems and supply chains.
Here are the stories we’re paying attention to this week…
U.S. government policies designed to protect critical infrastructure against hackers are woefully outdated and inadequate to safeguard sectors such as water and transportation against cyberthreats, an influential congressionally mandated group of experts said.
The Clop group, a cybercrime gang reportedly based in Russia, has issued "an ultimatum" to companies in the U.K. and other countries that were targeted in a recent large-scale hack of payroll data. The payroll data of more than 100,000 staff was stolen from firms including the BBC and British Airways.
The North Korean nation-state threat actor known as Kimsuky has been linked to a social engineering campaign targeting experts in North Korean affairs with the goal of stealing Google credentials and delivering reconnaissance malware. The disclosure comes days after U.S. and South Korean intelligence agencies issued an alert warning of Kimsuky's use of social engineering tactics to strike think tanks, academia, and news media sectors.
The Royal ransomware gang has begun testing a new encryptor called BlackSuit that shares many similarities with the operation's usual encryptor. The gang launched in January 2023, and is believed to be the direct successor to the notorious Conti operation, which shut down in June 2022.
In last week's edition, we reported on the recent discovery of a Barracuda email flaw left open for months. Now, the company is telling customers to immediately replace hacked Email Security Gateway (ESG) appliances, even if they have installed all available patches.
*** This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by Carolynn van Arsdale. Read the original post at: https://www.reversinglabs.com/blog/week-in-security-ai-hallucinations-risk