Over three dozen vulnerabilities have been identified across various open-source artificial intelligence and machine learning tools, some of which enable attackers to execute remote code and steal data. The issues were reported through Huntr, the bug bounty platform run by Protect AI, and affect tools such as ChuanhuChatGPT, Lunary, and LocalAI.
The most significant threats are two critical vulnerabilities in Lunary, a tool for working with large language models. The first—CVE-2024-7474 (CVSS: 9.1)—is linked to improper object referencing, allowing users to access other users’ data without authorization. The second—CVE-2024-7475 (CVSS: 9.1)—enables attackers to alter SAML configurations and authenticate under another user’s identity.
Another vulnerability in Lunary, CVE-2024-7473 (CVSS: 7.5), permits attackers to modify other users’ requests by manipulating parameters in the transmitted data.
ChuanhuChatGPT is vulnerable to a critical issue, CVE-2024-5982 (CVSS: 9.1), related to file uploads. This flaw allows attackers to execute arbitrary code and create directories on the server, gaining access to confidential information.
Two severe issues were found in LocalAI. The first, CVE-2024-6983 (CVSS: 8.8), allows the execution of arbitrary code through the upload of a malicious configuration file. The second, CVE-2024-7010 (CVSS: 7.5), is a timing attack that enables API key extraction by analyzing server response times.
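To make the timing issue behind CVE-2024-7010 concrete, here is an added example (not LocalAI's code) that contrasts an early-exit key comparison with a constant-time check. The `naive_compare` instrumentation counter and the hypothetical `check_api_key` helper are constructed for illustration; the point is that the early-exit version does a different amount of work depending on how many leading characters match, which is what a response-time analysis can pick up.

```python
import hashlib
import hmac
from typing import Tuple

def naive_compare(provided: str, expected: str) -> Tuple[bool, int]:
    """Early-exit comparison (the vulnerable pattern).

    Returns (match, characters_examined). The second value stands in for
    elapsed time: it grows with the length of the matching prefix, so an
    observer measuring response latency learns the key one character
    at a time. Never use this for secrets.
    """
    examined = 0
    if len(provided) != len(expected):
        return False, examined          # length mismatch also returns early
    for a, b in zip(provided, expected):
        examined += 1
        if a != b:
            return False, examined      # stops at the first differing char
    return True, examined

def constant_time_compare(provided: str, expected: str) -> bool:
    """Constant-time comparison using hmac.compare_digest.

    compare_digest runs in time independent of where the inputs differ,
    but it does still return quickly when the lengths differ.
    """
    return hmac.compare_digest(provided.encode(), expected.encode())

def check_api_key(provided: str, expected: str) -> bool:
    """Hardened check (hypothetical helper, not from LocalAI).

    Hashing both sides first gives fixed-length inputs, so neither the
    key length nor the position of the first mismatch affects timing.
    """
    p = hashlib.sha256(provided.encode()).digest()
    e = hashlib.sha256(expected.encode()).digest()
    return hmac.compare_digest(p, e)

if __name__ == "__main__":
    secret = "sk-constructed-sample-key"   # constructed sample, not a real key
    # Prefix length is visible in the amount of work done by the naive version:
    print(naive_compare("xk-constructed-sample-key", secret))  # (False, 1)
    print(naive_compare("sk-constructed-sample-kex", secret))  # (False, 25)
    print(check_api_key("sk-constructed-sample-key", secret))  # True
```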
Another vulnerability affected the Deep Java Library (DJL). A flaw in the file extraction function (CVE-2024-8396, CVSS: 7.8) allows an attacker to overwrite files and execute arbitrary code on the server.
Meanwhile, NVIDIA has issued an update for its NeMo platform, addressing vulnerability CVE-2024-0129 (CVSS: 6.3). This issue could have led to code execution and data corruption.
These security issues in AI systems underscore the necessity for technological advancement to proceed in tandem with strengthened cybersecurity measures. Without timely interventions, even the most advanced tools may become gateways for attacks, jeopardizing user data and trust.