PyPI Poisoned: AI Chatbot Packages Deliver JarkaStealer
Experts from Kaspersky Lab uncovered a software supply chain attack that persisted for nearly a year. Malicious packages, disguised as tools for building neural network-based chatbots, were distributed via the PyPI repository. Upon installation, they infected devices with JarkaStealer, malware designed to exfiltrate sensitive data.
The malicious packages first appeared on PyPI in November 2023 and, prior to their detection, were downloaded 1,700 times by users across 30 countries. Most downloads originated from the United States, China, France, Germany, and Russia, though the attack does not appear to have targeted any specific region or organization.
The packages were camouflaged as frameworks for popular AI chatbots, including OpenAI’s ChatGPT and Anthropic’s Claude AI. While providing the promised chatbot functionality, they simultaneously installed JarkaStealer, an information stealer written in Java. The malware is capable of extracting browser data, capturing screenshots, gathering system information, intercepting session tokens from applications such as Telegram, Discord, and Steam, and interfering with browser processes to retrieve stored credentials.
Jarka was distributed via a Telegram channel under a Malware-as-a-Service (MaaS) model, and its source code was also uploaded to GitHub, making it freely available for download. Linguistic markers in the code and promotional materials suggest that the developer is a Russian speaker.
The malicious packages were removed after PyPI was notified. The attack nonetheless underscores the inherent risks of integrating open-source components, and security experts stress the importance of verifying code integrity at every stage of development to mitigate such threats.
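One practical form of the integrity checking described above is pinning dependencies to known-good cryptographic hashes, so a tampered or typosquatted artifact fails verification before it is ever installed. The sketch below illustrates the idea in plain Python; the file path and digest are hypothetical, and this is a simplified illustration rather than a replacement for a package manager's built-in checks.

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks
    so large archives do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifact(path: str, pinned_digest: str) -> bool:
    """Return True only if the downloaded artifact matches the digest
    that was pinned when the dependency was originally vetted."""
    return sha256_of(path) == pinned_digest
```

pip supports this workflow natively: each entry in a requirements file can carry a `--hash=sha256:...` value, and running `pip install --require-hashes -r requirements.txt` refuses to install any package whose downloaded file does not match, which would have blocked a silently swapped malicious release.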