LiteLLM Supply Chain Attack: An AI Security Audit Checklist
The open-source LLM proxy LiteLLM was compromised in a supply chain attack: malicious releases published to PyPI stole sensitive credentials from affected systems. This article explains how to check whether you're impacted and provides a security audit checklist to contain the damage.
Why it matters
This supply chain attack highlights the risks of relying on open-source components, especially in critical AI infrastructure. It serves as a wake-up call for teams to audit their AI supply chain and implement robust security measures.
Key Points
1. LiteLLM versions 1.82.7 and 1.82.8 contained a credential-stealing payload
2. The attack vector was a compromised Trivy dependency in LiteLLM's CI/CD pipeline
3. The payload collected SSH keys, cloud credentials, environment variables, and more
4. Affected systems include any running LiteLLM, plus CI/CD pipelines and downstream dependencies of the project
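A first triage step is simply confirming whether a compromised version is installed. Below is a minimal sketch using the standard library's `importlib.metadata`; the two version numbers are taken from this advisory, so update the set if further affected releases are announced.

```python
# Check whether the installed litellm package matches a compromised release.
# Version numbers are those named in this advisory; adjust as needed.
from importlib.metadata import version, PackageNotFoundError

COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised(pkg: str = "litellm") -> bool:
    try:
        return version(pkg) in COMPROMISED_VERSIONS
    except PackageNotFoundError:
        # Package is not installed in this environment at all.
        return False

if __name__ == "__main__":
    if is_compromised():
        print("WARNING: a compromised litellm release is installed")
    else:
        print("litellm not installed, or not a known-compromised version")
```

Run this inside every environment that might have pulled the package, including CI/CD runners and container images, not just developer machines.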
Details
The LiteLLM open-source project, which provides a universal LLM proxy used by thousands of AI applications, suffered a supply chain attack. Security researchers discovered that versions 1.82.7 and 1.82.8 of the LiteLLM package on PyPI contained a credential-stealing payload.

The attack chain started with the compromise of the Trivy container vulnerability scanner used in LiteLLM's CI/CD pipeline. This allowed the attacker to extract the project's PyPI publishing token, which they then used to publish the malicious versions of LiteLLM.

The payload, hidden in a .pth file, automatically executed on Python startup and collected sensitive data such as SSH keys, cloud credentials, and environment variables, exfiltrating it to an attacker-controlled server. The attack impacted any systems running LiteLLM, as well as CI/CD pipelines and downstream dependencies of the project.