On March 24, 2026, two malicious versions of the litellm Python package - v1.82.7 and v1.82.8 - were published to PyPI as part of a broader supply chain campaign attributed to a threat actor group known as TeamPCP. The packages were live for approximately 40 minutes before being quarantined by PyPI, during which they accumulated tens of thousands of downloads.
LiteLLM is a widely used open-source library that provides a unified interface for calling over 100 large language model (LLM) provider APIs, including OpenAI, Anthropic, AWS Bedrock, Google Vertex AI, and Azure OpenAI. It is commonly used in AI agent frameworks, MCP servers, CI/CD pipelines, and production inference infrastructure. Because it sits between applications and multiple AI service providers, it frequently has access to API keys and cloud credentials - making it a high-value target.
This post covers the attack background, the payload behavior, how security teams can detect it using network and endpoint telemetry, indicators of compromise, and recommended remediation steps.
Action Required: Versions 1.82.7 and 1.82.8 of the litellm Python package are confirmed malicious. Both have been removed from PyPI but may still be present in cached environments or pinned CI/CD pipelines.
The LiteLLM compromise is the third incident in a campaign attributed to TeamPCP, a threat actor group that has been targeting open-source tooling used in developer and security workflows.
The known campaign timeline:
Phase 1: Initial Compromise & Credential Exposure (March 19–23)
Phase 4: Attacker Interference
12:44 PM - The attacker, still holding maintainer-level access:
Phase 5: Containment & Response
1:38 PM - PyPI administrators intervened:
Quarantined the entire LiteLLM package
Blocked new downloads
Removed the malicious versions from distribution
LiteLLM's own security advisory states: 'We believe that the compromise originated from the Trivy dependency used in our CI/CD security scanning workflow.' The incident illustrates a cascading trust failure: a compromised upstream tool in one CI/CD pipeline handed attackers the credentials needed to poison a widely used downstream package.
Note: The attacker's exact access pathway for obtaining LiteLLM's PyPI publishing credentials has not been fully confirmed publicly as of this writing. The Trivy link is the working hypothesis based on available evidence.
Analysis of the malicious packages reveals a multi-stage payload. The detailed breakdown below is drawn from public research by Upwind, Wiz, Sonatype, and FutureSearch.
Version 1.82.7 contained a malicious payload injected into proxy_server.py. Version 1.82.8 added a .pth file (litellm_init.pth) to the package. Python automatically processes .pth files on every interpreter startup when the package is installed, so the payload executes silently regardless of whether litellm is explicitly imported.
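Because the interpreter's site initialization exec()s any `.pth` line that begins with `import`, this hook is straightforward to audit. The sketch below is a hypothetical triage helper (not from the advisory) that flags `.pth` files whose import lines touch exec-capable modules; some legitimate packages ship such hooks, so hits need manual review rather than automatic deletion:

```python
import re
import site
from pathlib import Path

# site.py executes any .pth line starting with "import" at every
# interpreter startup -- the mechanism the 1.82.8 payload abused.
# Module list is illustrative: modules commonly used for code
# execution or payload decoding.
SUSPICIOUS = re.compile(r"^import\s.*\b(os|subprocess|socket|base64|zlib)\b")

def scan_pth_files(site_packages: Path) -> list[tuple[str, str]]:
    """Return (filename, line) pairs for suspicious .pth import lines."""
    hits = []
    for pth in sorted(site_packages.glob("*.pth")):
        for line in pth.read_text(errors="replace").splitlines():
            if SUSPICIOUS.search(line):
                hits.append((pth.name, line.strip()))
    return hits

if __name__ == "__main__":
    for sp in site.getsitepackages():
        for name, line in scan_pth_files(Path(sp)):
            print(f"{sp}/{name}: {line}")
```

Expect benign hits from tooling such as setuptools; the signal to chase is a `.pth` file whose name does not correspond to any package you intentionally installed.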
A reported side effect of the .pth-based execution mechanism in 1.82.8 was a recursive fork condition: the spawned subprocess re-triggered the .pth file, which in some environments caused a process storm. This was an unintended bug in the malware, and it was one of the first observable symptoms reported publicly by the engineer who discovered the attack.
The payload collected a broad range of sensitive files and data from the host, including:
Collected data was encrypted using AES-256-CBC with a random session key, which was itself encrypted with a hardcoded RSA-4096 public key. The encrypted archive was then sent via HTTPS POST to models.litellm.cloud - a domain that is not affiliated with BerriAI or the legitimate litellm project. Only the attacker holding the corresponding RSA private key can decrypt the exfiltrated data.
The payload installed a persistence backdoor on the local host:
If a Kubernetes service account token with sufficient RBAC permissions was present, the malware attempted to deploy privileged pods (using alpine:latest) to every node in kube-system, mounting the host filesystem to replicate the persistence backdoor at the host level.
Below are concrete detection signals. These are based on the known payload behavior documented by Upwind, Wiz, and Sonatype, mapped to the types of telemetry available in most enterprise environments.
DNS-level blocking of models.litellm.cloud and checkmarx[.]zone will prevent exfiltration and C2 polling on affected hosts that have not yet communicated with these endpoints.
Detection note: Detection of this attack relies primarily on runtime behavioral signals - not static package scanning. By the time a malicious package version appears in a CVE feed or is yanked from PyPI, it has already executed in many environments. Network egress monitoring, process behavior analytics, and file integrity monitoring are the layers that catch it in time.
The following IOCs are drawn from LiteLLM's official security advisory, Upwind's payload analysis, and Wiz's campaign research.
| Indicator | Type | Notes |
|---|---|---|
| models.litellm[.]cloud | Domain - C2 exfil | Exfiltration endpoint. Not affiliated with BerriAI / litellm. |
| checkmarx[.]zone | Domain - C2 agent | Persistence agent polls this domain for commands every ~50 min. |
| litellm==1.82.7 | Package version | Malicious PyPI release. Payload in proxy_server.py. |
| litellm==1.82.8 | Package version | Malicious PyPI release. Payload in proxy_server.py + .pth file. |
| litellm_init.pth | File name | Malicious .pth execution hook in Python site-packages. |
| ~/.config/sysmon/sysmon.py | File path | Persistence backdoor script on Linux/macOS hosts. |
| ~/.config/systemd/user/sysmon.service | File path | Systemd service keeping backdoor alive; auto-restarts every 10 sec. |
| /tmp/pgl.log | File path | Downloaded C2 payload (arbitrary binary from attacker). |
| /tmp/.pg_state | File path | Tracks last downloaded C2 URL to avoid re-execution. |
| node-setup-* (kube-system) | K8s pod name pattern | Privileged pods created by malware for host-level persistence. |
| alpine:latest (hostPID=true) | Container pattern | Privileged container used for K8s lateral movement. |
Per LiteLLM's official advisory:
If you believe you may be affected, take these steps. Simply removing the package is not sufficient - the malware is designed to establish persistence and may have already communicated with attacker infrastructure.
1. Check if you are affected
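A minimal check, assuming Python 3.8+ for `importlib.metadata` (the helper name is ours, not from the advisory):

```python
from importlib.metadata import PackageNotFoundError, version

# The two confirmed-malicious releases.
MALICIOUS_VERSIONS = {"1.82.7", "1.82.8"}

def litellm_is_malicious() -> bool:
    """True if this environment has one of the known-bad litellm builds."""
    try:
        return version("litellm") in MALICIOUS_VERSIONS
    except PackageNotFoundError:
        return False  # litellm not installed in this environment
```

Run the check in every virtualenv and container image, and grep CI lockfiles and requirements files for the pinned versions as well, since a pin can reintroduce the bad build on the next deploy.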
2. Remove and purge cached packages
3. Check for persistence artifacts
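The host-level artifacts from the IOC table can be swept with a short script (paths as published; the function itself is a hypothetical helper):

```python
from pathlib import Path

# Artifact paths from the published IOC list.
HOME_ARTIFACTS = [
    ".config/sysmon/sysmon.py",
    ".config/systemd/user/sysmon.service",
]
TMP_ARTIFACTS = ["/tmp/pgl.log", "/tmp/.pg_state"]

def find_persistence_artifacts(home: Path = Path.home()) -> list[Path]:
    """Return any known-bad artifact paths that exist on this host."""
    found = [home / rel for rel in HOME_ARTIFACTS if (home / rel).exists()]
    found += [Path(p) for p in TMP_ARTIFACTS if Path(p).exists()]
    return found
```

On Kubernetes clusters, additionally check `kube-system` for `node-setup-*` pods and review any privileged `alpine:latest` containers with `hostPID=true`.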
4. Rotate all credentials on affected systems
Assume any credential present on an affected machine is compromised. This includes:
SSH keys
AWS access keys and IAM credentials
GCP ADC tokens
Azure service principal credentials
Kubernetes service account tokens
API keys stored in .env files
Database passwords
LLM provider API keys
5. Block network IOCs
Block models.litellm[.]cloud and checkmarx[.]zone at your DNS resolver and firewall. Review proxy/firewall logs for any prior POST connections to models.litellm.cloud.
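A retroactive log sweep can be sketched as below. The log line format here is hypothetical; adapt the pattern to your proxy's actual format (only the method and domain matter for the match):

```python
import re
from typing import Iterable

# Flags POST requests to the exfiltration domain. The surrounding
# log format is an assumption, not a real proxy's schema.
EXFIL_POST = re.compile(r"\bPOST\b.*\bmodels\.litellm\.cloud\b")

def prior_exfil_lines(log_lines: Iterable[str]) -> list[str]:
    """Return proxy log lines showing POSTs to the exfil endpoint."""
    return [line for line in log_lines if EXFIL_POST.search(line)]
```

Any match means data already left the host; treat it as a confirmed exfiltration event, not merely an exposure.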
6. Reinstall from a verified version
Install litellm v1.83.0 or later, released via the project's new CI/CD v2 pipeline with improved security controls. LiteLLM's security advisory includes SHA-256 checksums for verified safe versions going back to v1.78.0.
7. Audit transitive dependencies
Check whether any AI agent frameworks, MCP servers, or orchestration tools in your environment pull in litellm as an unpinned transitive dependency. Treat any unpinned transitive dependency as a potential exposure vector.
The LiteLLM compromise is part of a deliberate pattern. TeamPCP has consistently targeted security-adjacent and AI-adjacent tooling - a vulnerability scanner, infrastructure analysis extensions, and now an LLM proxy library. These tools share a common characteristic: they run with broad access because their legitimate function requires it.
AI libraries like litellm are particularly valuable targets because they are installed across developer workstations, CI/CD pipelines, and production environments simultaneously - often with access to credentials spanning every AI service provider an organization uses. As agentic AI systems become more prevalent, the libraries that power them will carry even more sensitive context.
The attack also illustrates the fragility of implicit trust in the open-source supply chain. LiteLLM's compromise was downstream collateral from the Trivy breach. Credential rotation that is not atomic - not simultaneous across all systems - can leave a window that sophisticated attackers are prepared to exploit.
Organizations adopting AI tooling at speed should apply the same supply chain security discipline they apply to production software: version pinning, dependency lock files, SBOM generation, private package mirrors, and runtime behavioral monitoring.
Netenrich's detection philosophy, the foundation of Autonomous security operations, is built around a simple but important distinction: automate the known, discover the unknown. Known detections - matching a malicious package version, flagging a listed IOC domain, finding a specific file hash - are necessary but not sufficient. They only fire after the threat is already documented. What catches a campaign in motion, before the IOC list exists, is behavioral detection: noticing that something a node has never done before is now happening, at the right time, with the right context.
The LiteLLM attack is a good illustration of this in practice. Below is the sequence of behavioral signals that surfaced across monitored environments, organized by detection type.
Thesis: Known detections (IOC matches, version signatures, rule-based single-event alerts) are the floor, not the ceiling. The detections that matter in a live supply chain attack are the ones that fire on behavior that has never been seen before from that node - without requiring a prior IOC match.
These detections fire when a specific observed event matches a known-bad indicator. They are important, reliable, and should run continuously - but they depend on the threat already being documented.
Known detection 1: Package version match
The most direct signal. If your environment has package inventory visibility (e.g., through a software composition analysis tool or CI/CD pipeline scanning), an install of litellm==1.82.7 or litellm==1.82.8 is an unambiguous indicator. This fires as a single-event rule the moment the version appears.
Known detection 2: IOC domain match - models.litellm.cloud
Any outbound connection to models.litellm.cloud is a known-bad indicator once the IOC is published. DNS query logs and proxy logs will surface this as a single-event match. The domain was registered separately from legitimate litellm infrastructure and has no legitimate use.
Known detection 3: IOC domain match - checkmarx.zone
The persistence agent polls checkmarx.zone every ~50 minutes for a new payload URL. This domain has no connection to the legitimate Checkmarx security company - it is a lookalike domain used as the attacker's C2 channel. A DNS or proxy match on this domain indicates the persistence backdoor is installed and active.
Known detection 4: Known file path - sysmon.py / sysmon.service
The presence of ~/.config/sysmon/sysmon.py or ~/.config/systemd/user/sysmon.service is a known artifact of this specific malware. File integrity monitoring or EDR file creation events will catch these on write.
Known detection 5: Multi-event correlation - install followed by C2 contact within N minutes
A higher-confidence rule that combines two events: a litellm install event followed within a short window by an outbound connection to a novel external domain from a Python process. Neither event alone is necessarily malicious; together they are strong signal.
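A sketch of this correlation rule follows. The event schema and function name are ours, not a vendor API; a real implementation would run over your SIEM's event stream:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Event:
    ts: datetime
    kind: str    # "pkg_install" or "net_connect"
    detail: str  # package spec, or destination domain

def correlate_install_egress(events, baseline_domains,
                             window=timedelta(minutes=3)):
    """Pair each litellm install with outbound connections to
    non-baseline domains observed within `window` after it."""
    installs = [e for e in events
                if e.kind == "pkg_install" and e.detail.startswith("litellm")]
    alerts = []
    for inst in installs:
        for ev in events:
            if (ev.kind == "net_connect"
                    and ev.detail not in baseline_domains
                    and timedelta(0) <= ev.ts - inst.ts <= window):
                alerts.append((inst, ev))
    return alerts
```

The baseline set (known package indexes, internal mirrors) is what turns two routine events into a high-confidence pairing.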
These are the detections that would have fired even if the IOC list for this campaign had never been published - because they are based entirely on behavioral deviation from a known baseline. Each one represents something a node has never done before, in a pattern that is anomalous regardless of whether the destination domain or file hash is recognized.
Why this matters: The malicious packages were live for approximately 40 minutes before public disclosure. Known IOC-based detections can only fire after the community has documented and distributed the indicators. Behavioral detections based on what a node has never done before are the only layer capable of firing within that window.
Behavioral signal 1: Python process makes first-ever outbound POST to an external domain at install time
What happened: Immediately following a pip install litellm command, a Python subprocess initiated an HTTPS POST request to an external domain. This was the first time this process lineage had made an outbound POST to any external host from this node.
Why it fired without IOC knowledge: This node had no prior history of Python processes making outbound POST requests at install time. Package installation is expected to pull packages from PyPI - not to POST encrypted binary payloads to external endpoints. The combination of process context (Python subprocess spawned by pip), direction (outbound), method (POST), and payload type (encrypted binary, not JSON or form data) was anomalous against this node's baseline behavior, regardless of the destination.
Behavioral signal 2: Python subprocess reads SSH private keys and .env files in bulk
What happened: A Python subprocess opened and read multiple files matching SSH key patterns (id_rsa, id_ed25519) and .env files across the user’s home directory within a few seconds of each other.
Why it fired without IOC knowledge: File access events are high-volume, but this pattern is unusual: a Python process spawned as a child of pip reading SSH private keys and .env files in rapid succession - files that a package installer has no reason to touch. Behavioral models that track which processes access which file types flag this as an anomalous file access pattern for this process lineage.
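One way to model this signal is a simple burst rule over file-read telemetry. The thresholds and filename patterns below are illustrative, not a production detection:

```python
import fnmatch
from datetime import datetime, timedelta

# Filename patterns treated as credential material (illustrative list).
CRED_PATTERNS = ("*id_rsa*", "*id_ed25519*", "*.env", "*credentials*")

def bulk_credential_reads(file_events, threshold=3,
                          window=timedelta(seconds=5)) -> bool:
    """file_events: (timestamp, path) reads from one process lineage.
    True when `threshold` credential-pattern files are read within `window`."""
    cred_times = sorted(ts for ts, path in file_events
                        if any(fnmatch.fnmatch(path, p) for p in CRED_PATTERNS))
    return any(cred_times[i + threshold - 1] - cred_times[i] <= window
               for i in range(len(cred_times) - threshold + 1))
```

Scoping the rule to a process lineage (here, descendants of pip) rather than the whole host is what keeps the false-positive rate workable.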
Behavioral signal 3: New systemd user service created and enabled by a Python process
What happened: A Python subprocess wrote a new service file to ~/.config/systemd/user/ and issued a systemctl --user enable --now command to activate it immediately.
Why it fired without IOC knowledge: Writing and enabling a systemd user service is not expected package-install behavior - detecting it does not require knowing the specific service name or file path. Any Python process that writes to the systemd user service directory and enables a service is exhibiting behavior that has no legitimate package-install explanation.
The table below maps each detection signal to its type and the earliest point at which it could fire in the attack timeline.
| Detection Signal | Type | IOC Required? | Fires At |
|---|---|---|---|
| Package version match (1.82.7 / 1.82.8) | Known - single event | Yes | At pip install |
| IOC domain match - models.litellm.cloud | Known - single event | Yes | At first exfil attempt |
| IOC domain match - checkmarx.zone | Known - single event | Yes | At first C2 poll (~50 min post-install) |
| Known file paths - sysmon.py / sysmon.service | Known - single event | Yes | At persistence write |
| Install → outbound POST within 3 min | Known - multi-event | Yes | ~3 min post-install |
| First-ever outbound POST from Python at install time | Behavioral - unknown | No | Within seconds of exfil |
| Bulk credential file reads from install subprocess | Behavioral - unknown | No | During harvest stage |
| systemd service created by Python process | Behavioral - unknown | No | At persistence write |
Key takeaway: Three of the eight detection signals above require no prior knowledge of this campaign. They fire on behavioral deviation alone - a node doing something it has never done before, in a context that has no legitimate explanation. These are the detections that operate inside the disclosure window, before the IOC list exists. Known detections extend coverage once the campaign is documented. Both layers are necessary; neither is sufficient alone.