Critical Vulnerability in ONNX's `silent=True` Parameter Exposes ML Models to Supply Chain Attacks
The ONNX Python library's `silent=True` parameter in the `onnx.hub.load()` function disables trust verification mechanisms, making ML pipelines vulnerable to supply chain attacks. This vulnerability, designated CVE-2026-28500 with a CVSS score of 9.1, persists unpatched in ONNX versions up to 1.20.1.
Why it matters
This vulnerability exposes the fragility of ML supply chains and the need for robust security practices in model dependency management.
Key Points
- The `silent=True` parameter suppresses user prompts during model loading, inadvertently disabling the library's integrity checks
- An attacker can compromise the model repository, replacing both the model file and its SHA256 manifest, nullifying the checksum's effectiveness
- The absence of an independent trust anchor transforms `silent=True` from a convenience feature into a critical vulnerability
- Widespread adoption of `silent=True` in production pipelines and CI/CD workflows amplifies the impact of this flaw
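The second and third points can be illustrated with a minimal sketch of why a checksum co-hosted with the artifact it protects provides no real guarantee. The function and variable names below are illustrative, not ONNX Hub's internals; the sketch only models the logic of comparing a model's hash against a manifest fetched from the same repository:

```python
import hashlib

def repo_checksum_passes(model_bytes: bytes, manifest_sha256: str) -> bool:
    # The verification pattern in question: hash the downloaded model and
    # compare it to a SHA256 manifest fetched from the SAME repository.
    return hashlib.sha256(model_bytes).hexdigest() == manifest_sha256

# Legitimate state: the repository hosts both the model and its manifest.
model = b"legitimate model"
manifest = hashlib.sha256(model).hexdigest()
assert repo_checksum_passes(model, manifest)

# An attacker with write access to the repository replaces BOTH artifacts.
evil_model = b"backdoored model"
evil_manifest = hashlib.sha256(evil_model).hexdigest()

# The co-hosted checksum still passes: the check detects nothing,
# because the manifest shares the same trust boundary as the model.
assert repo_checksum_passes(evil_model, evil_manifest)
```

With `silent=True`, the one remaining signal, a prompt warning that the source is untrusted, is suppressed as well, so the swap goes entirely unnoticed.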
Details
The vulnerability stems from a design flaw in ONNX Hub's integrity verification process. Model integrity is nominally ensured via a SHA256 manifest, a cryptographic checksum intended to detect tampering. However, this manifest is retrieved from the same repository hosting the model files. Consequently, an attacker who compromises the repository can simultaneously replace both the model and its corresponding manifest, nullifying the checksum's effectiveness. The `silent=True` parameter compounds this issue by eliminating the final safeguard: a user-facing warning that the model originates from an untrusted source. This vulnerability carries far-reaching consequences, as it can enable data poisoning, backdoor attacks, and operational disruption in production systems relying on ONNX models.
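The design flaw described above suggests an obvious mitigation: establish an independent trust anchor by recording a model's SHA256 digest at vetting time, storing it outside the model repository (for example, pinned in the consuming project's own source control), and verifying every download against that pinned value before use. A minimal sketch of such a check, assuming the helper name and pinned digest are illustrative and not part of ONNX's API:

```python
import hashlib

def verify_pinned_sha256(model_bytes: bytes, pinned_sha256: str) -> bool:
    """Compare a downloaded model's SHA256 digest against a hash pinned
    outside the model repository (the independent trust anchor)."""
    return hashlib.sha256(model_bytes).hexdigest() == pinned_sha256.lower()

# Stand-in bytes; a real pipeline would hash the downloaded .onnx file
# and compare it to a digest recorded when the model was originally vetted.
data = b"example model bytes"
pinned = hashlib.sha256(data).hexdigest()

print(verify_pinned_sha256(data, pinned))        # matches the trust anchor
print(verify_pinned_sha256(b"tampered", pinned)) # tampered artifact is rejected
```

Because the pinned digest lives outside the repository, an attacker who replaces both the model and its co-hosted manifest still cannot make this check pass.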