Security Vulnerabilities Found in Popular OpenClaw Skills
The article reports on a security analysis of 25 popular OpenClaw skills, which are third-party tools that can be used with AI assistants like Claude Code. The analysis found significant security issues, including hardcoded secrets, unsafe code execution, and excessive file system access.
Why it matters
The security issues found in popular OpenClaw skills highlight the risks of running third-party code in AI assistants, which could lead to data breaches or system compromise.
Key Points
- 25 popular OpenClaw skill repositories were scanned for security vulnerabilities
- 1,195 total findings were detected, including 25 critical, 615 high, and 555 medium severity issues
- 4 out of 25 repositories (16%) had critical findings, and 9 (36%) scored below 20 out of 100
- The most common issues were unsafe code execution patterns and excessive file system access
Details
The article describes the security scanning process, which looked for hardcoded secrets, unsafe code execution, file system access, data exfiltration patterns, and code obfuscation. The results showed that many OpenClaw skills carry significant security vulnerabilities: the official skill registry and a security-focused plugin both scored 0 out of 100, and the core OpenClaw framework and a medical AI skill also had critical findings. These vulnerabilities put the host machine at risk whenever the AI assistant runs these third-party tools. The article suggests that Anthropic's decision to charge extra for OpenClaw support in Claude Code is likely motivated by these security risks.
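The scan categories described above (secrets, unsafe execution, file access, exfiltration) can be approximated with a simple pattern-based scanner. The article does not publish the actual rules or scoring formula, so the regexes, severity weights, and the `scan_source`/`score` helpers below are all illustrative assumptions, not the tool the analysis used:

```python
import re

# Hypothetical severity-tagged patterns approximating the categories the
# article mentions. The real scanner's rules are not given; these are
# illustrative stand-ins only.
PATTERNS = {
    "hardcoded_secret": (
        re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
        "critical",
    ),
    "unsafe_exec": (
        re.compile(r"\b(eval|exec)\s*\(|subprocess\.\w+\(.*shell\s*=\s*True"),
        "high",
    ),
    "broad_fs_access": (
        re.compile(r"(?:os\.walk|shutil\.rmtree)\(\s*['\"]/"),
        "medium",
    ),
    "exfiltration": (
        re.compile(r"requests\.(post|put)\(\s*['\"]https?://"),
        "high",
    ),
}

def scan_source(source: str):
    """Return (category, severity, line_no) findings for one file's text."""
    findings = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for category, (pattern, severity) in PATTERNS.items():
            if pattern.search(line):
                findings.append((category, severity, line_no))
    return findings

def score(findings) -> int:
    """Crude 0-100 score: subtract a weighted penalty per finding."""
    weights = {"critical": 40, "high": 15, "medium": 5}
    penalty = sum(weights[sev] for _, sev, _ in findings)
    return max(0, 100 - penalty)
```

Under this toy scoring, a skill with one hardcoded key and one `eval(...)` call would already drop to 45/100, which gives a feel for how a repository accumulating hundreds of findings could bottom out at a score of 0.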