AI Agent Skill Security Report — 2026-03-22

A security audit of the AI agent skill ecosystem deeply analyzed 386 of 33,155 indexed skills, finding 179 safe, 183 suspicious, and 22 malicious. The report highlights several high-risk skills and provides recommendations for protecting against malicious AI agent skills.

💡 Why it matters

This report is crucial for ensuring the security and trustworthiness of the AI agent skill ecosystem, as malicious skills can pose significant risks to users and their systems.

Key Points

  • Security audit of 33,155 indexed AI agent skills, with 386 deeply analyzed
  • 179 skills deemed safe, 183 suspicious, and 22 malicious
  • Highlighted malicious skills include 'airc', 'voidborne', 'agentchan', and 'arxiv-skill-learning'
  • Key threats include dynamic code evaluation, environment variable exfiltration, and outbound data transfer
  • Recommendations: audit skills, search safely, and run pre-install checks

Details

The article presents the findings of a security audit of the AI agent skill ecosystem, covering Claude Code skills and MCP servers. Of 33,155 indexed skills, 386 were deeply analyzed: 179 were deemed safe, 183 suspicious, and 22 malicious. The report highlights several notable malicious skills, such as 'airc', 'voidborne', 'agentchan', and 'arxiv-skill-learning', which exhibit critical threat behaviors like dynamic code evaluation, environment variable exfiltration, and outbound data transfer. It recommends that users protect themselves by auditing skills, using safe search tools, and running pre-install checks on skills before deployment.
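The report does not publish its audit tooling, but the pre-install check it recommends can be approximated with a simple static scan for the three threat categories it names. The sketch below is a hypothetical illustration (the patterns and `scan_skill` helper are assumptions, not the report's actual method); a regex match is only a signal for manual review, not proof of malice.

```python
import re
from pathlib import Path

# Hypothetical risk signatures for the three threat categories named in the
# report: dynamic code evaluation, environment variable exfiltration, and
# outbound data transfer. Real audit tooling would be far more thorough.
RISKY_PATTERNS = {
    "dynamic code evaluation": re.compile(r"\b(eval|exec)\s*\("),
    "environment variable access": re.compile(r"os\.environ|getenv\s*\("),
    "outbound data transfer": re.compile(r"requests\.(get|post)\b|urllib|\bcurl\s"),
}

def scan_skill(skill_dir: str) -> dict:
    """Scan a skill directory and return {threat category: [matching files]}."""
    findings = {}
    for path in Path(skill_dir).rglob("*"):
        # Only inspect text-like files a skill typically ships.
        if not path.is_file() or path.suffix not in {".py", ".sh", ".js", ".md"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(text):
                findings.setdefault(label, []).append(str(path))
    return findings
```

Running such a check before installing a skill surfaces the behaviors the report flags, after which a human can read the matched files and decide whether the access is legitimate.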


AI Curator - Daily AI News Curation
