One MCP Server for Ollama, Codex, and OpenAI-Compatible Models

The article introduces 'helix-agents', an evolved version of 'helix-agent' that allows Claude Code to delegate work across Ollama, Codex, and OpenAI-compatible models from a single MCP server.

💡 Why it matters

This project enables developers to leverage the strengths of various AI models (Ollama, Codex, OpenAI-compatible) through a unified interface, improving workflow efficiency and flexibility.

Key Points

  1. Adds multi-provider switching, Codex-backed code delegation, OpenAI-compatible chat API support, and Claude Code-style background agents
  2. Supports two delegation styles: a built-in ReAct loop for Ollama and OpenAI-compatible models, and an autonomous Codex-backed path for repo-heavy work
  3. Provides a consistent runtime for using different AI providers (Ollama, Codex, OpenAI-compatible) according to their strengths
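The built-in ReAct loop mentioned above can be sketched in miniature. This is a hypothetical illustration, not code from helix-agents: `call_model` stands in for a chat-completions call to Ollama or any OpenAI-compatible endpoint, and the `count_files` tool and JSON message shapes are invented for the example.

```python
# Minimal ReAct-style loop: the model alternates between requesting a
# tool call and emitting a final answer. All names here are illustrative.
import json

def call_model(messages):
    # Stub for a chat call to Ollama or an OpenAI-compatible API.
    # Returns a tool request until it has seen an observation.
    last = messages[-1]["content"]
    if "observation:" in last:
        return json.dumps({"final": f"Done. ({last})"})
    return json.dumps({"tool": "count_files", "args": {"path": "."}})

TOOLS = {"count_files": lambda path: 42}  # toy tool registry

def react_loop(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = json.loads(call_model(messages))
        if "final" in reply:
            return reply["final"]
        # Act step: run the requested tool, feed the observation back.
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "user", "content": f"observation: {result}"})
    return "step budget exhausted"
```

In the real server the tool registry and step budget would live inside the delegated worker, so Claude Code only sees the final result.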

Details

The article describes how the original 'helix-agent' project, which focused on sending routine work to local Ollama models, has been expanded into 'helix-agents'. The new version adds support for multiple AI providers: Codex for code-heavy implementation and repo work, and OpenAI-compatible models for hosted chat APIs. The runtime now supports two delegation styles, a built-in ReAct loop for Ollama and OpenAI-compatible models, and an autonomous Codex-backed path for repo-heavy work. This enables a more flexible workflow in which Claude Code can spawn workers, send follow-up instructions, wait for completion, and inspect the results. The goal is a single consistent runtime for using different AI providers according to their strengths, instead of wiring up three separate MCP servers with different interaction models.
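The spawn / follow-up / wait / inspect lifecycle described above can be sketched as below. This is a hedged illustration under assumed names: `spawn_worker`, `send_followup`, and `wait_for` are hypothetical stand-ins for the server's MCP tools, and the worker body just concatenates instructions instead of driving a real provider.

```python
# Illustrative worker lifecycle: spawn a background worker, send it a
# follow-up instruction, then wait for and inspect its result.
import queue
import threading

class Worker:
    def __init__(self, task):
        self.inbox = queue.Queue()
        self.result = None
        self.done = threading.Event()
        self.inbox.put(task)
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # A real worker would drive Ollama, Codex, or an
        # OpenAI-compatible model; here we only record the messages.
        parts = []
        while True:
            msg = self.inbox.get()
            if msg is None:  # sentinel: no more follow-ups
                break
            parts.append(msg)
        self.result = " / ".join(parts)
        self.done.set()

def spawn_worker(task):      # would map to a "spawn" tool call
    return Worker(task)

def send_followup(w, text):  # follow-up instructions to a live worker
    w.inbox.put(text)

def wait_for(w):             # block until the worker completes
    w.inbox.put(None)
    w.done.wait()
    return w.result
```

Usage mirrors the workflow in the article: `w = spawn_worker("summarize the repo")`, then `send_followup(w, "focus on tests")`, then `wait_for(w)` returns the combined result for inspection.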

AI Curator - Daily AI News Curation

AI Curator

Your AI news assistant

Ask me anything about AI

I can help you understand AI news, trends, and technologies