Copilot CLI Expands with BYOK, Rubber Duck, and Direct Model Flags
GitHub's Copilot CLI now supports bringing your own AI model provider and running local models, enabling offline and air-gapped development workflows. It also introduces 'Rubber Duck', an experimental feature that uses a second AI model to review the primary agent's plans and implementations.
Why it matters
The Copilot CLI updates give developers more flexibility and control over the AI models used in their workflows, enabling offline and air-gapped development. The Rubber Duck feature also improves the reliability and robustness of the AI-assisted coding experience.
Key Points
- Copilot CLI now supports BYOK (bring your own key) and running local AI models
- GitHub authentication is optional when using your own AI provider
- Rubber Duck uses a second AI model to review the primary agent's work at key checkpoints
- Rubber Duck can catch architectural issues, bugs, and cross-file conflicts that the primary agent missed
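BYOK setups like this are typically wired up through environment variables. The sketch below is illustrative only: the variable names are assumptions, not documented Copilot CLI settings, so check the official Copilot CLI documentation for the actual configuration keys.

```shell
# Hypothetical environment variables for routing Copilot CLI inference
# through your own provider -- the real names may differ; consult the
# official Copilot CLI docs before relying on these.
export COPILOT_BYOK_PROVIDER="azure-openai"                        # assumed: provider selector
export COPILOT_BYOK_ENDPOINT="https://my-deployment.openai.azure.com"  # assumed: your endpoint
export COPILOT_BYOK_API_KEY="$MY_AZURE_OPENAI_KEY"                 # assumed: your provider key
```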
Details
The Copilot CLI update lets users configure environment variables to route all inference through their own AI model provider, such as Azure OpenAI, Anthropic, or OpenAI. This decouples the CLI's agentic framework from the underlying language model, and when combined with a local model, the CLI can run entirely offline in air-gapped environments.

The 'Rubber Duck' feature introduces a second AI model (GPT-5.4) to review the primary agent's (Claude) plans and implementations at critical checkpoints, catching issues the primary agent may have missed. This cross-model review has been shown to close 74.7% of the performance gap between the Claude Sonnet and Opus models on a benchmark of real-world coding problems.
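The "74.7% of the performance gap" figure describes how far a cheaper model plus review moves toward a stronger model's benchmark score. A minimal sketch of how such a gap-closure metric is typically computed, using hypothetical scores (the numbers below are assumptions for illustration, not the benchmark's actual values):

```python
def gap_closed(baseline: float, ceiling: float, observed: float) -> float:
    """Fraction of the baseline-to-ceiling gap recovered by the observed score."""
    return (observed - baseline) / (ceiling - baseline)

# Hypothetical benchmark scores (not from the article):
sonnet = 60.0          # primary model alone (baseline)
opus = 70.0            # stronger reference model (ceiling)
with_review = 67.47    # primary model + cross-model review

print(round(gap_closed(sonnet, opus, with_review), 3))  # → 0.747
```

By this definition, a score of 0.0 means review added nothing over the baseline model, and 1.0 means it fully matched the stronger model.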