Anthropic Protocol Support, OpenAI Codex Integration, and Enhanced Security

Julius de Bruijn

We're excited to announce Nexus 0.5.1, bringing native Anthropic protocol support that enables Claude Code and Claude Desktop integration, fully validated OpenAI Codex compatibility, and enhanced security through Chainguard's hardened container images. These updates expand Nexus's capabilities as your unified AI router, letting you govern, monitor, and secure AI agent interactions across multiple protocols.

Nexus 0.5.0 introduces multi-protocol support, expanding beyond OpenAI to include the full Anthropic API. This enables you to run multiple protocol endpoints simultaneously.

```toml
# Legacy configuration (deprecated)
[llm]
enabled = true
path = "/llm"

# Multi-protocol configuration (0.5.0+)
[llm.protocols.openai]
enabled = true
path = "/llm/openai"

[llm.protocols.anthropic]
enabled = true
path = "/llm/anthropic"
```

The Anthropic protocol integration unlocks compatibility with Claude Code, Claude Desktop, Cursor, and the Anthropic SDK libraries. We've thoroughly tested Claude Code integration, verifying full support for tool calls, streaming responses, and multi-turn conversations. You can route requests to any backend provider—use OpenAI, Google, or AWS Bedrock models through the Anthropic protocol—though provider-specific features may vary.
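As a rough sketch of this cross-provider routing, an Anthropic-style Messages request can simply name a non-Anthropic backend model. The exact endpoint path under `/llm/anthropic` and the `openai/gpt-5` model name below are assumptions based on the configuration shown in this post, not confirmed API details:

```bash
# Illustrative only: assumes Nexus on localhost:8000 with the Anthropic
# protocol mounted at /llm/anthropic and a model named "openai/gpt-5".
curl http://127.0.0.1:8000/llm/anthropic/v1/messages \
  -H "content-type: application/json" \
  -d '{
    "model": "openai/gpt-5",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello from the Anthropic protocol!"}]
  }'
```

The request shape follows the Anthropic Messages API, while the `model` field selects whichever backend you've configured in Nexus.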

Enable the Anthropic protocol in your Nexus configuration:

```toml
[llm.protocols.anthropic]
enabled = true
path = "/llm/anthropic"
```

Configure Claude Code to use your Nexus instance with two environment variables:

```bash
# If Nexus is listening on localhost, port 8000
export ANTHROPIC_BASE_URL="http://127.0.0.1:8000/llm/anthropic"

# Use any model configured in your Nexus setup
export ANTHROPIC_MODEL="openai/gpt-5"

# Launch Claude Code
claude
```

For detailed configuration options and advanced usage, check out our Claude Code integration guide.

While Nexus has supported the OpenAI protocol from day one, version 0.5.1 marks our first fully validated OpenAI Codex integration. We've tested every Codex feature across all supported language models, verifying that tool calling, streaming, and conversation management work reliably.

First, enable the OpenAI protocol in your Nexus configuration:

```toml
[llm.protocols.openai]
enabled = true
path = "/llm/openai"
```

Next, update your Codex configuration in ~/.codex/config.toml:

```toml
[model_providers.nexus]
name = "Nexus AI router"
# If Nexus is listening on localhost, port 8000. Note the /v1 suffix
base_url = "http://127.0.0.1:8000/llm/openai/v1"
wire_api = "chat"
query_params = {}
```

Start Codex with the following arguments:

```bash
# Use any model configured in your Nexus setup
codex -c model="openai/gpt-5" -c model_provider=nexus
```

For more configuration options and troubleshooting tips, see our OpenAI Codex integration guide.

Responding to your security concerns, we've migrated our Docker images from Debian to Chainguard's Wolfi base. Chainguard specializes in hardened, minimal container images that eliminate unnecessary packages and libraries. Fewer dependencies mean smaller attack surfaces and reduced vulnerability risks.

The transition requires zero configuration changes—your existing deployments work unchanged. Wolfi uses glibc (not musl), avoiding the DNS resolution issues common with Alpine-based images while maintaining enterprise-grade security.

We've expanded OpenTelemetry exporter configuration with support for custom headers and TLS settings. This enables secure telemetry transmission to cloud providers and adds authentication flexibility for enterprise deployments.
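A configuration along these lines might look as follows. This is a hedged sketch: the section and key names below are illustrative assumptions, not the documented schema, so check the Nexus configuration reference for the exact spelling:

```toml
# Illustrative only: key names are assumptions, not the documented schema.
[telemetry.exporters.otlp]
enabled = true
endpoint = "https://otel-collector.example.com:4317"

# Custom headers, e.g. for authenticating against a managed collector
[telemetry.exporters.otlp.headers]
authorization = "Bearer ${OTEL_AUTH_TOKEN}"

# TLS settings for secure transmission
[telemetry.exporters.otlp.tls]
ca_certificate = "/etc/nexus/certs/ca.pem"
```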

Tool calling now handles complex parameter types more reliably, with better support for nested objects and arrays across all LLM providers. This improvement ensures consistent behavior whether you're using OpenAI, Anthropic, Google, or AWS Bedrock models.
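For illustration, here is the kind of tool definition this fix covers: an object parameter nested inside the schema, plus an array of objects. The tool and field names are hypothetical; the shape follows the JSON Schema style used by the major providers' tool-calling APIs:

```json
{
  "name": "create_order",
  "description": "Create an order with a nested customer object and line items",
  "input_schema": {
    "type": "object",
    "properties": {
      "customer": {
        "type": "object",
        "properties": {
          "id": { "type": "string" },
          "email": { "type": "string" }
        }
      },
      "items": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "sku": { "type": "string" },
            "quantity": { "type": "integer" }
          }
        }
      }
    },
    "required": ["customer", "items"]
  }
}
```

Definitions like this are now translated consistently regardless of which backend protocol serves the request.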

Nexus 0.5.1 represents a significant milestone in our journey toward becoming the definitive AI router for production workloads. With multi-protocol support, you can now consolidate Claude Code, OpenAI Codex, and other AI agents behind a single governance layer. The hardened Docker images provide enterprise-ready security, while enhanced telemetry ensures you maintain complete visibility into your AI operations.

We're committed to improving AI agent compatibility. Stay tuned for upcoming features including monitoring MCP tool use per LLM session, alerts, Google Vertex support, and expanded observability capabilities.

© Nexus AI, Inc.