Nexus supports the OpenAI protocol natively, allowing you to use it as a proxy for OpenAI Codex CLI. This enables you to leverage Nexus's features like rate limiting, observability, and multi-provider routing while using Codex.

To follow this guide, you'll need:

  • Nexus v0.5.1 or later
  • The OpenAI Codex CLI installed (github.com/openai/codex)
  • API keys for your preferred providers (OpenAI, Anthropic, etc.)

Update your nexus.toml to enable the OpenAI protocol and configure your providers:

[llm]
enabled = true

# Enable the OpenAI protocol endpoint
[llm.protocols.openai]
enabled = true
path = "/llm/openai"  # This is the default path

# Configure your providers
[llm.providers.openai]
type = "openai"
api_key = "{{ env.OPENAI_API_KEY }}"

[llm.providers.anthropic]
type = "anthropic"
api_key = "{{ env.ANTHROPIC_API_KEY }}"

# Configure the models you want to use
[llm.providers.openai.models."gpt-4-turbo-preview"]
[llm.providers.openai.models."gpt-4"]
[llm.providers.anthropic.models."claude-3-5-sonnet-20241022"]
[llm.providers.anthropic.models."claude-3-5-haiku-latest"]

Set your API keys:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-api03-..."

Then start Nexus:

nexus --config nexus.toml

By default, Nexus will listen on http://localhost:6000.
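You can confirm it's up with the health endpoint:

curl http://localhost:6000/health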

Configure Codex CLI to use Nexus by editing ~/.codex/config.toml:

Important: Nexus serves its OpenAI-compatible endpoint at /llm/openai, but Codex expects the /v1 suffix. Make sure base_url ends with /v1.

[model_providers.nexus]
name = "Nexus AI router"
base_url = "http://127.0.0.1:6000/llm/openai/v1"
wire_api = "chat"
query_params = {}
  • base_url must point to the Nexus OpenAI-compatible endpoint and include the /v1 suffix (adjust host/port if Nexus runs elsewhere)
  • wire_api should be set to "chat" for chat completions
  • query_params can stay empty, but the table must exist to satisfy Codex's schema
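If you'd rather not pass -c flags on every invocation, Codex also accepts top-level defaults in ~/.codex/config.toml. A minimal sketch (key names may vary between Codex versions, so verify against your installed version's documentation):

model = "openai/gpt-4"
model_provider = "nexus"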

Start Codex with a Nexus-managed model:

codex -c model="openai/gpt-4" -c model_provider=nexus

You can use any provider/model pair that you have configured in Nexus:

# Use OpenAI models
codex -c model="openai/gpt-4-turbo-preview" -c model_provider=nexus

# Use Anthropic models through OpenAI-compatible interface
codex -c model="anthropic/claude-3-5-haiku-latest" -c model_provider=nexus

# Use other configured models
codex -c model="groq/llama-3.1-70b-versatile" -c model_provider=nexus

If you're running Nexus in Docker:

services:
  nexus:
    image: ghcr.io/grafbase/nexus:latest
    ports:
      - "6000:6000"
    volumes:
      - ./nexus.toml:/etc/nexus.toml
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
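Start the container in the background:

docker compose up -d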

Then configure Codex to use the containerized Nexus:

[model_providers.nexus]
name = "Nexus AI router"
base_url = "http://localhost:6000/llm/openai/v1"
wire_api = "chat"
query_params = {}
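Note that localhost works when Codex runs on the host machine. If Codex itself runs in a container on the same Compose network, use the service name instead (assuming the service is named nexus as in the example above):

base_url = "http://nexus:6000/llm/openai/v1"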

Using Nexus with OpenAI Codex provides:

  1. Unified Gateway: Route all AI requests through a single endpoint
  2. Multi-Provider Support: Easily switch between OpenAI, Anthropic, and other providers
  3. Rate Limiting: Control token consumption per user and model
  4. Observability: Built-in OpenTelemetry metrics, traces, and logs
  5. Cost Control: Monitor and limit token usage across all providers
  6. Model Management: Configure and manage models centrally
  7. Security: Add authentication, CORS, and CSRF protection

The OpenAI protocol implementation in Nexus:

  • Fully supports OpenAI's chat completions format
  • Handles streaming responses
  • Supports function calling (tools)
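You can exercise the endpoint directly with a standard OpenAI-style chat completions request. A sketch using the default host and path from this guide (add an Authorization header if you've enabled client authentication in Nexus, and set "stream": true to test streaming):

curl http://localhost:6000/llm/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'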

If Codex can't connect to Nexus:

  1. Verify Nexus is running:

    curl http://localhost:6000/health
  2. Check the OpenAI protocol is enabled:

    curl http://localhost:6000/llm/openai/v1/models
  3. Verify your Codex configuration:

    grep -A 4 model_providers.nexus ~/.codex/config.toml

If you get a "model not found" error:

  1. Ensure the model is configured in nexus.toml
  2. Use the correct format: provider/model-name
  3. List available models:
    curl http://localhost:6000/llm/openai/v1/models | jq .

If you get authentication errors:

  1. Verify your API keys are set correctly:

    echo $OPENAI_API_KEY
    echo $ANTHROPIC_API_KEY
  2. Check Nexus logs for more details:

    nexus --log debug

If Codex fails to use the Nexus provider:

  1. Ensure the model_providers.nexus section exists in ~/.codex/config.toml
  2. Verify the base_url ends with /v1
  3. Check that wire_api is set to "chat"
  4. Ensure query_params table exists (even if empty)

Once everything is configured, you can run Codex sessions through Nexus:

# Start a Codex session with GPT-4
codex -c model="openai/gpt-4" -c model_provider=nexus

# Execute a single command
codex exec -c model="anthropic/claude-3-5-sonnet-20241022" -c model_provider=nexus "Write a Python function to calculate fibonacci"

Compare responses from different models:

# Try with GPT-4
codex exec -c model="openai/gpt-4" -c model_provider=nexus "Explain quantum computing"

# Try with Claude
codex exec -c model="anthropic/claude-3-5-sonnet-20241022" -c model_provider=nexus "Explain quantum computing"

# Try with Llama
codex exec -c model="groq/llama-3.1-70b-versatile" -c model_provider=nexus "Explain quantum computing"