Configure multiple AI model providers to access them through a single OpenAI-compatible API. Nexus supports OpenAI, Anthropic, Google, and AWS Bedrock providers.
OpenAI-compatible provider supporting GPT models and function calling.
[llm.providers.openai]
type = "openai"
api_key = "{{ env.OPENAI_API_KEY }}"
base_url = "https://api.openai.com/v1" # Optional - custom endpoint
# Models must be explicitly configured
[llm.providers.openai.models.gpt-4]
# Model is available as "openai/gpt-4" and maps to upstream "gpt-4"
[llm.providers.openai.models."gpt-3.5-turbo"]
# Model is available as "openai/gpt-3.5-turbo" and maps to upstream "gpt-3.5-turbo"
- Function Calling: Full support for tools and function calls
- Native Compatibility: Direct API pass-through for maximum compatibility
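The provider/model naming convention above can be sketched in a few lines. This is a conceptual illustration of how a model ID such as "openai/gpt-4" splits into a provider name and an upstream model name, not Nexus's actual routing code:

```python
def resolve_model(model_id: str) -> tuple[str, str]:
    """Split a Nexus-style model ID into (provider, upstream model)."""
    provider, _, upstream = model_id.partition("/")
    if not upstream:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, upstream

print(resolve_model("openai/gpt-4"))  # ('openai', 'gpt-4')
```

Splitting only on the first "/" keeps upstream IDs that themselves contain dots or colons (such as Bedrock model IDs) intact.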
Anthropic Claude models with automatic message format conversion.
[llm.providers.anthropic]
type = "anthropic"
api_key = "{{ env.ANTHROPIC_API_KEY }}"
base_url = "https://api.anthropic.com/v1" # Optional - custom endpoint
# Models must be explicitly configured
[llm.providers.anthropic.models."claude-3-5-sonnet-20241022"]
# Model is available as "anthropic/claude-3-5-sonnet-20241022"
[llm.providers.anthropic.models."claude-3-opus-20240229"]
# Model is available as "anthropic/claude-3-opus-20240229"
Google Gemini models with role mapping and system instruction support.
[llm.providers.google]
type = "google"
api_key = "{{ env.GOOGLE_API_KEY }}"
base_url = "https://generativelanguage.googleapis.com/v1beta" # Optional
# Models must be explicitly configured
[llm.providers.google.models."gemini-1.5-pro"]
# Model is available as "google/gemini-1.5-pro"
[llm.providers.google.models."gemini-1.5-flash"]
# Model is available as "google/gemini-1.5-flash" (model names with dots must be quoted)
AWS Bedrock provides access to foundation models from multiple vendors including Anthropic, Amazon, Meta, Mistral, Cohere, and DeepSeek. Nexus uses AWS Bedrock's unified Converse API for consistent interaction across all Bedrock models.
[llm.providers.bedrock]
type = "bedrock"
region = "us-east-1" # AWS region (required)
profile = "production" # Optional AWS profile name
# Alternative authentication methods:
# access_key_id = "{{ env.AWS_ACCESS_KEY_ID }}"
# secret_access_key = "{{ env.AWS_SECRET_ACCESS_KEY }}"
# Models must be explicitly configured
[llm.providers.bedrock.models."anthropic.claude-3-sonnet-20240229-v1:0"]
# Available as "bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
[llm.providers.bedrock.models."anthropic.claude-3-haiku-20240307-v1:0"]
# Available as "bedrock/anthropic.claude-3-haiku-20240307-v1:0"
# You can also create aliases for easier use
[llm.providers.bedrock.models.fast]
rename = "anthropic.claude-3-haiku-20240307-v1:0"
# Available as "bedrock/fast" which maps to Claude 3 Haiku
Bedrock supports multiple authentication methods:
- AWS Profile (recommended for local development):
[llm.providers.bedrock]
type = "bedrock"
region = "us-east-1"
profile = "production" # Uses AWS credentials from ~/.aws/credentials
- Explicit Credentials (for CI/CD or containers):
[llm.providers.bedrock]
type = "bedrock"
region = "us-east-1"
access_key_id = "{{ env.AWS_ACCESS_KEY_ID }}"
secret_access_key = "{{ env.AWS_SECRET_ACCESS_KEY }}"
# session_token = "{{ env.AWS_SESSION_TOKEN }}" # Optional for temporary credentials
- IAM Role (for EC2 instances or ECS tasks):
[llm.providers.bedrock]
type = "bedrock"
region = "us-east-1"
# No credentials needed - uses instance/task IAM role
For the complete list of supported Bedrock models and further configuration examples, see the Model Management documentation.
Configure custom header transformation rules for provider requests. This allows you to forward headers from incoming requests, add static headers, remove sensitive headers, or rename headers for compatibility.
Headers are configured as an array of rules, where each rule specifies a transformation type and its parameters:
[llm.providers.openai]
type = "openai"
api_key = "{{ env.OPENAI_API_KEY }}"
# Forward specific header from incoming requests
[[llm.providers.openai.headers]]
rule = "forward"
name = "x-user-id"
# Forward headers matching a pattern
[[llm.providers.openai.headers]]
rule = "forward"
pattern = "^x-custom-" # Forward all headers starting with "x-custom-"
# Add static headers to all requests
[[llm.providers.openai.headers]]
rule = "insert"
name = "x-api-version"
value = "2024-01"
[[llm.providers.openai.headers]]
rule = "insert"
name = "x-client-id"
value = "{{ env.CLIENT_ID }}" # Support for environment variables
# Remove headers before sending to provider
[[llm.providers.openai.headers]]
rule = "remove"
name = "x-internal-token"
# Remove headers matching a pattern
[[llm.providers.openai.headers]]
rule = "remove"
pattern = "^x-debug-" # Remove all debug headers
# Rename and duplicate headers (preserves original)
[[llm.providers.openai.headers]]
rule = "rename_duplicate"
name = "x-custom-org"
rename = "x-organization"
Forwards headers from incoming requests to the provider:
[[llm.providers.anthropic.headers]]
rule = "forward"
name = "x-request-id" # Forward specific header
# With rename and default value
[[llm.providers.anthropic.headers]]
rule = "forward"
name = "x-trace-id"
rename = "anthropic-trace-id" # Optional: rename the header
default = "{{ env.DEFAULT_TRACE_ID }}" # Optional: default if not present
Adds static headers to all provider requests:
[[llm.providers.google.headers]]
rule = "insert"
name = "x-goog-api-version"
value = "v1beta"
Removes headers before sending to the provider:
[[llm.providers.openai.headers]]
rule = "remove"
name = "cookie" # Remove specific header
[[llm.providers.openai.headers]]
rule = "remove"
pattern = "^x-internal-" # Remove all internal headers
Duplicates a header with a new name while preserving the original header:
[[llm.providers.anthropic.headers]]
rule = "rename_duplicate"
name = "authorization"
rename = "x-original-auth"
default = "Bearer {{ env.DEFAULT_TOKEN }}" # Optional: used if original header doesn't exist
# This results in both headers being present:
# - authorization: <original value>
# - x-original-auth: <same value>
Use regex patterns to match multiple headers:
# Forward all headers starting with "x-org-"
[[llm.providers.openai.headers]]
rule = "forward"
pattern = "^x-org-"
# Remove all temporary headers
[[llm.providers.openai.headers]]
rule = "remove"
pattern = "^x-temp-"
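Pattern rules are ordinary regular expressions matched against header names, so anchoring with ^ restricts a rule to a prefix. Assuming unanchored search semantics (an assumption; verify against your Nexus version), the two patterns above select headers like this:

```python
import re

headers = ["x-org-id", "x-org-role", "x-temp-debug", "content-type"]

forwarded = [h for h in headers if re.search(r"^x-org-", h)]
removed = [h for h in headers if re.search(r"^x-temp-", h)]

print(forwarded)  # ['x-org-id', 'x-org-role']
print(removed)    # ['x-temp-debug']
```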
Nexus automatically protects sensitive headers by default. The following headers are never forwarded unless explicitly configured:
authorization
x-api-key
api-key
cookie
set-cookie
To forward sensitive headers, you must explicitly include them in a forward rule:
# Explicitly forward authorization header (use with caution)
[[llm.providers.custom.headers]]
rule = "forward"
name = "authorization"
Headers start with an empty set and are processed sequentially in the order they are defined in your configuration. Both forward and insert rules will override any existing headers with the same name.
For example:
# 1. Insert a static header first
[[llm.providers.openai.headers]]
rule = "insert"
name = "x-api-version"
value = "v1"
# 2. Forward headers matching a pattern (will override existing ones!)
[[llm.providers.openai.headers]]
rule = "forward"
pattern = "^x-custom-"
# If client sends x-custom-override=user-value, it replaces any existing value
# 3. Remove a specific header that may have been forwarded
[[llm.providers.openai.headers]]
rule = "remove"
name = "x-custom-internal"
# 4. Insert to ensure final value (overrides anything forwarded)
[[llm.providers.openai.headers]]
rule = "insert"
name = "x-custom-override"
value = "final-value"
This sequential processing means you can:
- Set defaults with insert, then let forward override them if provided
- Forward headers with patterns, then remove specific unwanted ones
- Ensure critical headers have correct values by inserting after forward
- Build complex header transformation pipelines with precise control
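The four-step configuration above can be simulated to see these ordering effects. This is an illustrative model of the documented semantics, not Nexus's implementation; it assumes lowercase header names and the overwrite behavior described above:

```python
import re

# Rules mirror the numbered configuration above, in order
rules = [
    {"rule": "insert", "name": "x-api-version", "value": "v1"},
    {"rule": "forward", "pattern": "^x-custom-"},
    {"rule": "remove", "name": "x-custom-internal"},
    {"rule": "insert", "name": "x-custom-override", "value": "final-value"},
]

incoming = {
    "x-custom-override": "user-value",
    "x-custom-internal": "secret",
}

outgoing: dict[str, str] = {}  # headers start from an empty set
for r in rules:
    if r["rule"] == "insert":
        outgoing[r["name"]] = r["value"]
    elif r["rule"] == "forward":
        for k, v in incoming.items():
            if re.search(r["pattern"], k):
                outgoing[k] = v
    elif r["rule"] == "remove":
        outgoing.pop(r["name"], None)

print(outgoing)
# {'x-api-version': 'v1', 'x-custom-override': 'final-value'}
```

The forwarded x-custom-internal is stripped by the remove rule, and the final insert wins over the client-supplied x-custom-override.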
[llm.providers.openai]
type = "openai"
api_key = "{{ env.OPENAI_API_KEY }}"
# Forward user context
[[llm.providers.openai.headers]]
rule = "forward"
name = "x-user-id"
# Add OpenAI-specific beta features
[[llm.providers.openai.headers]]
rule = "insert"
name = "OpenAI-Beta"
value = "assistants=v2"
[llm.providers.anthropic]
type = "anthropic"
api_key = "{{ env.ANTHROPIC_API_KEY }}"
# Forward and rename organization header
[[llm.providers.anthropic.headers]]
rule = "forward"
name = "x-org-id"
rename = "anthropic-org-id"
[llm.providers.google]
type = "google"
api_key = "{{ env.GOOGLE_API_KEY }}"
# Add Google Cloud project headers
[[llm.providers.google.headers]]
rule = "insert"
name = "x-goog-user-project"
value = "{{ env.GCP_PROJECT_ID }}"
- type: Provider type - must be "openai", "anthropic", or "google" (required)
- api_key: API key for the provider (required)
- models: At least one model must be configured (required)
- base_url: Custom API endpoint URL (optional)
  - Useful for Azure OpenAI, local deployments, or proxy servers
  - Defaults to the provider's standard endpoint if not specified
- forward_token: Enable token forwarding (see Token Forwarding)
- headers: Array of header transformation rules (optional)
  - Each rule must specify a rule type: forward, insert, remove, or rename_duplicate
  - Use name for specific headers or pattern for regex matching
  - Additional fields depend on the rule type
- type: Must be "bedrock" (required)
- region: AWS region where Bedrock is available (required)
- models: At least one model must be configured (required)
- access_key_id: AWS Access Key ID for authentication (optional)
- secret_access_key: AWS Secret Access Key (optional)
- session_token: AWS Session Token for temporary credentials (optional)
- profile: AWS profile name from ~/.aws/credentials (optional)
- base_url: Custom endpoint URL for VPC endpoints (optional)
Note: Token forwarding is not supported for AWS Bedrock due to AWS SigV4 authentication requirements.
You can configure multiple instances of the same provider type with different names:
[llm.providers.openai_standard]
type = "openai"
api_key = "{{ env.OPENAI_API_KEY }}"
[llm.providers.openai_standard.models.gpt-4]
# Available as "openai_standard/gpt-4"
[llm.providers.azure_openai]
type = "openai"
api_key = "{{ env.AZURE_OPENAI_API_KEY }}"
base_url = "https://your-resource.openai.azure.com/openai/deployments/gpt-4"
[llm.providers.azure_openai.models.gpt-4]
# Available as "azure_openai/gpt-4"
[llm.providers.claude_work]
type = "anthropic"
api_key = "{{ env.ANTHROPIC_WORK_KEY }}"
[llm.providers.claude_work.models."claude-3-5-sonnet-20241022"]
# Available as "claude_work/claude-3-5-sonnet-20241022"
Use {{ env.VARIABLE_NAME }} for environment variable substitution:
[llm.providers.openai]
type = "openai"
api_key = "{{ env.OPENAI_API_KEY }}"
[llm.providers.anthropic]
type = "anthropic"
api_key = "{{ env.ANTHROPIC_API_KEY }}"
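The substitution semantics can be sketched as a simple template expansion at configuration load time. This is an illustrative model; the exact placeholder grammar Nexus accepts is an assumption here:

```python
import os
import re

def expand_env(text: str) -> str:
    """Replace {{ env.NAME }} placeholders with environment variable values."""
    return re.sub(
        r"\{\{\s*env\.([A-Za-z_][A-Za-z0-9_]*)\s*\}\}",
        lambda m: os.environ.get(m.group(1), ""),
        text,
    )

os.environ["OPENAI_API_KEY"] = "sk-test"
print(expand_env('api_key = "{{ env.OPENAI_API_KEY }}"'))
# api_key = "sk-test"
```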
- Explicit Model Configuration: Only configure models you actually need
- Use Environment Variables: Never hardcode API keys
- Custom Provider Names: Use descriptive names for multiple instances
- Test Configuration: Verify models are available using /llm/v1/models
- Monitor Usage: Track which models are being used most
- Rotate Keys: Regularly rotate API keys for security
- Configure Model Management for aliases and custom names
- Set up Token Rate Limiting for usage control
- Enable Token Forwarding for user-provided keys