Nexus exports structured logs using OpenTelemetry, providing detailed application-level insights with automatic trace and span correlation. Logs help you debug issues, audit operations, and understand system behavior with full context from distributed traces.

See the telemetry configuration guide for detailed setup instructions including:

  • OTLP exporter configuration for logs
  • Batch export optimization
  • Integration with logging backends

Use the --log flag or NEXUS_LOG environment variable to control log verbosity. This applies to all spans, logs, and trace events:

Level   Description
off     Disable logging
error   Only log errors
warn    Log errors and warnings
info    Log errors, warnings, and info messages (default)
debug   Log errors, warnings, info, and debug messages
trace   Log errors, warnings, info, debug, and trace messages
# Using command-line flag
nexus --log info    # Default level
nexus --log debug   # Development debugging
nexus --log trace   # Maximum verbosity
nexus --log off     # Disable all logging

# Using environment variable
NEXUS_LOG=debug nexus

# Granular per-module configuration
nexus --log "nexus=debug,tower_http=info"

# Filter out noisy dependencies
nexus --log "info,hyper=warn,h2=warn"

Configure the output format using the --log-style flag or NEXUS_LOG_STYLE environment variable:

Style   Description
color   Colorized text (default with TTY output)
text    Standard text (default with non-TTY output)
json    JSON objects for structured logging
# Using command-line flag
nexus --log-style color   # Colorized for terminal
nexus --log-style text    # Plain text
nexus --log-style json    # Structured JSON

# Using environment variable
NEXUS_LOG_STYLE=json nexus

# Combine log level and style
nexus --log debug --log-style json

The JSON output format is particularly useful for the following; a quick filtering example appears after the list:

  • Log aggregation systems
  • Automated log parsing
  • Production environments where structured logs are required
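
For example, JSON-formatted output can be filtered ad hoc with a tool such as jq. The field names below follow the log record attributes documented later on this page, and the commands assume Nexus writes its logs to standard output or standard error:

# Show only error-level records
nexus --log-style json 2>&1 | jq -c 'select(.severity_text == "ERROR")'

# Print the message body of records that carry a trace ID
nexus --log-style json 2>&1 | jq -r 'select(.trace_id != null) | .body'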

Important: The OpenTelemetry layer internally filters its own logs to prevent recursion. You'll never see OpenTelemetry export logs in the exported logs themselves.

All logs include standard OpenTelemetry attributes plus Nexus-specific context:

  • timestamp - Log record timestamp
  • severity_number - Numeric severity level
  • severity_text - Text representation (ERROR, WARN, INFO, DEBUG, TRACE)
  • body - The log message content
  • observed_timestamp - When the log was observed
  • code.filepath - Source file path
  • code.lineno - Line number in source
  • code.namespace - Rust module path

When logs are emitted within an active span context, they also include:

  • trace_id - Associated trace identifier
  • span_id - Associated span identifier

From telemetry configuration:

  • service.name - Service identifier
  • Custom attributes from [telemetry.resource_attributes] (a sketch follows this list)
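
As a rough illustration, a telemetry block in TOML might look like the sketch below. Only the [telemetry.resource_attributes] table is referenced above; the service name and exporter keys are assumptions, so consult the telemetry configuration guide for the exact schema.

# Hypothetical sketch: apart from [telemetry.resource_attributes], key names are assumptions
[telemetry]
service_name = "nexus"                  # exported as service.name on every log record

[telemetry.resource_attributes]         # custom attributes attached to all telemetry
environment = "production"
region = "eu-west-1"

# Assumed OTLP exporter settings; see the telemetry configuration guide for the real keys
[telemetry.exporters.otlp]
endpoint = "http://otel-collector:4317"
protocol = "grpc"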

Nexus and its dependencies emit logs across various categories (a per-module filtering example follows this list):

  • Request handling and response logs from the HTTP server layer
  • Language model interactions, including token usage, rate limiting, and provider errors
  • Model Context Protocol operations, including tool discovery, execution, and errors
  • Rate limit enforcement and threshold notifications
  • Connection pool and operation logs when using the Redis backend
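
For example, per-module filters can raise verbosity for one subsystem while keeping the rest quiet. The module targets below are assumptions based on the code.namespace values shown elsewhere on this page (such as nexus::llm::handler and nexus::mcp), so confirm them against your own logs:

# Focus on LLM and MCP activity while keeping everything else at warn.
# The nexus::llm and nexus::mcp targets are illustrative, not confirmed module paths.
nexus --log "warn,nexus::llm=debug,nexus::mcp=debug"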

Logs are automatically correlated with active traces and spans. When a log is emitted within an active span context, it includes the trace and span IDs:

{ "timestamp": "2024-01-15T10:30:45.123Z", "severity_text": "INFO", "body": "Processing LLM request", "trace_id": "7a5d2e3f8b9c1d4e6f8a9b0c1d2e3f4a", "span_id": "3e4f5a6b7c8d9e0f", "attributes": { "code.namespace": "nexus::llm::handler", "code.filepath": "src/llm/handler.rs", "code.lineno": 145 } }

This correlation enables:

  • Viewing logs in context of distributed traces
  • Filtering logs by trace or span ID
  • Understanding log sequences within request flows
  • Root cause analysis with full context

Logs are exported via OTLP to any compatible backend. Popular integrations include:

Grafana Loki, via the OpenTelemetry Collector:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  loki:
    endpoint: http://loki:3100/loki/api/v1/push
    labels:
      attributes:
        service_name: "service.name"
        level: "severity_text"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [loki]
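
With the label mapping above, errors can then be queried in Grafana with a LogQL selector along these lines; the service_name value depends on the service.name set in your telemetry configuration:

{service_name="nexus", level="ERROR"}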

Elasticsearch, via the OpenTelemetry Collector:

exporters:
  elasticsearch:
    endpoints: [http://elasticsearch:9200]
    logs_index: nexus-logs
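
A quick sanity check against that index can be run from the command line. This is only a sketch: it assumes the severity_text attribute is indexed under the same name, so adjust the field to match your actual mapping:

# Hypothetical query: field names depend on how the exporter maps log records
curl -s 'http://elasticsearch:9200/nexus-logs/_search' \
  -H 'Content-Type: application/json' \
  -d '{"query": {"term": {"severity_text": "ERROR"}}, "size": 5}'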

Amazon CloudWatch, via the AWS OpenTelemetry Collector:

exporters:
  awscloudwatchlogs:
    region: us-east-1
    log_group_name: /aws/nexus
    log_stream_name: production
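
Once records land in the /aws/nexus log group, CloudWatch Logs Insights can query them. The sketch below assumes the severity_text and body fields survive the export unchanged:

fields @timestamp, body
| filter severity_text = "ERROR"
| sort @timestamp desc
| limit 50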

Search for all error-level logs:

  • Filter: severity_text = "ERROR"
  • Time range: Last 1 hour
  • Group by: code.namespace

Find all logs for a trace:

  • Filter: trace_id = "7a5d2e3f8b9c1d4e6f8a9b0c1d2e3f4a"
  • Sort: By timestamp ascending

Filter logs by module:

  • Filter: code.namespace starts_with "nexus::mcp"
  • Severity: DEBUG or higher

Control log volume with appropriate filtering:

# Production: INFO and above
nexus --log info

# Development: DEBUG for nexus, INFO for dependencies
nexus --log "nexus=debug,info"

# Minimal: Only warnings and errors
nexus --log warn

# Disable all logging for minimal overhead
nexus --log off

When telemetry is not configured, the logging layer has zero overhead - logs are not collected or processed.

  1. Use Appropriate Log Levels

    • ERROR: System failures requiring intervention
    • WARN: Anomalies that need investigation
    • INFO: Key business events and milestones
    • DEBUG: Detailed operational information
    • TRACE: Very verbose debugging data
  2. Leverage Structured Logging

    • Logs include consistent structured attributes
    • Use trace correlation for debugging
    • Filter by module namespace for targeted analysis
  3. Configure for Your Environment

    • Production: --log info with --log-style json for structured logging
    • Staging: --log "nexus=debug,info" for detailed Nexus logs
    • Development: --log debug or --log trace with --log-style color
If logs are not appearing in your backend:

  1. Verify Configuration: Ensure log export is enabled in the telemetry configuration

  2. Check Log Level:

    # Ensure log level allows desired logs
    nexus --log info
  3. Validate Export: Monitor Nexus startup logs for export confirmation
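
To confirm that log records are reaching the collector at all, one option is to temporarily add the OpenTelemetry Collector's debug exporter to the logs pipeline. This is generic collector configuration, not Nexus-specific:

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [debug]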

If trace correlation is missing from exported logs:

  • Ensure tracing is enabled in the configuration
  • Verify trace propagation headers are being sent
  • Check that spans are active when logs are emitted
To reduce log volume and export overhead:

  • Adjust the --log flag to filter unnecessary logs
  • Use module-specific log levels
  • Configure batch export for better throughput