
Integration: OpenClaw with Lunos

This guide shows how to connect OpenClaw to Lunos by registering Lunos as a custom provider. Because Lunos is OpenAI-compatible, OpenClaw can use Lunos model IDs through a single endpoint at https://api.lunos.tech/v1.

Prerequisites

  • OpenClaw installed on your machine
  • A Lunos account and a Lunos API key
  • A model ID available in your Lunos dashboard

Quick setup (CLI onboard)

Run these commands in order. Each one maps to a step in the manual configuration below.


Step 1 — Set your API key

export LUNOS_API_KEY="lns-sk-your-key-here"

Replace lns-sk-your-key-here with the key from your Lunos Dashboard.


Step 2 — Register Lunos as a provider

openclaw onboard \
  --auth-choice apiKey \
  --token-provider lunos \
  --token "$LUNOS_API_KEY"

--token-provider is the provider name that will appear in your config (lunos).
--token pulls the key you exported in Step 1.


Step 3 — Set a default model

openclaw models set lunos/openai/gpt-4o

Replace openai/gpt-4o with whichever model you want as primary.
Full format is lunos/<provider>/<model-id> — check available IDs at lunos.tech/models.
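The reference format splits at the first slash: everything before it is the OpenClaw provider name, everything after is the Lunos model ID. A minimal sketch (a hypothetical helper, not OpenClaw's internal parser):

```python
def parse_model_ref(ref: str) -> tuple[str, str]:
    """Split a fully-qualified reference like 'lunos/openai/gpt-4o'
    into the OpenClaw provider name and the Lunos model ID."""
    provider, _, model_id = ref.partition("/")
    if not provider or not model_id:
        raise ValueError(f"expected '<provider>/<model-id>', got {ref!r}")
    return provider, model_id

# 'lunos' is the provider; 'openai/gpt-4o' is the Lunos model ID.
print(parse_model_ref("lunos/openai/gpt-4o"))  # ('lunos', 'openai/gpt-4o')
```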


Step 4 — Restart the gateway

openclaw gateway restart

A restart is always required after provider changes; the new config won't apply until you do this.


Step 5 — Verify

openclaw models list
openclaw models status

list should show your lunos/* models.
status confirms auth is healthy.

Detailed setup (manual config)

Follow the steps below in order.

1) Grab your Lunos API key

Generate a secret key in the Lunos Dashboard. Keep it somewhere safe; treat it like a password.

You can also verify connectivity with a quick balance check:

curl -X GET "https://api.lunos.tech/v1/balance" \
  -H "Authorization: Bearer YOUR_LUNOS_API_KEY" \
  -H "Content-Type: application/json"

2) Set your API key as an environment variable

Store the secret so it does not end up committed to git.

macOS / Linux (zsh/bash)

export LUNOS_API_KEY="lns-sk-your-key-here"

Windows (PowerShell)

$env:LUNOS_API_KEY = "lns-sk-your-key-here"

3) Open your openclaw.json config

On most systems the config lives at:

  • ~/.openclaw/openclaw.json

Some installs may use:

  • ~/.openclaw/clawdbot.json

Create the file if it does not exist yet.

4) Add Lunos as a custom provider

Add a lunos provider inside models.providers. The api: "openai-completions" field is critical because it tells OpenClaw to use the OpenAI protocol when routing requests to Lunos.

{
  "models": {
    "mode": "merge",
    "providers": {
      "lunos": {
        "baseUrl": "https://api.lunos.tech/v1",
        "apiKey": "${LUNOS_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "openai/gpt-4o",
            "name": "GPT-4o via Lunos",
            "contextWindow": 128000,
            "maxTokens": 4096
          },
          {
            "id": "anthropic/claude-sonnet-4",
            "name": "Claude Sonnet 4 via Lunos",
            "contextWindow": 200000,
            "maxTokens": 8192
          },
          {
            "id": "google/gemini-2.0-flash",
            "name": "Gemini 2.0 Flash via Lunos",
            "contextWindow": 1000000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
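A quick sanity check of the provider block can be sketched in Python (hypothetical validation, not an OpenClaw feature; field names follow the JSON above):

```python
import json

REQUIRED_FIELDS = ("baseUrl", "apiKey", "api", "models")

def check_lunos_provider(config_text: str) -> list[str]:
    """Return a list of problems found in the lunos provider block."""
    cfg = json.loads(config_text)
    provider = cfg.get("models", {}).get("providers", {}).get("lunos")
    if provider is None:
        return ["missing models.providers.lunos"]
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in provider]
    if provider.get("api") != "openai-completions":
        problems.append("api must be 'openai-completions' for Lunos")
    return problems
```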

5) Allowlist your models and set aliases (two-step rule)

OpenClaw uses fully-qualified model references in the format provider/model-id. In addition to defining the provider in providers.lunos.models, you must allowlist the models under agents.defaults.models (and set agents.defaults.model.primary and fallbacks). Otherwise you will get a model not allowed error.

Use this complete example as your openclaw.json:

{
  "models": {
    "mode": "merge",
    "providers": {
      "lunos": {
        "baseUrl": "https://api.lunos.tech/v1",
        "apiKey": "${LUNOS_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "openai/gpt-4o",
            "name": "GPT-4o via Lunos",
            "contextWindow": 128000,
            "maxTokens": 4096
          },
          {
            "id": "anthropic/claude-sonnet-4",
            "name": "Claude Sonnet 4 via Lunos",
            "contextWindow": 200000,
            "maxTokens": 8192
          },
          {
            "id": "google/gemini-2.0-flash",
            "name": "Gemini 2.0 Flash via Lunos",
            "contextWindow": 1000000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "lunos/openai/gpt-4o",
        "fallbacks": [
          "lunos/anthropic/claude-sonnet-4",
          "lunos/google/gemini-2.0-flash"
        ]
      },
      "models": {
        "lunos/openai/gpt-4o": {
          "alias": "gpt-4o"
        },
        "lunos/anthropic/claude-sonnet-4": {
          "alias": "sonnet"
        },
        "lunos/google/gemini-2.0-flash": {
          "alias": "gemini-flash"
        }
      }
    }
  }
}
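The two-step rule can be checked mechanically. This sketch (a hypothetical helper, not part of OpenClaw) flags any allowlisted reference under agents.defaults.models that has no matching provider model definition, which is the usual cause of the model not allowed error:

```python
import json

def unmatched_allowlist_refs(config_text: str) -> list[str]:
    """Return allowlisted model references with no matching
    provider model definition."""
    cfg = json.loads(config_text)
    defined = {
        f"{pname}/{m['id']}"
        for pname, p in cfg.get("models", {}).get("providers", {}).items()
        for m in p.get("models", [])
    }
    allowlisted = cfg.get("agents", {}).get("defaults", {}).get("models", {})
    return [ref for ref in allowlisted if ref not in defined]
```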

6) Restart the gateway and verify

Config changes require a gateway restart. Then list your models to confirm Lunos entries show up.

openclaw gateway restart
openclaw models list
openclaw models status

If you still see model not allowed, ensure the id in providers.lunos.models[].id matches the allowlist reference in agents.defaults.models (for example lunos/openai/gpt-4o).

7) Switch models mid-session

Once the provider is wired up, you can switch models on the fly using either the fully-qualified model reference or the alias you defined:

openclaw models set lunos/openai/gpt-4o
openclaw models set gpt-4o
openclaw models set gemini-flash

Power move: track usage per project with X-App-ID

Lunos supports a header you can use to tag calls by app/project. When OpenClaw routes through Lunos, you can pass this header via the provider config.

{
  "models": {
    "providers": {
      "lunos": {
        "baseUrl": "https://api.lunos.tech/v1",
        "apiKey": "${LUNOS_API_KEY}",
        "api": "openai-completions",
        "headers": {
          "X-App-ID": "my-openclaw-agent"
        },
        "models": []
      }
    }
  }
}

After adding this, requests will be tagged and you can break down token usage and cost per app ID in your Lunos dashboard.

Troubleshooting

"model not allowed" error

This usually means you defined the model in providers.lunos.models, but you did not also add it to agents.defaults.models. Both entries are required. Restart the gateway after updating openclaw.json.

Requests return 401 Unauthorized

OpenClaw is not picking up your API key. Check that LUNOS_API_KEY is exported in the shell where you run OpenClaw:

echo $LUNOS_API_KEY

If the output is empty, re-export the variable or configure it permanently in your shell profile.

Old model still loads after config change

After any openclaw.json change:

  1. Run openclaw gateway restart
  2. Start a fresh session
  3. Verify with openclaw models list

Rate limiting (429 errors)

If you configure agents.defaults.model.fallbacks, OpenClaw can route to the next fallback model when one fails. Lunos rate limiting can return headers such as:

  • X-RateLimit-Limit
  • X-RateLimit-Remaining
  • X-RateLimit-Reset

Run openclaw models status after config changes to quickly confirm which models are live.

Security

  • Never commit API keys into repositories.
  • Prefer environment variables (like LUNOS_API_KEY) and reference them in openclaw.json as ${LUNOS_API_KEY}.
  • Use separate keys for dev/staging/prod when possible.
