This guide shows how to connect OpenClaw to Lunos using Lunos as a custom provider. Because Lunos is OpenAI-compatible, OpenClaw can use Lunos model IDs through a single endpoint at https://api.lunos.tech/v1.
Run these commands in order. Each one maps to a step in the manual config.
```bash
export LUNOS_API_KEY="lns-sk-your-key-here"
```
Replace `lns-sk-your-key-here` with the key from your Lunos Dashboard.
```bash
openclaw onboard \
  --auth-choice apiKey \
  --token-provider lunos \
  --token "$LUNOS_API_KEY"
```
`--token-provider` is the provider name that will appear in your config (`lunos`). `--token` pulls the key you exported in Step 1.
```bash
openclaw models set lunos/openai/gpt-4o
```
Replace `openai/gpt-4o` with whichever model you want as primary.
The full format is `lunos/<provider>/<model-id>`; check available IDs at lunos.tech/models.
```bash
openclaw gateway restart
```
Always required after provider changes. Config won't apply until you do this.
```bash
openclaw models list
openclaw models status
```
`list` should show your `lunos/*` models; `status` confirms auth is healthy.
Follow the steps below in order.
Generate a secret key in the Lunos Dashboard. Keep it somewhere safe; treat it like a password.
You can also verify connectivity with a quick balance check:
```bash
curl -X GET "https://api.lunos.tech/v1/balance" \
  -H "Authorization: Bearer YOUR_LUNOS_API_KEY" \
  -H "Content-Type: application/json"
```
Store the secret so it does not end up committed to git.
macOS/Linux (bash/zsh):

```bash
export LUNOS_API_KEY="lns-sk-your-key-here"
```

Windows (PowerShell):

```powershell
$env:LUNOS_API_KEY = "lns-sk-your-key-here"
```
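To keep the key out of version control, one common pattern is a git-ignored env file. This is a sketch: the `.env` filename and whatever loads it are conventions of your own tooling, not an OpenClaw requirement.

```shell
# Store the key in a project-local .env file (assumed convention) and make
# sure git ignores it. Safe to re-run: the .gitignore entry is added once.
printf 'LUNOS_API_KEY=lns-sk-your-key-here\n' > .env
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
```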
On most systems the `openclaw.json` config lives at `~/.openclaw/openclaw.json`. Some installs may use `~/.openclaw/clawdbot.json` instead. Create the file if it does not exist yet.
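Assuming the default path above, a minimal sketch for creating an empty config in one step (adjust the filename if your install uses `clawdbot.json`):

```shell
# Create the config directory and an empty JSON config if missing.
CONFIG_DIR="$HOME/.openclaw"
mkdir -p "$CONFIG_DIR"
[ -f "$CONFIG_DIR/openclaw.json" ] || printf '{}\n' > "$CONFIG_DIR/openclaw.json"
```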
Add a `lunos` provider inside `models.providers`. The `api: "openai-completions"` field is critical because it tells OpenClaw to use the OpenAI protocol when routing requests to Lunos.
```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "lunos": {
        "baseUrl": "https://api.lunos.tech/v1",
        "apiKey": "${LUNOS_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "openai/gpt-4o",
            "name": "GPT-4o via Lunos",
            "contextWindow": 128000,
            "maxTokens": 4096
          },
          {
            "id": "anthropic/claude-sonnet-4",
            "name": "Claude Sonnet 4 via Lunos",
            "contextWindow": 200000,
            "maxTokens": 8192
          },
          {
            "id": "google/gemini-2.0-flash",
            "name": "Gemini 2.0 Flash via Lunos",
            "contextWindow": 1000000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
```
OpenClaw uses fully qualified model references in the format `provider/model-id`. In addition to defining the provider in `providers.lunos.models`, you must allowlist the models under `agents.defaults.models` (and set `agents.defaults.model.primary` and `fallbacks`). Otherwise you will get a `model not allowed` error.
Use this complete example as your `openclaw.json`:
```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "lunos": {
        "baseUrl": "https://api.lunos.tech/v1",
        "apiKey": "${LUNOS_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "openai/gpt-4o",
            "name": "GPT-4o via Lunos",
            "contextWindow": 128000,
            "maxTokens": 4096
          },
          {
            "id": "anthropic/claude-sonnet-4",
            "name": "Claude Sonnet 4 via Lunos",
            "contextWindow": 200000,
            "maxTokens": 8192
          },
          {
            "id": "google/gemini-2.0-flash",
            "name": "Gemini 2.0 Flash via Lunos",
            "contextWindow": 1000000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "lunos/openai/gpt-4o",
        "fallbacks": [
          "lunos/anthropic/claude-sonnet-4",
          "lunos/google/gemini-2.0-flash"
        ]
      },
      "models": {
        "lunos/openai/gpt-4o": {
          "alias": "gpt-4o"
        },
        "lunos/anthropic/claude-sonnet-4": {
          "alias": "sonnet"
        },
        "lunos/google/gemini-2.0-flash": {
          "alias": "gemini-flash"
        }
      }
    }
  }
}
```
Config changes require a gateway restart. Then list your models to confirm Lunos entries show up.
```bash
openclaw gateway restart
openclaw models list
openclaw models status
```
If you still see `model not allowed`, ensure the `id` in `providers.lunos.models[].id` matches the allowlist reference in `agents.defaults.models` (for example `lunos/openai/gpt-4o`).
Once the provider is wired up, you can switch models on the fly using either the fully-qualified model reference or the alias you defined:
```bash
openclaw models set lunos/openai/gpt-4o
openclaw models set gpt-4o
openclaw models set gemini-flash
```
Lunos supports an `X-App-ID` header you can use to tag calls by app/project. When OpenClaw routes through Lunos, you can pass this header via the provider config.
```json
{
  "models": {
    "providers": {
      "lunos": {
        "baseUrl": "https://api.lunos.tech/v1",
        "apiKey": "${LUNOS_API_KEY}",
        "api": "openai-completions",
        "headers": {
          "X-App-ID": "my-openclaw-agent"
        },
        "models": []
      }
    }
  }
}
```
After adding this, requests will be tagged and you can break down token usage and cost per app ID in your Lunos dashboard.
A `model not allowed` error usually means you defined the model in `providers.lunos.models` but did not also add it to `agents.defaults.models`. Both entries are required. Restart the gateway after updating `openclaw.json`.
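Assuming `jq` is installed, a quick cross-check can list every provider model id that is missing from the allowlist. The inline sample config below is for illustration only; point `CONFIG` at your real `openclaw.json` instead.

```shell
# Cross-check provider model ids against the allowlist (requires jq).
# CONFIG uses an inline sample here; set it to "$HOME/.openclaw/openclaw.json".
CONFIG="$(mktemp)"
cat > "$CONFIG" <<'EOF'
{
  "models": { "providers": { "lunos": { "models": [
    { "id": "openai/gpt-4o" },
    { "id": "anthropic/claude-sonnet-4" }
  ] } } },
  "agents": { "defaults": { "models": { "lunos/openai/gpt-4o": {} } } }
}
EOF
missing="$(jq -r '.models.providers.lunos.models[].id' "$CONFIG" | while read -r id; do
  # jq -e exits nonzero when has() returns false, i.e. the allowlist entry is absent.
  jq -e --arg ref "lunos/$id" '.agents.defaults.models | has($ref)' "$CONFIG" >/dev/null \
    || echo "missing allowlist entry: lunos/$id"
done)"
echo "$missing"
```

In the sample, `anthropic/claude-sonnet-4` is defined under the provider but absent from the allowlist, so it is the one line reported.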
If auth fails, OpenClaw is probably not picking up your API key. Check that `LUNOS_API_KEY` is exported in the shell where you run OpenClaw:
```bash
echo $LUNOS_API_KEY
```
If the output is empty, re-export the variable or configure it permanently in your shell profile.
After any `openclaw.json` change:
```bash
openclaw gateway restart
openclaw models list
```

If you configure `agents.defaults.model.fallbacks`, OpenClaw can route to the next fallback model when one fails. Lunos rate limiting can return headers such as:
- `X-RateLimit-Limit`
- `X-RateLimit-Remaining`
- `X-RateLimit-Reset`

Run `openclaw models status` after config changes to quickly confirm which models are live.
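As a sketch of how you might watch those headers, the snippet below pulls `X-RateLimit-Remaining` out of a captured header block. The `$headers` variable stands in for real output from `curl -sD - -o /dev/null` against the Lunos API; the sample values are invented.

```shell
# Sample response headers (replace with real curl output; real responses may
# carry trailing carriage returns, which you can strip with: tr -d '\r').
headers='HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 97
X-RateLimit-Reset: 1735689600'
# Match the header name case-insensitively and print its value.
remaining="$(printf '%s\n' "$headers" \
  | awk -F': ' 'tolower($1) == "x-ratelimit-remaining" {print $2}')"
echo "requests remaining: $remaining"
```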
Keep secrets in environment variables (e.g. `LUNOS_API_KEY`) and reference them in `openclaw.json` as `${LUNOS_API_KEY}`.
