# OpenClaw Integration

Use Ask Sage as a custom model provider for the OpenClaw AI agent platform.

## About OpenClaw
OpenClaw is an open-source AI agent platform that orchestrates autonomous agents with tool access, memory, and multi-channel delivery. OpenClaw supports custom model providers via Anthropic, OpenAI, and Google API formats — making it fully compatible with Ask Sage's API passthrough.
By configuring Ask Sage as a provider, your OpenClaw agents gain access to 25+ models across all major providers through a single API key — no per-model billing, no separate accounts.
## Prerequisites

Before you begin, ensure you have the following:

- **Ask Sage API Key**: generate one from your Account Settings
- **OpenClaw installed**: install OpenClaw on your system
## Recommended Plan
AI agents consume significantly more tokens than interactive chat. Autonomous tool loops, multi-step reasoning, large context windows, and background tasks add up fast. The 5M Ultimate plan provides 5 million tokens per month — enough headroom for sustained agent workflows without running into limits mid-task.
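As a sanity check before committing to a plan, you can sketch the arithmetic. Every number below is an illustrative assumption, not Ask Sage billing data; measure your own workload:

```python
# Rough monthly token budget for an autonomous agent workload.
# All figures are illustrative assumptions -- substitute your own measurements.
tokens_per_step = 4_000    # prompt + completion for one tool-loop step
steps_per_task = 15        # tool calls + reasoning turns per task
tasks_per_day = 2
days_per_month = 30

monthly_tokens = tokens_per_step * steps_per_task * tasks_per_day * days_per_month
plan_limit = 5_000_000     # 5M Ultimate plan

print(f"{monthly_tokens:,} of {plan_limit:,} tokens/month")  # 3,600,000 of 5,000,000 tokens/month
print(monthly_tokens <= plan_limit)                          # True
```

Even this modest workload burns most of a smaller allowance, which is why the 5M plan is the recommended floor for agent use.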
Ask Sage uses token-based pricing — there's no per-model billing and no separate API keys for each provider. One API key gives your OpenClaw agents access to 25+ models across Anthropic, OpenAI, and Google. Switch between Claude, GPT, and Gemini freely without managing multiple accounts or worrying about per-model charges.
## Configuration
Ask Sage is registered with OpenClaw as a custom model provider. Ask Sage supports three API formats — Anthropic Messages, OpenAI Responses, and Google Generative AI — so you can pick the format that matches the model family you want to use, or register all three.
> **openclaw CLI (OpenClaw 2026.5+):** Starting in OpenClaw 2026.5, the canonical way to register and switch models is the `openclaw onboard` + `openclaw models` CLI. Direct edits to `openclaw.json` still work but are now guarded by safety checks (clobber-protect, size-drop, protected-paths) — the CLI bypasses all of those because it uses the same atomic write path the gateway trusts. The JSON snippets below are still shown for reference and for environments that pre-bake their config (e.g. containers, IaC), but for day-to-day use the CLI is faster and won't trip safety guards.
### Quickstart — Register Ask Sage with One Command
The fastest path is the non-interactive openclaw onboard flow. This single command registers Ask Sage as a custom Anthropic-compatible provider and sets a default model:
```bash
export ASKSAGE_API_KEY="YOUR_ASK_SAGE_API_KEY"

openclaw onboard \
  --flow quickstart \
  --auth-choice custom-api-key \
  --custom-provider-id asksage-anthropic \
  --custom-base-url "https://api.asksage.ai/server/anthropic" \
  --custom-compatibility anthropic \
  --custom-model-id google-claude-47-opus \
  --custom-text-input \
  --custom-image-input \
  --custom-api-key "$ASKSAGE_API_KEY" \
  --accept-risk \
  --non-interactive

# Set Opus 4.7 as the default model
openclaw models set asksage-anthropic/google-claude-47-opus

# Add a fallback chain (recommended)
openclaw models fallbacks add asksage-anthropic/google-claude-46-sonnet

# Verify
openclaw models list
```

Repeat the `openclaw onboard` step with `--custom-provider-id asksage-openai` + `--custom-base-url "https://api.asksage.ai/server/openai/v1"` + `--custom-compatibility openai` to add GPT models, and once more with `asksage-google` + `https://api.asksage.ai/server/google/v1beta` for Gemini. Each call is additive — it does not overwrite previously registered providers.
### Useful follow-up commands
```bash
# See every model OpenClaw can route to (default, fallbacks, configured)
openclaw models list

# Switch the default model later
openclaw models set asksage-anthropic/google-claude-46-sonnet

# Manage fallback order
openclaw models fallbacks list
openclaw models fallbacks add asksage-google/google-gemini-2.5-pro
openclaw models fallbacks remove asksage-openai/gpt-4.1
openclaw models fallbacks clear

# Re-discover a provider's catalog after Ask Sage adds new models
openclaw models auth login --provider asksage-anthropic

# Show effective config + auth state
openclaw models status
```

### Reference: Equivalent JSON Configuration
If you prefer to bake the configuration into `openclaw.json` directly — for example, in a container image, a Helm chart, or an Infrastructure-as-Code template — the same setup looks like this. Edit the file at `~/.openclaw/openclaw.json` (or use `openclaw config file` to print the active path).

If the gateway is running while you edit, it may reject the change with one of: `clobbered` (file changed under the gateway), `size-drop` (new file is >20% smaller than the last good snapshot), or `protected-paths` (an API key, model id, or other locked field changed without going through the proper command). Stop the gateway first (`openclaw gateway stop`) for hand edits, or use `openclaw config set --batch-file ops.json` for scripted changes — it goes through the same trusted writer that the gateway accepts.
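For pre-baked setups, the safest scripted edit is an atomic write: serialize to a temp file in the same directory, then rename it into place, so a reader never observes a half-written file. A minimal Python sketch; the provider shape follows the examples in this guide, and the atomic-rename pattern is a general filesystem technique, not an OpenClaw API:

```python
import json
import os
import tempfile

def write_config_atomically(path: str, config: dict) -> None:
    """Serialize config to a temp file, then swap it into place with an atomic rename."""
    dir_name = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(config, f, indent=2)
        os.replace(tmp_path, path)  # atomic on POSIX: readers see old or new, never partial
    except BaseException:
        os.unlink(tmp_path)
        raise

config = {
    "models": {
        "providers": {
            "asksage-anthropic": {
                "baseUrl": "https://api.asksage.ai/server/anthropic",
                "apiKey": os.environ.get("ASKSAGE_API_KEY", "YOUR_ASK_SAGE_API_KEY"),
                "auth": "api-key",
                "api": "anthropic-messages",
            }
        }
    }
}

# Demo target path; in practice this would be ~/.openclaw/openclaw.json.
target = os.path.join(tempfile.gettempdir(), "openclaw.json")
write_config_atomically(target, config)
```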
#### Anthropic-Style Provider (Claude Models)
Use the Anthropic Messages API format for Claude models:
```json
{
  "models": {
    "providers": {
      "asksage-anthropic": {
        "baseUrl": "https://api.asksage.ai/server/anthropic",
        "apiKey": "YOUR_ASK_SAGE_API_KEY",
        "auth": "api-key",
        "api": "anthropic-messages",
        "models": [
          {
            "id": "google-claude-47-opus",
            "name": "Claude Opus 4.7 (Ask Sage)",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 1000000,
            "maxTokens": 128000
          },
          {
            "id": "google-claude-46-sonnet",
            "name": "Google Claude 4.6 Sonnet (Ask Sage)",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 800000,
            "maxTokens": 32768
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "asksage-anthropic/google-claude-47-opus"
      }
    }
  }
}
```

#### OpenAI-Style Provider (GPT Models)
Use the OpenAI Responses API format for GPT models:
```json
{
  "models": {
    "providers": {
      "asksage-openai": {
        "baseUrl": "https://api.asksage.ai/server/openai/v1",
        "apiKey": "YOUR_ASK_SAGE_API_KEY",
        "auth": "api-key",
        "api": "openai-responses",
        "models": [
          {
            "id": "gpt-4.1",
            "name": "GPT 4.1 (Ask Sage)",
            "reasoning": false,
            "input": ["text", "image"],
            "contextWindow": 1047576,
            "maxTokens": 32768
          }
        ]
      }
    }
  }
}
```

#### Google-Style Provider (Gemini Models)
Use the Google Generative AI format for Gemini models:
```json
{
  "models": {
    "providers": {
      "asksage-google": {
        "baseUrl": "https://api.asksage.ai/server/google/v1beta",
        "apiKey": "YOUR_ASK_SAGE_API_KEY",
        "auth": "api-key",
        "api": "google-generative-ai",
        "models": [
          {
            "id": "google-gemini-2.5-pro",
            "name": "Gemini 2.5 Pro (Ask Sage)",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 1048576,
            "maxTokens": 65536
          }
        ]
      }
    }
  }
}
```

#### Full Multi-Provider Configuration
You can configure all three providers simultaneously, giving your OpenClaw agents access to Claude, GPT, and Gemini models through a single Ask Sage API key:
```json
{
  "models": {
    "providers": {
      "asksage-anthropic": {
        "baseUrl": "https://api.asksage.ai/server/anthropic",
        "apiKey": "YOUR_ASK_SAGE_API_KEY",
        "auth": "api-key",
        "api": "anthropic-messages",
        "models": [
          {
            "id": "google-claude-47-opus",
            "name": "Claude Opus 4.7 (Ask Sage)",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 1000000,
            "maxTokens": 128000
          },
          {
            "id": "google-claude-46-sonnet",
            "name": "Google Claude 4.6 Sonnet (Ask Sage)",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 800000,
            "maxTokens": 32768
          }
        ]
      },
      "asksage-openai": {
        "baseUrl": "https://api.asksage.ai/server/openai/v1",
        "apiKey": "YOUR_ASK_SAGE_API_KEY",
        "auth": "api-key",
        "api": "openai-responses",
        "models": [
          {
            "id": "gpt-4.1",
            "name": "GPT 4.1 (Ask Sage)",
            "reasoning": false,
            "input": ["text", "image"],
            "contextWindow": 1047576,
            "maxTokens": 32768
          },
          {
            "id": "o3-mini",
            "name": "O3 Mini (Ask Sage)",
            "reasoning": true,
            "input": ["text"],
            "contextWindow": 800000,
            "maxTokens": 100000
          }
        ]
      },
      "asksage-google": {
        "baseUrl": "https://api.asksage.ai/server/google/v1beta",
        "apiKey": "YOUR_ASK_SAGE_API_KEY",
        "auth": "api-key",
        "api": "google-generative-ai",
        "models": [
          {
            "id": "google-gemini-2.5-pro",
            "name": "Gemini 2.5 Pro (Ask Sage)",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 1048576,
            "maxTokens": 65536
          },
          {
            "id": "google-gemini-2.5-flash",
            "name": "Gemini 2.5 Flash (Ask Sage)",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 1048576,
            "maxTokens": 65536
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "asksage-anthropic/google-claude-47-opus"
      }
    }
  }
}
```

Change the `agents.defaults.model.primary` value to switch your default agent model. For example, set it to `asksage-openai/gpt-4.1` or `asksage-google/google-gemini-2.5-pro` to use a different provider as your default.
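OpenClaw addresses models as `provider/model-id` route strings, as in `agents.defaults.model.primary`. A small helper can enumerate every route a config like the one above exposes; this is a convenience sketch for scripting, not part of the OpenClaw CLI:

```python
def list_routes(config: dict) -> list[str]:
    """Return every 'provider/model-id' route defined under models.providers."""
    routes = []
    providers = config.get("models", {}).get("providers", {})
    for provider_id, provider in providers.items():
        for model in provider.get("models", []):
            routes.append(f"{provider_id}/{model['id']}")
    return routes

# Trimmed-down config in the same shape as the full example above.
config = {
    "models": {
        "providers": {
            "asksage-anthropic": {"models": [{"id": "google-claude-47-opus"},
                                             {"id": "google-claude-46-sonnet"}]},
            "asksage-openai": {"models": [{"id": "gpt-4.1"}]},
        }
    }
}

for route in list_routes(config):
    print(route)
# asksage-anthropic/google-claude-47-opus
# asksage-anthropic/google-claude-46-sonnet
# asksage-openai/gpt-4.1
```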
## Available Models

Below are key models available through Ask Sage, organized by provider API format. Use the Model ID value in your `openclaw.json` configuration.
### Anthropic Models (`anthropic-messages`)

| Model ID | Name | Context Window | Max Tokens |
|---|---|---|---|
| `google-claude-47-opus` | Claude 4.7 Opus (via GCP) | 1M | 128,000 |
| `google-claude-46-opus` | Claude 4.6 Opus (via GCP) | 800K | 32,768 |
| `google-claude-46-sonnet` | Claude 4.6 Sonnet (via GCP) | 800K | 32,768 |
| `google-claude-45-opus` | Claude 4.5 Opus (via GCP) | 135K | 32,768 |
| `google-claude-45-sonnet` | Claude 4.5 Sonnet (via GCP) | 135K | 32,768 |
| `google-claude-45-haiku` | Claude 4.5 Haiku (via GCP) | 135K | 32,768 |
| `google-claude-4-opus` | Claude 4 Opus (via GCP) | 167K | 32,768 |
| `google-claude-4-sonnet` | Claude 4 Sonnet (via GCP) | 135K | 32,768 |
### OpenAI Models (`openai-responses`)

| Model ID | Name | Context Window | Max Tokens |
|---|---|---|---|
| `gpt-o4-mini` | GPT o4 Mini | 120K | 100,000 |
| `gpt-o3` | GPT o3 | 120K | 100,000 |
| `gpt-o3-mini` | GPT o3 Mini | 120K | 65,536 |
| `gpt-o1` | GPT o1 | 100K | 32,768 |
| `gpt-5.4` | GPT 5.4 | 170K | 32,768 |
| `gpt-5.4-nano` | GPT 5.4 Nano | 170K | 32,768 |
| `gpt-5.2` | GPT 5.2 | 170K | 32,768 |
| `gpt-5.1` | GPT 5.1 | 170K | 32,768 |
| `gpt-5` | GPT 5 | 170K | 32,768 |
| `gpt-5-mini` | GPT 5 Mini | 170K | 32,768 |
| `gpt-5-nano` | GPT 5 Nano | 170K | 32,768 |
| `gpt-4.1` | GPT 4.1 | 1,047K | 32,768 |
| `gpt-4.1-mini` | GPT 4.1 Mini | 1,047K | 32,768 |
| `gpt-4.1-nano` | GPT 4.1 Nano | 1,047K | 32,768 |
### Google Models (`google-generative-ai`)

| Model ID | Name | Context Window | Max Tokens |
|---|---|---|---|
| `google-gemini-2.5-pro` | Gemini 2.5 Pro | 924K | 65,536 |
| `google-gemini-2.5-flash` | Gemini 2.5 Flash | 924K | 65,536 |
| `google-gemini-2.5-flash-image` | Gemini 2.5 Flash (Image) | 32K | 65,536 |
| `google-imagen-4` | Imagen 4 (Image Generation) | — | — |
| `google-veo-3-fast` | Veo 3 Fast (Video Generation) | — | — |
## Government & DoD Models
Ask Sage provides access to government-authorized models that meet FedRAMP High and DoD IL5/IL6 compliance requirements. These models use dedicated government endpoints and are suitable for CUI and classified workloads.
### Government Model IDs

| Model ID | Description | Provider |
|---|---|---|
| `gpt-4.1-gov` | GPT 4.1 (Azure Government) | OpenAI |
| `gpt-o3-mini-gov` | O3 Mini (Azure Government) | OpenAI |
| `aws-bedrock-claude-45-sonnet-gov` | Claude Sonnet 4.5 (AWS GovCloud) | Anthropic |
| `aws-bedrock-claude-37-sonnet-gov` | Claude Sonnet 3.7 (AWS GovCloud) | Anthropic |
| `aws-bedrock-claude-35-haiku-gov` | Claude Haiku 3.5 (AWS GovCloud) | Anthropic |
Government models require a government tenant Ask Sage account and use government-specific base URLs (e.g., https://api.genai.army.mil/server/openai/v1). The base URL will be displayed when you generate your API key. Contact your organization's Ask Sage administrator for access.
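As a sketch, a government-tenant provider entry has the same shape as the commercial examples, with only the base URL and model IDs swapped. The base URL below is the Army example from this guide; the provider ID `asksage-gov-openai` is an arbitrary label, and the context window and token limits are copied from the commercial `gpt-4.1` entry as placeholders. Verify all three against your own tenant:

```json
{
  "models": {
    "providers": {
      "asksage-gov-openai": {
        "baseUrl": "https://api.genai.army.mil/server/openai/v1",
        "apiKey": "YOUR_GOV_ASK_SAGE_API_KEY",
        "auth": "api-key",
        "api": "openai-responses",
        "models": [
          {
            "id": "gpt-4.1-gov",
            "name": "GPT 4.1 (Azure Government)",
            "reasoning": false,
            "input": ["text", "image"],
            "contextWindow": 1047576,
            "maxTokens": 32768
          }
        ]
      }
    }
  }
}
```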
## Verification
After registering your providers (via CLI or JSON), verify everything is wired up correctly:
### List Configured Models

The fastest check — shows the active default, fallback chain, and any aliases:

```bash
openclaw models list
```

You should see rows like `asksage-anthropic/google-claude-47-opus text+image 1024k no yes default,configured`. The `default` tag confirms the primary model and `fallback#N` tags confirm the failover order.
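If you script this check, the route column can be pulled apart; a minimal sketch, assuming the column layout of the sample row above:

```python
def parse_route(row: str) -> tuple[str, str]:
    """Split the first column of a models-list row into (provider, model)."""
    route, *_ = row.split()           # first whitespace-separated column is the route
    provider, model_id = route.split("/", 1)
    return provider, model_id

row = "asksage-anthropic/google-claude-47-opus text+image 1024k no yes default,configured"
provider, model = parse_route(row)
print(provider)                              # asksage-anthropic
print(model)                                 # google-claude-47-opus
print("default" in row.split()[-1].split(","))  # True
```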
### Check Auth + Provider Health

```bash
# Show auth profiles for all providers
openclaw models auth list

# Full status summary (gateway, runtime, model, usage)
openclaw models status
```

Your Ask Sage providers should appear with `auth: yes` and a populated last-seen timestamp.
### Test a Conversation

Start an interactive session to confirm the model responds correctly:

```bash
# Start OpenClaw and send a test message
openclaw start
```

If the agent responds, your Ask Sage integration is working correctly.
## Troubleshooting
**Authentication or API key errors**

Solutions:

- Verify your API key is correct — copy it again from your Account Settings
- Check that the `apiKey` field in `openclaw.json` has no extra spaces or line breaks
- Ensure your Ask Sage subscription is active and includes API access
**Model not found or invalid model ID**

Solutions:

- Verify the `id` in your model definition matches a valid Ask Sage model ID
- Check that you're using the correct API format for the model (e.g., Claude models use `anthropic-messages`, not `openai-responses`)
- Government models require a government tenant — commercial API keys cannot access `-gov` models
**Timeouts or connection errors**

Solutions:

- Check your network connectivity to `api.asksage.ai`
- Verify no firewall rules are blocking outbound HTTPS connections
- Reasoning models (e.g., `google-claude-46-sonnet` with extended thinking) may take longer — increase timeout values in OpenClaw's configuration if needed
**Configuration changes not reflected in `openclaw status`**

Solutions:

- Validate that `openclaw.json` is valid JSON (use `jq . openclaw.json` to check), or just use `openclaw config validate`
- Ensure the provider is under `models.providers`, not at the root level
- Most changes hot-reload automatically. If they don't, restart the gateway: `openclaw gateway restart`
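The placement check above can be scripted; a minimal sketch that catches the most common mistake, a provider block defined at the root instead of under `models.providers`:

```python
def check_provider_placement(config: dict, provider_id: str) -> list[str]:
    """Return a list of problems found for provider_id in a parsed openclaw.json."""
    problems = []
    providers = config.get("models", {}).get("providers", {})
    if provider_id in config:
        problems.append(f"'{provider_id}' is at the root level; move it under models.providers")
    if provider_id not in providers:
        problems.append(f"'{provider_id}' not found under models.providers")
    return problems

# Misplaced: provider defined at the root of the file.
bad = {"asksage-anthropic": {"baseUrl": "https://api.asksage.ai/server/anthropic"}}
# Correct: provider nested under models.providers.
good = {"models": {"providers": {"asksage-anthropic": {"baseUrl": "https://api.asksage.ai/server/anthropic"}}}}

print(check_provider_placement(bad, "asksage-anthropic"))   # two problems reported
print(check_provider_placement(good, "asksage-anthropic"))  # []
```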
**`openclaw config set` rejects with `size-drop`, `clobbered`, or `protected-paths` (2026.5+)**

Why: OpenClaw 2026.5 added safety guards around the model-provider section because the gateway auto-discovers and enriches model metadata at runtime. Replacing a whole provider block with a smaller hand-written one will look like a drop. Editing the file directly while the gateway is running will look like a clobber. Renaming a model `id`, swapping an API key, or rewriting an `api` compatibility string in place trips `protected-paths`.

Solutions:

- Use the `openclaw models` CLI for default/fallback/alias changes — it never trips guards
- Use `openclaw onboard --auth-choice custom-api-key` to add or re-auth a provider — it bypasses `protected-paths`
- For larger refactors, stop the gateway first (`openclaw gateway stop`), edit, then restart
- Rejected payloads are saved to `~/.openclaw/openclaw.json.rejected.<timestamp>` and the previous good copy to `openclaw.json.last-good` — diff them to see what the guard caught
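`jq` or any diff tool works for comparing the rejected payload with the last-good snapshot. As a sketch, a short recursive comparison of two parsed JSON documents (the sample data below is hypothetical):

```python
def diff_json(old, new, path=""):
    """Yield human-readable differences between two parsed JSON values."""
    if isinstance(old, dict) and isinstance(new, dict):
        for key in sorted(old.keys() | new.keys()):
            p = f"{path}.{key}" if path else key
            if key not in new:
                yield f"removed: {p}"
            elif key not in old:
                yield f"added: {p}"
            else:
                yield from diff_json(old[key], new[key], p)
    elif old != new:
        yield f"changed: {path}: {old!r} -> {new!r}"

# Hypothetical snapshots: the rejected edit rewrote an `api` string in place,
# which is exactly the kind of change that trips protected-paths.
last_good = {"models": {"providers": {"asksage-anthropic": {"api": "anthropic-messages"}}}}
rejected  = {"models": {"providers": {"asksage-anthropic": {"api": "openai-responses"}}}}

for line in diff_json(last_good, rejected):
    print(line)
# changed: models.providers.asksage-anthropic.api: 'anthropic-messages' -> 'openai-responses'
```

In practice you would `json.load` the `.rejected.<timestamp>` and `.last-good` files instead of the inline dicts.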
## Additional Resources
- OpenClaw Documentation
- Ask Sage Platform
- Ask Sage OpenAI API Compatibility Guide
- Codex Integration — Another tool compatible with Ask Sage
- Claude Code Integration — Another tool compatible with Ask Sage