Works everywhere you already use AI
One API key. Every tool that accepts a custom OpenAI endpoint. No special plugins, no adapter code — just change the base URL.
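For example, a plain curl request works once the base URL points at OneForAll; the model ID and key below are placeholders, so substitute your own from API Keys.

```shell
# Same request shape as the OpenAI API, just a different base URL
curl -s https://getoneforall.com/api/v1/chat/completions \
  -H "Authorization: Bearer ofa_YOUR_KEY_HERE" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```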
OpenClaw
The open-source personal AI agent with 247k GitHub stars. Runs as a daemon on your machine and connects to WhatsApp, Slack, Discord, Telegram, iMessage, Teams, and 10+ other messaging apps. Add OneForAll as a provider and every agent task gets access to all supported models through one key.
Install OpenClaw — requires Node 22.16+ or Node 24

npm install -g openclaw@latest

Start the daemon — runs OpenClaw as an always-on background service

openclaw onboard --install-daemon

Export your API key as an environment variable

export ONEFORALL_API_KEY=ofa_YOUR_KEY_HERE

Add this to your ~/.zshrc or ~/.bashrc to persist across sessions. Replace with your actual key from API Keys.
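To make the key stick, append the export line to your shell profile (zsh shown here; use ~/.bashrc for bash):

```shell
# Persist the key for future shell sessions
echo 'export ONEFORALL_API_KEY=ofa_YOUR_KEY_HERE' >> ~/.zshrc
source ~/.zshrc
```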
Add OneForAll to your OpenClaw config
~/.openclaw/openclaw.json
{
"models": {
"mode": "merge",
"providers": {
"oneforall": {
"baseUrl": "https://getoneforall.com/api/v1",
"apiKey": {
"$secretRef": {
"provider": "env",
"key": "ONEFORALL_API_KEY"
}
},
"api": "openai-completions",
"models": [
{
"id": "claude-opus-4-6",
"name": "Claude Opus 4.6",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 200000,
"maxTokens": 32000
},
{
"id": "claude-sonnet-4-6",
"name": "Claude Sonnet 4.6",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 200000,
"maxTokens": 64000
},
{
"id": "claude-haiku-4-5",
"name": "Claude Haiku 4.5",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 200000,
"maxTokens": 8096
},
{
"id": "gpt-4o",
"name": "GPT-4o",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 128000,
"maxTokens": 16384
},
{
"id": "gpt-4o-mini",
"name": "GPT-4o Mini",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 128000,
"maxTokens": 16384
},
{
"id": "o3-mini",
"name": "o3-mini",
"reasoning": true,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 200000,
"maxTokens": 100000
},
{
"id": "gemini-2.5-pro-preview-03-25",
"name": "Gemini 2.5 Pro",
"reasoning": true,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 1048576,
"maxTokens": 65536
},
{
"id": "gemini-2.0-flash",
"name": "Gemini 2.0 Flash",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 1048576,
"maxTokens": 8192
},
{
"id": "gemini-2.0-flash-lite",
"name": "Gemini 2.0 Flash Lite",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 1048576,
"maxTokens": 8192
},
{
"id": "gemini-1.5-pro",
"name": "Gemini 1.5 Pro",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 2097152,
"maxTokens": 8192
},
{
"id": "gemini-1.5-flash",
"name": "Gemini 1.5 Flash",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 1048576,
"maxTokens": 8192
}
]
}
}
},
"agents": {
"defaults": {
"model": {
"primary": "oneforall/claude-sonnet-4-6"
},
"models": {
"oneforall/claude-opus-4-6": {},
"oneforall/claude-sonnet-4-6": {},
"oneforall/claude-haiku-4-5": {},
"oneforall/gpt-4o": {},
"oneforall/gpt-4o-mini": {},
"oneforall/o3-mini": {},
"oneforall/gemini-2.5-pro-preview-03-25": {},
"oneforall/gemini-2.0-flash": {},
"oneforall/gemini-2.0-flash-lite": {},
"oneforall/gemini-1.5-pro": {},
"oneforall/gemini-1.5-flash": {}
}
}
}
}

The config uses "mode": "merge" so it merges with any existing config without overwriting it. The $secretRef pattern reads your key from the environment variable set in step 3 — your key is never stored in plaintext.
Verify the connection
openclaw doctor

Runs a full diagnostic — config syntax, provider connectivity, model availability, and auth status. All OneForAll models should show green. Also try openclaw models list --provider oneforall to confirm all supported models are registered.
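You can also check the gateway directly. Assuming it exposes the standard OpenAI-compatible /models route (an assumption based on the base URL above, not something OpenClaw's tooling requires), a quick curl check looks like:

```shell
# List the model IDs your key can access (uses the key exported in step 3)
curl -s https://getoneforall.com/api/v1/models \
  -H "Authorization: Bearer $ONEFORALL_API_KEY"
```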
AI Coding Assistants
Cursor
docs ↗
AI-powered code editor. Set a custom OpenAI base URL in Settings → Models → OpenAI API Key.
# Cursor Settings → Models → OpenAI API Key
Base URL: https://getoneforall.com/api/v1
API Key: ofa_YOUR_KEY_HERE
# Then use any model ID in the model selector
Continue.dev
docs ↗
Open-source VS Code & JetBrains AI plugin. Add OneForAll as a custom LLM provider in config.yaml.
# ~/.continue/config.yaml
models:
- name: Claude Sonnet 4.6
provider: openai
model: claude-sonnet-4-6
apiKey: ofa_YOUR_KEY_HERE
apiBase: https://getoneforall.com/api/v1

Cline
docs ↗
Autonomous coding agent for VS Code. Select 'OpenAI Compatible' in the provider dropdown.
# Cline Settings
Provider: OpenAI Compatible
Base URL: https://getoneforall.com/api/v1
API Key: ofa_YOUR_KEY_HERE
Model: claude-sonnet-4-6
Aider
docs ↗
CLI coding agent. Set environment variables before running.
export OPENAI_API_KEY=ofa_YOUR_KEY_HERE
export OPENAI_API_BASE=https://getoneforall.com/api/v1
aider --model openai/claude-sonnet-4-6
Frameworks & SDKs
LangChain
docs ↗
Python and JavaScript agent framework. Use the ChatOpenAI class with a custom base URL.
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
model="claude-sonnet-4-6",
api_key="ofa_YOUR_KEY_HERE",
base_url="https://getoneforall.com/api/v1",
)

Vercel AI SDK
docs ↗
Build AI-powered Next.js and React apps. Use the createOpenAI provider with a custom base URL.
import { createOpenAI } from "@ai-sdk/openai";
const oneforall = createOpenAI({
apiKey: process.env.ONEFORALL_API_KEY,
baseURL: "https://getoneforall.com/api/v1",
});
const model = oneforall("claude-sonnet-4-6");

LlamaIndex
docs ↗
Data framework for LLM apps. Use the OpenAI class with a custom api_base.
from llama_index.llms.openai import OpenAI
llm = OpenAI(
model="claude-sonnet-4-6",
api_key="ofa_YOUR_KEY_HERE",
api_base="https://getoneforall.com/api/v1",
)

Chat Interfaces
Open WebUI
docs ↗
Self-hosted ChatGPT alternative. Add a new OpenAI connection under Settings → Admin → Connections.
# Settings → Admin → Connections → OpenAI API
API URL: https://getoneforall.com/api/v1
API Key: ofa_YOUR_KEY_HERE
LibreChat
docs ↗
Open-source ChatGPT clone. Add an endpoint in librechat.yaml.
# librechat.yaml
endpoints:
custom:
- name: "OneForAll"
apiKey: "ofa_YOUR_KEY_HERE"
baseURL: "https://getoneforall.com/api/v1"
models:
default: ["claude-sonnet-4-6", "gpt-4o", "gemini-2.0-flash"]

Automation
n8n
docs ↗
No-code workflow automation. Use the OpenAI node with a custom credential.
# n8n → Credentials → OpenAI API
API Key: ofa_YOUR_KEY_HERE
Base URL: https://getoneforall.com/api/v1
# Then use any OpenAI node — select model by ID
Flowise
docs ↗
Drag-and-drop LLM agent builder. Use ChatOpenAI node with custom base path.
# ChatOpenAI node settings
Model Name: claude-sonnet-4-6
OpenAI API Key: ofa_YOUR_KEY_HERE
BasePath: https://getoneforall.com/api/v1
Ready to start?
One API key. No juggling providers. Works in every tool above — and anything else that is OpenAI-compatible.