API Reference & Setup
One API key for Claude Code, OpenCode, OpenClaw, Codex CLI, Gemini CLI, Cursor, and direct API access. Up to 40% off official pricing.
1. Getting Started
Zenn.ceo is a drop-in API proxy for Anthropic, OpenAI, and Google AI models. You use a single ck_-prefixed API key across all supported tools and SDKs. No code changes required — just set the base URL and API key.
Top up credits, then create a key from your dashboard.
Point your tool to https://zenn.ceo/api/v1
Works instantly with Claude Code, OpenCode, OpenClaw, and more.
Base URLs
| Provider | Base URL |
|---|---|
| Claude (Anthropic) | https://zenn.ceo/api/v1 |
| Codex (OpenAI) | https://zenn.ceo/api/v1/codex |
| Gemini (Google) | https://zenn.ceo/api/v1/gemini |
2. Claude Code
Anthropic's official CLI for Claude. Set two environment variables and it works as a drop-in replacement — same CLI, same models, up to 40% cheaper.
Step 1: Set environment variables
Add to your shell profile (~/.zshrc or ~/.bashrc):
export ANTHROPIC_BASE_URL=https://zenn.ceo/api/v1
export ANTHROPIC_API_KEY=ck_YOUR_API_KEY
Step 2: Restart terminal & run
# Default model (Sonnet 4.6)
claude

# Use Opus 4.6
claude --model claude-opus-4-6
How it works
Claude Code sends the API key via the x-api-key header (Anthropic SDK native behavior) and appends /messages to the base URL automatically. The anthropic-version and anthropic-beta headers are forwarded to the upstream API.
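That behavior can be sketched in plain Python — a hypothetical `build_messages_request` helper for illustration only (Claude Code and the Anthropic SDK do all of this for you; nothing is sent here):

```python
# Sketch of the request Claude Code effectively constructs.
import json

def build_messages_request(base_url: str, api_key: str, body: dict):
    """Return (url, headers, payload) the way the Anthropic SDK builds them."""
    url = base_url.rstrip("/") + "/messages"   # /messages is appended automatically
    headers = {
        "x-api-key": api_key,                  # SDK-native auth header
        "content-type": "application/json",
        "anthropic-version": "2023-06-01",     # forwarded to the upstream API
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_messages_request(
    "https://zenn.ceo/api/v1",
    "ck_YOUR_API_KEY",
    {"model": "claude-sonnet-4-6", "max_tokens": 64,
     "messages": [{"role": "user", "content": "Hello"}]},
)
```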
3. OpenCode
Multi-provider AI coding agent. Configure once — access Claude, Codex, and Gemini models through one JSON config.
Step 1: Install
npm i -g opencode-ai
Step 2: Create config
Create or edit ~/.config/opencode/opencode.json:
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"anthropic": {
"options": {
"baseURL": "https://zenn.ceo/api/v1",
"apiKey": "ck_YOUR_API_KEY"
},
"models": {
"claude-sonnet-4-6": { "name": "Claude Sonnet 4.5" },
"claude-opus-4-6": { "name": "Claude Opus 4.6" }
}
},
"zenn-codex": {
"npm": "@ai-sdk/openai-compatible",
"name": "Zenn Codex",
"options": {
"baseURL": "https://zenn.ceo/api/v1/codex",
"apiKey": "ck_YOUR_API_KEY"
},
"models": {
"gpt-5.3-codex": { "name": "GPT-5.3 Codex" }
}
},
"zenn-gemini": {
"npm": "@ai-sdk/openai-compatible",
"name": "Zenn Gemini",
"options": {
"baseURL": "https://zenn.ceo/api/v1/gemini",
"apiKey": "ck_YOUR_API_KEY"
},
"models": {
"gemini-3-pro-preview": { "name": "Gemini 3.0 Pro" },
"gemini-3-flash-preview": { "name": "Gemini 3.0 Flash" }
}
}
}
}
Step 3: Start coding
cd your-project
opencode
# Switch models with /model
How it works
OpenCode sends the API key via the Authorization: Bearer header. The anthropic provider uses the native Anthropic SDK format, while zenn-codex and zenn-gemini use the OpenAI-compatible provider adapter.
4. OpenClaw
Open-source autonomous AI coding agent. Configure Zenn as a custom provider for Claude, Codex, and Gemini access through one config file.
Step 1: Install
curl -fsSL https://openclaw.ai/install.sh | bash
Step 2: Configure providers
Create or edit ~/.openclaw/openclaw.json:
{
"models": {
"providers": {
"zenn-claude": {
"baseUrl": "https://zenn.ceo/api/v1",
"apiKey": "ck_YOUR_API_KEY",
"api": "anthropic-messages",
"models": [
{ "id": "claude-sonnet-4-6", "name": "Claude Sonnet 4.6" },
{ "id": "claude-opus-4-6", "name": "Claude Opus 4.6" }
]
},
"zenn-codex": {
"baseUrl": "https://zenn.ceo/api/v1/codex",
"apiKey": "ck_YOUR_API_KEY",
"api": "openai-responses",
"models": [
{ "id": "gpt-5.3-codex", "name": "GPT-5.3 Codex" }
]
},
"zenn-gemini": {
"baseUrl": "https://zenn.ceo/api/v1/gemini",
"apiKey": "ck_YOUR_API_KEY",
"api": "openai-completions",
"models": [
{ "id": "gemini-3-pro-preview", "name": "Gemini 3.0 Pro" },
{ "id": "gemini-3-flash-preview", "name": "Gemini 3.0 Flash" }
]
}
}
}
}
Step 3: Start coding
cd your-project
openclaw
API format reference
| Provider | api value | Description |
|---|---|---|
| Claude | anthropic-messages | Anthropic Messages API format |
| Codex | openai-responses | OpenAI Responses API format |
| Gemini | openai-completions | OpenAI Chat Completions format |
5. Codex CLI
OpenAI's Codex CLI for GPT-5.x code generation, routed through Zenn.ceo at 40% off.
Set environment variables
export OPENAI_BASE_URL=https://zenn.ceo/api/v1/codex
export OPENAI_API_KEY=ck_YOUR_API_KEY
Run
codex --model gpt-5.3-codex "refactor this function"
6. Gemini CLI
Google's Gemini CLI with full streaming support, routed through Zenn.ceo at 40% off.
Set environment variables
export GEMINI_API_BASE_URL=https://zenn.ceo/api/v1/gemini
export GEMINI_API_KEY=ck_YOUR_API_KEY
Run
gemini --model gemini-3-pro-preview "explain this code"
7. Cursor IDE
Use Claude models in Cursor IDE via Zenn.ceo.
Configure in Cursor Settings
Go to Cursor Settings → Models → Anthropic
Anthropic API Key
ck_YOUR_API_KEY
Override Anthropic Base URL
https://zenn.ceo/api/v1
Available models
Select claude-sonnet-4-6, claude-haiku-4-5, or claude-opus-4-6 in the model picker.
8. Direct API Usage
The Zenn.ceo API is fully compatible with the Anthropic Messages API. Use it directly with cURL, the Anthropic SDK (Node.js/Python), or any HTTP client.
cURL
curl -X POST https://zenn.ceo/api/v1/messages \
-H "Authorization: Bearer ck_YOUR_API_KEY" \
-H "Content-Type: application/json" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "claude-sonnet-4-6",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Hello, Claude!"}
]
}'
Node.js / TypeScript
import Anthropic from '@anthropic-ai/sdk';
const client = new Anthropic({
apiKey: 'ck_YOUR_API_KEY',
baseURL: 'https://zenn.ceo/api/v1',
});
const message = await client.messages.create({
model: 'claude-sonnet-4-6',
max_tokens: 1024,
messages: [
{ role: 'user', content: 'Hello, Claude!' }
],
});
Python
import anthropic
client = anthropic.Anthropic(
api_key="ck_YOUR_API_KEY",
base_url="https://zenn.ceo/api/v1",
)
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=1024,
messages=[
{"role": "user", "content": "Hello, Claude!"}
],
)
9. Models & Pricing
All pricing is credit-based (100 credits = $1.00 USD). Prices shown are per million tokens (MTok). Anthropic & OpenAI models are up to 40% off official pricing.
Claude (Anthropic) — up to 40% off
| Model ID | Input / MTok | Output / MTok | Savings | Tier |
|---|---|---|---|---|
| claude-sonnet-4-6 | $1.80 | $9.00 | 40% off | Starter |
| claude-haiku-4-5 | $0.60 | $3.00 | 40% off | Starter |
| claude-opus-4-6 | $3.00 | $15.00 | 40% off | Opus |
OpenAI / GPT — up to 40% off
| Model ID | Input / MTok | Output / MTok | Savings | Tier |
|---|---|---|---|---|
| gpt-5 | $0.76 | $6.00 | ~40% off | Starter |
| gpt-5.1 | $0.76 | $6.00 | ~40% off | Starter |
| gpt-5.2 | $1.06 | $8.40 | 15-40% off | Starter |
| gpt-5-codex | $0.76 | $6.00 | ~40% off | Starter |
| gpt-5-codex-mini | $0.14 | $1.20 | ~40% off | Starter |
| gpt-5.1-codex | $0.76 | $6.00 | ~40% off | Starter |
| gpt-5.1-codex-mini | $0.14 | $1.20 | ~40% off | Starter |
| gpt-5.1-codex-max | $0.76 | $6.00 | ~40% off | Starter |
| gpt-5.2-codex | $1.06 | $8.40 | 15-40% off | Starter |
| gpt-5.3-codex | $1.06 | $8.40 | 15-40% off | Starter |
| gpt-5.3-codex-spark | $1.06 | $8.40 | 15-40% off | Starter |
Gemini (Google) — competitive pricing
| Model ID | Input / MTok | Output / MTok | Savings | Tier |
|---|---|---|---|---|
| gemini-3-pro-official | $1.76 | $10.56 | 12% off | Starter |
| gemini-3-pro-preview-official | $1.76 | $10.56 | 12% off | Starter |
| gemini-3-flash-official | $0.44 | $2.64 | 12% off | Starter |
| gemini-3-flash-preview-official | $0.44 | $2.64 | 12% off | Starter |
| gemini-3.1-pro | $0.05 | $0.30 | Nominal | Starter |
| gemini-3.1-pro-preview-official | $1.76 | $10.56 | 12% off | Starter |
| gemini-3.1-fast | $0.55 | $3.30 | Low cost | Starter |
| gemini-3.1-thinking | $0.55 | $3.30 | Low cost | Starter |
| gemini-3.1-flash-lite-preview-official | $0.22 | $1.32 | Low cost | Starter |
| gemini-2.5-pro-official | $1.10 | $8.80 | 12% off | Starter |
| gemini-2.5-flash-official | $0.26 | $2.20 | Low cost | Starter |
| gemini-2.5-flash-lite-official | $0.09 | $0.35 | Ultra low | Starter |
| gemini-2.0-flash-official | $0.13 | $0.53 | Ultra low | Starter |
| gemini-2.0-flash-lite-official | $0.07 | $0.26 | Ultra low | Starter |
Other Models (DeepSeek, Qwen, GLM, Kimi, MiniMax)
| Model ID | Provider | Input / MTok | Output / MTok | Tier |
|---|---|---|---|---|
| deepseek-v3.2 | DeepSeek | $0.44 | $1.76 | Starter |
| glm-5 | Zhipu | $0.44 | $1.98 | Starter |
| qwen3.5-plus | Alibaba | $0.44 | $2.64 | Starter |
| qwen3.5-flash | Alibaba | $0.13 | $1.32 | Starter |
| qwen3-max | Alibaba | $0.77 | $3.08 | Starter |
| kimi-k2.5 | Moonshot | $0.44 | $2.31 | Starter |
| MiniMax-M2.5 | MiniMax | $0.23 | $0.92 | Starter |
Free / Promotional
| Model ID | Input / MTok | Output / MTok | Tier |
|---|---|---|---|
| nano-banana-2 | $0.05 | $0.30 | Starter |
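To make the credit math concrete (100 credits = $1.00, prices per MTok), here is a small cost estimator. The rates are copied from the Claude table above; the helper itself is illustrative, not part of the API:

```python
# Estimate the cost of a single request from the per-MTok prices above.
RATES_USD_PER_MTOK = {
    "claude-sonnet-4-6": (1.80, 9.00),   # (input, output)
    "claude-haiku-4-5": (0.60, 3.00),
    "claude-opus-4-6": (3.00, 15.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int):
    """Return (usd, credits) for one request; 100 credits = $1.00."""
    in_rate, out_rate = RATES_USD_PER_MTOK[model]
    usd = (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate
    return usd, usd * 100

# 50k input tokens -> $0.09, 10k output tokens -> $0.09: about $0.18 (18 credits)
usd, credits = estimate_cost("claude-sonnet-4-6", 50_000, 10_000)
```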
10. Authentication
All API keys use the ck_ prefix. The proxy accepts multiple authentication header formats:
| Header | Format | Used By |
|---|---|---|
| x-api-key | ck_... | Claude Code, Anthropic SDK |
| Authorization | Bearer ck_... | OpenCode, OpenAI SDK, cURL |
| anthropic-api-key | ck_... | Alternative Anthropic header |
| x-goog-api-key | ck_... | Gemini CLI compatibility |
Forwarded headers
The proxy forwards anthropic-version (defaults to 2023-06-01) and anthropic-beta headers to the upstream Anthropic API. Streaming is fully supported via Server-Sent Events (SSE).
11. Rate Limits & Errors
Rate limits
Requests per hour
1,000
Minimum credit balance
200 credits ($2.00)
Streaming timeout
300s (5 min)
Rate limit info is returned in response headers: x-ratelimit-limit, x-ratelimit-remaining, x-ratelimit-reset.
Error responses
All errors return JSON:
{
"error": {
"type": "error_type",
"message": "Human-readable description"
}
}
| Status | Type | Cause |
|---|---|---|
| 400 | invalid_request_error | Missing required parameters |
| 401 | authentication_error | Invalid or missing API key |
| 402 | insufficient_credits | Credit balance below $2.00 |
| 403 | access_denied | Model not available at your tier |
| 429 | rate_limit_error | Rate limit exceeded (1,000/hr) |
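Client-side, the table above maps to handling along these lines. The statuses and JSON shape are from the table; the retry wording and the `describe_error` helper are our own sketch:

```python
import json

# Turn an error response (status + JSON body) into an actionable message.
def describe_error(status: int, body: str) -> str:
    err = json.loads(body)["error"]
    if status == 429:   # rate_limit_error: 1,000 requests/hour
        return f"rate limited: {err['message']} -- wait for x-ratelimit-reset"
    if status == 402:   # insufficient_credits: balance below $2.00
        return f"top up credits: {err['message']}"
    return f"{err['type']}: {err['message']}"

msg = describe_error(
    402,
    '{"error": {"type": "insufficient_credits", "message": "Credit balance below $2.00"}}',
)
```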
12. Tiers
Tiers are determined by cumulative top-up amount. Higher tiers unlock additional models.
Starter ($19.99+) — unlocks:
- Claude Sonnet 4.6
- GPT-5.2 / GPT-5.3 Codex
- Gemini 3.0 Pro & Flash
- API key creation
Opus — everything in Starter, plus:
- Claude Opus 4.6
- Priority queue
13. Audio Generation (Fish Audio)
Text-to-speech, voice cloning, and speech recognition powered by Fish Audio. Requires Starter tier ($19.99+).
Endpoint
POST https://zenn.ceo/api/v1/audio/generations
GET https://zenn.ceo/api/v1/audio/generations (list models)
Available Models
| Model ID | Type | Credits | Price | Required Input |
|---|---|---|---|---|
| audio-tts | Text to Speech | 2 | $0.020 | text |
| audio-clone | Voice Clone | 2 | $0.020 | text + audio |
| audio-asr | Speech Recognition | 1 | $0.010 | audio |
Text to Speech
curl -X POST https://zenn.ceo/api/v1/audio/generations \
-H "Authorization: Bearer ck_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "audio-tts",
"text": "Hello, welcome to Zenn!",
"format": "mp3"
}'
Optional parameters: format (mp3, wav, opus), temperature, reference_id (existing voice model ID).
Voice Clone
curl -X POST https://zenn.ceo/api/v1/audio/generations \
-H "Authorization: Bearer ck_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "audio-clone",
"text": "Say this in the cloned voice",
"audio": "https://example.com/reference-voice.mp3"
}'
The audio field is a URL to the reference voice sample for zero-shot cloning.
Speech Recognition (ASR)
curl -X POST https://zenn.ceo/api/v1/audio/generations \
-H "Authorization: Bearer ck_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "audio-asr",
"audio": "https://example.com/speech.mp3",
"language": "en"
}'
Response Format
TTS / Voice Clone returns an audio URL:
{
"model": "audio-tts",
"created": 1709654321,
"data": [{ "url": "https://storage.zenn.ceo/generated/audio/..." }]
}
ASR returns transcribed text:
{
"model": "audio-asr",
"created": 1709654321,
"text": "Hello, welcome to Zenn!",
"duration": 2.5
}
Ready to start?
One key works across Claude Code, OpenCode, OpenClaw, Codex CLI, Gemini CLI, and Cursor. Top up credits and create your API key.
