Documentation
ApiLink is a drop-in OpenAI-compatible API gateway. Access 200+ models from OpenAI, Anthropic, Google, DeepSeek, and more through a single endpoint.
Quickstart
Get your API key from the Dashboard, then point any OpenAI-compatible client at ApiLink by replacing `OPENAI_BASE_URL` and `OPENAI_API_KEY`.
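For clients that read the standard environment variables, setup is a one-time shell export. The base URL below is an assumed placeholder (copy the real one from the Dashboard), and the key value is illustrative:

```shell
# Placeholder values -- copy the real base URL and key from the Dashboard.
export OPENAI_BASE_URL="https://api.apilink.io/v1"
export OPENAI_API_KEY="al-placeholder"
```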
Authentication
All requests require a Bearer token in the `Authorization` header. ApiLink keys always start with `al-`.
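A minimal sketch of building the header (the key value is illustrative; the `al-` prefix check mirrors the rule above):

```python
def auth_headers(api_key: str) -> dict:
    """Build the Authorization header expected by ApiLink."""
    # Keys issued by ApiLink always start with "al-".
    if not api_key.startswith("al-"):
        raise ValueError("not an ApiLink API key (expected 'al-' prefix)")
    return {"Authorization": f"Bearer {api_key}"}

print(auth_headers("al-demo-key")["Authorization"])  # Bearer al-demo-key
```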
Endpoints
| Method | Path | Description |
|---|---|---|
| POST | /v1/chat/completions | Chat completions (streaming supported) |
| GET | /v1/models | List all available models |
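A minimal request body for `POST /v1/chat/completions`, following the standard OpenAI chat schema. The model id is illustrative (list real ids via `GET /v1/models`), and setting `"stream": true` enables streaming:

```python
import json

# Minimal chat-completions payload in the OpenAI schema.
# The model id is illustrative -- fetch real ids from GET /v1/models.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,  # set to True for server-sent-event streaming
}
body = json.dumps(payload)
print(body)
```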
Listing Models
Fetch the full list of available models programmatically:
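`GET /v1/models` returns the standard OpenAI list shape. A sketch of extracting model ids from a sample response (the ids and fields shown here are illustrative, not the live catalog):

```python
import json

# Trimmed example of the OpenAI-style response from GET /v1/models;
# the model ids are illustrative.
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "gpt-4o-mini", "object": "model", "owned_by": "openai"},
    {"id": "claude-sonnet", "object": "model", "owned_by": "anthropic"}
  ]
}
""")

model_ids = [m["id"] for m in sample["data"]]
print(model_ids)
```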
You can also browse all models with pricing on the Models page.
Error Codes
| HTTP | error.type | Cause |
|---|---|---|
| 401 | invalid_request_error | Missing or invalid API key |
| 402 | insufficient_quota | Balance is $0.00; recharge in the Dashboard |
| 404 | invalid_request_error | Model not found or not active |
| 429 | rate_limit_exceeded | Over 60 requests/min per key (default limit) |
| 500 | api_error | Internal server error |
| 503 | api_error | All upstream providers unavailable; retry shortly |
Every error response includes a request_id field. Include it when contacting support:
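An example error body, assuming the OpenAI-style `error` object with `request_id` at the top level; the values shown are made up for illustration:

```python
import json

# Example 401 error body. The field layout follows the OpenAI error
# shape, and the request_id value here is invented for illustration.
error_body = json.loads("""
{
  "error": {
    "message": "Missing or invalid API key",
    "type": "invalid_request_error",
    "code": null
  },
  "request_id": "req_abc123"
}
""")
print(error_body["request_id"])
```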
Rate Limits
| Limit | Value | Scope |
|---|---|---|
| Requests per minute | 60 RPM | Per API key |
| Max request timeout | 120 seconds | Per request |
| Max context | Model-dependent (up to 1M tokens) | Per request |
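A 429 is transient under the default 60 RPM limit, so a client can retry with exponential backoff. A minimal sketch, with a stand-in function in place of the real HTTP call:

```python
import random
import time

def with_backoff(call, max_retries=5):
    """Retry `call` on HTTP 429 with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        status, result = call()
        if status != 429:
            return result
        # Default limit is 60 requests/min per key, so a short wait
        # (doubling each attempt, plus jitter) usually clears it.
        time.sleep(min(2 ** attempt + random.random(), 30))
    raise RuntimeError("rate limit: retries exhausted")

# Stand-in for a real request: fails once with 429, then succeeds.
responses = iter([(429, None), (200, "ok")])
print(with_backoff(lambda: next(responses)))  # ok
```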
Billing & Credits
ApiLink uses a prepaid credit model. Credits are deducted per request based on actual token usage.
| Item | Detail |
|---|---|
| Pricing unit | Per 1,000 tokens (input and output billed separately) |
| Minimum recharge | $5 USD |
| Unused credits | Roll over indefinitely (no expiry) |
| Invoices | Available for every completed payment (Dashboard → Credits) |
| B2B invoices | Contact support@apilink.io for custom invoicing |
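Because input and output tokens are billed separately per 1,000 tokens, per-request cost is a simple weighted sum. A sketch of the arithmetic; the rates below are placeholders, not real ApiLink prices (see the Models page for actual per-model pricing):

```python
# Placeholder rates -- real per-model prices are on the Models page.
INPUT_PER_1K = 0.0005   # USD per 1,000 input tokens (placeholder)
OUTPUT_PER_1K = 0.0015  # USD per 1,000 output tokens (placeholder)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request under per-1,000-token pricing."""
    return (input_tokens / 1000) * INPUT_PER_1K + \
           (output_tokens / 1000) * OUTPUT_PER_1K

# e.g. 2,000 input tokens + 500 output tokens:
print(f"${request_cost(2000, 500):.6f}")  # $0.001750
```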
Client Setup Guides
Settings → Models → OpenAI API Key → paste your `al-...` key. Under "Base URL" enter:
In config.json, add a provider with type "openai" and set baseUrl and apiKey:
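A sketch of such a provider entry. The exact schema depends on the client, the base URL shown is a placeholder (copy the real one from the Dashboard), and the key value is illustrative:

```json
{
  "providers": [
    {
      "type": "openai",
      "baseUrl": "https://api.apilink.io/v1",
      "apiKey": "al-placeholder"
    }
  ]
}
```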
Pass --openai-api-base and --openai-api-key on the command line, or set them as env vars:
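For example (the client name, base URL, and key value below are placeholders):

```shell
# As flags (placeholders throughout):
# your-client --openai-api-base https://api.apilink.io/v1 --openai-api-key al-placeholder

# Or as environment variables:
export OPENAI_API_BASE="https://api.apilink.io/v1"
export OPENAI_API_KEY="al-placeholder"
```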
Use the openai/ prefix for any model:
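For example (the model ids are illustrative; list real ids via `GET /v1/models`):

```
openai/gpt-4o-mini
openai/deepseek-chat
```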