Documentation

ApiLink is a drop-in OpenAI-compatible API gateway. Access 200+ models from OpenAI, Anthropic, Google, DeepSeek, and more through a single endpoint.

Quickstart

Get your API key from the Dashboard, then point any OpenAI-compatible client at ApiLink by overriding the base URL and API key — either via the `OPENAI_BASE_URL` and `OPENAI_API_KEY` environment variables or via the client constructor, as below.

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey:  "al-your-key-here",          // Your ApiLink key
  baseURL: "https://apilink.io/v1",
});

const response = await client.chat.completions.create({
  model:    "anthropic/claude-sonnet-4-5",  // any model from /v1/models
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);
```

```python
from openai import OpenAI

client = OpenAI(
    api_key="al-your-key-here",
    base_url="https://apilink.io/v1",
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Authentication

All requests require a Bearer token in the Authorization header. ApiLink keys always start with al-.

```bash
curl https://apilink.io/v1/chat/completions \
  -H "Authorization: Bearer al-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"model":"openai/gpt-4o-mini","messages":[{"role":"user","content":"Hi"}]}'
```

Create and manage your API keys in the Dashboard → API Keys tab. Each key can be revoked independently.
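
Since keys can be revoked, avoid hardcoding them; a small sketch of loading the key from an environment variable instead (the variable name `APILINK_API_KEY` is our own choice, not an ApiLink convention — the `al-` prefix check comes from the rule above):

```python
import os

def load_apilink_key(env_var: str = "APILINK_API_KEY") -> str:
    """Read an ApiLink key from the environment and sanity-check it.

    The env var name is illustrative, not an official convention.
    ApiLink keys always start with "al-", so anything else is
    rejected early instead of failing on the first request.
    """
    key = os.environ.get(env_var, "")
    if not key.startswith("al-"):
        raise RuntimeError(f"{env_var} is unset or does not look like an ApiLink key")
    return key
```

Pass the result to your client constructor in place of the hardcoded string.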

Endpoints

| Method | Path | Description |
|--------|------|-------------|
| POST | /v1/chat/completions | Chat completions (streaming supported) |
| GET | /v1/models | List all available models |

ApiLink implements the OpenAI Chat Completions API. The full OpenAI API surface (Assistants, Embeddings, etc.) is not yet supported — use the provider's own SDK for those.
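
Streamed responses use the standard OpenAI server-sent-events format (`data: <json>` lines, terminated by `data: [DONE]`). If you are not using an SDK, a minimal parser sketch for extracting the content deltas might look like this (the event shape is the standard Chat Completions streaming format, not anything ApiLink-specific):

```python
import json
from typing import Iterator

def iter_stream_content(sse_lines: Iterator[str]) -> Iterator[str]:
    """Yield text deltas from an OpenAI-style SSE chat stream.

    Each event line looks like `data: {"choices":[{"delta":{...}}]}`;
    the stream ends with `data: [DONE]`. Non-data lines (comments,
    blank keep-alives) are skipped.
    """
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]
```

With the official SDKs you would instead pass `stream=True` and iterate the returned chunks.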

Listing Models

Fetch the full list of available models programmatically:

```bash
curl https://apilink.io/v1/models \
  -H "Authorization: Bearer al-your-key-here"
```

```json
{
  "object": "list",
  "data": [
    {
      "id":             "anthropic/claude-sonnet-4-5",
      "object":         "model",
      "owned_by":       "anthropic",
      "context_window": 200000
    },
    {
      "id":             "openai/gpt-4o",
      "object":         "model",
      "owned_by":       "openai",
      "context_window": 128000
    }
  ]
}
```
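
Since each entry carries a `context_window`, you can filter the list client-side, for example to pick only models that fit a long prompt. A small sketch using the sample payload above:

```python
import json

# Sample /v1/models payload, as shown above.
MODELS_JSON = """
{
  "object": "list",
  "data": [
    {"id": "anthropic/claude-sonnet-4-5", "object": "model",
     "owned_by": "anthropic", "context_window": 200000},
    {"id": "openai/gpt-4o", "object": "model",
     "owned_by": "openai", "context_window": 128000}
  ]
}
"""

def models_with_context(models: list[dict], min_tokens: int) -> list[str]:
    """Return ids of models whose context window is >= min_tokens."""
    return [m["id"] for m in models
            if m.get("context_window", 0) >= min_tokens]
```

For example, `models_with_context(data, 150_000)` keeps only the Claude entry from the sample response.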

You can also browse all models with pricing on the Models page.

Error Codes

| HTTP | `error.type` | Cause |
|------|--------------|-------|
| 401 | invalid_request_error | Missing or invalid API key |
| 402 | insufficient_quota | Balance is $0.00; recharge in the dashboard |
| 404 | invalid_request_error | Model not found or not active |
| 429 | rate_limit_exceeded | Over 60 requests/min per key (default limit) |
| 500 | api_error | Internal server error |
| 503 | api_error | All upstream providers unavailable; retry shortly |

Every error response includes a request_id field. Include it when contacting support:

```json
{
  "error": {
    "message":    "Insufficient balance. Recharge at https://apilink.io/dashboard",
    "type":       "insufficient_quota",
    "request_id": "3f7a2b1c-..."
  }
}
```
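
Of the codes above, 429 and 503 are transient, so a client can retry them with exponential backoff. A sketch of that pattern (our own helper, not part of any ApiLink SDK — `call` is any function returning a `(status, body)` pair):

```python
import random
import time

RETRYABLE = {429, 503}  # rate-limited / upstream unavailable

def with_retries(call, max_attempts: int = 4, base_delay: float = 1.0,
                 sleep=time.sleep):
    """Invoke `call` until it returns a non-retryable status.

    Waits base_delay * 2**attempt seconds between attempts, with
    +/-50% jitter so concurrent clients do not retry in lockstep.
    The final attempt's result is returned even if still retryable.
    """
    for attempt in range(max_attempts):
        status, body = call()
        if status not in RETRYABLE or attempt == max_attempts - 1:
            return status, body
        delay = base_delay * (2 ** attempt) * (0.5 + random.random() / 2)
        sleep(delay)
```

Non-retryable statuses such as 401 or 402 are returned immediately, since retrying cannot fix a bad key or an empty balance.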

Rate Limits

| Limit | Value | Scope |
|-------|-------|-------|
| Requests per minute | 60 RPM | Per API key |
| Max request timeout | 120 seconds | Per request |
| Max context | Model-dependent (up to 1M tokens) | Per request |

Rate limit headers are not yet returned. If you need higher limits, contact support@apilink.io.
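
Because rate limit headers are not returned, staying under the cap is the client's job. One option is a client-side sliding-window limiter; a sketch (our own, not an ApiLink SDK feature — the 60 RPM default comes from the table above):

```python
import collections
import time

class RateLimiter:
    """Sliding-window limiter to stay under a per-key RPM cap."""

    def __init__(self, max_requests: int = 60, window_s: float = 60.0,
                 clock=time.monotonic):
        self.max_requests = max_requests
        self.window_s = window_s
        self.clock = clock
        self.timestamps = collections.deque()  # scheduled send times

    def acquire(self) -> float:
        """Record a request; return seconds to wait before sending it."""
        now = self.clock()
        # Drop send times that have fallen out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window_s:
            self.timestamps.popleft()
        wait = 0.0
        if len(self.timestamps) >= self.max_requests:
            wait = self.window_s - (now - self.timestamps[0])
        self.timestamps.append(now + wait)
        return wait
```

Call `acquire()` before each request and sleep for the returned duration; requests beyond the cap are pushed just past the window edge.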

Billing & Credits

ApiLink uses a prepaid credit model. Credits are deducted per request based on actual token usage.

| Item | Detail |
|------|--------|
| Pricing unit | Per 1,000 tokens (input and output billed separately) |
| Minimum recharge | $5 USD |
| Unused credits | Roll over indefinitely; no expiry |
| Invoices | Available for every completed payment (Dashboard → Credits) |
| B2B invoices | Contact support@apilink.io for custom invoicing |

See per-model pricing on the Models page. Prices are shown per 1M tokens for readability.
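
Since input and output are billed separately and the Models page quotes rates per 1M tokens, the per-request cost is a simple weighted sum. A sketch (the rates in the example are made up for illustration, not real ApiLink prices):

```python
def request_cost(prompt_tokens: int, completion_tokens: int,
                 input_per_1m: float, output_per_1m: float) -> float:
    """Estimated USD cost of one request.

    Rates are in USD per 1M tokens, as displayed on the Models page;
    prompt (input) and completion (output) tokens are priced separately.
    """
    return (prompt_tokens * input_per_1m
            + completion_tokens * output_per_1m) / 1_000_000
```

For example, 1,000 input tokens at a hypothetical $3/1M plus 500 output tokens at $15/1M comes to $0.0105.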

Client Setup Guides

Cursor

Settings → Models → OpenAI API Key → paste your al-... key. Under "Base URL" enter:

```
https://apilink.io/v1
```

Continue (VS Code)

In config.json, add a model entry with `"provider": "openai"` and set `apiBase` and `apiKey`:

```json
{
  "models": [{
    "title":    "Claude via ApiLink",
    "provider": "openai",
    "model":    "anthropic/claude-sonnet-4-5",
    "apiKey":   "al-your-key-here",
    "apiBase":  "https://apilink.io/v1"
  }]
}
```

Aider

Pass --openai-api-base and --openai-api-key on the command line, or set them as env vars:

```bash
aider \
  --openai-api-base https://apilink.io/v1 \
  --openai-api-key  al-your-key-here \
  --model           anthropic/claude-sonnet-4-5
```

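
The environment-variable form, to our understanding, uses the standard OpenAI-compatible variables that aider reads:

```bash
# Equivalent to passing the flags on every invocation.
export OPENAI_API_BASE=https://apilink.io/v1
export OPENAI_API_KEY=al-your-key-here
```

With these exported, `aider --model anthropic/claude-sonnet-4-5` needs no extra flags.
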
LiteLLM

Prefix any ApiLink model id with openai/ so LiteLLM routes it through its OpenAI-compatible provider:

```python
import litellm

response = litellm.completion(
    model="openai/anthropic/claude-sonnet-4-5",
    messages=[{"role": "user", "content": "Hello"}],
    api_key="al-your-key-here",
    api_base="https://apilink.io/v1",
)
```

FAQ

**Is ApiLink a reseller or the original provider?**
ApiLink is a reseller. We route your requests to upstream providers (OpenAI, Anthropic, Google, etc.) and bill you at our listed rates. We are not affiliated with any upstream provider.

**Which OpenAI features are supported?**
Chat completions (streaming included). Embeddings, image generation, audio, the Assistants API, and fine-tuning are not currently supported.

**Can I get a B2B invoice for my company?**
Yes. Download invoices for each payment from Dashboard → Credits → Payment History. For custom invoicing, net-30 terms, or purchase orders, contact support@apilink.io.

**What happens if I run out of credits mid-request?**
We pre-deduct an estimated amount before forwarding your request. If your balance is below the estimate, the request returns a 402 error before reaching the upstream provider.

**How do I report a billing discrepancy?**
Every API response includes an X-Request-Id header. Email support@apilink.io with the request ID and we will investigate.

**Is my data stored?**
We log request metadata (model, token counts, cost, timestamp) for billing and debugging. We do not store prompt or completion content. See our Privacy Policy for details.
Still have questions?
We usually reply within a few hours.
Contact support →