API Reference

sylphx.ai exposes an OpenAI-compatible API. If your code already uses the OpenAI SDK, switching over is just a matter of changing the base URL and API key.

Overview

The sylphx.ai gateway proxies requests to upstream model providers (Anthropic, ZAI, and more). We implement the OpenAI Chat Completions API, so any client library that targets OpenAI — in any language — works with sylphx.ai without modification beyond the base URL and key.

  • Full OpenAI Chat Completions API compatibility (/v1/chat/completions)
  • Server-sent events streaming
  • Multi-turn conversation support
  • System prompts, tool use, and vision inputs
  • Provider-side prompt cache affinity
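
Multi-turn conversations work the same way as in the OpenAI API: each request carries the full message history, with prior assistant replies appended. A minimal sketch of the message shape (the content strings are illustrative):

```typescript
// Shape of a multi-turn Chat Completions message history.
// The gateway is stateless per call, so the client resends the whole history.
type ChatMessage = {
  role: "system" | "user" | "assistant"
  content: string
}

const history: ChatMessage[] = [
  { role: "system", content: "You are a concise technical assistant." },
  { role: "user", content: "What is SSE streaming?" },
  { role: "assistant", content: "Server-sent events push data over one long-lived HTTP response." },
  { role: "user", content: "How does the OpenAI SDK consume it?" }, // follow-up turn
]

console.log(history.length) // 4 messages: one system turn plus three conversation turns
```

On each new user turn, append the previous assistant reply and the new user message to this array before sending the next request.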

Base URL

url
https://api.sylphx.ai/v1

All endpoints are relative to this base URL. For example, the chat completions endpoint is POST https://api.sylphx.ai/v1/chat/completions.

Authentication

Pass your sylphx.ai API key in the Authorization header as a Bearer token. You can generate and manage API keys from your dashboard.

http headers
Authorization: Bearer YOUR_SYLPHX_API_KEY
Content-Type: application/json

Note: Never expose your API key client-side. Set it as a server-side environment variable (e.g. SYLPHX_API_KEY).
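
If you prefer not to use an SDK, the same headers work with any plain HTTP client. A sketch using fetch, with the request construction split into a small helper so the header logic is easy to inspect (the helper name and body fields are illustrative, mirroring the Chat Completions examples below):

```typescript
// Build the POST options for a raw Chat Completions request.
function buildRequest(apiKey: string, body: unknown) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  }
}

const req = buildRequest(process.env.SYLPHX_API_KEY ?? "", {
  model: "claude-sonnet-4-6",
  messages: [{ role: "user", content: "Hello" }],
})

// fetch("https://api.sylphx.ai/v1/chat/completions", req)
//   .then((r) => r.json())
//   .then((data) => console.log(data.choices[0].message.content))
```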

Quick Start

Install the OpenAI SDK and point it at the sylphx.ai gateway.

typescript
import OpenAI from "openai"

const client = new OpenAI({
  apiKey: process.env.SYLPHX_API_KEY,
  baseURL: "https://api.sylphx.ai/v1",
})

async function main() {
  const response = await client.chat.completions.create({
    model: "claude-sonnet-4-6",
    messages: [
      {
        role: "user",
        content: "Explain the difference between LLMs and diffusion models.",
      },
    ],
  })

  console.log(response.choices[0].message.content)
}

main()

Streaming

sylphx.ai supports server-sent events (SSE) streaming via the standard OpenAI SDK. Pass stream: true to receive chunks as they arrive.

typescript
import OpenAI from "openai"

const client = new OpenAI({
  apiKey: process.env.SYLPHX_API_KEY,
  baseURL: "https://api.sylphx.ai/v1",
})

const stream = await client.chat.completions.create({
  model: "glm-5.1",
  messages: [
    { role: "user", content: "Write a recursive Fibonacci in TypeScript." }
  ],
  stream: true,
})

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content
  if (content) {
    process.stdout.write(content)
  }
}
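
To assemble the full reply from a stream, accumulate each delta as it arrives. A small helper sketch, using an in-memory async generator to stand in for the network (the chunk shape follows the OpenAI-style deltas shown above):

```typescript
// Collect streamed delta fragments into one string.
// Works with any async iterable of OpenAI-style chunks.
type StreamChunk = { choices: { delta: { content?: string } }[] }

async function collectStream(stream: AsyncIterable<StreamChunk>): Promise<string> {
  let text = ""
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? ""
  }
  return text
}

// In-memory stand-in for a network stream:
async function* fakeStream(): AsyncGenerator<StreamChunk> {
  yield { choices: [{ delta: { content: "Hello, " } }] }
  yield { choices: [{ delta: { content: "world" } }] }
  yield { choices: [{ delta: {} }] } // the final chunk may carry no content
}

collectStream(fakeStream()).then((text) => console.log(text)) // "Hello, world"
```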

Model IDs

Pass the model ID as the model field in your request. No provider prefix is needed.

Model                  Model ID
Claude Opus 4.6        claude-opus-4-6
Claude Sonnet 4.6      claude-sonnet-4-6
GLM-5.1                glm-5.1
GLM-5 Turbo            glm-5-turbo
GLM-5                  glm-5
GLM-4.7                glm-4.7
Auto (smart routing)   auto

Error Codes

sylphx.ai returns standard OpenAI-format error responses. HTTP status codes follow REST conventions.

Status   Meaning
401      Invalid or missing API key
403      Key does not have access to this model
404      Model not found
429      Rate limit exceeded — retry with backoff
502      Upstream provider error — automatic failover triggered
503      All upstream providers unavailable
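
The retry-with-backoff guidance for 429 can be implemented with capped exponential delays. A sketch (the delay constants and attempt limit are illustrative defaults, not a documented sylphx.ai policy):

```typescript
// Capped exponential backoff: 500ms, 1s, 2s, 4s, then held at 8s.
function backoffMs(attempt: number, baseMs = 500, capMs = 8000): number {
  return Math.min(capMs, baseMs * 2 ** attempt)
}

// Retry a request function, sleeping between failed attempts.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 4): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err
      await new Promise((resolve) => setTimeout(resolve, backoffMs(attempt)))
    }
  }
}

console.log([0, 1, 2, 3, 4, 5].map((a) => backoffMs(a))) // [500, 1000, 2000, 4000, 8000, 8000]
```

In production you would typically add random jitter to the delay and retry only on 429 and 5xx statuses, since 401/403/404 will not succeed on retry.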