Client Libraries
Mixlayer exposes an OpenAI-compatible API at https://models.mixlayer.ai/v1. Any client library that targets the OpenAI Chat Completions API will work — point it at the Mixlayer base URL and pass your Mixlayer API key.
You can create an API key from the Mixlayer console.
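Because the API is OpenAI-compatible, any HTTP client works. As a minimal sketch, here is a TypeScript example using the global `fetch`; the `buildChatRequest` and `chat` helper names are illustrative, not part of any SDK:

```typescript
// Minimal sketch: call the Chat Completions endpoint with fetch.
// buildChatRequest and chat are illustrative helpers, not SDK functions.
const BASE_URL = "https://models.mixlayer.ai/v1";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(apiKey: string, model: string, messages: ChatMessage[]) {
  return {
    url: `${BASE_URL}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

async function chat(apiKey: string, model: string, messages: ChatMessage[]) {
  const { url, init } = buildChatRequest(apiKey, model, messages);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`Mixlayer request failed: ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible responses carry the reply in choices[0].message.content.
  return data.choices[0].message.content as string;
}
```

The same pattern applies to the official OpenAI SDKs: construct the client with `baseURL: "https://models.mixlayer.ai/v1"` and your Mixlayer key.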
Installation
No installation is required for the examples below; curl comes preinstalled on most systems.
Basic chat completion
curl https://models.mixlayer.ai/v1/chat/completions \
  -H "Authorization: Bearer $MIXLAYER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen/qwen3.5-4b-free",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Tell me a fun fact about chihuahuas."}
    ]
  }'
Streaming
Pass stream: true to receive tokens as Server-Sent Events as they’re generated.
curl https://models.mixlayer.ai/v1/chat/completions \
  -H "Authorization: Bearer $MIXLAYER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen/qwen3.5-4b-free",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Tell me a fun fact about chihuahuas."}
    ]
  }'
See Chat Completions for the full list of supported request parameters and the streaming event shape.
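If you consume the stream without an SDK, each event arrives as a `data:` line carrying a JSON chunk, ending with a `data: [DONE]` sentinel. As a sketch, here is a TypeScript parser assuming the OpenAI streaming chunk shape (`choices[0].delta.content`); verify the exact field names against the Chat Completions reference:

```typescript
// Sketch: extract content deltas from OpenAI-style SSE lines.
// Assumes the OpenAI streaming chunk shape (choices[0].delta.content);
// check the Chat Completions reference for the exact event shape.
function parseSseLine(line: string): string | null {
  if (!line.startsWith("data: ")) return null; // skip blank/comment lines
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null; // end-of-stream sentinel
  const chunk = JSON.parse(payload);
  return chunk.choices?.[0]?.delta?.content ?? null;
}

// Read a streaming response body line by line, yielding content deltas.
async function* streamChat(res: Response): AsyncGenerator<string> {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop()!; // keep any trailing partial line for the next read
    for (const line of lines) {
      const delta = parseSseLine(line);
      if (delta !== null) yield delta;
    }
  }
}
```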
Vercel AI SDK
You can also use Mixlayer via the OpenAI-compatible provider in the Vercel AI SDK.
npm install ai @ai-sdk/openai
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";
import type { NextRequest } from "next/server";
const mixlayer = createOpenAI({
  apiKey: process.env.MIXLAYER_API_KEY,
  baseURL: "https://models.mixlayer.ai/v1",
});

export async function POST(req: NextRequest) {
  const { prompt } = await req.json();

  const result = streamText({
    model: mixlayer.chat("qwen/qwen3.5-397b-a17b"),
    prompt,
    maxTokens: 1000,
  });

  return result.toDataStreamResponse();
}
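On the client, you can POST to this route and read the response stream. A sketch, assuming the route is mounted at `/api/chat` (the path and the `readAll`/`ask` helpers are illustrative):

```typescript
// Sketch: call the route handler from the browser and accumulate the
// streamed bytes as text. "/api/chat" is an assumed route path.
async function readAll(res: Response): Promise<string> {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

async function ask(prompt: string): Promise<string> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`chat request failed: ${res.status}`);
  return readAll(res);
}
```

Note that `toDataStreamResponse` emits the AI SDK's framed data-stream protocol rather than plain text; the SDK's client hooks (such as `useChat`) decode it for you, so treat the raw accumulation above as a sketch for debugging rather than a production client.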