API Reference
Complete type reference for all Skytells Inference sub-APIs: Chat, Responses, Embeddings, Models, and Predictions. Every request parameter, response object, and stream event is defined here.
Reference
This section is the canonical type and object reference for all Skytells Inference sub-APIs. Its purpose is precise cross-linking — throughout the rest of the docs, when a guide says "returns a ChatCompletion object" or "emits a ResponsesStreamEvent", those links point here.
Each sub-API has three pages:
- Overview (index) — what the API does, its endpoint at a glance, and the SDK access pattern.
- Create (create) — the full endpoint reference: every request parameter, the response type it returns, streaming behavior, and multi-client code examples.
- Objects (objects) — named, linkable definitions of every response type, schema object, and stream event the API emits.
Chat API Reference
POST /v1/chat/completions — Request params, ChatCompletion, ChatCompletionChunk, Message, ContentFilterResults.
Responses API Reference
POST /v1/responses — Request params, Response, OutputItem, ContentFilter, ResponsesStreamEvent.
Models API Reference
GET /v1/models — Request params, Model, ModelSchema.
Predictions API Reference
POST /v1/predictions — Request params, Prediction, PredictionUsage.
Embeddings API Reference
POST /v1/embeddings — Request params, EmbeddingResponse, Embedding, EmbeddingUsage.
Language Model Objects
These objects represent responses from /v1/chat/completions, /v1/responses, and /v1/embeddings.
| Object | API | Description |
|---|---|---|
| ChatCompletion | Chat | Non-streaming response from POST /v1/chat/completions |
| ChatCompletionChunk | Chat | One SSE chunk in a streaming chat response |
| ChatMessage | Chat | A single message inside choices[].message |
| ChatCompletionUsage | Chat | Token counts for a chat completion |
| ContentFilterResults | Chat | Per-choice safety evaluation (hate, violence, etc.) |
| PromptFilterResults | Chat | Prompt-level safety evaluation including jailbreak detection |
| Response | Responses | The full response object from POST /v1/responses |
| OutputItem | Responses | One entry in response.output[] |
| OutputTextContent | Responses | Text content within an OutputItem |
| ContentFilter | Responses | Prompt or completion safety filter entry |
| ResponsesUsage | Responses | Input/output token counts for a response |
| ResponsesStreamEvent | Responses | Discriminated union of all 9 SSE event types |
| EmbeddingResponse | Embeddings | Full response from POST /v1/embeddings |
| Embedding | Embeddings | One vector entry inside EmbeddingResponse.data[] |
| EmbeddingUsage | Embeddings | Token counts for an embedding request |
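To illustrate how the Chat objects above nest, here is a minimal TypeScript sketch. Only the relationships named in the table (choices[].message, usage token counts) are taken from this page; the remaining field names follow the OpenAI-compatible shape and are assumptions, not the authoritative schema — see the Objects page for the canonical definitions.

```typescript
// Hypothetical sketch of the Chat objects, assuming an
// OpenAI-compatible response shape. Not the authoritative schema.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatCompletionUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

interface ChatCompletion {
  id: string;
  model: string;
  // Each choice carries the assistant message described in the table.
  choices: { index: number; message: ChatMessage }[];
  usage: ChatCompletionUsage;
}

// A sample object conforming to the sketch:
const sample: ChatCompletion = {
  id: "chatcmpl-123",
  model: "example-model",
  choices: [{ index: 0, message: { role: "assistant", content: "Hello!" } }],
  usage: { prompt_tokens: 5, completion_tokens: 2, total_tokens: 7 },
};

console.log(sample.choices[0].message.content);
```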
Prediction Objects
These objects represent responses from /v1/predictions.
| Object | API | Description |
|---|---|---|
| Prediction | Predictions | The full response object from POST /v1/predictions |
| PredictionUsage | Predictions | Token counts for a prediction |
Model Objects
These objects represent responses from /v1/models.
| Object | API | Description |
|---|---|---|
| Model | Models | The full response object from GET /v1/models/{slug} |
| ModelSchema | Models | The JSON Schema for the model's input and output |
SDK Clients
All Inference sub-APIs are covered by the Skytells TypeScript SDK and are fully compatible with the OpenAI SDK (change the base URL only).
| Client | Package | Access Pattern |
|---|---|---|
| Skytells SDK | npm install skytells | client.chat.completions.create() / client.responses.create() / client.embeddings.create() |
| OpenAI SDK | npm install openai | Same methods — set baseURL: "https://api.skytells.ai/v1" |
| REST | — | curl https://api.skytells.ai/v1/... with x-api-key header |
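As a rough sketch of the REST row, the snippet below assembles the URL, headers, and body for a chat completion call from the base URL and x-api-key header shown in the table. The helper name buildChatRequest is illustrative only and is not part of either SDK.

```typescript
// Illustrative helper (not part of any SDK): builds the parts of a
// POST /v1/chat/completions request using the x-api-key header
// scheme from the table above.
interface ChatRequestParts {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildChatRequest(
  apiKey: string,
  model: string,
  messages: { role: string; content: string }[],
  baseURL = "https://api.skytells.ai/v1",
): ChatRequestParts {
  return {
    url: `${baseURL}/chat/completions`,
    headers: {
      "content-type": "application/json",
      "x-api-key": apiKey,
    },
    body: JSON.stringify({ model, messages }),
  };
}

const req = buildChatRequest("YOUR_API_KEY", "example-model", [
  { role: "user", content: "Hi" },
]);
console.log(req.url);
```

The same parts could then be passed to fetch or curl; switching SDKs only changes the baseURL, as the table notes.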
For more information on the SDKs, see the SDKs page.