Chat Completions API Objects
Type definitions for every object the Chat Completions API emits — ChatCompletion, ChatCompletionChunk, ChatMessage, ContentFilterResults, PromptFilterResults, ChatCompletionUsage.
This page defines every named object returned by POST /v1/chat/completions. Reference these from guides and other docs using anchor links, for example:
"Returns a [ChatCompletion](#the-chatcompletion-object) object."
The ChatCompletion Object
The top-level response object returned by a non-streaming POST /v1/chat/completions call.
```typescript
interface ChatCompletion {
  id: string;
  object: "chat.completion";
  created: number; // Unix timestamp
  model: string;
  system_fingerprint: string | null;
  choices: ChatCompletionChoice[];
  prompt_filter_results: PromptFilterResults[]; // Skytells safety field
  usage: ChatCompletionUsage;
}

interface ChatCompletionChoice {
  index: number;
  message: ChatMessage;
  finish_reason: "stop" | "length" | "tool_calls" | "content_filter";
  logprobs: ChatCompletionLogprobs | null;
  content_filter_results: ContentFilterResults; // Skytells safety field
}
```

| Field | Type | Description |
|---|---|---|
| `id` | `string` | Unique completion ID, prefixed `chatcmpl-` |
| `object` | `"chat.completion"` | Object type discriminator |
| `created` | `number` | Unix timestamp when the completion was created |
| `model` | `string` | The resolved model identifier (may differ from the requested namespace after routing) |
| `system_fingerprint` | `string \| null` | Server fingerprint, useful for reproducibility debugging |
| `choices` | `ChatCompletionChoice[]` | One entry per requested completion; a single element unless you set `n > 1` |
| `choices[].message` | `ChatMessage` | The model's response message |
| `choices[].finish_reason` | `string` | Why generation stopped; see values below |
| `choices[].content_filter_results` | `ContentFilterResults` | Per-choice safety evaluation of the completion |
| `prompt_filter_results` | `PromptFilterResults[]` | Safety evaluation of the input prompt, one entry per prompt |
| `usage` | `ChatCompletionUsage` | Token counts |
`finish_reason` values:

| Value | Meaning |
|---|---|
| `"stop"` | Natural end; the model produced a stop token or reached a stop sequence |
| `"length"` | Reached the `max_tokens` limit before a natural stop |
| `"tool_calls"` | The model is requesting one or more tool calls |
| `"content_filter"` | Output was blocked by a safety filter |
The ChatCompletionChunk Object
One SSE frame emitted during a streaming POST /v1/chat/completions call (stream: true). The stream begins with a chunk containing an empty delta and prompt_filter_results, then emits content deltas, and ends with a chunk where finish_reason: "stop" and delta: {}.
```typescript
interface ChatCompletionChunk {
  id: string;
  object: "chat.completion.chunk";
  created: number;
  model: string;
  system_fingerprint: string | null;
  choices: ChatCompletionChunkChoice[];
  prompt_filter_results?: PromptFilterResults[]; // present on the first chunk only
  usage: null; // always null mid-stream
}

interface ChatCompletionChunkChoice {
  index: number;
  delta: ChatCompletionDelta;
  finish_reason: "stop" | "length" | "tool_calls" | "content_filter" | null;
  logprobs: null;
  content_filter_results: ContentFilterResults;
}

interface ChatCompletionDelta {
  role?: "assistant";
  content?: string; // the incremental text chunk
  tool_calls?: ToolCallDelta[];
  refusal?: string | null;
}
```

| Field | Type | Description |
|---|---|---|
| `id` | `string` | Same `chatcmpl-` ID across all chunks in the stream |
| `object` | `"chat.completion.chunk"` | Type discriminator |
| `choices[].delta.content` | `string \| undefined` | The incremental text token(s); concatenate these to reconstruct the full response |
| `choices[].delta.tool_calls` | `ToolCallDelta[] \| undefined` | Incremental tool call fragments |
| `choices[].finish_reason` | `string \| null` | `null` mid-stream; set to a terminal value on the last chunk |
| `choices[].content_filter_results` | `ContentFilterResults` | Safety evaluation of this delta |
| `prompt_filter_results` | `PromptFilterResults[]` | Only present on the first chunk |
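To illustrate the concatenation step, here is a minimal sketch that reconstructs the full message text from an already-parsed chunk sequence. The chunk literals are hand-built stand-ins for decoded SSE frames, and `accumulate` is a hypothetical helper, not part of any SDK:

```typescript
// Simplified shapes covering only the fields this sketch reads.
interface Delta { role?: "assistant"; content?: string }
interface ChunkChoice { index: number; delta: Delta; finish_reason: string | null }
interface Chunk { choices: ChunkChoice[] }

// Concatenate delta.content across chunks for choice index 0.
function accumulate(chunks: Chunk[]): string {
  let text = "";
  for (const chunk of chunks) {
    for (const choice of chunk.choices) {
      if (choice.index !== 0) continue; // single-choice stream
      if (choice.delta.content) text += choice.delta.content;
    }
  }
  return text;
}

// Hand-built frames: role-only first delta, content deltas, terminal empty delta.
const frames: Chunk[] = [
  { choices: [{ index: 0, delta: { role: "assistant", content: "" }, finish_reason: null }] },
  { choices: [{ index: 0, delta: { content: "Hel" }, finish_reason: null }] },
  { choices: [{ index: 0, delta: { content: "lo!" }, finish_reason: null }] },
  { choices: [{ index: 0, delta: {}, finish_reason: "stop" }] },
];
```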
The ChatMessage Object
A single message in the conversation — returned as choices[].message in a ChatCompletion.
```typescript
interface ChatMessage {
  role: "assistant";
  content: string | null;
  refusal: string | null;
  annotations: Annotation[];
  tool_calls?: ToolCall[];
}

interface ToolCall {
  id: string;
  type: "function";
  function: {
    name: string;
    arguments: string; // JSON string — parse with JSON.parse()
  };
}
```

| Field | Type | Description |
|---|---|---|
| `role` | `"assistant"` | Always `"assistant"` for model responses |
| `content` | `string \| null` | The full text response; `null` when the model made a tool call instead |
| `refusal` | `string \| null` | If the model declined the request, this contains the refusal message; otherwise `null` |
| `annotations` | `Annotation[]` | Structured annotation metadata (citations, links); may be empty |
| `tool_calls` | `ToolCall[] \| undefined` | Present when `finish_reason` is `"tool_calls"`. Each entry has the function name and JSON-encoded arguments |
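Because `function.arguments` is a JSON string rather than an object, it must be parsed before use. A minimal sketch (the `parseToolArgs` helper and the `get_weather` call are made-up examples, not API definitions):

```typescript
// Mirrors the ToolCall shape defined above.
interface ToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
}

// Parse the JSON-encoded arguments, failing soft on malformed output,
// since models can occasionally emit invalid JSON.
function parseToolArgs(call: ToolCall): Record<string, unknown> {
  try {
    return JSON.parse(call.function.arguments);
  } catch {
    return {};
  }
}
```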
The ContentFilterResults Object
Skytells safety evaluation attached to each choices[] entry. Reports whether the completion text was evaluated against each category.
```typescript
interface ContentFilterResults {
  hate: ContentFilterCategory;
  self_harm: ContentFilterCategory;
  sexual: ContentFilterCategory;
  violence: ContentFilterCategory;
  protected_material_code: ContentFilterDetection;
  protected_material_text: ContentFilterDetection;
}

interface ContentFilterCategory {
  filtered: boolean; // true = content was blocked
  severity: "safe" | "low" | "medium" | "high";
}

interface ContentFilterDetection {
  detected: boolean; // true = category was detected
  filtered: boolean; // true = was blocked
}
```

| Field | Type | Description |
|---|---|---|
| `hate` | `ContentFilterCategory` | Hateful, demeaning, or discriminatory content |
| `self_harm` | `ContentFilterCategory` | Content related to self-harm or suicide |
| `sexual` | `ContentFilterCategory` | Sexually explicit content |
| `violence` | `ContentFilterCategory` | Graphic or incitement-to-violence content |
| `protected_material_code` | `ContentFilterDetection` | Copyrighted code patterns detected |
| `protected_material_text` | `ContentFilterDetection` | Copyrighted text patterns detected |
A filtered: true result means the request was blocked. A severity: "safe" result with filtered: false means the category was evaluated and found safe.
These fields are Skytells additions to the OpenAI schema. They are always present and never cause OpenAI SDK parsing to fail — the SDK simply ignores unknown fields.
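A sketch of one way a client might surface which categories a choice was filtered on; `filteredCategories` is a hypothetical helper, and the type definitions simply mirror the interfaces above:

```typescript
interface ContentFilterCategory {
  filtered: boolean;
  severity: "safe" | "low" | "medium" | "high";
}
interface ContentFilterDetection { detected: boolean; filtered: boolean }
interface ContentFilterResults {
  hate: ContentFilterCategory;
  self_harm: ContentFilterCategory;
  sexual: ContentFilterCategory;
  violence: ContentFilterCategory;
  protected_material_code: ContentFilterDetection;
  protected_material_text: ContentFilterDetection;
}

// Collect the names of every category that blocked this choice.
// Works because both value types carry a `filtered` boolean.
function filteredCategories(r: ContentFilterResults): string[] {
  return Object.entries(r)
    .filter(([, v]) => v.filtered)
    .map(([name]) => name);
}
```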
The PromptFilterResults Object
Skytells safety evaluation of the input prompt — returned in prompt_filter_results[] on the ChatCompletion object and in the first streaming chunk.
```typescript
interface PromptFilterResults {
  prompt_index: number;
  content_filter_results: PromptContentFilterResults;
}

interface PromptContentFilterResults {
  hate: ContentFilterCategory;
  self_harm: ContentFilterCategory;
  sexual: ContentFilterCategory;
  violence: ContentFilterCategory;
  jailbreak: ContentFilterDetection; // only on prompt evaluation
}
```

| Field | Type | Description |
|---|---|---|
| `prompt_index` | `number` | Index of the evaluated prompt (always `0` for single-prompt requests) |
| `content_filter_results.hate` | `ContentFilterCategory` | Hate speech in the input |
| `content_filter_results.self_harm` | `ContentFilterCategory` | Self-harm content in the input |
| `content_filter_results.sexual` | `ContentFilterCategory` | Sexual content in the input |
| `content_filter_results.violence` | `ContentFilterCategory` | Violence content in the input |
| `content_filter_results.jailbreak` | `ContentFilterDetection` | Whether a jailbreak attempt was detected in the prompt |
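For example, a client that wants to flag jailbreak attempts could scan this array after each call; `jailbreakDetected` is an illustrative helper, and the types below are trimmed to just the fields it reads:

```typescript
interface ContentFilterDetection { detected: boolean; filtered: boolean }
interface PromptFilterResult {
  prompt_index: number;
  content_filter_results: { jailbreak: ContentFilterDetection };
}

// True if any evaluated prompt tripped the jailbreak detector.
function jailbreakDetected(results: PromptFilterResult[]): boolean {
  return results.some(r => r.content_filter_results.jailbreak.detected);
}
```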
The ChatCompletionUsage Object
Token consumption breakdown for a completion.
```typescript
interface ChatCompletionUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  completion_tokens_details: {
    reasoning_tokens: number;
    audio_tokens: number;
    accepted_prediction_tokens: number;
    rejected_prediction_tokens: number;
  };
  prompt_tokens_details: {
    cached_tokens: number;
    audio_tokens: number;
  };
}
```

| Field | Type | Description |
|---|---|---|
| `prompt_tokens` | `number` | Tokens in the input messages |
| `completion_tokens` | `number` | Tokens generated by the model |
| `total_tokens` | `number` | Sum of `prompt_tokens + completion_tokens` |
| `completion_tokens_details.reasoning_tokens` | `number` | Tokens used for internal chain-of-thought reasoning (reasoning models only) |
| `completion_tokens_details.accepted_prediction_tokens` | `number` | Speculative decoding: accepted tokens |
| `prompt_tokens_details.cached_tokens` | `number` | Tokens served from the prompt cache (not re-processed) |
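As an example of how these fields compose, a client could derive a prompt cache hit ratio from `cached_tokens` and `prompt_tokens`; `cacheHitRatio` is a hypothetical helper, and the interface below is trimmed to only the fields it uses:

```typescript
// Subset of ChatCompletionUsage covering just the fields read here.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  prompt_tokens_details: { cached_tokens: number; audio_tokens: number };
}

// Fraction of prompt tokens served from the cache (0 when the prompt is empty).
function cacheHitRatio(u: Usage): number {
  if (u.prompt_tokens === 0) return 0;
  return u.prompt_tokens_details.cached_tokens / u.prompt_tokens;
}
```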