API Error Objects
Unified Skytells REST error envelope (`status`, `response`, `error`), Inference OpenAI-compatible errors, failed predictions, and webhooks.
Start from the Errors reference overview if you are unsure which of these shapes your endpoint returns.
Unified API error (Skytells REST)
The standard Skytells API v1 error body is this envelope: a top-level `status: false`, a human-readable `response`, and a nested `error` object. Branch on `error.error_id`, never on raw message strings.
Authentication example:
{
"status": false,
"response": "Invalid API key",
"error": {
"http_status": 401,
"message": "Invalid or missing API key, Please obtain a valid API key from the dashboard",
"details": "Invalid API key",
"error_id": "UNAUTHORIZED"
}
}

| Field | Type | Description |
|---|---|---|
| status | boolean | Always false for errors |
| response | string | Short summary (logging, UI) |
| error | object | Machine-readable failure — see table below |
HTTP: The response status code matches error.http_status.
Nested error object
| Field | Type | Description |
|---|---|---|
| error_id | string | Stable Skytells identifier — API errors; Predictions routes also list codes in Prediction errors |
| http_status | number | Same as the HTTP response status for this error |
| message | string | Human-readable reason — do not branch on exact wording |
| details | string \| object \| null | Extra context when present |
details may be a string or an object depending on the error.
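The envelope above can be consumed with a few lines of client code. A minimal sketch, assuming the documented shape (`status` / `response` / `error`, with `error.error_id`); the function name is illustrative, not part of any Skytells SDK.

```python
# Sketch: extract the stable identifier from a unified error envelope.
def classify_error(body: dict) -> str:
    """Return error.error_id from a Skytells unified error envelope."""
    if body.get("status") is not False:
        raise ValueError("not an error envelope: status is not false")
    err = body.get("error") or {}
    # Branch on error_id, never on message/response wording.
    return err.get("error_id", "UNKNOWN")


envelope = {
    "status": False,
    "response": "Invalid API key",
    "error": {"http_status": 401, "error_id": "UNAUTHORIZED"},
}
```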
More examples (unified envelope)
Same status / response / error shape for validation, credits, prediction serving, etc. (not the same document shape as Inference).
Example: 502 — fetch prediction
Typical shape when GET /v1/predictions/{uuid} returns a failed prediction result — error_id identifies the failure:
{
"status": false,
"response": "Error fetching prediction",
"error": {
"http_status": 502,
"message": "Prediction serving returned an error.",
"details": {},
"error_id": "PREDICTION_FAILED"
}
}

Validation error — missing model
{
"status": false,
"response": "The model field is required.",
"error": {
"http_status": 422,
"message": "The model field is required.",
"details": "The model field is required.",
"error_id": "VALIDATION_ERROR"
}
}

Validation error — missing input
Rules on create: model is a required string; input is a required array.
{
"status": false,
"response": "The input field is required.",
"error": {
"http_status": 422,
"message": "The input field is required.",
"details": "The input field is required.",
"error_id": "VALIDATION_ERROR"
}
}

INSUFFICIENT_CREDITS — details object
Insufficient balance for the priced operation, typically HTTP 402 on create. This error_id is shared with other credit errors; account-balance checks in middleware return PAYMENT_REQUIRED instead, so disambiguate on the error_id.
{
"status": false,
"response": "User does not have enough credits",
"error": {
"http_status": 402,
"message": "User has 1.5 credits, but 2 are required",
"details": {
"current_balance": 1.5,
"required_amount": 2
},
"error_id": "INSUFFICIENT_CREDITS"
}
}

Prediction Serving Error — 502 (generic)
When prediction serving returns a failure (inference / runtime), the API often responds with HTTP 502. error_id is stable; message is for display.
{
"status": false,
"response": "Prediction failed",
"error": {
"http_status": 502,
"message": "Prediction serving returned an error.",
"details": {},
"error_id": "PREDICTION_FAILED"
}
}

The route-by-route error_id catalog for Predictions is on Prediction errors. General codes (auth, models, rate limits, etc.) are on API errors.
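The examples above suggest a simple dispatch on `error_id`. This is an illustrative sketch: the action strings returned here are hypothetical client policy, not part of the API.

```python
# Sketch: branch on the error_id values shown in the examples above.
def handle_envelope(body: dict) -> str:
    err = body.get("error") or {}
    error_id = err.get("error_id")
    if error_id == "VALIDATION_ERROR":
        return "fix_request"        # 422: correct model/input and resend
    if error_id == "INSUFFICIENT_CREDITS":
        details = err.get("details") or {}
        # details carries current_balance / required_amount when present
        missing = details.get("required_amount", 0) - details.get("current_balance", 0)
        return f"top_up:{missing}"
    if error_id == "PREDICTION_FAILED":
        return "retry_or_report"    # 502: serving-side failure
    return "unhandled"
```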
The Inference error response
Returned by Inference sub-APIs on failure: POST /v1/chat/completions, POST /v1/responses, POST /v1/embeddings. The payload is OpenAI-compatible: a single top-level error object (no top-level status / response wrapper).
{
"error": {
"message": "Human readable description of the error",
"type": "invalid_request_error",
"code": "invalid_parameter",
"error_id": "INVALID_PARAMETER",
"status": 400,
"param": "model",
"request_id": "req_abc123xyz",
"details": {
"category": "request"
}
}
}

Use error.error_id for branching. See Inference API errors for the full catalog.
Fields on the nested error object
| Field | Type | Description |
|---|---|---|
| message | string | Human-readable explanation — do not parse programmatically |
| type | string | OpenAI-compatible error type (e.g. invalid_request_error, authentication_error) |
| code | string | OpenAI-compatible error code (e.g. invalid_parameter, model_not_found) |
| error_id | string | Skytells stable identifier — use for branching in code |
| status | integer | HTTP status code for the error |
| param | string \| null | Parameter name when applicable |
| request_id | string | Unique id for support — include when contacting Skytells |
| details | object \| undefined | Optional structured context; sanitized, no infrastructure secrets |
type and code mirror OpenAI for drop-in SDK compatibility. Upstream failures are sanitized.
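A sketch of wrapping this shape in an exception, assuming the OpenAI-compatible error object described above (a single top-level `error`, no `status` / `response` wrapper); the `InferenceError` class is illustrative.

```python
# Sketch: surface an Inference error payload as a typed exception.
class InferenceError(Exception):
    def __init__(self, err: dict):
        super().__init__(err.get("message", "inference error"))
        self.error_id = err.get("error_id")      # stable: branch on this
        self.code = err.get("code")              # OpenAI-compatible code
        self.status = err.get("status")          # HTTP status for the error
        self.request_id = err.get("request_id")  # quote when contacting support


def raise_for_inference_error(payload: dict) -> None:
    # Inference failures carry "error" but no top-level "status" key,
    # which distinguishes them from the unified envelope.
    if "error" in payload and "status" not in payload:
        raise InferenceError(payload["error"])
```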
The failed prediction object
Prediction-level failures occur in prediction serving (model execution, Prediction Serving Gateway, GPUs) after the job was accepted — not at API validation time. When inference fails there, the HTTP layer still returns a normal prediction document: GET /v1/predictions/{id} returns a prediction resource whose status is "failed" and whose error field is a short human-readable reason (model error, serving/runtime issue, policy, etc.). This is not the unified API error envelope — there is no top-level status: false wrapper around the whole response.
See Prediction errors — Prediction-level errors for when this applies and how it differs from error_id errors on the envelope.
{
"completed_at": "2026-02-27T18:58:45.819474Z",
"created_at": "2026-02-27T18:58:44.725000Z",
"data_removed": true,
"error": "Failed to generate image.",
"id": "pred_id....",
"input": { "...": "..." },
"metrics": {
"predict_time": 1.082823234,
"total_time": 1.094474004
},
"model": "truefusion",
"output": null,
"source": "api",
"started_at": "2026-02-27T18:58:44.736550Z",
"status": "failed",
"urls": {}
}

Fields (failed prediction)
| Field | Type | Description |
|---|---|---|
| status | string | Always "failed" for these records |
| error | string | Human-readable description of the inference failure |
| output | null | Always null — no output was produced |
| data_removed | boolean | true when input/output data has been purged after the retention window |
| metrics.predict_time | number | Seconds spent in model inference before the error occurred |
The error string is human-readable and may change between model versions. Do not branch your error-handling logic on its exact value.
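Since three distinct failure shapes now exist (unified envelope, Inference error, failed prediction document), a client often needs to tell them apart first. A sketch based on the field descriptions in this reference; the heuristics are illustrative only.

```python
# Sketch: distinguish the three failure shapes documented above.
def classify_failure(payload: dict) -> str:
    if payload.get("status") is False and "error" in payload:
        return "unified_envelope"     # top-level status: false wrapper
    if "error" in payload and "status" not in payload:
        return "inference_error"      # OpenAI-compatible, error only
    if payload.get("status") == "failed":
        return "failed_prediction"    # normal prediction document
    return "not_a_failure"
```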
The prediction failed webhook payload
When a prediction fails, Skytells can emit a webhook with type: prediction.failed. The envelope is separate from the HTTP error shapes above: type, created_at, and data holding the same fields as a failed prediction (often a subset in examples).
{
"type": "prediction.failed",
"created_at": "2026-02-27T18:58:45.000000Z",
"data": {
"id": "pred_id....",
"status": "failed",
"error": "Failed to generate image.",
"output": null,
"model": "truefusion",
"metrics": {
"predict_time": 1.082823234
}
}
}

| Field | Type | Description |
|---|---|---|
| type | string | Always "prediction.failed" |
| created_at | string | ISO 8601 timestamp for the event |
| data | object | Failed prediction fields — same semantics as failed prediction |
Even when data_removed is true on the full resource, the error field remains available on GET /v1/predictions/{id} after the retention window for inspection.
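A sketch for consuming this webhook envelope (`type` / `created_at` / `data`); the return-value convention is hypothetical application logic, not part of the webhook contract.

```python
# Sketch: handle a prediction.failed webhook event.
def handle_webhook(event: dict):
    if event.get("type") != "prediction.failed":
        return None  # ignore event types this handler does not own
    data = event.get("data", {})
    # data carries failed-prediction fields: id, status, error, ...
    return f"prediction {data.get('id')} failed: {data.get('error')}"
```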
When to use which catalog:
- API errors — `error_id` tables and HTTP layers (unified envelope on REST).
- Inference API errors — Chat / Responses / Embeddings `error_id` values.
- Prediction errors — Predictions route-specific `error_id` meanings and fixes.