Errors

API Error Objects

Unified Skytells REST error envelope (`status`, `response`, `error`), Inference OpenAI-compatible errors, failed predictions, and webhooks.

Start from the Errors reference overview if you are unsure which of these shapes your endpoint returns.


Unified API error (Skytells REST)

The standard Skytells API v1 error body is this envelope: a top-level `status: false`, a human-readable `response`, and a nested `error` object. Branch on `error.error_id`, never on raw message strings.

Authentication example:

401 — invalid API key (unified envelope)

```json
{
  "status": false,
  "response": "Invalid API key",
  "error": {
    "http_status": 401,
    "message": "Invalid or missing API key, Please obtain a valid API key from the dashboard",
    "details": "Invalid API key",
    "error_id": "UNAUTHORIZED"
  }
}
```
| Field | Type | Description |
| --- | --- | --- |
| `status` | boolean | Always `false` for errors |
| `response` | string | Short summary (logging, UI) |
| `error` | object | Machine-readable failure; see the table below |

HTTP: The response status code matches error.http_status.
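The branching rule above can be sketched as a small handler. A minimal sketch in Python, assuming only the envelope fields documented here; the returned action strings are illustrative, not part of the API:

```python
def classify_error(body: dict) -> str:
    """Map a unified Skytells error envelope to an app-level action.

    Branches on the stable `error.error_id`, never on message wording.
    """
    if body.get("status") is not False:
        return "ok"  # not an error envelope
    error_id = body.get("error", {}).get("error_id")
    actions = {
        "UNAUTHORIZED": "check-api-key",
        "VALIDATION_ERROR": "fix-request",
        "INSUFFICIENT_CREDITS": "top-up-credits",
        "PREDICTION_FAILED": "retry-or-report",
    }
    return actions.get(error_id, "unhandled")


envelope = {
    "status": False,
    "response": "Invalid API key",
    "error": {"http_status": 401, "error_id": "UNAUTHORIZED"},
}
print(classify_error(envelope))  # check-api-key
```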

Nested error object

| Field | Type | Description |
| --- | --- | --- |
| `error_id` | string | Stable Skytells identifier; general codes are listed in API errors, Predictions route codes in Prediction errors |
| `http_status` | number | Same as the HTTP response status for this error |
| `message` | string | Human-readable reason; do not branch on exact wording |
| `details` | string \| object \| null | Extra context when present |

details may be a string or an object depending on the error.
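Because `details` varies by error, a small normalizer avoids type errors downstream. A minimal sketch assuming nothing beyond the three documented types; the `"info"` wrapper key is invented for illustration, not API-defined:

```python
def details_as_dict(details) -> dict:
    """Normalize `error.details` (string | object | null) into a dict."""
    if details is None:
        return {}
    if isinstance(details, str):
        return {"info": details}  # "info" is an illustrative key, not API-defined
    return dict(details)
```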

More examples (unified envelope)

The same `status` / `response` / `error` shape is used for validation, credits, prediction serving, and so on (it is not the same document shape as the Inference errors).

Example: 502 — fetch prediction

Typical shape when GET /v1/predictions/{uuid} returns a failed prediction result — error_id identifies the failure:

502 — fetch prediction / prediction serving error

```json
{
  "status": false,
  "response": "Error fetching prediction",
  "error": {
    "http_status": 502,
    "message": "Prediction serving returned an error.",
    "details": {},
    "error_id": "PREDICTION_FAILED"
  }
}
```

Validation error — missing model

VALIDATION_ERROR — missing model

```json
{
  "status": false,
  "response": "The model field is required.",
  "error": {
    "http_status": 422,
    "message": "The model field is required.",
    "details": "The model field is required.",
    "error_id": "VALIDATION_ERROR"
  }
}
```

Validation error — missing input

Rules on create: model is a required string; input is a required array.

VALIDATION_ERROR — missing input

```json
{
  "status": false,
  "response": "The input field is required.",
  "error": {
    "http_status": 422,
    "message": "The input field is required.",
    "details": "The input field is required.",
    "error_id": "VALIDATION_ERROR"
  }
}
```

INSUFFICIENT_CREDITS (details object)

Insufficient balance for the priced operation, typically HTTP 402 on create. The same `error_id` is shared with other credit errors; disambiguate from `PAYMENT_REQUIRED`, which middleware returns for account balance.

INSUFFICIENT_CREDITS (shape may vary)

```json
{
  "status": false,
  "response": "User does not have enough credits",
  "error": {
    "http_status": 402,
    "message": "User has 1.5 credits, but 2 are required",
    "details": {
      "current_balance": 1.5,
      "required_amount": 2
    },
    "error_id": "INSUFFICIENT_CREDITS"
  }
}
```
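The structured `details` object makes the shortfall computable. A minimal sketch using only the keys shown in the example above:

```python
def credit_shortfall(error: dict) -> float:
    """Return how many credits are missing for an INSUFFICIENT_CREDITS error."""
    details = error.get("details") or {}
    return details.get("required_amount", 0) - details.get("current_balance", 0)


err = {
    "http_status": 402,
    "error_id": "INSUFFICIENT_CREDITS",
    "details": {"current_balance": 1.5, "required_amount": 2},
}
print(credit_shortfall(err))  # 0.5
```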

Prediction Serving Error — 502 (generic)

When prediction serving returns a failure (inference / runtime), the API often responds with HTTP 502. error_id is stable; message is for display.

502 — Prediction Serving Error

```json
{
  "status": false,
  "response": "Prediction failed",
  "error": {
    "http_status": 502,
    "message": "Prediction serving returned an error.",
    "details": {},
    "error_id": "PREDICTION_FAILED"
  }
}
```

The route-by-route error_id catalog for Predictions is on Prediction errors. General codes (auth, models, rate limits, etc.) are on API errors.


The Inference error response

Returned by Inference sub-APIs on failure: POST /v1/chat/completions, POST /v1/responses, POST /v1/embeddings. The payload is OpenAI-compatible: a single top-level error object (no top-level status / response wrapper).

Inference error response (document)

```json
{
  "error": {
    "message": "Human readable description of the error",
    "type": "invalid_request_error",
    "code": "invalid_parameter",
    "error_id": "INVALID_PARAMETER",
    "status": 400,
    "param": "model",
    "request_id": "req_abc123xyz",
    "details": {
      "category": "request"
    }
  }
}
```

Use error.error_id for branching. See Inference API errors for the full catalog.

Fields on the nested error object

| Field | Type | Description |
| --- | --- | --- |
| `message` | string | Human-readable explanation; do not parse programmatically |
| `type` | string | OpenAI-compatible error type (e.g. `invalid_request_error`, `authentication_error`) |
| `code` | string | OpenAI-compatible error code (e.g. `invalid_parameter`, `model_not_found`) |
| `error_id` | string | Skytells stable identifier; use it for branching in code |
| `status` | integer | HTTP status code for the error |
| `param` | string \| null | Parameter name when applicable |
| `request_id` | string | Unique id for support; include it when contacting Skytells |
| `details` | object \| undefined | Optional structured context; sanitized, no infrastructure secrets |

type and code mirror OpenAI for drop-in SDK compatibility. Upstream failures are sanitized.
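Since the unified envelope and the Inference error document differ structurally, a client can detect which shape it received. A hedged sketch based only on the fields documented in this guide:

```python
def error_shape(body: dict) -> str:
    """Distinguish the unified Skytells envelope from the Inference error doc.

    Unified: top-level `status: false` plus `response`.
    Inference: top-level `error` only (its `status` is nested inside `error`).
    """
    if body.get("status") is False and "response" in body:
        return "unified-envelope"
    if "error" in body and "status" not in body:
        return "inference"
    return "unknown"
```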


The failed prediction object

Prediction-level failures occur in prediction serving (model execution, Prediction Serving Gateway, GPUs) after the job was accepted — not at API validation time. When inference fails there, the HTTP layer still returns a normal prediction document: GET /v1/predictions/{id} returns a prediction resource whose status is "failed" and whose error field is a short human-readable reason (model error, serving/runtime issue, policy, etc.). This is not the unified API error envelope — there is no top-level status: false wrapper around the whole response.

See Prediction errors — Prediction-level errors for when this applies and how it differs from error_id errors on the envelope.

Failed prediction (resource)

```json
{
  "completed_at": "2026-02-27T18:58:45.819474Z",
  "created_at": "2026-02-27T18:58:44.725000Z",
  "data_removed": true,
  "error": "Failed to generate image.",
  "id": "pred_id....",
  "input": { "...": "..." },
  "metrics": {
    "predict_time": 1.082823234,
    "total_time": 1.094474004
  },
  "model": "truefusion",
  "output": null,
  "source": "api",
  "started_at": "2026-02-27T18:58:44.736550Z",
  "status": "failed",
  "urls": {}
}
```

Fields (failed prediction)

| Field | Type | Description |
| --- | --- | --- |
| `status` | string | Always `"failed"` for these records |
| `error` | string | Human-readable description of the inference failure |
| `output` | null | Always `null`; no output was produced |
| `data_removed` | boolean | `true` when input/output data has been purged after the retention window |
| `metrics.predict_time` | number | Seconds spent in model inference before the error occurred |
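A poller can treat the failed prediction resource accordingly. A minimal sketch assuming only the fields in the table above:

```python
def check_prediction(pred: dict) -> str:
    """Summarize a prediction resource, surfacing the failure reason."""
    if pred.get("status") == "failed":
        return f"failed: {pred.get('error', 'unknown reason')}"
    return pred.get("status", "unknown")


pred = {"status": "failed", "error": "Failed to generate image.", "output": None}
print(check_prediction(pred))  # failed: Failed to generate image.
```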

The prediction failed webhook payload

When a prediction fails, Skytells can emit a webhook with type: prediction.failed. The envelope is separate from the HTTP error shapes above: `type`, `created_at`, and a `data` object holding the same fields as a failed prediction (examples often show a subset).

prediction.failed webhook

```json
{
  "type": "prediction.failed",
  "created_at": "2026-02-27T18:58:45.000000Z",
  "data": {
    "id": "pred_id....",
    "status": "failed",
    "error": "Failed to generate image.",
    "output": null,
    "model": "truefusion",
    "metrics": {
      "predict_time": 1.082823234
    }
  }
}
```
| Field | Type | Description |
| --- | --- | --- |
| `type` | string | Always `"prediction.failed"` |
| `created_at` | string | ISO 8601 timestamp for the event |
| `data` | object | Failed prediction fields; same semantics as the failed prediction resource |
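A webhook receiver should branch on `type` before reading `data`. A minimal sketch (any signature verification is out of scope here):

```python
def handle_webhook(event: dict) -> str:
    """Handle a Skytells webhook event, acting only on prediction.failed."""
    if event.get("type") != "prediction.failed":
        return "ignored"
    data = event.get("data", {})
    return f"prediction {data.get('id')} failed: {data.get('error')}"


event = {
    "type": "prediction.failed",
    "created_at": "2026-02-27T18:58:45.000000Z",
    "data": {"id": "pred_123", "status": "failed", "error": "Failed to generate image."},
}
print(handle_webhook(event))  # prediction pred_123 failed: Failed to generate image.
```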

When to use which catalog: API errors for general unified-envelope codes (auth, models, rate limits), Prediction errors for Predictions route codes and prediction-level failures, and Inference API errors for the OpenAI-compatible Inference error document.
