Prediction Errors
Inference-level errors returned when a prediction fails during model execution.
Prediction errors are inference-level failures — they occur after the API has accepted your request and begun running the model, but the model itself encountered a problem during execution. These are distinct from HTTP API errors like 400 Bad Request or 401 Unauthorized.
A failed prediction always returns "status": "failed" with a non-null error field and "output": null.
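The invariant above can be sketched as a small guard function. This is a minimal illustration, not part of any official SDK; the field names follow the failed prediction object documented below.

```javascript
// Minimal sketch: check the documented invariant for a failed prediction —
// status "failed", a non-null error, and a null output.
function isFailedPrediction(prediction) {
  return (
    prediction.status === 'failed' &&
    prediction.error != null &&
    prediction.output === null
  );
}
```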
Failed Prediction Object
```json
{
  "completed_at": "2026-02-27T18:58:45.819474Z",
  "created_at": "2026-02-27T18:58:44.725000Z",
  "data_removed": true,
  "error": "Failed to generate image.",
  "id": "pred_id....",
  "input": { "...": "..." },
  "metrics": {
    "predict_time": 1.082823234,
    "total_time": 1.094474004
  },
  "model": "truefusion",
  "output": null,
  "source": "api",
  "started_at": "2026-02-27T18:58:44.736550Z",
  "status": "failed",
  "urls": {}
}
```

Key Fields
| Field | Type | Description |
|---|---|---|
| status | string | Always "failed" for errored predictions |
| error | string | Human-readable description of the inference failure |
| output | null | Always null — no output was produced |
| data_removed | boolean | true when input/output data has been purged after the 5-minute retention window |
| metrics.predict_time | number | Seconds spent in model inference before the error occurred |
The error string is human-readable and may change between model versions. Do not branch your error-handling logic on its exact value.
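In practice this means branching on status and treating the error text as display-only. A minimal sketch (the helper name and sample fields are illustrative, not part of the API):

```javascript
// Sketch: branch on the prediction status, never on the exact error
// string, since the message may change between model versions.
function summarizeFailure(prediction) {
  if (prediction.status !== 'failed') return null;
  const seconds = prediction.metrics?.predict_time ?? 0;
  // The error text is for humans: surface it in logs and UIs only.
  return `Prediction ${prediction.id} failed after ${seconds}s: ${prediction.error}`;
}
```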
Common Inference Errors
| Error message | Cause | Resolution |
|---|---|---|
"Failed to generate image." | The model could not produce output from the given prompt or input parameters. | Adjust your prompt or input values and retry. |
"Input resolution too high." | The provided image exceeds the model's maximum supported resolution. | Downscale your input image before submission. |
"NSFW content detected." | The input or intended output was flagged by content safety filters. | Revise the prompt to comply with the usage policy. |
"Context length exceeded." | The prompt or combined input exceeds the model's token/context limit. | Shorten the prompt or reduce auxiliary input fields. |
"Model is currently unavailable." | The model is temporarily offline for maintenance. | Retry after a short delay or subscribe to status updates. |
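For transient failures such as temporary model unavailability, a retry with exponential backoff is a common pattern. The sketch below assumes a hypothetical `runPrediction` function that creates a prediction and resolves with its final object; it is not an official client method.

```javascript
// Sketch: retry transient inference failures with exponential backoff.
// `runPrediction` is a hypothetical caller-supplied function that creates
// a prediction and resolves with its settled object.
async function predictWithRetry(runPrediction, maxAttempts = 3, baseDelayMs = 1000) {
  let prediction;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    prediction = await runPrediction();
    if (prediction.status !== 'failed') return prediction;
    if (attempt < maxAttempts) {
      // Back off: baseDelayMs, then 2x, 4x, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  return prediction; // still failed after all attempts
}
```

Note that the retry decision keys off status alone, consistent with the guidance above not to match on exact error strings.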
Webhook Delivery for Failed Predictions
When a prediction fails, Skytells fires a prediction.failed webhook event if you registered a webhook URL on the prediction. The payload is identical to the failed prediction object above.
```json
{
  "type": "prediction.failed",
  "created_at": "2026-02-27T18:58:45.000000Z",
  "data": {
    "id": "pred_id....",
    "status": "failed",
    "error": "Failed to generate image.",
    "output": null,
    "model": "truefusion",
    "metrics": {
      "predict_time": 1.082823234
    }
  }
}
```

Even when data_removed is true, the error field is always preserved and accessible via GET /v1/predictions/{id}, so you can inspect the failure reason after the retention window.
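A post-hoc lookup can be sketched as follows. The base URL is an assumption (substitute your actual API host), and `fetchFn` is injectable purely to make the sketch testable; only the GET /v1/predictions/{id} path comes from this guide.

```javascript
// Sketch: look up a prediction by id to read its preserved error field
// after the retention window. The host `api.skytells.example` is a
// placeholder assumption; `fetchFn` defaults to the global fetch.
async function getPredictionError(id, apiToken, fetchFn = fetch) {
  const res = await fetchFn(`https://api.skytells.example/v1/predictions/${id}`, {
    headers: { Authorization: `Bearer ${apiToken}` },
  });
  const prediction = await res.json();
  // error is always preserved, even when data_removed is true
  return prediction.error;
}
```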
Handling Failed Predictions
Handle prediction.failed in your webhook
```javascript
app.post('/webhook', (req, res) => {
  const { type, data } = req.body;
  if (type === 'prediction.failed') {
    console.error(`Prediction ${data.id} failed: ${data.error}`);
    // notify your user, trigger a retry, or log to your error tracker
  }
  res.sendStatus(200);
});
```