Your First Prediction
Make your first real API call — generate an AI image with the Prediction API, understand the full async lifecycle, and read outputs correctly.
What you'll be able to do after this module
Make Prediction API calls from cURL, Python, or TypeScript. Understand the full async lifecycle from request to output URL. Read the Prediction Object schema confidently — you already studied it in Module 3.
The prediction lifecycle
Every request to Skytells — image, video, audio — follows the same pattern:
- Fast models (e.g., `truefusion-edge`): often `succeeded` in the initial response — no polling needed.
- Standard models (e.g., `truefusion-pro`): ~5–15 seconds. Poll `GET /v1/predictions/:id`.
- Video/audio models: 30 seconds to several minutes. Use webhooks (covered in Building Production Apps).
Make your first call
Make sure your API key is in your environment:
```bash
export SKYTELLS_API_KEY="sk-your-key-here"
```

Now send your first prediction:
```bash
curl -X POST https://api.skytells.ai/v1/predictions \
  -H "x-api-key: $SKYTELLS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "truefusion-pro",
    "input": {
      "prompt": "A photorealistic mountain lake at sunrise, 4K",
      "width": 1024,
      "height": 1024
    }
  }'
```

The official SDKs handle polling automatically. The cURL example below shows you how polling works under the hood.
Reading the response
A freshly-created prediction looks like this:
```json
{
  "id": "pred_abc123",
  "status": "processing",
  "model": "truefusion-pro",
  "created_at": "2025-01-01T00:00:00Z",
  "output": null
}
```

When `status` is `processing`, poll until `succeeded`:
```bash
curl https://api.skytells.ai/v1/predictions/pred_abc123 \
  -H "x-api-key: $SKYTELLS_API_KEY"
```

The completed response:
```json
{
  "id": "pred_abc123",
  "status": "succeeded",
  "output": [
    "https://cdn.skytells.ai/outputs/pred_abc123/output.png"
  ],
  "metrics": {
    "predict_time": 8.3,
    "total_time": 9.1
  }
}
```

Output URLs expire after 24 hours. If you need to keep the images permanently, download them to your own storage immediately after the prediction succeeds — don't store just the CDN URL.
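The SDKs run this create-then-poll loop for you. As a sketch of what happens under the hood against the raw HTTP endpoints above — the helper names, polling interval, and timeout here are illustrative choices, not official SDK code:

```python
import os
import time

# Terminal lifecycle states, per the prediction lifecycle described above.
TERMINAL_STATUSES = {"succeeded", "failed"}

def wait_for_prediction(fetch, interval: float = 2.0, timeout: float = 300.0) -> dict:
    """Call `fetch()` repeatedly until the prediction reaches a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        prediction = fetch()
        if prediction["status"] in TERMINAL_STATUSES:
            return prediction
        time.sleep(interval)
    raise TimeoutError("prediction did not finish before the timeout")

def fetch_prediction(prediction_id: str) -> dict:
    import requests  # imported lazily so the polling loop is testable on its own

    resp = requests.get(
        f"https://api.skytells.ai/v1/predictions/{prediction_id}",
        headers={"x-api-key": os.environ["SKYTELLS_API_KEY"]},
    )
    resp.raise_for_status()
    return resp.json()

# Usage:
# prediction = wait_for_prediction(lambda: fetch_prediction("pred_abc123"))
```

Passing the fetcher in as a callable keeps the loop itself free of network dependencies, which makes it easy to unit-test and to reuse for other endpoints.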
Common input parameters
Most image models accept a consistent set of parameters:
| Parameter | Type | Description |
|---|---|---|
| `prompt` | string | Required. What you want to generate. |
| `width` | int | Output width in pixels. Common values: 512, 768, 1024. |
| `height` | int | Output height in pixels. Common values: 512, 768, 1024. |
| `num_inference_steps` | int | Quality/speed tradeoff. 4 = fast preview, 30 = high quality. |
| `guidance_scale` | float | How closely to follow the prompt. 7.0–8.5 is a good range. |
| `seed` | int | Set this for reproducible outputs. Same prompt + same seed = same image. |
| `negative_prompt` | string | What to avoid in the output. |
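Pinning `seed` is what makes iteration practical: keep it fixed and vary one parameter at a time. A small sketch of that workflow — the `with_overrides` helper is illustrative, not part of any SDK:

```python
# A reproducible input: same prompt + same seed + same parameters = same image.
base_input = {
    "prompt": "A photorealistic mountain lake at sunrise, 4K",
    "width": 1024,
    "height": 1024,
    "num_inference_steps": 30,
    "guidance_scale": 7.5,
    "seed": 42,
}

def with_overrides(base: dict, **overrides) -> dict:
    """Copy of `base` with selected parameters changed, seed left fixed."""
    return {**base, **overrides}

# Fast preview first, then the same composition at higher quality:
preview_input = with_overrides(base_input, num_inference_steps=4)
final_input = with_overrides(base_input, num_inference_steps=50)
# Pass either dict as `input` to POST /v1/predictions.
```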
Error handling
Always check `response.ok` (or catch exceptions in SDKs) before trusting the output.
```
# HTTP errors return a JSON body with a detail field
# Example 401 response:
# { "detail": "Invalid API key" }
# Example 422 response:
# { "detail": "width must be a multiple of 8" }
```

HTTP status codes
| Status | Meaning | Action |
|---|---|---|
| 200 | Success | Read the prediction object |
| 401 | Invalid or missing API key | Check your `x-api-key` header |
| 422 | Invalid input parameters | Fix the request body |
| 429 | Rate limit exceeded | Back off and retry after the `Retry-After` header |
| 5xx | Server error | Retry with exponential backoff |
Complete working example
Here's a self-contained script that creates a prediction and downloads the output:
```python
import os
import urllib.request

import skytells

client = skytells.Client(api_key=os.environ["SKYTELLS_API_KEY"])

# 1. Generate (the SDK polls until the prediction reaches a terminal status)
prediction = client.predictions.create(
    model="truefusion-pro",
    input={
        "prompt": "A photorealistic red fox in a snowy forest, golden hour, cinematic",
        "width": 1024,
        "height": 1024,
        "num_inference_steps": 30,
    },
)

if prediction.status != "succeeded":
    raise RuntimeError(f"Prediction did not succeed: {prediction.status}")

image_url = prediction.output[0]
print(f"Generated: {image_url}")

# 2. Download before CDN expiry (24h)
urllib.request.urlretrieve(image_url, "output.png")
print("Saved to output.png")
```

Summary
You just made your first AI prediction. You now understand the complete request-response-output cycle that powers every Skytells integration — from a 2-second image to a 5-minute video.
Key things to remember:
- Every request uses `POST /v1/predictions` with `model` + `input`
- Predictions have a lifecycle: `queued → processing → succeeded/failed`
- SDKs poll automatically; with raw HTTP you poll `GET /v1/predictions/:id`
- Output URLs expire in 24 hours — save them to your own storage
- `x-api-key` is the required header for the Prediction API
Everything above is about the Prediction API — Skytells' media generation layer. Module 5 covers the Inference API hands-on: LLM chat, streaming, stateful conversations, and embeddings.
Up next: Module 5 — make your first Inference API call and use Skytells as a drop-in OpenAI-compatible LLM gateway.
The Two APIs — Prediction & Inference Schemas
Understand the Prediction API and Inference API side by side — their request schemas, response schemas, lifecycles, error formats, and exactly when to use each one.
Your First Inference Call
Use the Inference API hands-on — make LLM chat completions, stream tokens to a UI, hold stateful conversations, and generate embeddings with the OpenAI-compatible interface.