Python SDK

Predictions & Models

Running predictions, the Prediction object, waiting and polling, queue and dispatch, streaming, the Predictions API, and the Models API.

Running predictions


Basic run

The run() method takes a model slug and an input dict, creates the prediction, waits until completion, and returns a Prediction object. When no on_progress callback is given, it sets await=True and blocks until the prediction finishes.

Parameters

model: str

Model slug (e.g. "flux-pro", "truefusion"). Browse models at skytells.ai/explore/models.

input: dict[str, Any] | None

Key-value input parameters for the model. For input schemas per model, see the Predictions API.

on_progress: Callable[[dict], None] | None

Called on each poll with the latest prediction state dict. When given, the SDK creates the prediction in the background and polls every 5 seconds.

webhook: dict | None

{"url": "...", "events": ["completed", "failed"]} — receive a POST when the prediction reaches an event.

stream: bool

Enable streaming. Populates urls["stream"] in the response.

Returns: Prediction

Raises: SkytellsError — on API error or status == "failed" (error_id="PREDICTION_FAILED")

client.run()

Basic
prediction = client.run("flux-pro", input={"prompt": "A cat wearing sunglasses"})

print(prediction.output)   # "https://..." or ["https://...", ...]
print(prediction.id)       # "pred_abc123"
print(prediction.status)   # "succeeded"

Progress tracking

Tracking progress

Pass an on_progress callback to run(). The SDK creates the prediction in the background and polls every 5 seconds, invoking the callback on each poll with the latest state dict.

def on_progress(p):
  status = p["status"]
  progress = p.get("metrics", {}).get("progress")
  if progress is not None:
      print(f"  [{status}] {progress:.0f}%")
  else:
      print(f"  [{status}]")

prediction = client.run(
  "flux-pro",
  input={"prompt": "A detailed oil painting of a forest"},
  on_progress=on_progress,
)
print("Done:", prediction.output)

Run with webhook

Webhook notifications

Pass a webhook option to receive a POST request when the prediction completes or fails.

prediction = client.run(
  "flux-pro",
  input={"prompt": "A robot dancing"},
  webhook={
      "url": "https://your-server.com/skytells-webhook",
      "events": ["completed", "failed"],
  },
)
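On the receiving side, a handler only needs to accept the POST and parse the JSON body. A minimal sketch using the standard library — the payload shape here (the prediction state dict with "id" and "status") is an assumption; inspect an actual delivery for the exact fields:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Accepts webhook POSTs and prints the delivered prediction state."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # Assumed shape: the prediction dict ("id", "status", "output", ...)
        print(payload.get("id"), payload.get("status"))
        self.send_response(200)
        self.end_headers()

    def log_message(self, format, *args):
        pass  # silence the default per-request logging

# To run: HTTPServer(("", 8000), WebhookHandler).serve_forever()
```

Respond with a 2xx status promptly; do any heavy processing after acknowledging the delivery.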

Low-level: predict()

Direct predict

For fire-and-forget or full control, use client.predict() directly. Returns the raw PredictionResponse dict — no Prediction wrapper.

predict()

Fire-and-forget
# Returns immediately with status "pending"
response = client.predict({
  "model": "flux-pro",
  "input": {"prompt": "A landscape"},
})
print(response["id"], response["status"])  # "pred_...", "pending"

The Prediction object

client.run() returns a Prediction instance — a structured wrapper around the raw prediction object returned by the API. It exposes the same underlying data through typed properties and adds lifecycle methods for managing the prediction from Python.

Properties

.id: str

Unique prediction ID (e.g. "pred_abc123").

.status: str

Current lifecycle status. See PredictionStatus.

.output: str | list[str] | None

Raw output — matches the API JSON. Can be a string, a list of strings, or None if not yet complete.

.response: dict

Full raw API response dict (same as raw()).

Methods

.outputs() -> str | list[str] | None

Normalised output — unwraps single-element lists. See table below.

.raw() -> dict

Full raw response dict — useful for logging, serialization, or accessing metrics and billing.

.cancel() -> dict

Cancels the prediction. Returns the updated prediction dict.

.delete() -> dict

Deletes the prediction and its stored output/assets.

outputs() behaviour

.output value        .outputs() returns
None                 None
"https://..."        "https://..."
["https://..."]      "https://..." (unwrapped)
["a", "b"]           ["a", "b"] (kept as-is)
[]                   []
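The unwrapping above amounts to a few lines of Python. An equivalent sketch of the logic (illustrative, not the SDK's actual source):

```python
def normalize_output(output):
    """Mirror Prediction.outputs(): unwrap single-element lists,
    pass every other value through unchanged."""
    if isinstance(output, list) and len(output) == 1:
        return output[0]
    return output
```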

Prediction object

Properties
prediction = client.run("flux-pro", input={"prompt": "A cat"})

prediction.id        # "pred_abc123"
prediction.status    # "succeeded"
prediction.output    # "https://..." or ["https://...", ...]
prediction.response  # full dict

Background predictions

Creating background predictions

Use client.predictions.create() to start a prediction without waiting for it to finish. Returns immediately with status: "pending". Poll it later with client.wait().

# Create in the background (returns immediately)
prediction = client.predictions.create({
  "model": "flux-pro",
  "input": {"prompt": "A sunset over mountains"},
})
print(prediction["id"], prediction["status"])  # "pred_...", "pending"

# Poll until complete
result = client.wait(prediction)
print(result["output"])

Waiting & polling

client.wait() polls a prediction until it reaches a terminal status: succeeded, failed, or cancelled.

WaitOptions

interval: int

Polling interval in milliseconds.

max_wait: int | None

Maximum wait time (ms). Raises SkytellsError with error_id="WAIT_TIMEOUT" if exceeded.

Terminal statuses

Status       Meaning
succeeded    Prediction completed with output
failed       Prediction completed with error
cancelled    Prediction was cancelled
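Conceptually, wait() is a loop over predictions.get() until one of these statuses appears. A sketch of that loop with the interval and max_wait options — the fetch callable stands in for the API call, and the names here are illustrative, not the SDK's internals:

```python
import time

TERMINAL = {"succeeded", "failed", "cancelled"}

def wait_for(fetch, interval_ms=5000, max_wait_ms=None, on_progress=None):
    """Poll fetch() until the returned state dict has a terminal status.

    fetch: callable returning the latest prediction state dict.
    Raises TimeoutError when max_wait_ms is exceeded (the SDK raises
    SkytellsError with error_id="WAIT_TIMEOUT" instead).
    """
    start = time.monotonic()
    while True:
        state = fetch()
        if on_progress:
            on_progress(state)
        if state["status"] in TERMINAL:
            return state
        if max_wait_ms is not None and (time.monotonic() - start) * 1000 >= max_wait_ms:
            raise TimeoutError("WAIT_TIMEOUT")
        time.sleep(interval_ms / 1000)
```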

wait()

Basic
bg = client.predictions.create({
  "model": "flux-pro",
  "input": {"prompt": "A landscape"},
})

# Polls every 5 seconds by default
result = client.wait(bg)
print(result["status"])  # "succeeded"
print(result["output"])

wait() — progress

With progress callback
result = client.wait(
  bg,
  options={"interval": 3000},
  on_progress=lambda p: print(
      f"  {p['status']} {p.get('metrics', {}).get('progress', '?')}%"
  ),
)

Queue & dispatch

Queue multiple prediction requests locally, then send them all with dispatch(). Items are not sent until dispatch() is called.

client.queue(payload) -> None

Adds a prediction request to the local in-memory queue.

client.dispatch() -> list[dict]

Sends all queued predictions and clears the queue. Returns a list of initial prediction response dicts (status "pending").
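The semantics are easy to mirror: queue() only appends to a local buffer, and dispatch() sends everything and resets it. A sketch with an injected send callable (illustrative, not the SDK's implementation):

```python
class LocalQueue:
    """Buffer prediction payloads locally; nothing is sent until dispatch()."""

    def __init__(self, send):
        self._send = send       # stands in for the actual API call
        self._pending = []

    def queue(self, payload):
        self._pending.append(payload)   # no network call here

    def dispatch(self):
        sent, self._pending = self._pending, []
        return [self._send(p) for p in sent]
```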

Queue & dispatch

Basic batch
client.queue({"model": "flux-pro", "input": {"prompt": "Cat"}})
client.queue({"model": "flux-pro", "input": {"prompt": "Dog"}})
client.queue({"model": "flux-pro", "input": {"prompt": "Bird"}})

results = client.dispatch()
for pred in results:
  print(pred["id"], pred["status"])  # all "pending" initially

# Wait for all to complete
completed = [client.wait(r) for r in results]
for r in completed:
  print(r["output"])

Predictions API

The client.predictions namespace provides direct access to prediction CRUD operations. For detailed endpoint documentation, see the Predictions API reference.

Prediction operations

predictions.create(payload) -> dict

Create a background prediction (always await=False). Returns immediately with status: "pending".

predictions.get(id) -> dict

Fetch a prediction by ID. Raises SkytellsError if not found.

predictions.list(options?) -> PaginatedResponse

List predictions with optional filters. Returns .data (list of dicts) and .pagination.

client.cancel_prediction(id) -> dict

Cancel a running prediction by ID.

client.delete_prediction(id) -> dict

Delete a prediction and its assets by ID.

client.stream_prediction(id) -> dict

Get the streaming endpoint for a prediction.

PredictionsListOptions

page: int | None

Page number.

since: str | None

Include predictions from this date (YYYY-MM-DD).

until: str | None

Include predictions up to this date (YYYY-MM-DD).

model: str | None

Filter by model slug (e.g. "flux-pro").

Predictions API

Create
prediction = client.predictions.create({
  "model": "flux-pro",
  "input": {"prompt": "An astronaut"},
})
print(prediction["id"], prediction["status"])  # "pred_...", "pending"

# Wait for it
result = client.wait(prediction)
print(result["output"])

Predictions API — manage

List with filters
result = client.predictions.list(
  model="flux-pro",
  since="2026-01-01",
  until="2026-03-16",
  page=2,
)

for pred in result.data:
  print(pred["id"], pred["created_at"])

Models API

The client.models namespace provides access to model discovery and details. Use it to discover available models, inspect capabilities and pricing, and fetch input/output schemas before running a prediction.

For the full model catalog with all namespaces, input schemas, and pricing, see the Model Catalog.

Model operations

models.list(fields?, options?) -> list[dict]

List all available models on the Skytells platform.

models.get(slug, fields?, options?) -> dict

Fetch a single model by its slug. Raises SkytellsError with error_id="MODEL_NOT_FOUND" if the namespace doesn't exist.

ModelFieldsOptions

fields: list[str] | None

Extra fields to include in the response: "input_schema", "output_schema".

Model object shape

name: str

Model display name (e.g. "TrueFusion").

namespace: str

Slug used in API calls (e.g. "truefusion").

type: str

"image" · "video" · "audio" · "music" · "text" · "code" · "multimodal"

vendor: dict

name, description, image_url, verified, slug, metadata.

pricing: dict | None

amount, currency, unit (e.g. "image", "second", "prediction").

capabilities: list[str]

e.g. ["text-to-image", "image-to-image"]

status: str

Model availability status.

input_schema: dict | None

JSON Schema for model input. Only present when fields includes "input_schema".

output_schema: dict | None

JSON Schema for model output. Only present when fields includes "output_schema".
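Because input_schema is JSON Schema, you can pre-check an input dict before spending a prediction. A minimal sketch that validates only required keys — it assumes the schema carries the standard "required" keyword; for full validation (types, enums, ranges) a proper validator such as the jsonschema package is the better tool:

```python
def missing_required(input_dict, input_schema):
    """Return the required schema keys that are absent from input_dict."""
    required = input_schema.get("required", [])
    return [key for key in required if key not in input_dict]

# Usage (fetching the schema shown for context):
# model = client.models.get("truefusion-pro", fields=["input_schema"])
# missing = missing_required({"prompt": "A cat"}, model["input_schema"])
```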

Discover models

List all models
models = client.models.list()

for model in models:
  print(model["name"], model["type"], model["vendor"]["name"])
  # e.g. "TrueFusion" "image" "Skytells"
  # e.g. "Mera"        "video" "Skytells"
  # e.g. "BeatFusion 2.0" "audio" "Skytells"

Image generation

TrueFusion Pro
# TrueFusion Pro — img2img, guidance, seed control
# Namespace: truefusion-pro | $0.05/image
prediction = client.run("truefusion-pro", input={
  "prompt": "A photorealistic futuristic city at sunset",
  "aspect_ratio": "16:9",
  "num_outputs": 1,
  "num_inference_steps": 28,
  "guidance": 3,
  "output_format": "webp",
})
print(prediction.outputs())

Video & Audio

Video — Mera
# Mera — Skytells flagship video model
# Namespace: mera | $3.42/prediction
prediction = client.run("mera", input={
  "prompt": "A cinematic timelapse of a city at night, flying drone perspective",
  "seconds": "8",
  "size": "1280x720",
})
print(prediction.outputs())  # video URL

Streaming

Stream predictions

For predictions created with stream: True, retrieve the streaming endpoint URL using client.stream_prediction().

Streaming

From background prediction
bg = client.predictions.create({
  "model": "flux-pro",
  "input": {"prompt": "A landscape"},
  "stream": True,
})

stream_info = client.stream_prediction(bg["id"])
stream_url = stream_info["urls"]["stream"]
print("Stream URL:", stream_url)
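How you consume the stream URL depends on the endpoint's wire format, which this guide does not specify. If it serves server-sent events (an assumption worth verifying against the API reference), a minimal parser over the response lines looks like:

```python
def iter_sse_data(lines):
    """Yield the data payload of each server-sent event
    from an iterable of text lines."""
    buffer = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            buffer.append(line[5:].lstrip())
        elif line == "" and buffer:     # blank line terminates an event
            yield "\n".join(buffer)
            buffer = []

# Usage against the stream URL (requests shown as an illustration):
# import requests
# with requests.get(stream_url, stream=True) as r:
#     for data in iter_sse_data(r.iter_lines(decode_unicode=True)):
#         print(data)
```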
