TypeScript SDK

Predictions API

Run AI models, track the prediction lifecycle, dispatch batches, and manage predictions through the SDK.

Predictions are the core primitive of the Skytells platform. Every time you ask an AI model to generate an image, a video, audio, or text, you're creating a prediction. The SDK wraps the Predictions REST API with high-level methods that handle polling, progress tracking, and lifecycle management automatically.

Skytells hosts 30+ models spanning image generation, video, music, text, and multimodal tasks. Every model is accessible through the same run() interface — the SDK resolves the model slug, validates compatibility, and returns a type-safe Prediction object. To discover available models and inspect their schemas, see Models.

For client setup and configuration, see Client.

Running Predictions

Basic Run

The run() method takes a model slug and options, creates a prediction, polls until completion, and returns a Prediction object.

Signature

client.run(model: string, options: RunOptions, onProgress?: OnProgressCallback): Promise<Prediction>

RunOptions

See RunOptions in Reference for the full type.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| input | Record<string, any> | — | Model input parameters (required). |
| webhook | Webhook \| { url, events } | — | Webhook config — see Webhooks. |
| stream | boolean | false | Enable streaming output. |
| interval | number | 5000 | Poll interval (ms) when used with onProgress. |
| maxWait | number | — | Max wait time (ms). Throws WAIT_TIMEOUT. |
| signal | AbortSignal | — | Abort polling. Throws ABORTED. |

Returns: Promise<Prediction>
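The interval, maxWait, and signal options all act on the same client-side polling loop. As a rough sketch of that behavior — with a hypothetical fetchStatus standing in for the GET-prediction call, and plain Errors standing in for the SDK's typed WAIT_TIMEOUT / ABORTED errors:

```typescript
// Sketch of the client-side polling loop that `interval`,
// `maxWait`, and `signal` control. `fetchStatus` is a
// hypothetical stand-in for fetching the prediction status.
type Status = 'pending' | 'starting' | 'started' | 'processing'
  | 'succeeded' | 'failed' | 'cancelled';

const TERMINAL = new Set<Status>(['succeeded', 'failed', 'cancelled']);

async function pollUntilDone(
  fetchStatus: () => Promise<Status>,
  opts: { interval?: number; maxWait?: number; signal?: AbortSignal } = {},
): Promise<Status> {
  const interval = opts.interval ?? 5000; // default poll interval
  const started = Date.now();
  for (;;) {
    if (opts.signal?.aborted) throw new Error('ABORTED');
    const status = await fetchStatus();
    if (TERMINAL.has(status)) return status; // terminal status → done
    if (opts.maxWait !== undefined && Date.now() - started >= opts.maxWait) {
      throw new Error('WAIT_TIMEOUT');
    }
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
}
```

In the real SDK these failure modes surface as the WAIT_TIMEOUT and ABORTED error IDs described in Errors.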

run()

Basic
import Skytells from 'skytells';

const skytells = Skytells('sk-your-api-key');

const prediction = await skytells.run('truefusion', {
  input: { prompt: 'A cat wearing sunglasses' },
});

console.log(prediction.output);    // "https://..." or ["https://...", ...]
console.log(prediction.output[0]); // first output if array
console.log(prediction.id);        // "pred_abc123"
console.log(prediction.status);    // "succeeded"

Progress Tracking

Tracking Progress

Pass an onProgress callback as the third argument to run(). The SDK creates the prediction in the background and polls every 5 seconds, invoking the callback on each poll.

const prediction = await skytells.run('truefusion',
  { input: { prompt: 'A detailed landscape painting' } },
  (p) => {
    console.log(`Status: ${p.status}`);
    if (p.metrics?.progress !== undefined) {
      console.log(`Progress: ${p.metrics.progress}%`);
    }
  },
);

console.log(prediction.outputs());

Run with Webhook

Webhook Notifications

Pass a webhook option to receive HTTP POST notifications when the prediction completes or fails, instead of polling. This is ideal for long-running models. See Webhooks for setting up inbound handlers, or the Webhooks REST API.

const prediction = await skytells.run('truefusion', {
  input: { prompt: 'A robot painting a sunset' },
  webhook: {
    url: 'https://your-server.com/webhook',
    events: ['completed', 'failed'],
  },
});

Low-Level: predict()

Direct Predict

For fire-and-forget dispatch or full control over the request, use skytells.predict() directly. It returns the raw PredictionResponse (no Prediction wrapper) and maps directly to POST /v1/predictions.

Pass await: true to make the server hold the connection open until the prediction completes, returning the final result in a single HTTP call.

predict()

Fire-and-forget
// Returns immediately, status: "pending"
const response = await skytells.predict({
  model: 'truefusion',
  input: { prompt: 'A sunset' },
});
console.log(response.id, response.status); // "pred_...", "pending"

Server-Side Wait

When await: true is set on a PredictionRequest, the Skytells API holds the HTTP connection open until the prediction reaches a terminal status (succeeded, failed, or cancelled), then returns the final PredictionResponse in a single round-trip.

This is the simplest way to get a result — no polling, no webhooks, no progress callbacks. run() uses this path internally when you don't pass an onProgress callback.

Server-Side Wait vs Client-Side Polling

| Aspect | await: true (server-side) | predictions.create() + wait() (client-side) |
| --- | --- | --- |
| HTTP calls | 1 blocking call | 1 POST + N GET polls |
| Where blocking happens | Server holds connection | Client polls via setTimeout |
| Progress tracking | Not possible | Yes, via onProgress callback |
| Used by run() when | No onProgress callback | onProgress callback provided |
| Timeout control | ClientOptions.timeout | WaitOptions.maxWait + AbortSignal |
| Network overhead | Minimal (one round-trip) | Higher (multiple round-trips) |

Server-Side Wait

predict() with await
// Single blocking call — server waits for result
const result = await skytells.predict({
  model: 'truefusion',
  input: { prompt: 'An astronaut riding a horse' },
  await: true,
});

console.log(result.status);  // "succeeded"
console.log(result.output);  // "https://cdn.skytells.ai/..."

The Prediction Object

skytells.run() returns a Prediction object wrapping the raw API response with getters and lifecycle methods. This is a higher-level wrapper around the PredictionResponse returned by the REST API.

Properties (Getters)

.id → string

Unique prediction ID.

.status → PredictionStatus

Current lifecycle status.

.output → string | string[] | undefined

Raw output (matches API JSON).

.response → PredictionResponse

Full API response object.

Methods

.outputs() → string | string[] | undefined

Normalized output — unwraps single-element arrays.

.raw() → PredictionResponse

Full raw response as plain object.

.cancel() → Promise<PredictionResponse>

Cancel the prediction.

.delete() → Promise<PredictionResponse>

Delete the prediction and its assets.

outputs() Behavior

| API output | outputs() returns |
| --- | --- |
| undefined | undefined |
| "https://..." | "https://..." (string) |
| ["https://..."] | "https://..." (unwrapped) |
| ["a", "b"] | ["a", "b"] (kept as array) |
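The unwrapping rule above amounts to a single check. A sketch of the normalization (not the SDK's actual source):

```typescript
// Sketch of the outputs() normalization rule: a single-element
// array is unwrapped to its only element; everything else
// (undefined, plain string, multi-element array) passes through.
function normalizeOutput(
  output: string | string[] | undefined,
): string | string[] | undefined {
  if (Array.isArray(output) && output.length === 1) {
    return output[0];
  }
  return output;
}
```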

Prediction Object

Properties
const prediction = await skytells.run('truefusion', {
  input: { prompt: 'A cat' },
});

// Properties
prediction.id;       // "pred_abc123"
prediction.status;   // "succeeded"
prediction.output;   // "https://..." or ["https://...", ...]
prediction.output[0]; // first output when array
prediction.response; // full PredictionResponse

PredictionRequest

Payload for client.predict() and predictions.create(). See PredictionRequest in the Reference.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | — | Model slug — see Models. |
| input | Record<string, any> | — | Model input parameters (required). |
| await | boolean | false | When true, the server holds the connection until the prediction completes. See Server-Side Wait. |
| stream | boolean | false | Enable streaming output. |
| webhook | Webhook \| { url, events } | — | Webhook config — see Webhooks. |

PredictionRequest

Interface
interface PredictionRequest {
  model: string;
  input: Record<string, any>;
  /**
   * If true, the API blocks until the prediction
   * completes and returns the final result.
   * If false (default), returns immediately
   * with status "pending".
   */
  await?: boolean;
  stream?: boolean;
  webhook?: Webhook | {
    url: string;
    events: ReadonlyArray<string>;
  };
}

Compatibility Check

Pass compatibilityCheck: true in PredictionSdkOptions to validate the model slug before submitting. The SDK calls GET /models/{slug} and throws SDK_ERROR if the model is OpenAI-chat-only (use client.chat instead). Results are cached per client.

const response = await skytells.predict(
  { model: 'flux-pro', input: { prompt: '...' } },
  { compatibilityCheck: true },
);
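Because results are cached per client, repeated predictions against the same model slug cost only one extra round-trip. A sketch of that memoization — fetchModel is a hypothetical stand-in for the GET /models/{slug} call, and the cache shape is an assumption, not the SDK's actual source:

```typescript
// Sketch of a per-client compatibility cache: each model slug
// is fetched at most once; later checks reuse the pending or
// resolved promise. `fetchModel` is a hypothetical stand-in.
function makeCompatCache(
  fetchModel: (slug: string) => Promise<{ type: string }>,
) {
  const cache = new Map<string, Promise<{ type: string }>>();
  return (slug: string): Promise<{ type: string }> => {
    let hit = cache.get(slug);
    if (!hit) {
      hit = fetchModel(slug); // first lookup triggers the fetch
      cache.set(slug, hit);
    }
    return hit;
  };
}
```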

autoAwait: Automatic Server-Side Wait for Images

New in v1.0.5: Pass autoAwait: true in PredictionSdkOptions to let the SDK automatically set await: true for image models only. This requires compatibilityCheck: true (the SDK reuses the cached GET /models/{slug} fetch to check the model type).

When autoAwait: true is set:

  • Image models → SDK automatically sets await: true, server blocks and returns final output
  • Video/audio/text models → SDK keeps await: false, returns pending for polling
  • Explicit await in payload → Always takes priority over autoAwait

This is useful for polymorphic code that runs different model types — you don't need to branch on model type to decide whether to use server-side wait.

Priority Rules

  1. Explicit payload.await (if set) → always used
  2. autoAwait: true (if set + image model) → sets await: true
  3. Default → await: false
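The three rules condense into a small resolver. A sketch of the priority order (not the SDK's actual source):

```typescript
// Sketch of the await-resolution priority. `modelType` is the
// content type reported by the cached GET /models/{slug} fetch.
function resolveAwait(
  payloadAwait: boolean | undefined, // explicit `await` in the payload
  autoAwait: boolean,                // PredictionSdkOptions.autoAwait
  modelType: string,                 // e.g. 'image', 'video', 'audio'
): boolean {
  if (payloadAwait !== undefined) return payloadAwait;  // rule 1
  if (autoAwait && modelType === 'image') return true;  // rule 2
  return false;                                         // rule 3
}
```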

autoAwait

Basic usage
// Image model — returns final output immediately
const image = await skytells.predictions.create(
  { model: 'flux-pro', input: { prompt: 'A sunset' } },
  { compatibilityCheck: true, autoAwait: true },
);
console.log(image.status);  // 'succeeded'
console.log(image.output);  // 'https://...'

// Video model — returns pending, SDK skips await
const video = await skytells.predictions.create(
  { model: 'kling-video', input: { prompt: 'Ocean waves' } },
  { compatibilityCheck: true, autoAwait: true },
);
console.log(video.status);  // 'pending'
const result = await skytells.wait(video);

Background Predictions

Creating Background Predictions

Use skytells.predictions.create() to start a prediction without waiting for it to finish. The prediction runs in the background and you can poll it later with skytells.wait().

// Create in background (returns immediately)
const response = await skytells.predictions.create({
  model: 'truefusion',
  input: { prompt: 'A landscape painting' },
});
console.log(response.id, response.status); // "pred_..." "pending"

// Poll until complete
const result = await skytells.wait(response);
console.log(result.output);

Waiting & Polling

skytells.wait() polls a prediction until it reaches a terminal status (succeeded, failed, or cancelled).

WaitOptions

See WaitOptions in Reference for the full type.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| interval | number | 5000 | Polling interval in milliseconds. |
| maxWait | number | — | Max wait time (ms). Throws WAIT_TIMEOUT. |
| signal | AbortSignal | — | Abort polling. Throws ABORTED. |

wait()

Basic
const bg = await skytells.predictions.create({
  model: 'truefusion',
  input: { prompt: 'A landscape' },
});

// Wait for completion (polls every 5 seconds by default)
const result = await skytells.wait(bg);
console.log(result.output);

Queue & Dispatch

Queue multiple predictions locally, then dispatch them all concurrently. Items are NOT sent until dispatch() is called.

skytells.queue(request) → void

Adds a prediction request to the local queue.

skytells.dispatch() → Promise<PredictionResponse[]>

Dispatches all queued predictions concurrently.

Queue & Dispatch

Basic batch
skytells.queue({ model: 'truefusion-pro', input: { prompt: 'Cat' } });
skytells.queue({ model: 'truefusion-x', input: { prompt: 'Dog' } });
skytells.queue({ model: 'FLUX-2.0', input: { prompt: 'Bird' } });

const results = await skytells.dispatch();
for (const pred of results) {
  console.log(pred.id, pred.status); // all "pending" initially
}

// Wait for all to complete
const completed = await Promise.all(
  results.map((r) => skytells.wait(r)),
);
for (const result of completed) {
  console.log(result.output);
}
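Conceptually, queue()/dispatch() is a local array drained into Promise.all. A sketch under that assumption — submit is a hypothetical stand-in for the create call, not the SDK's actual implementation:

```typescript
// Sketch of local queueing with concurrent dispatch: queue()
// only accumulates requests; dispatch() drains the queue and
// submits everything at once. `submit` is a hypothetical
// stand-in for POST /v1/predictions.
function makeQueue<Req, Res>(submit: (req: Req) => Promise<Res>) {
  const pending: Req[] = [];
  return {
    queue(req: Req): void {
      pending.push(req); // nothing is sent yet
    },
    async dispatch(): Promise<Res[]> {
      const batch = pending.splice(0);       // drain the local queue
      return Promise.all(batch.map(submit)); // fire all concurrently
    },
  };
}
```

Note that dispatch() also empties the queue, so a second dispatch() with no new queue() calls resolves to an empty array.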

Predictions API

The skytells.predictions namespace provides direct access to prediction CRUD operations, mapping to the Predictions REST API. For information about prediction pricing per model, see Pricing.

Prediction Operations

predictions.create(payload) → Promise<PredictionResponse>

Create a background prediction.

predictions.get(id) → Promise<PredictionResponse>

Fetch a prediction by ID.

predictions.list(options?) → Promise<PaginatedResponse<...>>

List predictions with optional filters.

skytells.cancelPrediction(id) → Promise<PredictionResponse>

Cancel a running prediction by ID.

skytells.deletePrediction(id) → Promise<PredictionResponse>

Delete a prediction by ID.

skytells.streamPrediction(id) → Promise<PredictionResponse>

Get streaming endpoint for a prediction.

PredictionsListOptions

See PredictionsListOptions in the Reference.

page → number

Page number.

model → string

Filter by model slug (e.g. "truefusion").

since → string

Start date filter (ISO 8601 / YYYY-MM-DD).

until → string

End date filter (ISO 8601 / YYYY-MM-DD).
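To walk every page of a filtered listing, increment page until the response reports nothing left. A sketch assuming a hypothetical hasMore flag — check PaginatedResponse in the Reference for the actual pagination shape; fetchPage stands in for predictions.list({ page, ... }):

```typescript
// Sketch of exhaustive pagination over a listing endpoint.
// `Page` and its `hasMore` flag are assumptions about the
// paginated shape; `fetchPage` is a hypothetical stand-in
// for predictions.list({ page }).
interface Page<T> {
  data: T[];
  hasMore: boolean;
}

async function listAll<T>(
  fetchPage: (page: number) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  for (let page = 1; ; page++) {
    const { data, hasMore } = await fetchPage(page);
    all.push(...data);
    if (!hasMore) return all; // last page reached
  }
}
```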

Predictions API

Create
const prediction = await skytells.predictions.create({
  model: 'truefusion',
  input: { prompt: 'An astronaut' },
});
console.log(prediction.id, prediction.status); // "pred_..." "pending"

Models API

The skytells.models namespace provides access to model discovery and details. For complete documentation on listing, fetching, schemas, and the full Model type, see the dedicated Models page.

// Quick reference — full docs at /docs/sdks/ts/models
const models = await skytells.models.list();
const model  = await skytells.models.get('truefusion');
const withSchema = await skytells.models.get('truefusion', {
  fields: ['input_schema', 'output_schema'],
});

Browse available models in the Model Catalog.

The PredictionResponse Object

The raw API response returned by predict(), predictions.create(), predictions.get(), and wait(). The Prediction class wraps this with convenience accessors.

See PredictionResponse in the Reference for the full type.

| Field | Type | Description |
| --- | --- | --- |
| id | string | Unique prediction ID. |
| status | PredictionStatus | Lifecycle status. |
| type | PredictionType | 'inference' or 'training'. |
| stream | boolean | Whether streaming was enabled. |
| input | Record<string, any> | Input parameters sent. |
| output | string \| string[] \| undefined | Output URL(s) or text. |
| response | string | Human-readable message (e.g. error details). |
| created_at | string | ISO 8601 creation timestamp. |
| started_at | string | ISO 8601 processing start. |
| completed_at | string | ISO 8601 completion time. |
| updated_at | string | ISO 8601 last update. |
| privacy | string | Prediction privacy level. |
| source | PredictionSource | 'api' · 'cli' · 'web'. |
| model | { name, type } | Model display name and content type. |
| webhook | { url, events } | Webhook config attached. |
| metrics | object | image_count, predict_time, total_time, asset_count, progress. |
| metadata | object | billing.credits_used, storage.files[], data_available. |
| urls | object | get, cancel, stream, delete endpoint URLs. |

PredictionResponse

Interface
interface PredictionResponse {
  id: string;
  status: PredictionStatus;
  type: PredictionType;
  stream: boolean;
  input: Record<string, any>;
  output?: string | string[];
  response?: string;
  created_at: string;
  started_at: string;
  completed_at: string;
  updated_at: string;
  privacy: string;
  source?: PredictionSource;
  model?: {
    name: string;
    type: string;
  };
  webhook?: {
    url: string | null;
    events: string[];
  };
  metrics?: {
    image_count?: number;
    predict_time?: number;
    total_time?: number;
    asset_count?: number;
    progress?: number;
  };
  metadata?: {
    billing?: { credits_used: number };
    storage?: {
      files: {
        name: string;
        type: string;
        size: number;
        url: string;
      }[];
    };
    data_available?: boolean;
  };
  urls?: {
    get?: string;
    cancel?: string;
    stream?: string;
    delete?: string;
  };
}

Prediction Enums

These enums can be imported directly from the SDK.

import {
  PredictionStatus,
  PredictionType,
  PredictionSource,
} from 'skytells';

See the Reference for more.

Enums

PredictionStatus
enum PredictionStatus {
  PENDING    = 'pending',
  STARTING   = 'starting',
  STARTED    = 'started',
  PROCESSING = 'processing',
  SUCCEEDED  = 'succeeded',
  FAILED     = 'failed',
  CANCELLED  = 'cancelled',
}
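Of these values, only succeeded, failed, and cancelled are terminal — everything else means the prediction is still moving through the pipeline. A small helper sketch (not part of the SDK's exported API):

```typescript
// Sketch of a terminal-status check over the PredictionStatus
// values: 'succeeded', 'failed', and 'cancelled' end the
// lifecycle; all other statuses mean work is still in flight.
const TERMINAL_STATUSES = ['succeeded', 'failed', 'cancelled'] as const;

function isTerminal(status: string): boolean {
  return (TERMINAL_STATUSES as readonly string[]).includes(status);
}
```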

Streaming

Stream Predictions

For predictions created with stream: true, you can retrieve the streaming endpoint URL using skytells.streamPrediction().

const bg = await skytells.predictions.create({
  model: 'truefusion',
  input: { prompt: 'A landscape' },
  stream: true,
});

const stream = await skytells.streamPrediction(bg.id);
console.log(stream.urls?.stream); // streaming endpoint URL

Next Steps
  • Models — Discover available models and their input schemas
  • Chat API — OpenAI-compatible chat completions
  • Responses API — Stateful multi-turn conversations
  • Webhooks — Receive prediction lifecycle events via HTTP POST
  • Safety — Evaluate prediction output for content moderation
  • Errors — PREDICTION_FAILED, WAIT_TIMEOUT, ABORTED, and all error IDs
  • Reliability — Timeouts, retries, AbortSignal, and edge/serverless patterns
  • Client — ClientOptions.timeout controls the await: true HTTP timeout
  • Reference: Prediction types — Full PredictionResponse, RunOptions, WaitOptions definitions
  • Predictions REST API — Underlying REST endpoints
  • Model Catalog — Browse all available models
  • Pricing — Per-model prediction pricing
