Predictions
A live, unified log of every inference request across your API keys, SDKs, CLI, and Playgrounds — with status, latency, cost, source, and full output inspection.
The Predictions page is your real-time audit log of every model inference request associated with your account — aggregated from all clients into a single searchable list.
In the Skytells Console, go to AI → Predictions in the left sidebar.
Predictions list
The main view is a table of predictions in reverse-chronological order (most recent first). Each row represents a single model execution.
List columns
| Column | Description |
|---|---|
| Status | The current lifecycle state of the prediction. |
| ID | A truncated unique identifier for the prediction (e.g., 05bd8fe1-7646-4b3d-8...). Click to inspect the full record. |
| Model or Deployment | The model namespace that was invoked — e.g., FLUX.2-pro, truefusion-pro, flux-fast, beatfusion-2.0. |
| Source | Where the prediction originated: API (direct API call via key), WEB (from a Playground session in the Console). |
| Queued | Time the prediction spent in the queue before execution began; shows — if the prediction started immediately. |
| Running | Actual execution time once the model started processing, in seconds (e.g., 5.51s, 1.37s). |
| Total | End-to-end elapsed time from submission to completion. |
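The three timing columns relate directly: Total is the end-to-end span, which Queued and Running partition. A minimal sketch of how these columns could be derived from a prediction's timestamps — the field names (`createdAt`, `startedAt`, `completedAt`) are assumptions for illustration, not the documented API schema:

```typescript
// Sketch: deriving the Queued / Running / Total columns from timestamps.
// Field names are assumed, not confirmed by the Predictions API reference.
interface PredictionTimes {
  createdAt: string;    // when the prediction was submitted
  startedAt?: string;   // when execution began (absent while still queued)
  completedAt?: string; // when execution finished
}

function seconds(from: string, to: string): number {
  return (new Date(to).getTime() - new Date(from).getTime()) / 1000;
}

function timingColumns(p: PredictionTimes) {
  const queued = p.startedAt ? seconds(p.createdAt, p.startedAt) : null;
  const running =
    p.startedAt && p.completedAt ? seconds(p.startedAt, p.completedAt) : null;
  const total = p.completedAt ? seconds(p.createdAt, p.completedAt) : null;
  const fmt = (s: number | null) => (s === null ? "—" : `${s.toFixed(2)}s`);
  return { queued: fmt(queued), running: fmt(running), total: fmt(total) };
}
```

For example, a prediction submitted at `t`, started at `t + 1s`, and completed at `t + 6.51s` renders as Queued `1.00s`, Running `5.51s`, Total `6.51s`.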
Prediction statuses
| Status | Badge color | Meaning |
|---|---|---|
| Succeeded | Green | The model ran successfully and output is available. |
| Processing | Blue | The model is currently running. |
| Queued | Grey | The prediction is waiting for capacity. |
| Failed | Red | The model returned an error. Open the detail view to see the error message. |
| Cancelled | Orange | The prediction was cancelled before it completed. |
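These states split into in-flight statuses (Queued, Processing) and terminal ones (Succeeded, Failed, Cancelled), which is the distinction that matters when deciding whether to keep polling a prediction. A small sketch, assuming the API reports lowercase status strings matching the search filter terms below:

```typescript
// Sketch: classifying lifecycle states from the table above. Lowercase
// values mirror the status filter terms ("succeeded", "failed", ...);
// the exact casing returned by the API is an assumption here.
type PredictionStatus =
  | "queued"
  | "processing"
  | "succeeded"
  | "failed"
  | "cancelled";

const TERMINAL: ReadonlySet<PredictionStatus> = new Set([
  "succeeded",
  "failed",
  "cancelled",
]);

// A terminal prediction will never change status again, so polling can stop.
function isTerminal(status: PredictionStatus): boolean {
  return TERMINAL.has(status);
}
```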
Filtering and search
The search bar at the top of the list filters predictions in real time. You can search by:
- Model name or namespace — e.g., `flux`, `truefusion`, `beatfusion`
- Prediction ID — paste the full UUID or a prefix
- Status — filter by `succeeded`, `failed`, `processing`, etc.
Use the refresh button (↻) to reload the list and pull the latest status for in-flight predictions.
Select Load more at the bottom of the list to paginate through older predictions.
Predictions from all clients — API, SDK, CLI, and Playground sessions — appear in this list. The Source column shows where each prediction originated.
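The search behavior described above can be mirrored client-side: match the query against the model name, the ID (full or prefix), or an exact status. A minimal sketch — the row shape is illustrative, not the exact API response:

```typescript
// Sketch: a client-side filter mirroring the search behavior described
// above (model name, ID or ID prefix, status). The PredictionRow shape
// is a hypothetical simplification, not the documented API schema.
interface PredictionRow {
  id: string;
  model: string;
  status: string;
}

function matches(row: PredictionRow, query: string): boolean {
  const q = query.trim().toLowerCase();
  if (q === "") return true; // empty query matches everything
  return (
    row.model.toLowerCase().includes(q) || // model name or namespace
    row.id.toLowerCase().startsWith(q) ||  // full UUID or a prefix
    row.status.toLowerCase() === q         // exact status term
  );
}
```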
Inspecting a prediction
Click any row to open the prediction detail sidebar on the right side of the page. The sidebar provides a quick summary without leaving the list.
Sidebar fields
| Field | Description |
|---|---|
| Prediction ID | The full UUID for this prediction, shown at the top. Copyable. |
| Model | The model namespace that was used. |
| Created at | The exact timestamp when the prediction was submitted. |
| Status | Current or final status of the prediction. |
| Predict time | The measured execution duration (e.g., 5.51s). |
| Source | API or WEB. |
| Cost | The dollar amount charged for this prediction (e.g., $0.02). |
Below the metadata, the sidebar shows two views:
- JSON — The Prediction object as returned by the Predictions API.
- Preview — A visual preview of the output (rendered image, playable video, or playable audio).
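Taken together, the sidebar fields and the JSON tab suggest the overall shape of a Prediction record. The interface below is an inference from those fields only — the property names are assumptions, so consult the Predictions API reference for the real schema:

```typescript
// Sketch: a Prediction record shape inferred from the sidebar fields
// above. All property names are assumptions, not the documented schema.
interface Prediction {
  id: string;                     // full UUID
  model: string;                  // model namespace, e.g. "flux-fast"
  status: string;                 // "succeeded", "failed", ...
  source: "API" | "WEB";          // where the prediction originated
  input: Record<string, unknown>; // parameters as submitted at creation
  output: unknown;                // includes delivery URLs for assets
  createdAt: string;              // submission timestamp
  predictTime?: number;           // measured execution duration, seconds
  cost?: number;                  // dollar amount charged
}

// A purely illustrative record (hypothetical ID and values):
const example: Prediction = {
  id: "00000000-0000-4000-8000-000000000000",
  model: "flux-fast",
  status: "succeeded",
  source: "API",
  input: { prompt: "a lighthouse at dusk" },
  output: null,
  createdAt: "2024-01-01T00:00:00.000Z",
  predictTime: 5.51,
  cost: 0.02,
};
```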
Opening the full prediction view
Select View full prediction at the bottom of the sidebar to open the dedicated prediction detail page.
Full prediction detail page
The dedicated prediction detail page gives you deep access to the complete prediction record.
Header bar
The header displays key metadata in a concise summary row:
- Status badge — `Succeeded`, `Failed`, etc.
- Source — `API` or `WEB`
- Output type — `Image`, `Video`, `Audio`, `Text`
- Predict time — measured execution duration
- Date — when the prediction was created
- Cost — the dollar cost charged
An action button — Try in Playground — appears on the right, loading this exact prediction's input into the Inference Playground so you can re-run or iterate on it.
Input panel (left)
The input panel shows every parameter submitted for this prediction:
| Tab | Content |
|---|---|
| Input | All input fields as key-value pairs, as submitted at creation time. |
| JSON | The input field of the Prediction object as raw JSON. |
| Create | Opens the Playground pre-loaded with this model to create a new prediction. |
Output panel (right)
| Tab | Content |
|---|---|
| Preview | Rendered output — image inline, video with playback, audio with playback. |
| JSON | The output field of the Prediction object, including delivery URLs for generated assets. |
Output files
Below the output panel, all generated files are listed with filename and file size. Each file is downloadable directly from the prediction detail page.
Source column
The Source column tells you where each prediction originated:
| Label | Meaning |
|---|---|
API | Created via the Predictions API, TypeScript or Python SDK, or CLI. |
WEB | Submitted from the Inference Playground in the Console. |
You can also ask Eve to open a specific prediction or summarize your recent inference history. Say "Show me my last prediction" or "What did prediction 05bd8fe1 return?" and Eve will navigate directly.
Related
- Inference Playground — run new predictions interactively; every run appears here.
- Model Card — view model details and jump back to re-run from a prediction.
- API: List Predictions — programmatic predictions list.
- API: Get Prediction — retrieve a single prediction by ID.
- TypeScript SDK — Predictions — `client.predictions` namespace reference.
- Eve — ask Eve to surface predictions, navigate to a specific record, or summarize your inference activity.