Inference Playground
Test image, video, and audio generation models interactively with a schema-driven form, live preview, JSON output, and automatically generated SDK code snippets.
The Inference Playground is the interactive testing environment for Skytells' image, video, audio, and music generation models. It renders a schema-driven form for any model's input parameters, lets you run predictions in real time, and shows output as a visual preview, raw JSON, or execution logs — all without writing a single line of code.
Every run in the Inference Playground creates a real prediction, visible in your Predictions history with source: WEB.
Accessing the Inference Playground
From the Model Catalog:
- Go to AI → Models.
- Click any model row to open its card.
- Select Try in Playground.
From the Playground Hub:
- Go to AI → Playgrounds.
- Use the search bar or click a Featured Model card.
From a Prediction:
- Open any past prediction in Predictions.
- Select Try in Playground — the same input values are pre-loaded.
Each model's playground is accessible at /ai/playground/{vendor}/{model} in the Skytells Console.
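The path pattern above can be captured in a one-line helper. Only the `/ai/playground/{vendor}/{model}` path comes from this page — the Console host and the vendor/model slugs below are placeholder assumptions:

```python
# Build the Console URL for a model's playground.
# Only the path pattern comes from the docs; host and slugs are placeholders.
def playground_url(vendor: str, model: str,
                   host: str = "https://console.skytells.example") -> str:
    return f"{host}/ai/playground/{vendor}/{model}"

print(playground_url("black-forest-labs", "flux-2-flex"))
```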
Playground layout
The Inference Playground for a specific model has a top navigation bar, two main panels (Input and Output), and a set of tabs across the full page.
Top navigation tabs
| Tab | Description |
|---|---|
| Playground | The default interactive testing view described on this page. |
| API | The full input/output schema for this model — all parameters, types, and constraints. |
| Examples | Curated prompt examples with expected outputs for this model. |
| README | The model provider's original documentation for this model. |
Header
The header shows:
- Provider logo and model name (e.g., Black Forest Labs / Flux 2 Flex)
- Status badges: OFFICIAL · OPERATIONAL · MULTIPLE PRICING · H100 · PUBLIC · IMAGE
- API Key field: Your currently active API key (masked), used for prediction authentication. If blank, select a key from the dropdown.
- Run button: Submits the current input configuration as a prediction.
Input panel
The left panel is where you configure your prediction. It has five sub-tabs:
Form tab (default)
The Form view renders every model parameter as a typed input field:
| Field type | UI control |
|---|---|
| string (prompt) | Multi-line text area |
| string (enum) | Dropdown select with listed options |
| string[] (URL array) | URL input with Add button, e.g., Input Images |
| integer / float | Number spinner with min/max constraints |
| boolean | Toggle switch |
Required fields (marked with *) must be filled before you can run. Prompt is always required.
Advanced options toggle: Click to expand additional parameters that have sensible defaults and are optional:
- Input Images — One or more reference image URLs (e.g., for style transfer or editing models).
- Aspect Ratio — Enum dropdown, e.g., 1:1, 16:9, 4:3, 9:16. Default: 1:1.
- Resolution — Enum dropdown, e.g., 1 MP, 4 MP. Default: 1 MP.
- Other model-specific parameters (step count, guidance scale, seed, safety tolerance, etc.).
JSON tab
Switch to JSON view to paste or edit raw JSON input. This is useful when you have a previously saved parameter set or want to work with complex nested inputs. Changes in JSON view are reflected in Form view and vice versa.
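The form/JSON equivalence can be sketched as a simple round-trip. The parameter keys below (prompt, aspect_ratio, and so on) are assumptions based on the UI labels — check the model's API tab for the exact schema:

```python
import json

# Hypothetical input payload mirroring the Form fields described above.
# Keys are assumed from the UI labels, not a real model schema.
form_values = {
    "prompt": "A lighthouse at dusk, volumetric fog",
    "aspect_ratio": "16:9",
    "resolution": "1 MP",
    "input_images": ["https://example.com/reference.png"],
}

# What you would paste into the JSON tab:
payload = json.dumps(form_values, indent=2)
print(payload)

# Round-trip: editing the JSON is equivalent to editing the form.
assert json.loads(payload) == form_values
```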
Node.js tab
Auto-generates a TypeScript/JavaScript snippet reflecting the current form values, using the TypeScript SDK. The snippet updates automatically as you change form values — copy it directly into your project.
Python tab
Equivalent to the Node.js tab, generated using the Python SDK.
HTTP tab
Generates the raw HTTP request for the current parameters. See Predictions API: Create for the full request contract.
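As a rough sketch of the request this tab generates — the host, path, and header names here are assumptions for illustration; Predictions API: Create is the authoritative contract:

```python
import json
from urllib.request import Request

# Build (but do not send) a prediction request.
# Endpoint and header names are assumed placeholders, not the real API.
API_KEY = "sk-..."  # your key from the playground header (masked in the UI)
body = json.dumps({"prompt": "A lighthouse at dusk", "aspect_ratio": "16:9"}).encode()

req = Request(
    "https://api.skytells.example/v1/predictions",  # assumed host/path
    data=body,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.get_method(), req.full_url)
```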
Output panel
The right panel shows the result of your prediction after pressing Run.
Before running: a placeholder reads "Run a prediction to see output here."
After running, the panel updates in real time and three sub-tabs become available:
Preview tab
- Image models: The generated image is displayed inline at full resolution.
- Video models: A video player appears with playback controls.
- Audio/Music models: An audio player appears with playback controls.
Generated files are also downloadable from the output panel footer.
JSON tab
Shows the raw Prediction object as returned by the Predictions API.
Logs tab
Execution logs from the model container — useful for debugging failures or understanding model-level errors. May include:
- Build-time messages (if the model is a custom deployment)
- Runtime inference steps
- Any error or warning output from the model
Running a prediction
Fill in the required fields
At minimum, enter a Prompt in the Form tab. Required fields are marked with an asterisk. For image editing or composition models, add reference images via the Input Images field in Advanced Options.
Optionally configure advanced options
Expand Advanced Options to set aspect ratio, resolution, seed, guidance scale, or any other model-specific parameters. Defaults are optimized for quality — adjust only what you need.
Select your API key
Ensure an API key is shown in the API Key field at the top right of the Input panel. If not, select one from the dropdown. The key must have prediction access.
Click Run
Press the Run button to submit the prediction. The Output panel shows a loading animation while the prediction is queued and executing.
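Programmatically, the queued-then-executing lifecycle the Output panel animates maps to polling the prediction until it reaches a terminal status. This is a generic sketch: get_prediction stands in for whatever SDK or API call fetches the Prediction object, and the status names are assumptions, not the documented state machine:

```python
import time

TERMINAL = {"succeeded", "failed", "canceled"}  # assumed terminal statuses

def wait_for(prediction_id: str, get_prediction, interval: float = 1.0) -> dict:
    """Poll until the prediction reaches a terminal status.

    get_prediction is a hypothetical callable that returns the
    Prediction object (a dict with a "status" key).
    """
    while True:
        prediction = get_prediction(prediction_id)
        if prediction["status"] in TERMINAL:
            return prediction
        time.sleep(interval)  # back off between polls
```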
Inspect the output
- Switch to Preview to see the visual result.
- Switch to JSON to see the full API response.
- Switch to Logs to inspect execution details.
Copy code
Switch the Input panel to Node.js, Python, or HTTP to get a production-ready snippet reflecting the exact parameters you used.
API tab — input schema reference
The API tab within the Inference Playground shows the complete parameter schema for the model:
| Column | Description |
|---|---|
| Parameter | Parameter name (same key used in the API). |
| Type | JSON type or format — string, integer, number, boolean, string[url][], enum. |
| Required | Whether the parameter must be provided. |
| Default | The value used if the parameter is omitted. |
| Description | What the parameter does. |
Use this view to understand exactly what the model accepts before integrating into production.
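The Parameter/Type/Required/Default columns above translate naturally into client-side validation before you call the API. The schema below is a made-up example for illustration, not the real schema of any Skytells model:

```python
# Minimal sketch of validating an input payload against the kind of
# schema the API tab describes. Hypothetical schema, not a real model's.
schema = {
    "prompt":       {"type": str, "required": True,  "default": None},
    "aspect_ratio": {"type": str, "required": False, "default": "1:1"},
    "seed":         {"type": int, "required": False, "default": None},
}

def validate(payload: dict) -> dict:
    resolved = {}
    for name, rules in schema.items():
        if name in payload:
            if not isinstance(payload[name], rules["type"]):
                raise TypeError(f"{name}: expected {rules['type'].__name__}")
            resolved[name] = payload[name]
        elif rules["required"]:
            raise ValueError(f"missing required parameter: {name}")
        elif rules["default"] is not None:
            resolved[name] = rules["default"]  # apply documented default
    return resolved

print(validate({"prompt": "A lighthouse at dusk"}))
# → {'prompt': 'A lighthouse at dusk', 'aspect_ratio': '1:1'}
```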
Examples tab
The Examples tab shows curated prompt-output pairs contributed by the Skytells team or the model provider. Each example shows:
- The input parameters used (prompt, aspect ratio, etc.)
- The visual output result
- A Load example button that pre-fills the Form with those parameters
Use examples as a starting point when working with an unfamiliar model, or as creative inspiration.
Prediction history
Every run from the Inference Playground is tracked in Predictions with:
- source: WEB (distinguishing it from API calls)
- Full input and output, including cost and timing
- A Try in Playground link to reload the same parameters
This means your playground experiments are fully auditable, replayable, and cost-tracked.
Related
- Playground Hub — navigate between the Inference Playground and LLM Playground.
- LLM Playground — the equivalent environment for large language models.
- Model Card — view model details and launch the playground from there.
- Predictions — see the history of all playground runs.
- API: Create Prediction — create predictions programmatically.
- TypeScript SDK — Predictions — full SDK integration reference.