Playgrounds
The Skytells Playground Hub — discover and launch the Inference Playground for image, video, and audio models, or the LLM Playground for large language models.
The Playground Hub at console.skytells.ai/ai/playground is the gateway to Skytells' two interactive testing environments. Use it to discover models, launch a playground for a specific model, or navigate directly to the LLM Playground.
Accessing the Playground Hub
In the Skytells Console, go to AI → Playgrounds in the left sidebar.
Hub layout
Model search
The page opens with a full-width search bar labeled Search models... This searches across all inference-capable models. Typing a name (e.g., truefusion, flux, mera) filters in real time and shows matching models, which you can click to open directly in the Inference Playground.
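The real-time filtering described above amounts to a case-insensitive substring match over model names. The sketch below illustrates that behavior; the model list is illustrative, not the actual catalog.

```python
# Sketch of the search bar's client-side behavior: case-insensitive
# substring match over model names. The names here are examples from
# this page, not a complete catalog.
models = ["truefusion", "flux", "mera", "example-llm"]

def filter_models(query, names):
    """Return the models whose name contains the query, ignoring case."""
    q = query.lower()
    return [n for n in names if q in n.lower()]

print(filter_models("flu", models))  # → ['flux']
```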
Featured Models
Below the search bar, a grid of featured model cards gives you quick access to the most popular or recently updated models. Each card shows:
- Provider logo and model name
- Output type badge — IMAGE, VIDEO, AUDIO, or TEXT
- A description badge indicating the model family
Click any featured model card to open its Inference Playground directly. A Browse all models → link at the bottom takes you to the full Model Catalog.
Playgrounds section
Below the featured models, two interactive environment cards are shown:
Inference Playground
Test image, video, and audio generation models with a schema-driven form. View results in real time with preview, JSON, and logs output. Access via the Model Catalog or directly from any model card.
LLM Playground
Chat with large language models. Configure system prompts, adjust temperature and other parameters, and iterate on outputs — all in a conversational interface.
Two playgrounds, two paradigms
Skytells AI has two distinct inference architectures, and each has its own playground:
Inference Playground
Designed for generative models — image, video, audio, and music. These models take structured input parameters (prompt, dimensions, step count, reference images, etc.) and produce asset outputs. The Inference Playground uses a schema-driven form so every valid parameter is visible and editable, even for models with complex input schemas.
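To make the "structured input parameters" idea concrete, the sketch below builds the kind of JSON payload a schema-driven form produces for an image model. The model slug and parameter names (prompt, width, height, steps) are illustrative assumptions, not the actual Skytells schema.

```python
import json

# Hypothetical structured input for an image-generation model.
# Each field corresponds to one control in a schema-driven form;
# the names below are illustrative, not Skytells' documented schema.
payload = {
    "model": "example/image-model",  # hypothetical model slug
    "input": {
        "prompt": "a lighthouse at dusk, oil painting",
        "width": 1024,
        "height": 1024,
        "steps": 30,
    },
}

# A form like this serializes to JSON before the request is submitted:
body = json.dumps(payload, indent=2)
print(body)
```

Because every parameter lives in one structured object, the form can render a control for each field and validate it against the model's schema before submission.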
Open the Inference Playground for any specific model from:
- The Model Catalog (click a row → Model Card → Try in Playground)
- The Playground Hub search bar
- A Prediction detail page → Try in Playground
See Inference Playground for full documentation.
LLM Playground
Designed for large language models — GPT, Codex, instruction-following, and reasoning models. These models interact conversationally. The LLM Playground provides a chat interface with a system prompt panel, model selector, and output controls.
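The conversational paradigm can be sketched as a running message list plus output controls. The role names follow the common system/user/assistant convention, and the parameter names (temperature, max_tokens) are illustrative defaults, not Skytells' documented settings.

```python
# Minimal sketch of the state an LLM chat interface manages:
# a system prompt, a list of turns, and sampling parameters.
# Names and defaults here are illustrative assumptions.
conversation = {
    "system": "You are a concise technical assistant.",
    "messages": [],
    "params": {"temperature": 0.7, "max_tokens": 512},
}

def add_turn(conv, role, content):
    """Append one chat turn to the running conversation."""
    conv["messages"].append({"role": role, "content": content})
    return conv

add_turn(conversation, "user", "Summarize what a playground is.")
```

Iterating on outputs means editing the system prompt or parameters and replaying the same message list, which is exactly what the playground's controls expose.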
See LLM Playground for full documentation.
Related
- Inference Playground — schema-driven form for image, video, audio models.
- LLM Playground — chat interface for large language models.
- Model Catalog — browse all models and jump to a Playground.
- Predictions — every run from the Inference Playground creates a prediction record here.
- Foundations: Playground interface — conceptual overview of the playground interface design.