AI & Workflows
This section covers the full AI surface in the Console — models, inference, predictions, playgrounds, and Eve.
Skytells has been building AI infrastructure since 2012 — well before generative AI became mainstream. What started as internal machine learning tooling evolved into a full-stack AI platform: proprietary model families, a multi-provider inference layer, developer SDKs, and the Console interface you are using today.
The AI & Workflows section of the Console is the operational surface for everything AI on Skytells: discovering models, running inference, reviewing results, and interacting with Eve — your in-console AI assistant.
Skytells AI since 2012
Skytells entered the AI space as an infrastructure company. Over the years, it expanded from internal ML pipelines into a production inference platform with:
- Proprietary model research — the TrueFusion image generation family, BeatFusion audio models, Mera video models, and more, developed and maintained by Skytells.
- Multi-provider inference routing — a unified API surface that runs models from OpenAI, Google, Nvidia, Black Forest Labs, and others alongside Skytells' own, with consistent authentication, billing, and response formats.
- A developer-first API — the Skytells API exposes a Predictions endpoint for generative models and an OpenAI-compatible Inference endpoint for LLMs, with official TypeScript and Python SDKs.
- The Console — a real-time interface for browsing models, running playground inference, reviewing prediction history, and delegating tasks to Eve.
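To make the two API surfaces above concrete, here is a minimal sketch of the request bodies each one expects. The chat payload follows the OpenAI-compatible format the Inference API advertises; the Predictions body is an assumption modeled on common async-generation APIs, and the model identifiers are hypothetical — check the API reference for the exact Skytells schema.

```python
import json

def prediction_request(model: str, **inputs) -> str:
    """Body for POST /v1/predictions (async generative models).
    Field names are an assumption, not the confirmed Skytells schema."""
    return json.dumps({"model": model, "input": inputs})

def chat_request(model: str, prompt: str) -> str:
    """Body for POST /v1/chat/completions (OpenAI-compatible format)."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

# Hypothetical model identifiers, for illustration only:
print(prediction_request("truefusion-edge", prompt="a red fox at dusk"))
print(chat_request("gpt-4o", "Summarize my last prediction"))
```

The official TypeScript and Python SDKs wrap these payloads for you; the raw shapes are shown here only to make the two patterns comparable.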
What Skytells AI provides
Model families
Skytells develops and maintains its own model families across multiple modalities:
Image Generation
The TrueFusion family covers the full quality-cost spectrum — from TrueFusion Edge ($0.01/image, speed-optimized) to TrueFusion Ultra ($0.15/image, enterprise fidelity). Includes variants for panoramic output, style variation, and image-to-image editing.
Video Generation
TrueFusion Video and TrueFusion Video Pro generate video from text or image input. Mera is Skytells' high-fidelity video model for cinematic output.
Audio & Music
BeatFusion generates full music tracks from text descriptions. Audio models cover speech synthesis, sound design, and music composition.
Text & LLMs
Skytells hosts and routes large language models including OpenAI's GPT family, Codex, and instruction-following models — all accessible through the OpenAI-compatible Inference API.
Embeddings
Vector embedding models for semantic search, classification, and retrieval-augmented generation pipelines.
Third-party models
A curated set of third-party models from Black Forest Labs (Flux 2), Google (Imagen 3), Nvidia (Sana), and OpenAI — unified under the same API key, billing system, and Console interface.
For the full model list with namespaces, pricing, and schemas, see the Model Catalog API and List Models.
Inference infrastructure
Every model on Skytells is served through one of two API patterns:
| API | Endpoint | Best for |
|---|---|---|
| Predictions API | POST /v1/predictions | Image, video, audio, music — async generation with polling or webhooks. |
| Inference API | POST /v1/chat/completions | Text, LLMs, code, chat — synchronous, OpenAI-compatible. |
Both share the same authentication, rate limits, and billing. See the API reference for full documentation.
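The async half of the table implies a create-then-poll flow: create a prediction, then poll it until it reaches a terminal state. A minimal sketch of the polling side, with the HTTP call abstracted behind a callable so only the state logic is shown — the status values here ("succeeded", "failed", "canceled") are assumptions, not confirmed Skytells API values:

```python
import time

def wait_for_prediction(fetch, interval=1.0, max_attempts=60):
    """Call `fetch()` until the returned dict reports a terminal status.

    `fetch` stands in for an HTTP GET on the prediction's status URL.
    Terminal status names are assumed, not confirmed by the API docs.
    """
    terminal = {"succeeded", "failed", "canceled"}
    for _ in range(max_attempts):
        prediction = fetch()
        if prediction.get("status") in terminal:
            return prediction
        time.sleep(interval)
    raise TimeoutError("prediction did not reach a terminal state")
```

In production, the webhook option from the table avoids polling entirely: register a callback URL when creating the prediction and handle the terminal-state payload when it arrives.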
Guarantees
Zero Data Retention
Selected models offer a zero data retention guarantee — your inputs are not stored on the provider's infrastructure after inference completes.
No Prompt Training
Models with the No Prompt Training guarantee do not use your inputs to fine-tune or train any model, whether Skytells' own or a third-party provider's.
Both guarantees are visible per-model in the Model Catalog column view.
What's in this section
Models
Browse the full model catalog. Filter by type, provider, capability, and pricing. Open any model to view its card, pricing, and SDK snippet.
Predictions
A live log of every inference request across all clients — API, SDK, CLI, and Playground. View status, cost, inputs, and outputs for any prediction in your history.
Playgrounds
Two interactive testing environments: the Inference Playground for generative models, and the LLM Playground for large language models. Run prompts, inspect outputs, and view generated code snippets.
Eve
Your context-aware AI assistant embedded in the Console. Eve can navigate pages, execute workflows, explain features, and surface predictions — all in natural language.