Models API
Discover models, Model object shape, pricing fields, and optional JSON Schemas. Per-model listings and prices live in the Model Catalog.
The Models API returns metadata for each model: namespace, pricing, capabilities, status, and (via fields on get-one) optional JSON Schemas. It does not run inference or predictions — it only describes models. For a static write-up of names and prices, use the Model Catalog; for programmatic access and the canonical Model object, use the endpoints below.
- List: GET /v1/models — models available to your API key (see Access and authentication)
- Get one: GET /v1/models/{slug} — same object as in the list; slug in the URL matches namespace in the JSON
- Schemas: optional fields on GET /v1/models/{slug} to include JSON Schemas (larger response — see Get model)
- SDK: client.models — TypeScript
Access and authentication
The Models API is a public, versioned HTTP surface: documented GET routes under https://api.skytells.ai/v1/models, intended for production use. Like all v1 routes, calls must include a valid API key in the x-api-key header. See Authentication and Making API requests.
Your key identifies the account and scopes what you get back. The table below separates public schema metadata from deployment-specific data Skytells only exposes when you are allowed to see it.
| Topic | What it is | Authentication / visibility |
|---|---|---|
| JSON Schemas (input & output) | Standard JSON Schemas for valid inputs and outputs — shape, types, and constraints. Public technical documentation, not routing secrets. | Include via the fields query on GET /v1/models/{slug} (Get model); responses are larger when schemas are included. Only for models your key can access. |
| Custom deployments & privacy | Skytells supports custom model deployments — dedicated stacks, user- or org-specific models, privacy tiers (privacy on the Model object), and namespaces that are not in the global catalog. | Key-scoped. GET /v1/models lists only models your account can use. GET /v1/models/{slug} returns 404 / 403 if the slug exists but is not visible to your key. |
| Deployment routing fields | Extra fields such as custom endpoints, internal base URLs, or deployment-specific routing — metadata for your private or managed deployment. | Authenticated and permission-gated. Returned only when your account is authorized for that deployment; omitted (or absent) for callers without access. Do not assume every model exposes these fields. |
For browsing without code, use the Model Catalog. For API access, create a key under API keys and never commit it to source control.
Listing and retrieving model metadata
GET /v1/models returns an array; GET /v1/models/{slug} returns one object — same Model shape. Cache the list or fetch a single row when you need metadata or schemas for forms and validation.
Calling a model is separate from listing metadata: type (e.g. text, image) does not fully determine whether you use Inference, Predictions, or both. Confirm per namespace in the Model Catalog and the Console.
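For programmatic access, the two routes can be wrapped in a small helper. A minimal TypeScript sketch, assuming the fields query takes a comma-separated list of schema field names (confirm the exact syntax in Get model):

```typescript
// Sketch: build Models API URLs and fetch them with the standard fetch API.
// The comma-separated `fields` syntax is an assumption; see Get model.
const BASE = "https://api.skytells.ai/v1/models";

export function modelUrl(slug?: string, fields: string[] = []): string {
  const url = new URL(slug ? `${BASE}/${encodeURIComponent(slug)}` : BASE);
  if (fields.length > 0) url.searchParams.set("fields", fields.join(","));
  return url.toString();
}

// Fetch one model; requires a valid API key (see Authentication).
export async function getModel(
  slug: string,
  apiKey: string,
  fields: string[] = [],
): Promise<unknown> {
  const res = await fetch(modelUrl(slug, fields), {
    headers: { "x-api-key": apiKey },
  });
  if (!res.ok) throw new Error(`Models API returned ${res.status}`);
  return res.json(); // one Model object
}
```

Note that `URLSearchParams` percent-encodes the comma, which servers decode transparently.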
When to use what
| Goal | Where |
|---|---|
| Compare models, read pricing in prose | Model Catalog |
| Which APIs (Inference, Predictions, …) apply to a namespace | Model Catalog + Console per model |
| Endpoint reference for list / get | List models, Get model |
| JSON Schema behavior on model objects | Model Schemas |
| Full Model JSON field-by-field | Model object (this page) |
Model Catalog
Browsable namespaces, vendors, pricing.
Model Schemas
Schema concepts by modality.
List models
GET /v1/models
Get model
GET /v1/models/{slug}
Quick example
List models and fetch one with schemas
```bash
curl https://api.skytells.ai/v1/models \
  -H "x-api-key: $SKYTELLS_API_KEY"
```

Each item matches the Model object. Use namespace from the response as model once you have confirmed the correct API for that model in the catalog or Console.
Model Object
The Model resource uses one JSON shape for every namespace; field values differ by type, vendor, and deployment.
GET /v1/models and GET /v1/models/{slug} return this structure. The sample below includes optional schema properties for illustration — list responses omit them unless you request them via fields on get-one.
```json
{
  "name": "GPT-5",
  "description": "OpenAI's new model excelling at coding, writing, and reasoning.",
  "namespace": "gpt-5",
  "type": "text",
  "privacy": "public",
  "img_url": null,
  "vendor": {
    "name": "OpenAI",
    "description": "Provider description",
    "image_url": "https://example.com/vendor.png",
    "verified": true,
    "slug": "openai",
    "metadata": null
  },
  "billable": true,
  "pricing": {
    "amount": 10,
    "currency": "USD",
    "unit": "million_token",
    "criterias": [
      {
        "field": "token_type",
        "description": "Input token pricing",
        "operator": "==",
        "value": "input",
        "billable_price": 0.0005,
        "unit": "token"
      },
      {
        "field": "resolution",
        "description": "Output megapixel pricing for 1 MP",
        "operator": "==",
        "value": "1 MP",
        "billable_price": 0.02,
        "unit": "image_megapixel"
      }
    ],
    "formula": {
      "description": "How pricing works for this model",
      "type": "linear",
      "variables": {
        "input_rate": 0.0005,
        "output_rate": 0.00125,
        "input_megapixel_rate": 0.02,
        "output_megapixel_rate": 0.02
      },
      "terms": [
        {
          "token_type": "input",
          "tokens_key": "input_tokens",
          "rate_key": "input_rate"
        },
        {
          "token_type": "output",
          "tokens_key": "output_tokens",
          "rate_key": "output_rate"
        },
        {
          "megapixel_type": "input",
          "megapixels_key": "input_megapixels",
          "rate_key": "input_megapixel_rate"
        },
        {
          "megapixel_type": "output",
          "megapixels_key": "output_megapixels",
          "rate_key": "output_megapixel_rate"
        }
      ],
      "result_key": "billable_price"
    }
  },
  "capabilities": [
    "text-to-text",
    "coding",
    "partner",
    "quality"
  ],
  "metadata": {
    "edge_compatible": true,
    "openai_compatible": true,
    "cold_boot": false
  },
  "status": "operational",
  "input_schema": {
    "type": "object",
    "properties": {
      "prompt": {
        "type": "string"
      }
    }
  },
  "output_schema": {
    "type": "object"
  }
}
```

Field Reference
| Field | Type | Description |
|---|---|---|
| name | string | Human-readable model name shown in docs and UI. |
| description | string | Short explanation of what the model is optimized for. |
| namespace | string | Canonical identifier used in API requests (for example, model in inference/prediction requests). |
| type | string | High-level category such as text, image, video, or audio. |
| privacy | string | Visibility tier, typically public or partner/private tiers. |
| img_url | string \| null | Optional thumbnail/preview image URL for the model. |
| vendor | object | Publisher metadata for the model provider. |
| vendor.name | string | Provider display name. |
| vendor.description | string | Provider summary. |
| vendor.image_url | string \| null | Provider avatar/logo URL. |
| vendor.verified | boolean | Indicates whether the vendor is verified on Skytells. |
| vendor.slug | string | Stable vendor identifier. |
| vendor.metadata | object \| null | Optional vendor-specific metadata extension. |
| billable | boolean | Whether requests to this model incur billing charges. |
| pricing | object | Pricing descriptor including base amount/unit, conditional criterias, and formula. |
| pricing.amount | number | Base display amount for the unit shown in the catalog. |
| pricing.currency | string | Billing currency (for example, USD). |
| pricing.unit | string | Billing unit (for example, million_token, image_megapixel, second, prediction). |
| pricing.criterias | array | Conditional pricing rules based on fields like token type, resolution, aspect ratio, and so on. |
| pricing.formula | object | Structured formula that defines how billable cost is calculated. |
| capabilities | string[] | Feature tags describing what the model can do (for example, text-to-image, coding, quality). |
| metadata | object | Platform/runtime flags that help with routing and client behavior. |
| metadata.edge_compatible | boolean | Whether the model can run on edge infrastructure. |
| metadata.openai_compatible | boolean | Whether model access is compatible with OpenAI-style inference clients. |
| metadata.cold_boot | boolean | Whether first-request warmup behavior should be expected. |
| status | string | Current service state, typically operational, degraded, or offline. |
| input_schema, output_schema | object (optional) | JSON Schemas for input and output. Omitted by default; included when requested with the fields query on GET /v1/models/{slug} (see Get model). |
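The field table corresponds to a TypeScript shape roughly like the following. This is an illustrative sketch only; the SDK's exported Model type is the canonical definition:

```typescript
// Simplified sketch of the Model shape; the SDK's `Model` type is canonical.
interface Vendor {
  name: string;
  description: string;
  image_url: string | null;
  verified: boolean;
  slug: string;
  metadata: Record<string, unknown> | null;
}

interface Model {
  name: string;
  description: string;
  namespace: string; // the ID you pass as `model` in requests
  type: string;      // "text", "image", "video", "audio", ...
  privacy: string;
  img_url: string | null;
  vendor: Vendor;
  billable: boolean;
  pricing: Record<string, unknown>;
  capabilities: string[];
  metadata: Record<string, unknown>;
  status: string;
  input_schema?: Record<string, unknown>;  // only when requested via `fields`
  output_schema?: Record<string, unknown>; // only when requested via `fields`
}

// Narrowing helper: checks the fields a client typically relies on.
export function looksLikeModel(value: unknown): value is Model {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.namespace === "string" &&
    typeof v.name === "string" &&
    typeof v.billable === "boolean" &&
    Array.isArray(v.capabilities)
  );
}
```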
Pricing and Schema Variability
Model objects share the same top-level keys, but pricing and schema internals differ by workload:
- Text models are usually token-based (unit: million_token) and pricing terms use token keys (for example input_tokens, output_tokens).
- Image/video/audio models often use media units (image_megapixel, second, prediction) and criteria such as resolution, aspect_ratio, or duration.
- Some models expose richer pricing.criterias and pricing.formula rules than others.
- JSON Schemas on the model object are omitted by default; request them with fields on get-one (Get model).
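Mechanically, a linear formula sums each term's usage quantity times its rate from variables. A sketch under that reading (how raw counts map to billed units, for example per token versus per million tokens, should be confirmed against the catalog):

```typescript
// Sketch: evaluate a linear pricing formula from the Model object.
// Assumes the caller supplies usage counts keyed exactly as the formula's
// terms expect (input_tokens, output_megapixels, ...).
interface FormulaTerm {
  rate_key: string;
  tokens_key?: string;
  megapixels_key?: string;
}

interface PricingFormula {
  type: string; // "linear" in the sample above
  variables: Record<string, number>;
  terms: FormulaTerm[];
}

export function evaluateLinearFormula(
  formula: PricingFormula,
  usage: Record<string, number>,
): number {
  return formula.terms.reduce((total, term) => {
    const usageKey = term.tokens_key ?? term.megapixels_key;
    if (!usageKey) return total;
    const quantity = usage[usageKey] ?? 0; // missing usage counts as zero
    const rate = formula.variables[term.rate_key] ?? 0;
    return total + quantity * rate;
  }, 0);
}
```

With the sample rates above, 1,000 input tokens and 400 output tokens evaluate to 1000 × 0.0005 + 400 × 0.00125 = 1.0 in the formula's units.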
Expected Models API Errors
Errors fall into two groups: discovery (Models API — listing or fetching metadata) and execution (inference or prediction calls). For execution, use the error guide for the API you called; modality alone does not determine which API applies — see the Model Catalog and Console per namespace.
Model Discovery Errors
Listing or fetching metadata uses the standard API error envelope (error_id, message). Unknown or invalid slugs typically surface as MODEL_NOT_FOUND and related codes. See the Errors overview for the full catalog and HTTP status conventions.
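A client can branch on the envelope's error_id when a slug lookup fails. A minimal sketch, assuming the envelope's fields appear at the top level of the response body (see the Errors overview for the exact shape):

```typescript
// Sketch: inspect the discovery error envelope (error_id, message).
// The top-level nesting is an assumption; see the Errors overview.
interface ApiErrorEnvelope {
  error_id: string;
  message: string;
}

export function isModelNotFound(body: unknown): boolean {
  if (typeof body !== "object" || body === null) return false;
  const e = body as Partial<ApiErrorEnvelope>;
  return e.error_id === "MODEL_NOT_FOUND";
}
```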
Model Execution Errors
| You invoked | Error reference |
|---|---|
| Inference API — e.g. /v1/chat/completions, /v1/responses, /v1/embeddings | Inference API errors — OpenAI-shaped error objects (error_id, param, …) |
| Predictions API — POST /v1/predictions (any modality including text where supported) | Prediction-level errors — failed predictions (status: failed, error, …) and related codes |
Models API FAQs
Where is the full list of models and prices?
The Model Catalog. Use the Models API when you need a live, key-scoped list in code.
What is the difference between slug and namespace?
In GET /v1/models/{slug}, the path segment is the same string as the namespace field on the model object — the stable ID you pass as model in prediction and inference calls.
When should I include JSON Schemas in the response?
When building UIs, validating payloads before submit, or generating forms. Schemas are omitted by default to keep responses small. See Get model for the fields parameter and Model Schemas for structure.
How do I know whether a model uses Inference or Predictions?
Do not rely on type alone. Check the Model Catalog and that model’s page in the Console for supported APIs and request shapes.
Where is the TypeScript type for Model?
Model in the SDK reference — it mirrors the API JSON.
Which error doc applies to my model?
It depends on which endpoint you call. Inference routes → Inference API errors.
POST /v1/predictions → Prediction-level errors (including text models run as predictions). Discovery errors for GET /v1/models are covered in the Errors overview. See Expected errors above.