Models API

Discover models, the Model object shape, pricing fields, and optional JSON Schemas. Per-model listings and prices live in the Model Catalog.

The Models API returns metadata for each model: namespace, pricing, capabilities, status, and (via fields on get-one) optional JSON Schemas. It does not run inference or predictions — it only describes models. For a static write-up of names and prices, use the Model Catalog; for programmatic access and the canonical Model object, use the endpoints below.

  • List: GET /v1/models — models available to your API key (see Access and authentication)
  • Get one: GET /v1/models/{slug} — same object as in the list; slug in the URL matches namespace in the JSON
  • Schemas: optional fields on GET /v1/models/{slug} to include JSON Schemas (larger response — see Get model)
  • SDK: client.models (TypeScript)

Access and authentication

The Models API is a public, versioned HTTP surface: documented GET routes under https://api.skytells.ai/v1/models, intended for production use. Like all v1 routes, calls must include a valid API key in the x-api-key header. See Authentication and Making API requests.
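As a concrete sketch, the authenticated call can be assembled like this in TypeScript (buildModelsRequest is a hypothetical helper; only the base URL and the x-api-key header come from this page):

```typescript
// Minimal sketch of an authenticated Models API request.
// Endpoint paths and the x-api-key header are documented on this page;
// the helper itself is illustrative, not part of the SDK.
const BASE_URL = "https://api.skytells.ai/v1";

function buildModelsRequest(apiKey: string, slug?: string) {
  // GET /v1/models lists models; GET /v1/models/{slug} fetches one.
  const url = slug
    ? `${BASE_URL}/models/${encodeURIComponent(slug)}`
    : `${BASE_URL}/models`;
  // Every v1 call must carry a valid API key in the x-api-key header.
  return { url, headers: { "x-api-key": apiKey } };
}

// Usage (network call elided):
// const { url, headers } = buildModelsRequest(process.env.SKYTELLS_API_KEY!);
// const models = await fetch(url, { headers }).then((r) => r.json());
```

Keeping URL and header construction in one place makes it easy to swap the key source (environment variable, secret manager) without touching call sites.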

Your key identifies the account and scopes what you get back. The table below separates public schema metadata from deployment-specific data Skytells only exposes when you are allowed to see it.

| Topic | What it is | Authentication / visibility |
| --- | --- | --- |
| JSON Schemas (input & output) | Standard JSON Schemas for valid inputs and outputs — shape, types, and constraints. Public technical documentation, not routing secrets. | Include via the fields query on GET /v1/models/{slug} (Get model); responses are larger when schemas are included. Only for models your key can access. |
| Custom deployments & privacy | Skytells supports custom model deployments — dedicated stacks, user- or org-specific models, privacy tiers (privacy on the Model object), and namespaces that are not in the global catalog. | Key-scoped. GET /v1/models lists only models your account can use. GET /v1/models/{slug} returns 404 / 403 if the slug exists but is not visible to your key. |
| Deployment routing fields | Extra fields such as custom endpoints, internal base URLs, or deployment-specific routing — metadata for your private or managed deployment. | Authenticated and permission-gated. Returned only when your account is authorized for that deployment; omitted (or absent) for callers without access. Do not assume every model exposes these fields. |

For browsing without code, use the Model Catalog. For API access, create a key under API keys and never commit it to source control.

Listing and retrieving model metadata

GET /v1/models returns an array; GET /v1/models/{slug} returns one object with the same Model shape. Cache the list, or fetch a single model when you need metadata or schemas for forms and validation.

Calling a model is separate from listing metadata: type (e.g. text, image) does not fully determine whether you use Inference, Predictions, or both. Confirm per namespace in the Model Catalog and the Console.
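The caching pattern above can be sketched as a namespace index (ModelSummary is a hypothetical, trimmed subset of the Model object used only for this illustration):

```typescript
// Trimmed, illustrative view of the Model object; the real shape is
// documented in the Model object section of this page.
interface ModelSummary {
  namespace: string;
  type: string;
  status: string;
}

// namespace is the stable ID you later pass as `model`, so it makes a
// natural key for caching the list response.
function indexByNamespace(models: ModelSummary[]): Map<string, ModelSummary> {
  return new Map(models.map((m) => [m.namespace, m]));
}
```

A Map keyed by namespace turns repeated lookups into O(1) reads, so the list response only needs to be fetched once per cache window.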

When to use what

| Goal | Where |
| --- | --- |
| Compare models, read pricing in prose | Model Catalog |
| Which APIs (Inference, Predictions, …) apply to a namespace | Model Catalog + Console per model |
| Endpoint reference for list / get | List models, Get model |
| JSON Schema behavior on model objects | Model Schemas |
| Full Model JSON field-by-field | Model object (this page) |

Quick example

List models and fetch one with schemas

List:

```shell
curl https://api.skytells.ai/v1/models \
  -H "x-api-key: $SKYTELLS_API_KEY"
```

Each item matches the Model object. Use namespace from the response as model once you have confirmed the correct API for that model in the catalog or Console.
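The "fetch one with schemas" half of the example can be sketched as a URL builder. The field names passed to fields here (input_schema, output_schema) are an assumption based on the Model object's property names; confirm the accepted values in Get model:

```typescript
// Sketch: build the get-one URL with the fields query that requests
// JSON Schemas. The default field list is an assumption, not confirmed
// API behavior; see the Get model reference for the supported values.
function modelWithSchemasUrl(
  slug: string,
  fields: string[] = ["input_schema", "output_schema"]
): string {
  return `https://api.skytells.ai/v1/models/${encodeURIComponent(slug)}?fields=${fields.join(",")}`;
}

// Usage (network call elided):
// const model = await fetch(modelWithSchemasUrl("gpt-5"), {
//   headers: { "x-api-key": process.env.SKYTELLS_API_KEY! },
// }).then((r) => r.json());
```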


Model Object

The Model resource uses one JSON shape for every namespace; field values differ by type, vendor, and deployment.

GET /v1/models and GET /v1/models/{slug} return this structure. The sample below includes optional schema properties for illustration — list responses omit them unless you request them via fields on get-one.

Model Object

```json
{
  "name": "GPT-5",
  "description": "OpenAI's new model excelling at coding, writing, and reasoning.",
  "namespace": "gpt-5",
  "type": "text",
  "privacy": "public",
  "img_url": null,
  "vendor": {
    "name": "OpenAI",
    "description": "Provider description",
    "image_url": "https://example.com/vendor.png",
    "verified": true,
    "slug": "openai",
    "metadata": null
  },
  "billable": true,
  "pricing": {
    "amount": 10,
    "currency": "USD",
    "unit": "million_token",
    "criterias": [
      {
        "field": "token_type",
        "description": "Input token pricing",
        "operator": "==",
        "value": "input",
        "billable_price": 0.0005,
        "unit": "token"
      },
      {
        "field": "resolution",
        "description": "Output megapixel pricing for 1 MP",
        "operator": "==",
        "value": "1 MP",
        "billable_price": 0.02,
        "unit": "image_megapixel"
      }
    ],
    "formula": {
      "description": "How pricing works for this model",
      "type": "linear",
      "variables": {
        "input_rate": 0.0005,
        "output_rate": 0.00125,
        "input_megapixel_rate": 0.02,
        "output_megapixel_rate": 0.02
      },
      "terms": [
        {
          "token_type": "input",
          "tokens_key": "input_tokens",
          "rate_key": "input_rate"
        },
        {
          "token_type": "output",
          "tokens_key": "output_tokens",
          "rate_key": "output_rate"
        },
        {
          "megapixel_type": "input",
          "megapixels_key": "input_megapixels",
          "rate_key": "input_megapixel_rate"
        },
        {
          "megapixel_type": "output",
          "megapixels_key": "output_megapixels",
          "rate_key": "output_megapixel_rate"
        }
      ],
      "result_key": "billable_price"
    }
  },
  "capabilities": [
    "text-to-text",
    "coding",
    "partner",
    "quality"
  ],
  "metadata": {
    "edge_compatible": true,
    "openai_compatible": true,
    "cold_boot": false
  },
  "status": "operational",
  "input_schema": {
    "type": "object",
    "properties": {
      "prompt": {
        "type": "string"
      }
    }
  },
  "output_schema": {
    "type": "object"
  }
}
```

Field Reference

| Field | Type | Description |
| --- | --- | --- |
| name | string | Human-readable model name shown in docs and UI. |
| description | string | Short explanation of what the model is optimized for. |
| namespace | string | Canonical identifier used in API requests (for example, model in inference/prediction requests). |
| type | string | High-level category such as text, image, video, or audio. |
| privacy | string | Visibility tier, typically public or partner/private tiers. |
| img_url | string \| null | Optional thumbnail/preview image URL for the model. |
| vendor | object | Publisher metadata for the model provider. |
| vendor.name | string | Provider display name. |
| vendor.description | string | Provider summary. |
| vendor.image_url | string \| null | Provider avatar/logo URL. |
| vendor.verified | boolean | Indicates whether the vendor is verified on Skytells. |
| vendor.slug | string | Stable vendor identifier. |
| vendor.metadata | object \| null | Optional vendor-specific metadata extension. |
| billable | boolean | Whether requests to this model incur billing charges. |
| pricing | object | Pricing descriptor including base amount/unit, conditional criterias, and formula. |
| pricing.amount | number | Base display amount for the unit shown in the catalog. |
| pricing.currency | string | Billing currency (for example, USD). |
| pricing.unit | string | Billing unit (for example, million_token, image_megapixel, second, prediction). |
| pricing.criterias | array | Conditional pricing rules based on fields like token type, resolution, aspect ratio, and so on. |
| pricing.formula | object | Structured formula that defines how billable cost is calculated. |
| capabilities | string[] | Feature tags describing what the model can do (for example, text-to-image, coding, quality). |
| metadata | object | Platform/runtime flags that help with routing and client behavior. |
| metadata.edge_compatible | boolean | Whether the model can run on edge infrastructure. |
| metadata.openai_compatible | boolean | Whether model access is compatible with OpenAI-style inference clients. |
| metadata.cold_boot | boolean | Whether first-request warmup behavior should be expected. |
| status | string | Current service state, typically operational, degraded, or offline. |
| input_schema, output_schema | object (optional) | JSON Schemas for input and output. Omitted by default; included when requested with the fields query on GET /v1/models/{slug} (see Get model). |
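For illustration, the field reference can be mirrored as a standalone TypeScript type with a minimal runtime guard. The SDK ships the canonical Model type; this abridged sketch checks only a few required keys:

```typescript
// Abridged, illustrative mirror of the Model field reference.
// Not the SDK's canonical Model type.
interface Vendor {
  name: string;
  description: string;
  image_url: string | null;
  verified: boolean;
  slug: string;
  metadata: Record<string, unknown> | null;
}

interface Model {
  name: string;
  description: string;
  namespace: string;
  type: string; // "text", "image", "video", "audio", ...
  privacy: string;
  img_url: string | null;
  vendor: Vendor;
  billable: boolean;
  pricing: {
    amount: number;
    currency: string;
    unit: string;
    criterias: unknown[];
    formula: Record<string, unknown>;
  };
  capabilities: string[];
  metadata: Record<string, unknown>;
  status: string;
  input_schema?: Record<string, unknown>;  // only when requested via fields
  output_schema?: Record<string, unknown>; // only when requested via fields
}

// Loose runtime guard over a handful of required top-level keys.
function looksLikeModel(x: any): x is Model {
  return (
    typeof x?.namespace === "string" &&
    typeof x?.type === "string" &&
    typeof x?.billable === "boolean" &&
    typeof x?.pricing?.unit === "string" &&
    Array.isArray(x?.capabilities)
  );
}
```

A guard like this is useful when deserializing cached list responses, where the payload may predate a schema change.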

Pricing and Schema Variability

Model objects share the same top-level keys, but pricing and schema internals differ by workload:

  • Text models are usually token-based (unit: million_token) and pricing terms use token keys (for example input_tokens, output_tokens).
  • Image/video/audio models often use media units (image_megapixel, second, prediction) and criteria such as resolution, aspect_ratio, or duration.
  • Some models expose richer pricing.criterias and pricing.formula rules than others.
  • JSON Schemas on the model object are omitted by default; request them with fields on get-one (Get model).
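As an illustration of the linear formula structure shown in the Model object above, each term multiplies a usage quantity (named by tokens_key or megapixels_key) by a rate looked up in formula.variables (named by rate_key), and the products are summed. This sketch is a client-side estimate only, not the billing engine (estimateCost is a hypothetical helper):

```typescript
// Evaluate a linear pricing.formula against a usage record.
// Term and variable key names follow the Model object sample on this page.
interface FormulaTerm {
  tokens_key?: string;
  megapixels_key?: string;
  rate_key: string;
}

interface PricingFormula {
  type: string; // "linear" in the sample
  variables: Record<string, number>;
  terms: FormulaTerm[];
}

function estimateCost(
  formula: PricingFormula,
  usage: Record<string, number>
): number {
  return formula.terms.reduce((total, term) => {
    // A term quantifies either tokens or megapixels.
    const quantityKey = term.tokens_key ?? term.megapixels_key;
    const quantity = quantityKey ? usage[quantityKey] ?? 0 : 0;
    const rate = formula.variables[term.rate_key] ?? 0;
    return total + quantity * rate;
  }, 0);
}
```

Missing usage keys contribute zero, so the same evaluator works for text-only usage against a formula that also carries megapixel terms.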

Expected Models API Errors

Errors fall into discovery (Models API — list/get metadata) and execution (inference or prediction calls). For execution, use the error guide for the API you called; modality alone does not determine which API applies — confirm per namespace in the Model Catalog and Console.

Model Discovery Errors

Listing or fetching metadata uses the standard API error envelope (error_id, message). Unknown or invalid slugs typically surface as MODEL_NOT_FOUND and related codes. See the Errors overview for the full catalog and HTTP status conventions.
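A hedged sketch of handling that envelope (describeDiscoveryError is hypothetical; only the error_id/message shape and the MODEL_NOT_FOUND code come from this page):

```typescript
// Standard discovery error envelope from this page.
interface ApiError {
  error_id: string;
  message: string;
}

function describeDiscoveryError(status: number, body: ApiError): string {
  // MODEL_NOT_FOUND covers unknown slugs and, per the visibility rules
  // above, slugs that exist but are not visible to your key.
  if (body.error_id === "MODEL_NOT_FOUND") {
    return "Unknown slug, or the model is not visible to this API key.";
  }
  // Other codes are listed in the Errors overview.
  return `Models API error ${body.error_id} (HTTP ${status}): ${body.message}`;
}
```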

Model Execution Errors

| You invoked | Error reference |
| --- | --- |
| Inference API — e.g. /v1/chat/completions, /v1/responses, /v1/embeddings | Inference API errors — OpenAI-shaped error objects (error_id, param, …) |
| Predictions API — POST /v1/predictions (any modality including text where supported) | Prediction-level errors — failed predictions (status: failed, error, …) and related codes |

Models API FAQs

Where is the full list of models and prices?

The Model Catalog. Use the Models API when you need a live, key-scoped list in code.

What is the difference between slug and namespace?

In GET /v1/models/{slug}, the path segment is the same string as the namespace field on the model object — the stable ID you pass as model in prediction and inference calls.

When should I include JSON Schemas in the response?

When building UIs, validating payloads before submit, or generating forms. Schemas are omitted by default to keep responses small. See Get model for the fields parameter and Model Schemas for structure.

How do I know whether a model uses Inference or Predictions?

Do not rely on type alone. Check the Model Catalog and that model’s page in the Console for supported APIs and request shapes.

Where is the TypeScript type for Model?

Model in the SDK reference — it mirrors the API JSON.

Which error doc applies to my model?

It depends on which endpoint you call. Inference routes → Inference API errors. POST /v1/predictions → Prediction-level errors (including text models run as predictions). Discovery errors for GET /v1/models are covered in the Errors overview. See Expected errors above.
