APIs

Learn about the Skytells Standard API and Edge API — two endpoints for different workloads and latency requirements.

API Endpoints

Skytells provides two API gateways. Both accept the same authentication (x-api-key header) and return the same response formats — the difference is in routing, latency, and availability.

                Standard API                  Edge API
Base URL        https://api.skytells.ai/v1    https://edge.skytells.ai
Audience        General-purpose, all users    Enterprise & edge deployments
Network         Global distributed network    Edge nodes, nearest-point routing
Model support   All models and services       Select models optimized for edge
Latency         Low (global PoPs)             Ultra-low (edge-optimized)

Standard API

https://api.skytells.ai/v1

The Standard API is the primary gateway for all Skytells services. It supports every model and endpoint on the platform and is served on a global distributed network with points of presence worldwide — ensuring low latency and high availability regardless of where your users are.

Use this endpoint for:

  • General-purpose inference (text, image, audio, video)
  • Production workloads at scale
  • Accessing the full catalog of models
  • Global applications that need reliable, low-latency access from any region

Example Request

curl https://api.skytells.ai/v1/predictions \
  -H "x-api-key: $SKYTELLS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "skytells/truefusion",
    "input": {
      "prompt": "A futuristic cityscape at sunset"
    }
  }'
Or with the TypeScript SDK:

import { createClient } from 'skytells';

const client = createClient(process.env.SKYTELLS_API_KEY);

const prediction = await client.predict({
  model: 'skytells/truefusion',
  input: { prompt: 'A futuristic cityscape at sunset' },
});

Edge API

https://edge.skytells.ai

The Edge API is designed for enterprise customers and latency-sensitive use cases. Requests are routed to the nearest edge node, minimizing round-trip time.

Use this endpoint when:

  • You need low-latency inference (real-time apps, interactive UIs)
  • You're deploying in edge environments (CDN workers, edge functions)
  • Your organization requires dedicated routing for compliance or performance

Not all models are available on the Edge API. Models and services that support edge deployment are marked in the model catalog. If a model isn't available on edge, use the Standard API instead.
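One way to handle this split is to pick the base URL per model before sending the request. A minimal sketch in TypeScript — the set of edge-capable models below is a placeholder for illustration; in practice that information comes from the model catalog:

```typescript
// Hypothetical set of edge-capable model IDs; the real list lives in the
// model catalog, which marks models that support edge deployment.
const EDGE_CAPABLE = new Set(["skytells/truefusion"]);

// Route to the Edge API when the model supports it, otherwise fall back
// to the Standard API (which supports every model).
function baseUrlFor(model: string): string {
  return EDGE_CAPABLE.has(model)
    ? "https://edge.skytells.ai"
    : "https://api.skytells.ai/v1";
}

console.log(baseUrlFor("skytells/truefusion"));   // https://edge.skytells.ai
console.log(baseUrlFor("skytells/another-model")); // https://api.skytells.ai/v1
```

This is essentially what the TypeScript SDK's automatic routing does for you, so manual selection like this is only needed when calling the HTTP API directly.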

Example Request

curl https://edge.skytells.ai/predictions \
  -H "x-api-key: $SKYTELLS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "skytells/truefusion",
    "input": {
      "prompt": "A futuristic cityscape at sunset"
    }
  }'

Which API Should I Use?

Use the Standard API if...

You want access to every model, don't have strict latency requirements, or are just getting started.

Use the Edge API if...

You need the lowest possible latency, are building real-time applications, or have enterprise edge infrastructure.

If you use the TypeScript SDK, routing is handled automatically — the SDK selects the right API endpoint based on the model and service you're calling. See the SDKs page for details.


Common Headers

Both APIs share the same request format:

Header         Required   Description
x-api-key      Yes        Your Skytells API key
Content-Type   Yes        application/json for all POST requests
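Because both APIs expect the same headers, the shared request options can be built once and reused against either base URL. A sketch using plain `fetch`-style options (the header names are from the table above; the helper itself is our own):

```typescript
// Build the request options both Skytells APIs expect for a POST request.
function requestInit(apiKey: string, body: unknown) {
  return {
    method: "POST" as const,
    headers: {
      "x-api-key": apiKey,              // your Skytells API key
      "Content-Type": "application/json", // all POST bodies are JSON
    },
    body: JSON.stringify(body),
  };
}

const init = requestInit("sk-example", {
  model: "skytells/truefusion",
  input: { prompt: "A futuristic cityscape at sunset" },
});
console.log(init.headers["x-api-key"]); // sk-example
```

The same `init` object works with `fetch("https://api.skytells.ai/v1/predictions", init)` or the Edge API equivalent.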

Rate Limits

Rate limits apply equally across both endpoints. Current limits are returned in response headers:

Header                  Description
X-RateLimit-Limit       Max requests in the current window
X-RateLimit-Remaining   Requests remaining
X-RateLimit-Reset       Unix timestamp when the limit resets

If you hit the limit, you'll receive a 429 Too Many Requests response. Back off and retry after the reset time.
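The reset timestamp tells you exactly how long to back off. A small sketch of the wait calculation — the header name is from the table above, but the fallback delay value is our own assumption:

```typescript
// Given the X-RateLimit-Reset header from a 429 response (a Unix timestamp
// in seconds) and the current time in milliseconds, compute how long to
// wait before retrying.
function retryDelayMs(resetHeader: string | null, nowMs: number): number {
  if (resetHeader === null) return 1000; // header missing: assume a fixed 1s backoff
  const resetMs = Number(resetHeader) * 1000; // seconds -> milliseconds
  return Math.max(resetMs - nowMs, 0); // already past the reset: retry immediately
}

// Example: the window resets 5 seconds from "now".
const now = 1_700_000_000_000;
console.log(retryDelayMs("1700000005", now)); // 5000
console.log(retryDelayMs(null, now));         // 1000
```

Sleep for `retryDelayMs(...)` milliseconds, then reissue the request; if it fails again, repeat with the new reset value.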
