Endpoints
Overview of the Skytells Standard API and Edge API gateways.
API Endpoints
Skytells provides two API endpoints.
- Edge API
- Standard API
Both endpoints accept the same authentication (x-api-key header) and return the same response formats.
The difference between the two endpoints is in routing, latency, and availability.
- The Standard API is served on a global distributed network with points of presence worldwide.
- The Edge API is served on an edge-optimized network with points of presence in major cities around the world.
The Standard API (https://api.skytells.ai/v1) is the default base URL for Skytells API v1. Because Skytells is committed to providing the best possible latency for all users, we may reroute requests to the Edge API for some inference tasks; see Inference API for details.
Quick Comparison
| | Standard API | Edge API |
|---|---|---|
| Base URL | https://api.skytells.ai/v1 | https://edge.skytells.ai/v1 |
| Audience | All users | Enterprise & edge deployments |
| Network | Global distributed network | Edge nodes, nearest-point routing |
| Model support | All models and services | Select models optimized for edge |
| Latency | Low (global PoPs) | Ultra-low (edge-optimized) |
Both gateways share the same authentication, request format, and response schema — switching between them requires only changing the base URL. However, model availability differs between endpoints. The Standard API provides access to the full model catalog, while the Edge API is limited to models that have been specifically optimized for edge deployment.
Before routing traffic to the Edge API, verify that your target model supports edge deployment. Sending requests for unsupported models to edge.skytells.ai will return a 404 Not Found error. Check the model catalog for edge-compatible models.
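This check can be sketched as a small routing helper. The base URLs come from this page; the helper name and the set of edge-capable models are placeholders to fill in from the model catalog, not part of any Skytells SDK:

```python
STANDARD_BASE = "https://api.skytells.ai/v1"
EDGE_BASE = "https://edge.skytells.ai/v1"

# Placeholder set: consult the model catalog for models marked edge-compatible.
EDGE_MODELS = {"example-edge-model"}

def base_url_for(model: str, prefer_edge: bool = True) -> str:
    """Route to the Edge API only for models known to support edge deployment,
    avoiding 404 Not Found responses from edge.skytells.ai."""
    if prefer_edge and model in EDGE_MODELS:
        return EDGE_BASE
    return STANDARD_BASE
```

Because both gateways share the same request and response formats, the base URL is the only thing this helper needs to decide.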
Authentication Headers
| Header | Required | Description |
|---|---|---|
| `x-api-key` | Yes | Your Skytells API key |
| `Content-Type` | Yes (POST) | `application/json` for all POST requests |
See Authentication for details on how to obtain an API key and authenticate your requests.
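The two required headers from the table above can be built with a few lines; the helper name here is illustrative, not part of any Skytells SDK:

```python
def skytells_headers(api_key: str) -> dict:
    """Build the headers both gateways expect."""
    return {
        "x-api-key": api_key,                # required on every request
        "Content-Type": "application/json",  # required on all POST requests
    }
```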
Standard API
`https://api.skytells.ai/v1` is the default base URL for Skytells API v1.
The api.skytells.ai endpoint serves all of our models and services and is available globally. Most integrations should start here.
Services such as Predictions and Inference are served from this endpoint.
While our Standard API is designed to provide the best possible latency for all users, requests for low-latency, less compute-intensive inference are automatically rerouted to the Edge API free of charge; see Inference API for details.
Example Requests
Generating a simple image using the TrueFusion model.
```bash
curl https://api.skytells.ai/v1/predictions \
  -H "x-api-key: $SKYTELLS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "truefusion",
    "input": {
      "prompt": "A futuristic cityscape at sunset"
    },
    "wait": true
  }'
```

You may use the `wait=true` parameter to wait for the prediction to complete.
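The same Predictions request can be sketched with Python's standard library. The request object is built but not sent, so the example runs offline; pass it to `urllib.request.urlopen` with a real key to execute it:

```python
import json
import urllib.request

payload = {
    "model": "truefusion",
    "input": {"prompt": "A futuristic cityscape at sunset"},
    "wait": True,  # wait for the prediction to complete
}
req = urllib.request.Request(
    "https://api.skytells.ai/v1/predictions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"x-api-key": "YOUR_API_KEY", "Content-Type": "application/json"},
    method="POST",
)
```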
```typescript
import Skytells from 'skytells';

const client = Skytells(process.env.SKYTELLS_API_KEY);

const prediction = await client.predict({
  model: 'truefusion',
  input: { prompt: 'A futuristic cityscape at sunset' },
});
```

For chat completion requests, the Standard API may automatically route to Edge for low-latency inference. See Inference API for details.
Edge API
The Edge API (`https://edge.skytells.ai/v1`) is designed for enterprise customers and latency-sensitive use cases. Requests are routed to the nearest edge node, minimizing round-trip time.
Use this endpoint when:
- You need low-latency inference (real-time apps, interactive UIs)
- You're deploying in edge environments (CDN workers, edge functions)
- Your organization requires dedicated routing for compliance or performance
Not all models are available on the Edge API. Models and services that support edge deployment are marked in the model catalog. If a model isn't available on edge, use the Standard API instead.
OpenAI-compatible Endpoints
Skytells provides two API gateways (Standard and Edge) that conform to the OpenAI API schema. Both support three inference paths:
- Chat Completions: `POST /v1/chat/completions` — see Chat Completions
- Responses: `POST /v1/responses` — see Responses
- Embeddings: `POST /v1/embeddings` — see Embeddings
Both gateways are compatible with most OpenAI-style SDKs (including the OpenAI SDK and Vercel’s AI SDK) by setting base_url to either https://api.skytells.ai/v1 or https://edge.skytells.ai/v1.
For the full inference overview (models, auth, streaming), see Inference API.
Example Request
```bash
curl https://api.skytells.ai/v1/chat/completions \
  -H "x-api-key: $SKYTELLS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepbrain-router",
    "messages": [
      { "role": "user", "content": "Write a one-sentence tagline for Skytells." }
    ]
  }'
```

You may use the `stream=true` parameter to enable streaming of the chat completion.
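When streaming is enabled, OpenAI-compatible endpoints deliver the response as server-sent events: one `data: {...}` line per chunk, terminated by `data: [DONE]`. A minimal offline sketch of assembling the text from such a stream (the sample chunk lines below are illustrative, not captured output):

```python
import json

# Illustrative SSE lines in the OpenAI-compatible streaming format.
sample_stream = [
    'data: {"choices": [{"delta": {"content": "Sky"}}]}',
    'data: {"choices": [{"delta": {"content": "tells"}}]}',
    "data: [DONE]",
]

def collect_text(lines):
    """Concatenate delta content from SSE chunk lines until [DONE]."""
    out = []
    for line in lines:
        data = line.removeprefix("data: ")
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        out.append(chunk["choices"][0]["delta"].get("content", ""))
    return "".join(out)
```

Most OpenAI-style SDKs do this parsing for you and expose the chunks as an iterator.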
OpenAI Client Example
```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.skytells.ai/v1",
    api_key=os.environ["SKYTELLS_API_KEY"],
)

response = client.chat.completions.create(
    model="deepbrain-router",
    messages=[{"role": "user", "content": "Write a one-sentence tagline for Skytells."}],
)
print(response.choices[0].message.content)
```

Rate Limits
Rate limits apply equally across both endpoints. For headers, handling strategies, and best practices, see Rate Limits.
If you use the TypeScript SDK, routing is handled automatically — the SDK selects the right endpoint based on the model and service. See SDKs for details.