# APIs
Learn about the Skytells Standard API and Edge API — two endpoints for different workloads and latency requirements.
## API Endpoints
Skytells provides two API gateways. Both accept the same authentication (x-api-key header) and return the same response formats — the difference is in routing, latency, and availability.
| | Standard API | Edge API |
|---|---|---|
| Base URL | https://api.skytells.ai/v1 | https://edge.skytells.ai |
| Audience | General-purpose, all users | Enterprise & edge deployments |
| Network | Global distributed network | Edge nodes, nearest-point routing |
| Model support | All models and services | Select models optimized for edge |
| Latency | Low (global PoPs) | Ultra-low (edge-optimized) |
## Standard API

Base URL: https://api.skytells.ai/v1

The Standard API is the primary gateway for all Skytells services. It supports every model and endpoint on the platform and is served on a global distributed network with points of presence worldwide — ensuring low latency and high availability regardless of where your users are.
Use this endpoint for:
- General-purpose inference (text, image, audio, video)
- Production workloads at scale
- Accessing the full catalog of models
- Global applications that need reliable, low-latency access from any region
### Example Request
```bash
curl https://api.skytells.ai/v1/predictions \
  -H "x-api-key: $SKYTELLS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "skytells/truefusion",
    "input": {
      "prompt": "A futuristic cityscape at sunset"
    }
  }'
```

```typescript
import { createClient } from 'skytells';

const client = createClient(process.env.SKYTELLS_API_KEY);

const prediction = await client.predict({
  model: 'skytells/truefusion',
  input: { prompt: 'A futuristic cityscape at sunset' },
});
```

## Edge API
Base URL: https://edge.skytells.ai

The Edge API is designed for enterprise customers and latency-sensitive use cases. Requests are routed to the nearest edge node, minimizing round-trip time.
Use this endpoint when:
- You need low-latency inference (real-time apps, interactive UIs)
- You're deploying in edge environments (CDN workers, edge functions)
- Your organization requires dedicated routing for compliance or performance
Not all models are available on the Edge API. Models and services that support edge deployment are marked in the model catalog. If a model isn't available on edge, use the Standard API instead.
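Since edge availability varies by model, request code often needs to choose a base URL up front. The sketch below shows one way to encode that choice; the set of edge-capable model IDs here is hypothetical — in practice it should come from the model catalog.

```typescript
// Base URLs from the endpoint table above.
const STANDARD_URL = "https://api.skytells.ai/v1";
const EDGE_URL = "https://edge.skytells.ai";

// Route a model to the Edge API only when it is marked edge-capable;
// everything else falls back to the Standard API.
function resolveBaseUrl(model: string, edgeModels: Set<string>): string {
  return edgeModels.has(model) ? EDGE_URL : STANDARD_URL;
}

// Illustrative only: assume skytells/truefusion supports edge deployment.
const edgeCapable = new Set(["skytells/truefusion"]);
resolveBaseUrl("skytells/truefusion", edgeCapable); // -> "https://edge.skytells.ai"
resolveBaseUrl("skytells/some-other-model", edgeCapable); // -> "https://api.skytells.ai/v1"
```

Because both APIs accept the same request format, only the base URL needs to change — the headers and body stay identical.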
### Example Request
```bash
curl https://edge.skytells.ai/predictions \
  -H "x-api-key: $SKYTELLS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "skytells/truefusion",
    "input": {
      "prompt": "A futuristic cityscape at sunset"
    }
  }'
```

## Which API Should I Use?
**Use the Standard API if...**
You want access to every model, don't have strict latency requirements, or are just getting started.
**Use the Edge API if...**
You need the lowest possible latency, are building real-time applications, or have enterprise edge infrastructure.
If you use the TypeScript SDK, routing is handled automatically — the SDK selects the right API endpoint based on the model and service you're calling. See the SDKs page for details.
## Common Headers
Both APIs share the same request format:
| Header | Required | Description |
|---|---|---|
| `x-api-key` | Yes | Your Skytells API key |
| `Content-Type` | Yes | `application/json` for all POST requests |
## Rate Limits
Rate limits apply equally across both endpoints. Current limits are returned in response headers:
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Max requests in the current window |
| `X-RateLimit-Remaining` | Requests remaining |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets |
If you hit the limit, you'll receive a 429 Too Many Requests response. Back off and retry after the reset time.
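The backoff step can be derived directly from the X-RateLimit-Reset header: since it is a Unix timestamp in seconds, the wait is the gap between that moment and now. A minimal sketch (the `retryDelayMs` helper and the commented fetch loop are illustrative, not part of the SDK):

```typescript
// Compute how long to wait before retrying after a 429 response.
// X-RateLimit-Reset is a Unix timestamp in seconds (per the table above);
// clamp to zero so a reset time in the past never yields a negative delay.
function retryDelayMs(resetUnixSeconds: number, nowMs: number = Date.now()): number {
  return Math.max(0, resetUnixSeconds * 1000 - nowMs);
}

// Illustrative use inside a raw-fetch request loop:
// if (res.status === 429) {
//   const reset = Number(res.headers.get("X-RateLimit-Reset"));
//   await new Promise((resolve) => setTimeout(resolve, retryDelayMs(reset)));
//   // ...retry the request
// }
```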