TypeScript SDK Deep Dive
Master the Skytells TypeScript/JavaScript SDK — fully typed, tree-shakable, and ready for Node.js, Bun, Deno, Cloudflare Workers, and Vercel Edge.
What you'll be able to do after this module
Write type-safe TypeScript code that integrates Skytells into any runtime — Next.js, Express, Cloudflare Workers, or a plain Node.js script. Handle errors, run parallel generations, stream chat responses, and deploy to the Edge.
Installation
npm install @skytells/sdk
Your first prediction
import Skytells from '@skytells/sdk';
const client = Skytells(process.env.SKYTELLS_API_KEY);
const prediction = await client.predictions.create({
model: 'truefusion-pro',
input: {
prompt: 'A futuristic city skyline at night, neon lights, rain',
width: 1024,
height: 1024,
},
});
console.log(prediction.output); // string[] — image URLs
The SDK automatically polls until status === 'succeeded' by default. You don't need to write polling loops for standard use cases.
Client configuration
import Skytells from '@skytells/sdk';
const client = Skytells(process.env.SKYTELLS_API_KEY, {
baseUrl: 'https://api.skytells.ai/v1', // default
timeout: 120_000, // milliseconds (default: 60s)
maxRetries: 3, // auto-retry on 5xx errors
fetch: globalThis.fetch, // override fetch (optional, for testing)
});
For the Edge API, set baseUrl: 'https://edge.skytells.ai/v1'. Edge is available on Business and Enterprise plans for supported models (truefusion-edge, flux-1-edge) only.
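Since only some models are Edge-routable, it can help to derive the base URL from the model name instead of hard-coding it. A minimal sketch based on the Edge note above; the helper name baseUrlFor and the model set are illustrative, not part of the SDK:

```typescript
// Models listed above as Edge-routable (illustrative list).
const EDGE_MODELS = new Set(['truefusion-edge', 'flux-1-edge']);

// Pick the Edge gateway for Edge-capable models, the standard API otherwise.
function baseUrlFor(model: string): string {
  return EDGE_MODELS.has(model)
    ? 'https://edge.skytells.ai/v1'
    : 'https://api.skytells.ai/v1';
}
```

You would then construct the client with Skytells(key, { baseUrl: baseUrlFor(model) }).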
Full TypeScript types
The SDK ships complete types for all inputs, outputs, and error objects:
import type { Prediction, Model, PredictionStatus } from '@skytells/sdk';
function logPrediction(prediction: Prediction): void {
const { id, status, model } = prediction;
console.log(`${id} [${status}] — ${model}`);
if (status === 'succeeded') {
prediction.output?.forEach((url: string) => {
console.log(' →', url);
});
}
if (status === 'failed') {
console.error(' ✗', prediction.error);
}
}
Prediction patterns
Default — wait for completion
// Blocks until succeeded or throws on failure
const prediction = await client.predictions.create({
model: 'truefusion-pro',
input: { prompt: 'A mountain at sunrise', width: 1024, height: 1024 },
});
// prediction.status is always 'succeeded' here (or an exception was thrown)
const imageUrl = prediction.output![0];
Non-blocking — use with webhooks
// Returns immediately with status: 'queued' or 'processing'
const pending = await client.predictions.create({
model: 'truefusion-video-pro',
input: { prompt: 'Ocean waves at sunset', duration_seconds: 10 },
webhook: 'https://yourapp.com/api/webhooks/skytells',
webhookEventsFilter: ['completed'],
wait: false,
});
console.log('Queued:', pending.id, pending.status);
// Webhook receives the result when done
Manual polling (with progress)
let prediction = await client.predictions.get('pred_abc123');
while (!['succeeded', 'failed', 'canceled'].includes(prediction.status)) {
await new Promise(r => setTimeout(r, 2000));
prediction = await client.predictions.get(prediction.id);
console.log(`[${prediction.status}] ${prediction.id}`);
}
if (prediction.status === 'succeeded') {
console.log('Output:', prediction.output![0]);
} else {
throw new Error(`Failed: ${prediction.error}`);
}
Parallel predictions
Use Promise.all to generate multiple variations concurrently — same wall-clock time as a single request:
// Generate 4 product shots in parallel
const [light, dark, studio, outdoor] = await Promise.all([
client.predictions.create({
model: 'truefusion-pro',
input: { prompt: 'Product on white, soft light', seed: 1 },
}),
client.predictions.create({
model: 'truefusion-pro',
input: { prompt: 'Product on black, dramatic', seed: 2 },
}),
client.predictions.create({
model: 'truefusion-pro',
input: { prompt: 'Product in studio, neutral', seed: 3 },
}),
client.predictions.create({
model: 'truefusion-pro',
input: { prompt: 'Product outdoors, natural light', seed: 4 },
}),
]);
const urls = [light, dark, studio, outdoor].map(p => p.output![0]);
console.log('Generated', urls.length, 'variations');
Performance: 4 parallel predictions with Promise.all complete in roughly the same time as 1 sequential prediction. Always parallelize when you need multiple outputs.
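Note that Promise.all is fail-fast: if any one prediction rejects, the whole batch rejects. When partial results are acceptable, Promise.allSettled keeps whatever succeeded. A sketch using the prediction shape from the examples above; settleBatch is a hypothetical helper, not part of the SDK:

```typescript
// Minimal shape of the predictions we care about here.
interface PredictionLike {
  output?: string[];
}

// Collect first-image URLs from the predictions that succeeded and
// the rejection reasons from those that failed, without aborting the batch.
async function settleBatch(
  tasks: Promise<PredictionLike>[],
): Promise<{ urls: string[]; failures: unknown[] }> {
  const settled = await Promise.allSettled(tasks);
  const urls: string[] = [];
  const failures: unknown[] = [];
  for (const result of settled) {
    if (result.status === 'fulfilled') {
      const first = result.value.output?.[0];
      if (first) urls.push(first);
    } else {
      failures.push(result.reason);
    }
  }
  return { urls, failures };
}
```

The tasks array would be the same client.predictions.create(...) calls shown above; you can render the successful URLs immediately and surface the failures separately.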
Error handling
import Skytells, {
AuthenticationError,
RateLimitError,
InvalidInputError,
APIError,
} from '@skytells/sdk';
async function safeGenerate(prompt: string): Promise<string | null> {
try {
const prediction = await client.predictions.create({
model: 'truefusion-pro',
input: { prompt, width: 1024, height: 1024 },
});
return prediction.output![0];
} catch (err) {
if (err instanceof AuthenticationError) {
console.error('Invalid API key — check SKYTELLS_API_KEY');
return null;
}
if (err instanceof RateLimitError) {
const retryAfter = err.headers['retry-after'] ?? '5';
console.warn(`Rate limited — retry after ${retryAfter}s`);
await new Promise(r => setTimeout(r, parseInt(retryAfter) * 1000));
return safeGenerate(prompt); // retry once
}
if (err instanceof InvalidInputError) {
console.error('Bad input:', err.message);
return null;
}
if (err instanceof APIError) {
console.error(`API error ${err.status}:`, err.message);
}
throw err; // re-throw unexpected errors
}
}
Always re-throw errors you don't explicitly handle. Silently swallowing errors makes debugging very difficult in production.
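The single recursive retry in safeGenerate is fine for a demo; in production, a bounded retry helper with exponential backoff is safer. A generic sketch that can wrap any SDK call; withRetry is a hypothetical helper, not part of the SDK:

```typescript
// Retry `fn` up to `maxAttempts` times with exponential backoff.
// `shouldRetry` decides which errors are transient (e.g. RateLimitError).
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  shouldRetry: (err: unknown) => boolean = () => true,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (!shouldRetry(err) || attempt === maxAttempts - 1) throw err;
      // Wait 500ms, 1s, 2s, ... between attempts
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError; // unreachable, satisfies the type checker
}
```

For example: withRetry(() => client.predictions.create({ ... }), 3, err => err instanceof RateLimitError).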
Streaming chat completions
For chat-type models, use the streaming API to display responses in real time:
const stream = await client.chat.completions.create({
model: 'skytells-chat',
messages: [
{ role: 'system', content: 'You are a helpful AI assistant.' },
{ role: 'user', content: 'Explain how diffusion models work in 3 sentences.' },
],
stream: true,
});
// Stream tokens as they arrive
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content ?? '';
process.stdout.write(content);
}
console.log(); // newline at end
Next.js App Router integration
// app/api/generate/route.ts
import Skytells, { RateLimitError } from '@skytells/sdk';
import { NextRequest } from 'next/server';
const client = Skytells(process.env.SKYTELLS_API_KEY);
export async function POST(req: NextRequest) {
const { prompt } = await req.json();
if (!prompt || typeof prompt !== 'string') {
return Response.json({ error: 'Prompt is required' }, { status: 400 });
}
try {
const prediction = await client.predictions.create({
model: 'truefusion-pro',
input: { prompt, width: 1024, height: 1024 },
});
return Response.json({ output: prediction.output });
} catch (err) {
if (err instanceof RateLimitError) {
return Response.json({ error: 'Rate limited' }, { status: 429 });
}
throw err;
}
}
Edge Runtime support
The SDK works in Cloudflare Workers, Vercel Edge Functions, and Next.js Edge Runtime:
// app/api/preview/route.ts (Edge Runtime)
import Skytells from '@skytells/sdk';
export const runtime = 'edge';
export async function POST(req: Request) {
const { prompt } = await req.json();
const client = Skytells(process.env.SKYTELLS_API_KEY, {
baseUrl: 'https://edge.skytells.ai/v1', // Edge gateway — Business/Enterprise only
});
const prediction = await client.predictions.create({
model: 'truefusion-edge', // Must be an Edge-supported model
input: { prompt, width: 512, height: 512, num_inference_steps: 4 },
});
return Response.json({ output: prediction.output });
}
The Edge API requires a Business or Enterprise plan, and only works with models that explicitly support Edge routing (truefusion-edge, flux-1-edge). Sending a non-Edge model returns 422.
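One way to handle that 422 gracefully is to retry the request against the standard API when the Edge gateway rejects the model. A generic sketch; withFallback is a hypothetical helper, and in practice isFallbackError would check for an APIError with status 422:

```typescript
// Run `primary`; if it fails with an error the predicate recognizes
// (e.g. the Edge gateway's 422 for a non-Edge model), run `fallback`.
async function withFallback<T>(
  primary: () => Promise<T>,
  fallback: () => Promise<T>,
  isFallbackError: (err: unknown) => boolean,
): Promise<T> {
  try {
    return await primary();
  } catch (err) {
    if (isFallbackError(err)) return fallback();
    throw err; // anything else is still a real failure
  }
}
```

Here, primary would create the prediction through the Edge client and fallback through a client pointed at the standard baseUrl.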
Reusable React hook
// hooks/useGenerate.ts
'use client';
import { useState, useCallback } from 'react';
interface GenerateState {
loading: boolean;
output: string[];
error: string | null;
}
export function useGenerate() {
const [state, setState] = useState<GenerateState>({
loading: false,
output: [],
error: null,
});
const generate = useCallback(async (prompt: string) => {
setState({ loading: true, output: [], error: null });
try {
const res = await fetch('/api/generate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ prompt }),
});
if (!res.ok) {
const err = await res.json();
throw new Error(err.error ?? `HTTP ${res.status}`);
}
const data = await res.json();
setState({ loading: false, output: data.output ?? [], error: null });
} catch (err) {
setState({
loading: false,
output: [],
error: err instanceof Error ? err.message : 'Generation failed',
});
}
}, []);
return { ...state, generate };
}
Managing predictions
// Fetch a specific prediction
const prediction = await client.predictions.get('pred_abc123');
// List recent predictions (useful for dashboards + audits)
const recent = await client.predictions.list({ limit: 20, status: 'succeeded' });
recent.forEach(p => {
console.log(`${p.id} ${p.status} (${p.metrics?.billing?.credits_used ?? '?'} credits)`);
});
// Cancel — user navigated away
await client.predictions.cancel('pred_abc123');
// Delete — GDPR, data cleanup
await client.predictions.delete('pred_abc123');
Summary
You now have a production-ready TypeScript/JavaScript integration that works in any runtime.
Key things to remember:
- Auto-polling — create() returns a completed prediction by default
- wait: false — for webhooks and fire-and-forget patterns
- Promise.all — parallel generation at no extra cost in latency
- Full types — everything is typed, including errors and model responses
- Any runtime — Node.js, Bun, Deno, Cloudflare Workers, Vercel Edge
Next: migrate an existing OpenAI app to Skytells in under 5 minutes.
Python SDK Deep Dive
Master the Skytells Python SDK — sync and async predictions, typed errors, concurrent generation, webhooks, and production patterns.
OpenAI Migration Guide
Migrate an existing OpenAI application to Skytells. Often just two lines of code — then unlock 27+ additional models on a unified API.