TypeScript SDK

Reliability

Timeouts, retries, polling, AbortSignal, edge/serverless, streaming reliability, and custom fetch.

This page covers timeout configuration, retry logic, polling options for wait(), AbortSignal support, edge/serverless considerations, streaming reliability, and custom fetch injection.

Timeouts

Every request uses an AbortController-based timeout that is always cleared in a finally block — no timer leaks in serverless or edge environments.
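That pattern can be sketched as follows. This is an illustrative sketch, not the SDK's actual source; withTimeout and run are hypothetical names:

```typescript
// Illustrative sketch of the AbortController-based timeout described above.
// `withTimeout` and `run` are hypothetical names, not SDK API.
async function withTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  timeoutMs: number,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // The request observes controller.signal and rejects once aborted.
    return await run(controller.signal);
  } finally {
    clearTimeout(timer); // always cleared: no timer leaks on edge/serverless
  }
}
```

Because the clearTimeout sits in a finally block, it runs whether the request resolves, rejects, or is aborted.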

Context                          Default timeout
Default / Node / Browser         60 000 ms (60 s)
Edge runtime (runtime: 'edge')   25 000 ms (25 s)

const client = Skytells(apiKey, {
  timeout: 30_000, // 30 seconds
});

When the timeout fires, the SDK throws with errorId: 'REQUEST_TIMEOUT' and httpStatus: 408. The SDK caps timeouts at 2_147_483_647 ms (the 32-bit signed maximum) to prevent setTimeout overflow bugs.

Retries

Retries apply only to non-streaming requests. Streaming calls (requestStream(), requestNdjsonStream()) are never retried.

By default, retries: 0 — no automatic retries.

The SDK uses linear backoff: delay = retryDelay × (attempt + 1).

Attempt      Delay (retryDelay = 1000)
1st retry    1 000 ms
2nd retry    2 000 ms
3rd retry    3 000 ms
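The schedule above reduces to a one-line helper. This is a sketch of the documented formula; backoffDelay is a hypothetical name, not SDK API:

```typescript
// Linear backoff as documented: delay = retryDelay × (attempt + 1).
// `backoffDelay` is a hypothetical helper, not part of the SDK.
function backoffDelay(retryDelay: number, attempt: number): number {
  return retryDelay * (attempt + 1); // attempt is zero-based
}
```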

const client = Skytells(apiKey, {
  retry: {
    retries: 3,
    retryDelay: 1000,
    retryOn: [429, 500, 502, 503, 504],
  },
});

wait() Polling

client.wait(prediction, options?) polls GET /predictions/{id} until the prediction reaches a terminal status (succeeded, failed, cancelled).

interval (number): Poll interval in ms. Default 5 000 ms.
maxWait (number): Total wait timeout in ms. Throws WAIT_TIMEOUT if exceeded.
signal (AbortSignal): Aborts polling. Throws ABORTED.

const pending = await client.predictions.create({
  model: 'flux-pro',
  input: { prompt: '...' },
});

const result = await client.wait(pending, {
  interval: 2000,   // poll every 2s
  maxWait: 120_000, // give up after 2 min
});
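Under the hood, wait() amounts to a poll-until-terminal loop. A minimal sketch, assuming a hypothetical fetchStatus callback standing in for GET /predictions/{id}:

```typescript
// Conceptual sketch of wait()'s polling loop (not the SDK implementation).
// `fetchStatus` is a hypothetical stand-in for GET /predictions/{id}.
const TERMINAL = new Set(['succeeded', 'failed', 'cancelled']);

async function pollUntilTerminal<T extends { status: string }>(
  fetchStatus: () => Promise<T>,
  interval: number,
  maxWait?: number,
): Promise<T> {
  const start = Date.now();
  for (;;) {
    const prediction = await fetchStatus();
    if (TERMINAL.has(prediction.status)) return prediction;
    if (maxWait !== undefined && Date.now() - start >= maxWait) {
      throw new Error('WAIT_TIMEOUT'); // the SDK throws a typed error here
    }
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
}
```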

AbortSignal

Pass an AbortSignal to stop polling immediately. Throws SkytellsError('ABORTED').

const controller = new AbortController();
setTimeout(() => controller.abort(), 10_000);

try {
  const result = await client.wait(pending, {
    signal: controller.signal,
  });
} catch (e) {
  if (e instanceof SkytellsError && e.errorId === 'ABORTED') {
    console.log('Cancelled — prediction may still run server-side');
  }
}

Edge and Serverless

runtime: 'edge'

Edge mode applies:

  1. Shorter default timeout: 25 000 ms (fits within ~30s wall-clock limits)
  2. Smaller compat cache: 16 slug entries (vs 64)
  3. Console hints: Logged once per process

Recommendations

  • Always set maxWait — edge functions have hard wall-clock limits
  • Always pass signal — connect it to the request lifecycle
  • Avoid wait() on long jobs — use predict() + webhooks instead
  • Keep retries low (0–2) — multiple retries can exceed 30s limits

Example: a Next.js edge route.
// app/api/predict/route.ts
import { NextRequest } from 'next/server';
import Skytells from 'skytells';

export const runtime = 'edge'; // run this route on the Edge runtime

const client = Skytells(process.env.SKYTELLS_API_KEY!, {
  runtime: 'edge',
  timeout: 20_000,
  fetch: (url, opts) =>
    fetch(url, { ...opts, cache: 'no-store' }),
});

export async function POST(req: NextRequest) {
  const { prompt } = await req.json();

  const pending = await client.predictions.create({
    model: 'flux-pro',
    input: { prompt },
  });

  const result = await client.wait(pending, {
    maxWait: 15_000,
    signal: req.signal,
  });

  return Response.json({ output: result.output });
}

Streaming Reliability

Streaming calls have specific reliability characteristics:

  • Not retried: If a stream fails mid-way, the SDK will not restart it.
  • Cleanup on abandon: Breaking out of for await...of early still calls reader.cancel() — the response body is released.
  • Timeout applies: The same timeout setting applies. For long generations, increase it.
try {
  for await (const chunk of client.chat.completions.create({
    model: 'deepbrain-router',
    messages: [{ role: 'user', content: '...' }],
    stream: true,
  })) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
  }
} catch (e) {
  if (e instanceof SkytellsError) {
    if (e.errorId === 'REQUEST_TIMEOUT') {
      // Stream took too long — increase timeout
    } else if (e.errorId === 'NETWORK_ERROR') {
      // Connection dropped — streams are not auto-retried
    }
  }
}
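The cleanup-on-abandon guarantee relies on a standard pattern: wrapping the reader in an async generator whose finally block cancels it. A minimal sketch under that assumption (illustrative, not SDK source; the reader shape is simplified):

```typescript
// Sketch: breaking out of for await...of resumes the generator with a
// return completion, so the finally block runs and cancels the reader.
async function* chunks<T>(reader: {
  read(): Promise<{ done: boolean; value?: T }>;
  cancel(): Promise<void>;
}): AsyncGenerator<T> {
  try {
    for (;;) {
      const { done, value } = await reader.read();
      if (done) return;
      yield value as T;
    }
  } finally {
    await reader.cancel(); // runs even on early break or throw
  }
}
```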

Custom Fetch

Inject a custom fetch for proxying, logging, or testing.

Example: routing requests through a proxy.
const client = Skytells(apiKey, {
  fetch: (url, opts) =>
    globalThis.fetch(
      url.toString().replace('api.skytells.ai', 'my-proxy.example.com'),
      opts,
    ),
});
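The same hook works for logging. A sketch that wraps any fetch-compatible function; loggingFetch is a hypothetical name, and its result would be passed as the fetch option the same way as in the proxy example:

```typescript
// Hypothetical logging wrapper around any fetch-compatible function,
// e.g. pass `loggingFetch(globalThis.fetch)` as the client's `fetch` option.
const loggingFetch =
  (inner: (url: any, opts?: any) => Promise<any>) =>
  async (url: any, opts?: any) => {
    const started = Date.now();
    const res = await inner(url, opts);
    console.log(
      `${opts?.method ?? 'GET'} ${url} -> ${res.status} (${Date.now() - started} ms)`,
    );
    return res;
  };
```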

Configuration Reference

Option            Default (Node)             Default (Edge)   Notes
timeout           60 000 ms                  25 000 ms        Per-request client timeout
retry.retries     0                          0                Non-streaming only
retry.retryDelay  1 000 ms                   1 000 ms         Linear: delay × (attempt + 1)
retry.retryOn     [429, 500, 502, 503, 504]  same             Status codes triggering retry
wait.interval     5 000 ms                   5 000 ms         Poll frequency
wait.maxWait      undefined                  set explicitly!  Total wait budget
Cache TTL         600 000 ms (10 min)        same             Model compat cache TTL
Cache max slugs   64                         16               Model compat cache size
Related

  • Client — Client options including timeout, retry, and runtime
  • Predictions — run() and wait() polling with timeout and abort
  • Errors — REQUEST_TIMEOUT, WAIT_TIMEOUT, ABORTED, NETWORK_ERROR IDs
  • Rate Limits — API rate limiting behavior
  • Webhooks — Alternative to polling for long-running predictions
  • Chat API — Streaming reliability for chat completions
  • Responses API — Streaming reliability for responses
  • Reference: Client types — Full ClientOptions, RetryOptions, WaitOptions definitions
