Python SDK Guide
Official Python SDK for the Skytells AI platform. Run image, video, audio, music, text, and code models with a single function call.
This SDK has zero external dependencies — it uses Python's standard library urllib for HTTP. Requires Python 3.8+.
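To make the zero-dependency claim concrete, here is a sketch of the kind of authenticated JSON request a stdlib-only client can build with `urllib.request`. The endpoint path and payload shape below are assumptions for illustration only, not the SDK's actual wire format.

```python
import json
import urllib.request

def build_request(api_key: str, model: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated JSON POST using only the standard library.

    Illustrative sketch: the endpoint and body shape are assumptions,
    not the SDK's documented wire protocol.
    """
    body = json.dumps({"model": model, "input": payload}).encode("utf-8")
    return urllib.request.Request(
        "https://api.skytells.ai/v1/predictions",  # assumed endpoint
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("sk-demo", "flux-pro", {"prompt": "A cat"})
print(req.get_method())
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) is left out here since it requires network access.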
Installation
Adding to your project
The Skytells Python SDK is distributed on PyPI and can be installed with pip or any compatible package manager. It has zero external dependencies.
After installation, import SkytellsClient and pass your API key. You can get your API key from the Skytells Dashboard.
Install
pip install skytells
Quick Start
Getting started with Skytells
The SDK provides a simple interface for running AI models. The main entry point is SkytellsClient, which gives you access to run(), predictions, models, and more.
The basic workflow:
- Import and initialize the client with your API key
- Call client.run() with a model slug and input
- Read prediction.output or prediction.outputs()
client.run() blocks until the prediction completes and returns a Prediction object.
Basic Usage
from skytells import SkytellsClient
client = SkytellsClient("sk-your-api-key")
# Run a model and get output
prediction = client.run("truefusion", input={"prompt": "An astronaut riding a rainbow unicorn"})
print(prediction.output) # "https://..."
# Clean up when done
prediction.delete()
Basic Usage — env var
import os
from skytells import SkytellsClient
client = SkytellsClient(os.environ["SKYTELLS_API_KEY"])
prediction = client.run("flux-pro", input={"prompt": "A cat wearing sunglasses"})
print(prediction.outputs())
Browse the full model catalog at skytells.ai/explore/models. For input and output schemas per model, see the Predictions API.
Skytells is committed to the responsible and ethical use of AI. All models are subject to our usage policies and comply with applicable laws and regulations. Learn more in Responsible AI.
Authentication
Setting up authentication
To access the Skytells API, you need an API key from the Skytells Dashboard. API keys start with sk- and authenticate all requests.
Never expose your API key in client-side code or commit it to source control. Always use environment variables or a secrets manager.
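Because keys always start with sk-, a startup check can catch a missing or misconfigured environment variable before the first request. The helper below is an illustrative sketch, not part of the SDK; only the sk- prefix check comes from the documented key format.

```python
import os

def load_api_key(env_var: str = "SKYTELLS_API_KEY") -> str:
    """Read the API key from the environment and sanity-check its shape.

    Illustrative helper: the SDK itself accepts the key as a plain
    string; the sk- prefix check is based on the documented key format.
    """
    key = os.environ.get(env_var, "")
    if not key.startswith("sk-"):
        raise RuntimeError(
            f"{env_var} is unset or malformed; expected a key starting with 'sk-'"
        )
    return key
```

Failing fast here gives a clearer error than an authentication failure deep inside a request.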
Authentication
from skytells import SkytellsClient
client = SkytellsClient("sk-your-api-key")
Client configuration
Pass keyword arguments or a ClientOptions object to customize the client's behavior — including custom API URLs, timeouts, headers, and retry logic.
ClientOptions
api_key (str | None)
Your Skytells API key (sk-...). Required for authenticated endpoints.
base_url (str | None)
API base URL. Override for custom endpoints or proxies.
timeout (int)
Request timeout in milliseconds.
headers (dict[str, str] | None)
Extra headers sent with every request.
retry (RetryOptions | dict | None)
Retry configuration (see below).
RetryOptions
retries (int)
Number of retry attempts.
retry_delay (int)
Base delay between retries (ms). Delay increases linearly: attempt 1 waits retry_delay × 1, attempt 2 waits retry_delay × 2, and so on.
retry_on (list[int])
HTTP status codes that trigger a retry.
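The linear delay schedule is easy to reproduce by hand. The helper below is a standalone sketch of the documented schedule (not the SDK's internal code): with retry_delay=1000, successive retries wait 1000 ms, 2000 ms, 3000 ms.

```python
def linear_delays(retries: int, retry_delay: int) -> list[int]:
    """Delay in ms before each retry: retry_delay × attempt number.

    Standalone sketch of the documented linear schedule, not SDK code.
    """
    return [retry_delay * attempt for attempt in range(1, retries + 1)]

print(linear_delays(3, 1000))  # → [1000, 2000, 3000]
```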
Client configuration
from skytells import SkytellsClient
client = SkytellsClient(
    "sk-your-api-key",
    base_url="https://api.skytells.ai/v1",  # optional — override API URL
    timeout=30_000,                         # ms, default: 60_000
    headers={"X-Custom-Header": "value"},   # extra headers on every request
    retry={
        "retries": 3,          # retry attempts (default: 0)
        "retry_delay": 1000,   # ms between retries (default: 1000)
        "retry_on": [429, 500, 502, 503, 504],
    },
)
Framework integration
Using with popular frameworks
The Skytells SDK integrates with all popular Python web frameworks. Keep your API key on the server side and never expose it to clients.
Use AsyncSkytellsClient in async frameworks like FastAPI for the best performance. See the full async reference.
Framework examples
from fastapi import FastAPI
from skytells import AsyncSkytellsClient
app = FastAPI()
client = AsyncSkytellsClient("sk-key")
@app.post("/generate")
async def generate(prompt: str):
    prediction = await client.run("flux-pro", input={"prompt": prompt})
    return {"url": prediction.outputs()}
What's next
- Predictions & workflows — Models API, running predictions, the Prediction object, waiting & polling, queue & dispatch, and streaming.
- Reference — Full API reference: error handling, async usage, configuration types, enums, constructor signatures, and deprecated methods.
- API errors — Complete list of error IDs and HTTP status codes.
- Prediction schema — Request and response shapes for every prediction endpoint.