Swift SDK

The official Skytells SDK for Swift — access Skytells AI services natively from iOS, macOS, tvOS, and watchOS. Built with Swift concurrency (async/await), full Sendable conformance, and zero third-party dependencies.

Keep your SDK up to date

The Skytells Swift SDK is actively maintained. Always use the latest version to get bug fixes, new features, and security patches. Set your dependency rule to Up to Next Major Version so you receive updates automatically.


Requirements

Requirement   Minimum Version
iOS           15.0+
macOS         12.0+
tvOS          15.0+
watchOS       8.0+
Swift         5.9+
Xcode         15.0+

Installation

The SDK is distributed as a Swift Package with zero external dependencies — it uses only Foundation.

  1. In Xcode, go to File → Add Package Dependencies…
  2. Enter the repository URL: https://github.com/skytells/swift-sdk.git
  3. Set the dependency rule to Up to Next Major Version starting from 1.0.0.
  4. Select the Skytells library product and add it to your target.

Add the dependency to your Package.swift:

Package.swift
dependencies: [
    .package(url: "https://github.com/skytells/swift-sdk.git", from: "1.0.0")
]

Then add Skytells to your target's dependencies:

Package.swift
.target(
    name: "YourTarget",
    dependencies: [
        .product(name: "Skytells", package: "swift-sdk")
    ]
)

Always use the latest version

Pin to from: "1.0.0" with Up to Next Major Version to receive all minor and patch updates automatically. Avoid pinning to an exact version unless you have a specific compatibility reason.


Quick Start

The SDK provides two ways to create a client: the SkytellsClient initializer or the Skytells.createClient factory method. Both return an equivalent, thread-safe client.

QuickStart.swift
import Skytells

// Option 1: Direct initializer
let client = SkytellsClient(apiKey: "sk-your-api-key")

// Option 2: Factory method
let client = Skytells.createClient(apiKey: "sk-your-api-key")

// Make a prediction
let prediction = try await client.predict(.init(
    model: "vendor/model-name",
    input: ["prompt": "A golden sunset over the Pacific Ocean"]
))

print("Status: \(prediction.status)")  // e.g. .succeeded
print("ID: \(prediction.id)")

// Access the output
if let url = prediction.firstOutputURL {
    print("Output URL: \(url)")
}

Authentication

To access the Skytells API, you need an API key from your Skytells Dashboard. The key is sent as an x-api-key header on every request.

Never hard-code your API key in source code that ships to end users. Store it securely — for example via a server-side endpoint, Xcode build configurations, or a secrets manager.

Hardcoded key (development only):

import Skytells

let client = SkytellsClient(apiKey: "sk-your-api-key")

Environment variable:

import Foundation
import Skytells

// Read from environment or a configuration file
guard let apiKey = ProcessInfo.processInfo.environment["SKYTELLS_API_KEY"] else {
    fatalError("SKYTELLS_API_KEY not set")
}

let client = SkytellsClient(apiKey: apiKey)

Custom configuration:

import Skytells

let client = SkytellsClient(
    apiKey: "sk-your-api-key",
    options: ClientOptions(
        baseURL: "https://api.skytells.ai/v1",
        timeout: 120  // seconds
    )
)

Client Configuration

The ClientOptions struct lets you customize the client's behavior:

Property   Type            Default                      Description
baseURL    String?         https://api.skytells.ai/v1   Override the API base URL
timeout    TimeInterval?   60 seconds                   Request timeout interval

let options = ClientOptions(
    baseURL: "https://api.skytells.ai/v1",
    timeout: 90
)
let client = SkytellsClient(apiKey: "sk-...", options: options)

The SkytellsClient class is declared as Sendable and final, making it safe to share across actors, tasks, and threads without any additional synchronization.


API Reference

Predictions

Predictions are the core functionality of the Skytells API. You send an input to a model and receive a generated output — text, images, audio, video, or other media.

Create a Prediction

Use predict(_:) to submit a prediction request. The PredictionRequest struct supports flexible input via [String: AnyCodableValue] dictionaries.

let prediction = try await client.predict(.init(
    model: "vendor/model-name",
    input: ["prompt": "A futuristic cityscape at sunset"]
))

PredictionRequest Parameters

Parameter   Type                        Required   Description
model       String                      Yes        The model identifier (e.g. "vendor/model-name")
input       [String: AnyCodableValue]   Yes        Input parameters — varies by model
webhook     PredictionWebhook?          No         Webhook configuration for lifecycle events
await       Bool?                       No         When true, blocks until the prediction completes
stream      Bool?                       No         When true, enables streaming of prediction events

Basic prediction:

let prediction = try await client.predict(.init(
    model: "vendor/model",
    input: ["prompt": "Hello, world!"]
))

print("Prediction ID: \(prediction.id)")
print("Status: \(prediction.status)")

Blocking until completion:

// Block until the prediction completes
let prediction = try await client.predict(.init(
    model: "vendor/model",
    input: ["prompt": "Write a haiku about Swift"],
    await: true
))

// Output is available immediately
if let text = prediction.outputString {
    print(text)
}

With a webhook:

let prediction = try await client.predict(.init(
    model: "vendor/model",
    input: ["prompt": "Generate an image"],
    webhook: PredictionWebhook(
        url: "https://your-server.com/webhook",
        events: ["prediction.succeeded", "prediction.failed"]
    )
))

Prediction Response

The PredictionResponse contains the full prediction lifecycle data:

Property      Type                        Description
id            String                      Unique prediction identifier
status        PredictionStatus            Current status (see below)
type          PredictionType              .inference or .training
model         PredictionModel?            The model name and type
input         [String: AnyCodableValue]?  The input parameters sent
output        AnyCodableValue?            The raw output value
response      String?                     Optional text response
stream        Bool                        Whether streaming is enabled
source        PredictionSource?           .api, .cli, or .web
privacy       String                      Privacy level
createdAt     String                      ISO 8601 creation timestamp
startedAt     String?                     When processing started
completedAt   String?                     When processing completed
updatedAt     String                      Last update timestamp
metrics       PredictionMetrics?          Timing and count metrics
metadata      PredictionMetadata?         Billing and storage info
urls          PredictionURLs?             API action URLs
webhook       PredictionWebhook?          Webhook configuration

Prediction Status

Status        Meaning
.pending      Queued, waiting to start
.starting     Initializing
.started      Running
.processing   Actively generating output
.succeeded    Completed successfully
.failed       Failed with an error
.cancelled    Was cancelled

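
Only the last three states are terminal. When polling, it can help to centralize that check; the extension below is a hypothetical convenience you can define in your own code (it is not part of the SDK), assuming PredictionStatus covers exactly the seven cases listed above:

```swift
import Skytells

// Hypothetical helper, not part of the SDK: true once the
// prediction can no longer change state.
extension PredictionStatus {
    var isTerminal: Bool {
        switch self {
        case .succeeded, .failed, .cancelled:
            return true
        case .pending, .starting, .started, .processing:
            return false
        }
    }
}
```

With this in place, a polling loop reduces to checking prediction.status.isTerminal.
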
Output Convenience Accessors

The PredictionResponse provides typed accessors for common output shapes:

// Array of URL strings (e.g. generated images)
if let urls = prediction.outputURLs {
    for url in urls {
        print("Image: \(url)")
    }
}

// First URL in the output array
if let url = prediction.firstOutputURL {
    print("Primary output: \(url)")
}

// Single string output (e.g. generated text)
if let text = prediction.outputString {
    print("Text: \(text)")
}

// Array of dictionaries
if let objects = prediction.outputObjects {
    for obj in objects {
        print(obj)
    }
}
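
For URL outputs, downloading the media is ordinary Foundation work. A minimal sketch; the temporary-directory destination and the assumption that firstOutputURL is a fetchable HTTPS URL are illustrative choices, not SDK behavior:

```swift
import Foundation
import Skytells

// Download a prediction's first output URL into the temporary directory.
func downloadFirstOutput(of prediction: PredictionResponse) async throws -> URL? {
    guard let urlString = prediction.firstOutputURL,
          let remoteURL = URL(string: urlString) else {
        return nil  // no URL-shaped output on this prediction
    }

    // URLSession's async download API is available on every platform this SDK supports
    let (tempFile, _) = try await URLSession.shared.download(from: remoteURL)
    let destination = FileManager.default.temporaryDirectory
        .appendingPathComponent(remoteURL.lastPathComponent)
    try? FileManager.default.removeItem(at: destination)  // clear any stale copy
    try FileManager.default.moveItem(at: tempFile, to: destination)
    return destination
}
```
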

Get a Prediction

Retrieve a prediction by its ID to check its status or access its output after creation.

let prediction = try await client.getPrediction(id: "prediction-id")

switch prediction.status {
case .succeeded:
    print("Output: \(prediction.outputString ?? "N/A")")
case .processing, .pending, .starting, .started:
    print("Still in progress…")
case .failed:
    print("Prediction failed")
case .cancelled:
    print("Prediction was cancelled")
}
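
For predictions created without await: true, the status check above extends naturally into a polling loop. This is a sketch built only on the documented getPrediction(id:); the two-second interval and attempt cap are arbitrary choices:

```swift
import Skytells

// Poll a prediction until it reaches a terminal state, or give up.
func waitForCompletion(
    client: SkytellsClient,
    id: String,
    maxAttempts: Int = 60
) async throws -> PredictionResponse {
    for _ in 0..<maxAttempts {
        let prediction = try await client.getPrediction(id: id)
        switch prediction.status {
        case .succeeded, .failed, .cancelled:
            return prediction  // terminal: the status will not change again
        default:
            try await Task.sleep(for: .seconds(2))  // wait before the next poll
        }
    }
    // Still running after maxAttempts polls; return the latest snapshot
    return try await client.getPrediction(id: id)
}
```
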

Stream a Prediction

Retrieve a prediction with stream metadata. Useful for predictions that were created with stream: true.

let prediction = try await client.streamPrediction(id: "prediction-id")

Cancel a Prediction

Cancel a running prediction that hasn't completed yet:

let cancelled = try await client.cancelPrediction(id: "prediction-id")
print("Cancelled: \(cancelled.status)")  // .cancelled

Once a prediction is cancelled, it cannot be resumed. You will need to create a new prediction.


Delete a Prediction

Permanently remove a prediction and its associated data:

let deleted = try await client.deletePrediction(id: "prediction-id")

Deleted predictions cannot be recovered. Save any important outputs before deletion.


Models

List Models

Retrieve all models available on the Skytells platform, including their capabilities, pricing, and vendor information:

let models = try await client.listModels()

for model in models {
    print("\(model.namespace)/\(model.name): \(model.description ?? "")")
    print("  Type: \(model.type)")            // .image, .text, .audio, etc.
    print("  Privacy: \(model.privacy)")       // .public or .private
    print("  Status: \(model.status)")
    print("  Capabilities: \(model.capabilities)")

    if let pricing = model.pricing {
        print("  Price: \(pricing.amount) \(pricing.currency)/\(pricing.unit)")
    }
}

Model Properties

Property       Type           Description
name           String         Model name
description    String?        Human-readable description
namespace      String         Vendor namespace
type           ModelType      .image, .text, .audio, .video, or .music
privacy        ModelPrivacy   .public or .private
imgURL         String?        Model thumbnail URL
vendor         Vendor         Vendor info (name, slug, verified status)
billable       Bool?          Whether predictions are billed
pricing        Pricing?       Price per unit
capabilities   [String]       List of supported capabilities
status         String         Current status
service        Service?       Service type and inference party
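
Because listModels() returns plain value types, ordinary Swift collection operations work for model discovery. For example, filtering to public image models; this particular filter is illustrative, with property names following the table above:

```swift
import Skytells

// Print every public image model together with its price, if any.
func printPublicImageModels(client: SkytellsClient) async throws {
    let models = try await client.listModels()

    let imageModels = models.filter { $0.type == .image && $0.privacy == .public }
    for model in imageModels {
        let price = model.pricing.map { "\($0.amount) \($0.currency)/\($0.unit)" } ?? "unpriced"
        print("\(model.namespace)/\(model.name): \(price)")
    }
}
```
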

Error Handling

All API errors are thrown as SkytellsError, a Sendable struct that conforms to Error and CustomStringConvertible.

Property     Type     Description
errorId      String   Machine-readable error code
message      String   Human-readable error message
details      String   Additional context
httpStatus   Int      HTTP status code (0 for network errors)

Error Codes

Error ID           Meaning
INVALID_URL        Could not construct a valid URL
NETWORK_ERROR      Network connectivity issue
REQUEST_TIMEOUT    Request exceeded the timeout interval
INVALID_RESPONSE   Server returned a non-HTTP response
SERVER_ERROR       Server responded with non-JSON content
DECODE_ERROR       Response could not be decoded
HTTP_ERROR         Server returned an HTTP error status
API_ERROR          Application-level error from the API

Basic error handling:

do {
    let prediction = try await client.predict(.init(
        model: "vendor/model",
        input: ["prompt": "test"]
    ))
    print(prediction.id)
} catch let error as SkytellsError {
    print("Error: \(error.message)")
} catch {
    print("Unexpected error: \(error)")
}

Matching specific error codes:

do {
    let prediction = try await client.predict(.init(
        model: "vendor/model",
        input: ["prompt": "test"]
    ))
    print(prediction.id)
} catch let error as SkytellsError {
    switch error.errorId {
    case "REQUEST_TIMEOUT":
        print("Request timed out — try increasing the timeout")
    case "NETWORK_ERROR":
        print("No network — check your connection")
    case "HTTP_ERROR" where error.httpStatus == 401:
        print("Invalid API key")
    case "HTTP_ERROR" where error.httpStatus == 429:
        print("Rate limited — slow down requests")
    default:
        print("API Error [\(error.errorId)]: \(error.message)")
        print("Details: \(error.details)")
        print("HTTP Status: \(error.httpStatus)")
    }
}

Retry with backoff for transient errors:

func predictWithRetry(
    client: SkytellsClient,
    request: PredictionRequest,
    maxRetries: Int = 3
) async throws -> PredictionResponse {
    var lastError: Error?

    for attempt in 1...maxRetries {
        do {
            return try await client.predict(request)
        } catch let error as SkytellsError
            where error.errorId == "REQUEST_TIMEOUT"
               || error.httpStatus == 429
               || error.httpStatus >= 500 {
            lastError = error
            let delay = Double(1 << attempt)  // exponential backoff: 2s, 4s, 8s
            try await Task.sleep(for: .seconds(delay))
        }
    }

    throw lastError!
}
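
The helper above drops in wherever predict(_:) would be called; errors outside the retryable set still propagate immediately through the catch filter. A usage sketch, assuming an existing client:

```swift
let prediction = try await predictWithRetry(
    client: client,
    request: .init(
        model: "vendor/model",
        input: ["prompt": "A watercolor mountain range"]
    )
)
print("Finished with status: \(prediction.status)")
```
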

Platform Integration

SwiftUI

ContentView.swift
import SwiftUI
import Skytells

struct ContentView: View {
    @State private var prompt = ""
    @State private var output = ""
    @State private var isLoading = false
    @State private var errorMessage: String?

    private let client = SkytellsClient(apiKey: "sk-your-api-key")

    var body: some View {
        VStack(spacing: 16) {
            TextField("Enter a prompt…", text: $prompt)
                .textFieldStyle(.roundedBorder)

            Button {
                Task { await generate() }
            } label: {
                if isLoading {
                    ProgressView()
                } else {
                    Text("Generate")
                }
            }
            .disabled(prompt.isEmpty || isLoading)

            if let errorMessage {
                Text(errorMessage)
                    .foregroundStyle(.red)
                    .font(.caption)
            }

            if !output.isEmpty {
                Text(output)
                    .padding()
                    .background(.gray.opacity(0.1), in: .rect(cornerRadius: 8))
            }
        }
        .padding()
    }

    private func generate() async {
        isLoading = true
        errorMessage = nil

        do {
            let prediction = try await client.predict(.init(
                model: "vendor/text-model",
                input: ["prompt": .string(prompt)],
                await: true
            ))
            output = prediction.outputString ?? "No output"
        } catch let error as SkytellsError {
            errorMessage = error.message
        } catch {
            errorMessage = error.localizedDescription
        }

        isLoading = false
    }
}

UIKit

ViewController.swift
import UIKit
import Skytells

class PredictionViewController: UIViewController {
    private let client = SkytellsClient(apiKey: "sk-your-api-key")
    private let outputLabel = UILabel()
    private let activityIndicator = UIActivityIndicatorView(style: .large)

    override func viewDidLoad() {
        super.viewDidLoad()

        Task {
            activityIndicator.startAnimating()
            defer { activityIndicator.stopAnimating() }

            do {
                let prediction = try await client.predict(.init(
                    model: "vendor/model",
                    input: ["prompt": "Describe Swift in one sentence"],
                    await: true
                ))
                outputLabel.text = prediction.outputString
            } catch let error as SkytellsError {
                outputLabel.text = "Error: \(error.message)"
            }
        }
    }
}

Server-Side Swift (Vapor)

routes.swift
import Vapor
import Skytells

func routes(_ app: Application) throws {
    // Environment.get returns String?, so resolve the key before building the client
    guard let apiKey = Environment.get("SKYTELLS_API_KEY") else {
        fatalError("SKYTELLS_API_KEY not set")
    }
    let client = SkytellsClient(apiKey: apiKey)

    app.post("predict") { req async throws -> Response in
        struct PredictBody: Content {
            let model: String
            let prompt: String
        }
        let body = try req.content.decode(PredictBody.self)

        let prediction = try await client.predict(.init(
            model: body.model,
            input: ["prompt": .string(body.prompt)],
            await: true
        ))

        return try await prediction.encodeResponse(for: req)
    }
}

// PredictionResponse is Codable; conforming it to Vapor's Content protocol
// lets encodeResponse(for:) serialize it as a JSON response.
extension PredictionResponse: Content {}

Concurrency & Thread Safety

The SkytellsClient is declared as final class ... : Sendable, meaning it is safe to:

  • Share across multiple Task instances
  • Pass between actors
  • Use from @MainActor and background contexts simultaneously

actor PredictionManager {
    private let client: SkytellsClient

    init(apiKey: String) {
        self.client = SkytellsClient(apiKey: apiKey)
    }

    func runBatch(prompts: [String], model: String) async throws -> [PredictionResponse] {
        try await withThrowingTaskGroup(of: PredictionResponse.self) { group in
            for prompt in prompts {
                group.addTask {
                    try await self.client.predict(.init(
                        model: model,
                        input: ["prompt": .string(prompt)],
                        await: true
                    ))
                }
            }
            var results: [PredictionResponse] = []
            for try await result in group {
                results.append(result)
            }
            return results
        }
    }
}

Metrics & Billing

Every PredictionResponse includes optional metrics and metadata for tracking performance and cost:

if let metrics = prediction.metrics {
    print("Predict time: \(metrics.predictTime ?? 0)s")
    print("Total time: \(metrics.totalTime ?? 0)s")
    print("Images generated: \(metrics.imageCount ?? 0)")
}

if let billing = prediction.metadata?.billing {
    print("Credits used: \(billing.creditsUsed ?? 0)")
}

if let files = prediction.metadata?.storage?.files {
    for file in files {
        print("\(file.name) (\(file.type), \(file.size) bytes): \(file.url)")
    }
}

SDK Version

You can check the current SDK version at runtime:

print("Skytells Swift SDK v\(Skytells.version)")  // "1.0.0"
print("API Base URL: \(Skytells.apiBaseURL)")      // "https://api.skytells.ai/v1"

Keeping the SDK Updated

Stay current

The Skytells Swift SDK receives regular updates with new features, performance improvements, and security fixes. We strongly recommend keeping your dependency on the latest version.

To update in Xcode:

  1. Go to File → Packages → Update to Latest Package Versions
  2. Xcode will fetch the newest compatible version automatically

To update via SPM CLI:

swift package update

Tip: Use the from: version requirement (e.g. from: "1.0.0") instead of pinning to an exact version — this way you automatically receive all compatible updates.

