Frameworks
Policies, guidelines, and how Skytells operationalizes responsible AI.
How We Operationalize Responsible AI
Skytells turns its commitment to responsible AI into concrete policies, guidelines, and processes. This page summarizes the frameworks we use and where to find more detail.
Policy and Guidelines
| Element | Description |
|---|---|
| Tiered safety framework | All models (Skytells and partner) operate under Skytells safety guidelines with a moderate level of moderation by default (Tier 1). Super-realistic models use Tier 3. See Safety Tiers. |
| Safety guidelines | Rules and guardrails for all models; enhanced for high-fidelity models (e.g., TrueFusion, Mera). Define acceptable use, content filters, and when prior approval is required. See Safety. |
| No-cloning rule | You may not create a likeness or clone of a real, identifiable person without consent and an authorized use case. Covers face, voice, and identity. See Responsible AI — Output Moderation. |
| Default output moderation | Outputs are moderated by default to reduce deepfake risk and other misuse. Some content may be blocked or require prior approval. |
| Ethics and fairness | We apply fairness and bias mitigation in design and evaluation, and publish resources on ethics, fairness, and bias. See Ethics. |
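Because outputs can be blocked or gated by default moderation, client code should handle those cases explicitly. The sketch below is illustrative only: the field names (`status`, `moderation`, `reason`, `output`) and status values are hypothetical, not the actual Skytells API.

```python
# Hypothetical sketch of handling a moderated generation result.
# The response shape and status values are assumptions for illustration,
# not the real Skytells API contract.

def handle_generation(result: dict) -> str:
    status = result.get("status")
    if status == "blocked":
        # Output was rejected by default moderation (e.g., no-cloning rule).
        reason = result.get("moderation", {}).get("reason", "unspecified")
        return f"Output blocked by moderation: {reason}"
    if status == "approval_required":
        # Capability is gated; prior approval must be requested first.
        return "Prior approval required; contact Skytells support."
    # Normal case: return the generated output.
    return result.get("output", "")

blocked = {"status": "blocked",
           "moderation": {"reason": "likeness of a real person"}}
print(handle_generation(blocked))
```

The point of the pattern is that moderation is not an error path to suppress: blocked and approval-gated results should surface a clear reason so users know whether to change their request or seek approval.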
Prior Approval and Access Control
- Some features or models require prior approval for safety reasons. Access may be gated by use case, account type, or compliance review.
- Approval requests are handled through Skytells support or your account team. Provide a clear description of your use case and how you will comply with our policies.
- We may update which capabilities require approval as risks and regulations evolve.
External Resources and Compliance
We align our practices with widely used norms and, where relevant, regulatory expectations:
- Public resources — Skytells Resources for AI ethics, security, and developer tools.
- Models and tools — Explore Models and Models & Tools (e.g., TrueFusion) with usage and safety documentation.
- Security & privacy — Security & Privacy resources for guidelines on protecting data and users.
Our Security and Privacy docs describe how we protect infrastructure and personal data; responsible AI frameworks complement these with content and use-case safety.
Staying Updated
Policies and frameworks may change as we respond to new risks, research, and regulation. Important updates are communicated via:
- Product and documentation updates on this site.
- Notifications to registered accounts where applicable.
- Skytells Resources and the Contact Research Team channel, for collaboration and feedback.
For questions about how these frameworks apply to your use case, contact Skytells support.