Safety
Safety layers, smart checkers, and guidelines for high-fidelity and super-realistic models.
Safety at Skytells
Safety is a core requirement for every Skytells product. By default, all models, including partner models, are governed by Skytells' safety guidelines with moderate moderation. We apply multiple layers of safety and smart safety checkers across the platform; models that deliver super-realistic outputs are subject to our Tier 3 Safety framework for the strongest guardrails. See Safety Tiers on the Responsible AI overview for the full tier definitions.
Safety Layers
Our safety approach is layered:
| Layer | Purpose |
|---|---|
| Input checks | Requests are evaluated before inference for policy violations, harmful intent, or restricted content. |
| Inference-time safeguards | Models and pipelines include built-in constraints to reduce unsafe or off-policy outputs. |
| Output moderation | Generated content is checked after creation. Content that violates our policies (e.g., DeepFake-style misuse, non-consensual likeness) is blocked or flagged. |
| Access controls | Certain models or features require prior approval or higher-trust accounts to limit misuse. |
These layers work together so that even when a single check misses an edge case, the others provide backup.
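The layered approach in the table above can be sketched as a chain of independent gates, where a request must pass every layer before content is delivered. This is a minimal illustrative sketch only; all function names, flags, and labels are hypothetical placeholders, not the actual Skytells implementation or API:

```python
# Hypothetical sketch of layered moderation: each layer can independently
# block a request, so a miss at one layer is caught by another.

def input_check(request):
    # Placeholder input policy: e.g., block requests flagged as attempting
    # to clone a real person's likeness.
    return "clone_real_person" not in request.get("flags", [])

def inference_safeguard(request):
    # Placeholder for built-in constraints applied during generation.
    return request.get("prompt", "") != ""

def output_moderation(output):
    # Placeholder post-generation scan of the produced content.
    return "non_consensual_likeness" not in output.get("labels", [])

def run_pipeline(request):
    if not input_check(request):
        return {"blocked_at": "input"}
    if not inference_safeguard(request):
        return {"blocked_at": "inference"}
    output = {"labels": []}  # stand-in for real model output
    if not output_moderation(output):
        return {"blocked_at": "output"}
    return {"blocked_at": None, "output": output}
```

The key design point is that the layers are independent: a request that slips past the input check can still be caught at inference time or by output moderation.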
Smart Safety Checkers
We use automated safety checkers that:
- Scan inputs (prompts, reference images, parameters) for policy violations.
- Evaluate outputs (images, video, audio) for authenticity, consent, and prohibited use (e.g., generating a real person’s likeness or voice without authorization).
- Apply default moderation so that high-risk content is not delivered without review or approval where required.
Checkers are updated as new risks and abuse patterns emerge, and are tuned to balance safety with legitimate creative and enterprise use.
Models Under Safety Guidelines
All models on the platform — Skytells and partner — are subject to Skytells' safety guidelines with moderate safety moderation as the baseline (Tier 1). Models that produce super-realistic or high-fidelity outputs are elevated to our Tier 3 Safety framework. These include, but are not limited to:
- TrueFusion family — high-fidelity image and multimodal models.
- Mera and other photorealistic or lifelike media models.
For Tier 3 (super-realistic) models we:
- Apply the strictest input and output checks and moderation.
- Enforce rules against cloning or impersonating real people (see Responsible AI).
- Require prior approval for some use cases or features when risk is higher.
Prior Approval for Safety
Some features or models may require prior approval before you can use them. This helps us:
- Confirm the use case is legitimate and aligned with our policies.
- Reduce DeepFake, impersonation, and other high-impact misuse.
- Support compliance and accountability for enterprise customers.
If your use case is blocked or requires approval, you’ll see guidance in the API or console. To request access, contact Skytells support or your account team with a clear description of your use case and how you will comply with our no-cloning and content policies.
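As a rough illustration, client code might branch on a blocked or approval-required response along these lines. The error codes and response fields shown are assumptions for the sketch, not the actual Skytells API shape; consult the guidance shown in the API or console for the real error format:

```python
# Hypothetical handling of moderation responses. The "error" codes and
# "reason" field below are illustrative placeholders, not Skytells' API.

def handle_response(resp):
    if resp.get("error") == "approval_required":
        # Feature gated behind prior approval: direct the user to request access.
        return "Contact Skytells support with your use case to request access."
    if resp.get("error") == "policy_violation":
        # Request or output blocked by a safety layer.
        return f"Blocked: {resp.get('reason', 'policy violation')}"
    return resp.get("output")
```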
Approval is at Skytells’ discretion and may depend on account type, use case, and compliance with our Frameworks and Ethics commitments.