Responsible AI

Skytells' commitment to safety-first, ethical AI for enterprise-grade solutions.

Our Commitment to Responsible AI

As Skytells continues to offer enterprise-grade AI solutions, our commitment to responsible AI is non-negotiable. We build and operate models and tools that organizations can trust — with safety, ethics, and accountability built in from the start.


Safety First

We prioritize safety by deploying:

  • Advanced safety layers — Multiple layers of safeguards across the inference pipeline.
  • Smart safety checkers — Automated systems that evaluate inputs and outputs for harmful or high-risk content before and after generation.

By default, every model on the Skytells platform, including models from our partners, operates under Skytells' safety guidelines at the Moderate moderation level. This guarantees a consistent baseline of input and output checks, content policy enforcement, and use-case alignment for every model available through our APIs and systems.
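To make the two-stage flow concrete, here is a minimal sketch of how a pre- and post-generation check might wrap a model call. All names here (`check_input`, `check_output`, `moderated_generate`, the blocklist terms) are illustrative stand-ins, not Skytells APIs; a production pipeline would use classifier-based moderation rather than keyword matching.

```python
# Illustrative sketch only: these functions are hypothetical stand-ins,
# not Skytells APIs. Real pipelines use learned safety classifiers.
from dataclasses import dataclass


@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""


def check_input(prompt: str) -> ModerationResult:
    # Placeholder policy check run BEFORE generation.
    for term in ("impersonate", "deepfake"):
        if term in prompt.lower():
            return ModerationResult(False, f"blocked term: {term}")
    return ModerationResult(True)


def check_output(content: str) -> ModerationResult:
    # Placeholder policy check run AFTER generation, on the output itself.
    if "fake id" in content.lower():
        return ModerationResult(False, "possible identity misuse")
    return ModerationResult(True)


def moderated_generate(prompt: str, generate) -> str:
    # Input check -> generation -> output check: both gates must pass.
    pre = check_input(prompt)
    if not pre.allowed:
        raise PermissionError(f"input rejected: {pre.reason}")
    content = generate(prompt)
    post = check_output(content)
    if not post.allowed:
        raise PermissionError(f"output rejected: {post.reason}")
    return content


if __name__ == "__main__":
    echo = lambda p: f"[generated for: {p}]"
    print(moderated_generate("a friendly landscape", echo))
```

The key design point is that moderation wraps the model rather than living inside it, so the same baseline checks apply uniformly to every model on the platform, including partner models.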

As we bring more capable models and systems to the platform, models that deliver super-realistic outputs (e.g., photorealistic images and video, or highly lifelike audio) are subject to our Tier 3 Safety framework: the strictest level of guardrails, including enhanced moderation, prior-approval gates where appropriate, and stronger controls against DeepFake and impersonation risks.


Safety Tiers

Safety controls and moderation levels are applied in tiers according to model capability and risk profile:

| Tier | Scope | Level | Description |
|------|-------|-------|-------------|
| T-1 | All models (Skytells and partner) | Moderate | Baseline enforcement of Skytells safety guidelines. Standard input and output checks, content policy enforcement, and use-case alignment. Applied by default to every model on the platform. |
| T-2 | Selected high-capability or sensitive models | Enhanced | Stricter moderation and additional checks for models with elevated fidelity or sensitivity. May include expanded content filters and access controls. |
| T-3 | Super-realistic models (e.g., TrueFusion, Mera) | Maximum | Strongest guardrails for models capable of photorealistic or highly lifelike outputs. Enhanced input/output moderation, prior approval for select use cases where required, and the strictest controls against DeepFake, likeness cloning, and impersonation. |

Tier assignment is determined by Skytells based on model characteristics and risk assessment. All tiers operate under the same safety guidelines, no-cloning rules, and content policies.
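One way to picture the tier system is as a registry that maps each model to a tier, with every model defaulting to the T-1 baseline. This is a hypothetical sketch; the `SafetyTier` fields and `tier_for` lookup are illustrative, not the platform's internal schema.

```python
# Hypothetical tier registry: names and fields are illustrative,
# not Skytells' internal schema.
from dataclasses import dataclass
from enum import Enum


class Moderation(Enum):
    MODERATE = 1
    ENHANCED = 2
    MAXIMUM = 3


@dataclass(frozen=True)
class SafetyTier:
    name: str
    moderation: Moderation
    requires_prior_approval: bool = False


TIERS = {
    "T-1": SafetyTier("baseline", Moderation.MODERATE),
    "T-2": SafetyTier("enhanced", Moderation.ENHANCED),
    "T-3": SafetyTier("maximum", Moderation.MAXIMUM, requires_prior_approval=True),
}


def tier_for(model_id: str, assignments: dict[str, str]) -> SafetyTier:
    # Every model gets at least the T-1 baseline; only an explicit
    # risk assessment moves it into a stricter tier.
    return TIERS[assignments.get(model_id, "T-1")]


# Example: a super-realistic model assigned to T-3, an unlisted one
# falling back to the default baseline.
assignments = {"truefusion": "T-3"}
assert tier_for("truefusion", assignments).requires_prior_approval
assert tier_for("some-partner-model", assignments).name == "baseline"
```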


Output Moderation and DeepFake Prevention

We moderate outputs by default to limit the impact of misuse, including DeepFakes and synthetic media that could harm individuals or society.

No Cloning of Real People

To prevent abuse, we enforce a clear rule: you may not use Skytells models to create a likeness or clone of a real, identifiable person without that person’s explicit consent and a legitimate, authorized use case. This includes:

| Prohibited | Description |
|------------|-------------|
| Face / image cloning | Generating or altering images or video that depict a real person's face or body in a way that could be mistaken for them, without consent. |
| Voice cloning | Synthesizing speech that mimics a real person's voice to impersonate them, without consent. |
| Identity appropriation | Using a real person's name, likeness, or identity in generated content in a misleading or harmful way. |

Allowed uses include: your own likeness with your consent, fictional or clearly non-identifiable characters, and use cases that have been explicitly approved by Skytells (e.g., verified media, accessibility, or authorized creative projects).
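The rule combines two independent conditions: explicit consent from the depicted person and an authorized use case. The sketch below shows one way such a gate could be expressed; the `LikenessRequest` fields, `consent_registry`, and use-case set are all hypothetical, not Skytells APIs.

```python
# Illustrative consent gate: all names and data structures here are
# hypothetical, not Skytells APIs.
from dataclasses import dataclass


@dataclass
class LikenessRequest:
    subject_id: str    # identifier for the real person depicted
    requester_id: str  # identifier for the account making the request
    use_case: str


def likeness_allowed(req: LikenessRequest,
                     consent_registry: dict[str, set[str]],
                     approved_use_cases: set[str]) -> bool:
    # Both policy conditions must hold: explicit consent from the
    # subject AND a legitimate, authorized use case.
    has_consent = req.requester_id in consent_registry.get(req.subject_id, set())
    return has_consent and req.use_case in approved_use_cases


# Example: a person using their own likeness, with their own consent.
registry = {"user-42": {"user-42"}}
allowed_uses = {"self-portrait", "verified-media"}
req = LikenessRequest("user-42", "user-42", "self-portrait")
assert likeness_allowed(req, registry, allowed_uses)

# The same request from another account, without consent, is rejected.
assert not likeness_allowed(
    LikenessRequest("user-42", "user-99", "self-portrait"),
    registry, allowed_uses)
```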

By applying these rules and default moderation, we aim to keep our platform safe for enterprise and creative use while reducing the risk of synthetic media being used to deceive or harm.


Learn More

For research, best practices, and educational materials on AI ethics, fairness, bias, and training, see Skytells’ public resources:

  • Skytells Resources — AI ethics & responsibility, security & privacy, and developer tools.
  • Explore Models — Official and community models, with clear documentation on usage and safety.

For questions about responsible AI, safety, or access to restricted features, reach out to the Skytells research and support teams.
