Content Moderation & Safety Policy

SVGverseAI is committed to maintaining a safe, compliant, and inclusive platform for all users.
We employ a combination of automated moderation tools and human review processes to prevent and remove prohibited or unsafe content.

Moderation Framework

Our content safety system includes multiple technologies that detect, filter, and block inappropriate text and image content:

  • OpenAI Moderation API for identifying unsafe or adult text prompts.

  • Google Cloud Vision SafeSearch for detecting explicit, violent, or sensitive images.

  • PicPurify and Sightengine for real-time post-generation image analysis and NSFW detection.
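To illustrate how a pipeline like the one above might gate a prompt, here is a minimal sketch of a pre-generation text check. It assumes a parsed moderation result shaped like the OpenAI Moderation API response (a top-level "flagged" boolean plus per-category booleans); the function name, category set, and return shape are illustrative assumptions, not SVGverseAI's actual implementation.

```python
# Hypothetical pre-generation gate over a parsed moderation result.
# The category names mirror common moderation taxonomies; the exact
# set used in production is an assumption for illustration.
BLOCKED_CATEGORIES = {"sexual", "violence", "hate", "harassment", "self-harm"}

def screen_prompt(moderation_result: dict) -> tuple[bool, list[str]]:
    """Return (allowed, reasons).

    A prompt is rejected when the moderation service flags it overall,
    or when any blocked category fires. Category keys like
    "sexual/minors" are matched by their top-level prefix.
    """
    categories = moderation_result.get("categories", {})
    reasons = sorted(
        name for name, hit in categories.items()
        if hit and name.split("/")[0] in BLOCKED_CATEGORIES
    )
    allowed = not moderation_result.get("flagged", False) and not reasons
    return allowed, reasons

# Usage: a result with a sexual-content hit is rejected with a reason.
allowed, reasons = screen_prompt(
    {"flagged": True, "categories": {"sexual": True, "violence": False}}
)
```

In practice the result dict would come from a call to the moderation service before any image generation runs, and a rejection would surface a policy message to the user rather than a raw category list.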

Prohibited Content

Users are strictly prohibited from generating, uploading, or sharing any of the following through SVGverseAI:

  • Adult, pornographic, or sexually explicit material

  • Nudity or suggestive imagery

  • Violent, gory, or disturbing visuals

  • Hate speech, harassment, or discriminatory content

  • Illegal activities, self-harm, or weapon promotion

  • Copyrighted or trademarked material without authorization

Violations may lead to immediate account suspension or permanent termination.

User Responsibility

Users are solely responsible for ensuring that their prompts and generated content comply with SVGverseAI’s Terms of Service, this policy, and all applicable laws.
SVGverseAI is not liable for user-generated content that violates these rules.

Enforcement

When prohibited content is detected, SVGverseAI may:

  • Automatically block or remove the material,

  • Temporarily suspend the account for review,

  • Terminate repeat offenders without prior notice,

  • Report illegal activities to relevant authorities if necessary.

Reporting Inappropriate Content

If you discover content that violates these standards, please report it to our support team at hello@svgverseai.com.
We review and act on all reports promptly.

Transparency

We are continuously improving our safety systems to balance creativity with compliance.
Our moderation tools are regularly updated to align with current AI ethics and legal requirements.

Policy Updates

This Content Moderation Policy may be updated periodically to reflect technological or regulatory changes.