Brand Guardrails

Handing over your social media to AI requires trust. Our “Brand Guardrails” system is designed to ensure that trust is never broken.

Safety Layers

1. Content Moderation

We run all generated text and imagery through enterprise-grade safety filters to block the following (see the sketch after this list):
  • Offensive language.
  • Inappropriate imagery.
  • Competitor mentions.
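
Under the hood, the moderation models are more sophisticated than any keyword list, but conceptually the text-side check behaves like this minimal sketch. The word sets and the passes_moderation helper are invented for illustration and are not part of any public API.

```python
# Illustrative only: a simplified view of how a caption might be screened
# before it is allowed into a video. The term lists below are placeholders
# for the real enterprise-grade safety models.

COMPETITOR_NAMES = {"rivalcafe", "othersalon"}   # hypothetical competitor list
OFFENSIVE_TERMS = {"darn"}                       # placeholder for a moderation model

def passes_moderation(caption: str) -> bool:
    """Return True only if the caption clears every safety check."""
    words = {w.strip(".,!?").lower() for w in caption.split()}
    if words & OFFENSIVE_TERMS:
        return False        # offensive language
    if words & COMPETITOR_NAMES:
        return False        # competitor mention
    return True

print(passes_moderation("Fresh pastries every morning!"))  # True
print(passes_moderation("Better than RivalCafe!"))         # False
```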

2. “Never Show” List

During onboarding, you can define keywords or concepts to avoid.
  • Example: A vegan cafe can blacklist words like “meat”, “bacon”, and “dairy”.
  • Example: A high-end salon can blacklist “cheap”, “discount”, and “bargain”.

3. Visual Consistency

We digitally enforce your brand guidelines (see the example after this list):
  • Color Locking: We ensure overlays match your hex codes.
  • Logo Safe Zones: We guarantee your logo is never obscured by captions or UI elements.
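
The example below is only a toy illustration of the two checks above: the brand palette, frame coordinates, and helper names are made-up values, not the actual rendering pipeline.

```python
# Illustrative only: a toy version of Color Locking and Logo Safe Zones.

BRAND_HEX = {"#1A1A2E", "#E94560", "#FFFFFF"}   # hypothetical brand palette
LOGO_SAFE_ZONE = (940, 20, 1060, 120)           # (x1, y1, x2, y2) in pixels

def color_is_locked(overlay_hex: str) -> bool:
    """Overlay colors must exactly match an approved brand hex code."""
    return overlay_hex.upper() in BRAND_HEX

def obscures_logo(caption_box: tuple) -> bool:
    """True if a caption rectangle overlaps the logo's reserved area."""
    x1, y1, x2, y2 = caption_box
    sx1, sy1, sx2, sy2 = LOGO_SAFE_ZONE
    return not (x2 < sx1 or x1 > sx2 or y2 < sy1 or y1 > sy2)

print(color_is_locked("#e94560"))           # True  - matches the palette
print(obscures_logo((900, 50, 1000, 150)))  # True  - caption intrudes on the safe zone
```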

Human-in-the-Loop (Beta)

For select accounts or flagged content, we use a “Human-in-the-Loop” system: a human reviewer checks the video before it ever reaches your dashboard. This adds an extra layer of quality control during our beta phase.

Feedback Mechanism

If a video doesn’t meet your standards, you can Reject it with a reason (e.g., “Music too loud”, “Wrong photo”). Rejecting a video does three things (sketched after this list):
  • This immediately stops the video from publishing.
  • It sends a signal to our AI to adjust future generations.
  • It triggers a regeneration so you don’t lose your content slot.
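
The sketch below is a simplified model of that flow; the function and field names are invented to illustrate the three outcomes, not our internal API.

```python
# Illustrative only: what happens, conceptually, when you Reject a video.

def reject_video(video: dict, reason: str, feedback_log: list) -> dict:
    """Stop publishing, record the feedback, and request a replacement."""
    video["status"] = "rejected"                   # 1. publishing stops immediately
    feedback_log.append({"video_id": video["id"],  # 2. signal used to adjust future generations
                         "reason": reason})
    return {"id": video["id"] + "-v2",             # 3. regeneration keeps your content slot
            "slot": video["slot"],
            "status": "generating"}

feedback_log: list = []
original = {"id": "vid_001", "slot": "2024-06-01 09:00", "status": "ready"}
replacement = reject_video(original, "Music too loud", feedback_log)
print(replacement["slot"])   # same slot as the rejected video
```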

How to Configure Safety Settings

You can update your safety preferences at any time to refine what the AI produces.

Managing the “Never Show” List

  1. Navigate to Settings > Brand Guardrails.
  2. Scroll to the Negative Keywords section.
  3. Add Keywords: Type a word (e.g., “cheap”, “discount”) and press Enter.
  4. Save: The AI will now strictly avoid using these words in captions, text overlays, or voiceovers.
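
Conceptually, the saved list is checked against every text surface of a video before it reaches you. The sketch below illustrates that check with an example keyword list and field names; it is not the product’s internal schema.

```python
# Illustrative only: enforcing a Negative Keywords list across captions,
# text overlays, and voiceover scripts.

NEGATIVE_KEYWORDS = {"cheap", "discount", "bargain"}   # as saved in Brand Guardrails

def violating_fields(video_text: dict) -> list:
    """Return the text surfaces that contain a blocked word."""
    violations = []
    for field, text in video_text.items():
        words = {w.strip(".,!?").lower() for w in text.split()}
        if words & NEGATIVE_KEYWORDS:
            violations.append(field)
    return violations

draft = {
    "caption": "Luxury color treatments, booking now",
    "overlay": "Weekend discount inside!",
    "voiceover": "Treat yourself to premium care",
}
print(violating_fields(draft))   # ['overlay'] - this draft would be regenerated
```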

Reporting a Safety Issue

If a video that you feel is inappropriate slips through:
  1. Do NOT Publish.
  2. Click the Flag icon (instead of Reject).
  3. Select “Safety Concern” from the dropdown.
  4. Provide a brief description of the issue.
    • Outcome: This creates a high-priority ticket for our Human Safety Team to review the generation logs and patch the filter.
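
For illustration only, the record a safety flag creates might look like the example below; the ticket fields and priority label are assumptions, not our actual ticketing schema.

```python
# Illustrative only: the shape of a high-priority safety ticket.

from datetime import datetime, timezone

def flag_safety_issue(video_id: str, detail: str) -> dict:
    """Build a ticket for the Human Safety Team."""
    return {
        "video_id": video_id,
        "category": "Safety Concern",
        "detail": detail,
        "priority": "high",          # reviewed ahead of routine feedback
        "publish_blocked": True,     # the video stays unpublished
        "flagged_at": datetime.now(timezone.utc).isoformat(),
    }

ticket = flag_safety_issue("vid_042", "Overlay text makes an off-brand claim")
print(ticket["priority"], ticket["publish_blocked"])   # high True
```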