Industry Solutions

Heimdall SafeSpace

12 Safety Plugins. 4 Moderation Tiers. One API.

The complete content safety stack: 12 moderation plugins spanning text (Detoxify RoBERTa toxicity, ML profanity detection with adversarial robustness, identity-attack hate speech), images (NudeNet v3 NSFW, CLIP zero-shot violence, SHA-256 + PhotoDNA + Thorn CSAM matching), video (ffmpeg keyframes + Whisper transcription + EasyOCR + CLIP), conversation (4-analyzer grooming detection), and URL scanning (Google Safe Browsing). All configurable across 4 audience tiers: COPPA (zero tolerance), Teen, General, and Relaxed. Built on the same models that power our 56-technique propaganda detection.

Challenges You Face

CSAM Legal Compliance

18 U.S.C. § 2258A requires providers to report apparent CSAM to NCMEC. Our pipeline: SHA-256 hash → PhotoDNA → Thorn Safer → BLOCK_AND_REPORT

Grooming Detection

4 sub-analyzers: IntimacyAnalyzer (escalating intimacy), SecrecyDetector (requests to hide the conversation from parents), AgeProbeDetector (age-probing questions), StageClassifier (grooming stage). Always flags for human review.

Community Toxicity

Detoxify RoBERTa model: toxicity, severe_toxicity, obscene, threat, insult, identity_attack, sexual_explicit — 93.74% AUC

Content Volume

Dynamic batching: up to 32 texts per GPU pass. Video pipeline: ffmpeg keyframe extraction every 5 seconds, max 30 frames per video
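The chunking half of that batching policy can be sketched in a few lines. A minimal illustration: the 32-text cap comes from the figure above, while the function name and fixed-size chunking are hypothetical simplifications (real dynamic batching also accumulates requests over a time window).

```python
MAX_BATCH = 32  # texts per GPU pass, per the figure above

def batches(texts, max_batch=MAX_BATCH):
    """Yield successive chunks of at most max_batch texts for one GPU pass."""
    for i in range(0, len(texts), max_batch):
        yield texts[i:i + max_batch]
```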

False Positive Fatigue

ML profanity classifier with fuzzy matching resists l33tspeak and character substitution — not crude keyword lists
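To show the idea behind substitution resistance, here is a toy normalization pass. The real classifier is ML-based with fuzzy matching; this table and function are purely illustrative assumptions.

```python
# Hypothetical substitution table; the production classifier is ML-based,
# but a pass like this illustrates why "l33t" variants don't slip through.
LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                      "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(text):
    """Map common character substitutions back to canonical letters."""
    return text.lower().translate(LEET)
```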

Multi-Audience Platforms

COPPA zone: toxicity BLOCK at 0.2, profanity at 0.1. Teen zone: toxicity BLOCK at 0.7, profanity FILTER at 0.5. Same platform, different rules.
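The per-tier rules above can be expressed as a small config. The COPPA and Teen thresholds are taken from the example; the config shape and the `decide` helper are assumptions for illustration.

```python
# Assumed config shape; COPPA and Teen numbers are from the example above.
THRESHOLDS = {
    "coppa": {"toxicity": ("BLOCK", 0.2), "profanity": ("BLOCK", 0.1)},
    "teen":  {"toxicity": ("BLOCK", 0.7), "profanity": ("FILTER", 0.5)},
}

def decide(tier, category, score):
    """Return the tier's action when the score crosses its threshold."""
    action, threshold = THRESHOLDS[tier][category]
    return action if score >= threshold else "ALLOW"
```

The same score yields different outcomes per tier: a toxicity score of 0.25 is blocked in the COPPA zone but allowed in the Teen zone.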

Key Features

12-Plugin Safety Stack

Text: toxicity, profanity, hate speech, URL scanning. Image: NSFW, violence, CSAM. Video: keyframe + audio + OCR. Chat: grooming. Plus audit + analytics + rating.

4 Moderation Tiers

COPPA (zero-tolerance), Teen (blocks explicit), General (blocks heavy), Relaxed (flags only) — calibrated thresholds per safety category

CSAM Detection Pipeline

SHA-256 hash → PhotoDNA API → Thorn Safer API — hash-only (never stores images). BLOCK_AND_REPORT action with NCMEC-compliant data.
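The first, exact-match stage of this pipeline is simple enough to sketch. This shows only the SHA-256 step under the hash-only guarantee; the PhotoDNA and Thorn Safer perceptual stages and the NCMEC reporting payload are omitted, and the function name is hypothetical.

```python
import hashlib

def sha256_stage(image_bytes, known_hashes):
    """Exact-match stage: hash the image and discard it (hash-only, the
    image is never stored). A hit short-circuits to BLOCK_AND_REPORT; a
    miss falls through to the perceptual stages (PhotoDNA, Thorn Safer)."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return "BLOCK_AND_REPORT" if digest in known_hashes else None
```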

Grooming Detection

IntimacyAnalyzer + SecrecyDetector + AgeProbeDetector + StageClassifier. Combined risk boosted 0.2 when 2+ signals. Always FLAG for human review.
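A sketch of the combination rule: the 0.2 boost on 2+ signals is stated above, while taking the max of the sub-analyzer scores and the 0.5 firing threshold are assumptions.

```python
def combined_risk(scores, signal_threshold=0.5):
    """Combine sub-analyzer scores; add the 0.2 boost when 2+ fire.

    The 0.2 boost is from the description above; max-combination and
    the 0.5 firing threshold are illustrative assumptions."""
    risk = max(scores.values())
    if sum(s >= signal_threshold for s in scores.values()) >= 2:
        risk = min(1.0, risk + 0.2)
    return risk
```

Whatever the final score, the result is always a FLAG routed to human review, never an automated block alone.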

Video Moderation Pipeline

ffmpeg keyframe extraction (5s intervals, max 30 frames) → NudeNet + CLIP per frame → EasyOCR text → Whisper audio → text safety analysis
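The sampling step reduces to picking timestamps: one every 5 seconds, capped at 30. The helper below is a hypothetical sketch of that arithmetic; the ffmpeg invocation in the comment is one assumed way to realize it.

```python
# One way to realize the sampling step with ffmpeg (assumed invocation):
#   ffmpeg -i input.mp4 -vf fps=1/5 -frames:v 30 frame_%03d.jpg
def keyframe_times(duration_s, interval_s=5, max_frames=30):
    """Timestamps to sample: one frame every interval_s seconds, capped
    at max_frames. Each sampled frame then goes through NudeNet + CLIP."""
    times = list(range(0, int(duration_s), interval_s))
    return [float(t) for t in times[:max_frames]]
```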

NSFW Image Detection

NudeNet v3: EXPOSED_BREAST_F, EXPOSED_GENITALIA_F/M, EXPOSED_BUTTOCKS, EXPOSED_ANUS. Threshold: 0.5. Plus CLIP zero-shot violence (threshold: 0.6).
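The label-plus-threshold check reads naturally as code. The labels and the 0.5 threshold are from the description above; the per-detection dict shape is an assumption about the detector's output.

```python
# Labels and threshold from the description above; the detection dict
# shape ({"label": ..., "score": ...}) is an assumed detector output.
NSFW_LABELS = {"EXPOSED_BREAST_F", "EXPOSED_GENITALIA_F",
               "EXPOSED_GENITALIA_M", "EXPOSED_BUTTOCKS", "EXPOSED_ANUS"}
NSFW_THRESHOLD = 0.5

def is_nsfw(detections):
    """True if any detection matches an NSFW label at or above threshold."""
    return any(d["label"] in NSFW_LABELS and d["score"] >= NSFW_THRESHOLD
               for d in detections)
```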

Hate Speech Detection

Identity attack detection with category classification across 13+ categories — implicit and coded hate included

URL Safety Scanning

Regex extraction → local domain blocklist (instant) → Google Safe Browsing API: MALWARE, SOCIAL_ENGINEERING, UNWANTED_SOFTWARE, POTENTIALLY_HARMFUL_APPLICATION
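The first two stages are cheap enough to run inline. This sketch covers regex extraction and the instant local blocklist; the blocklist contents, verdict strings, and function name are hypothetical, and the Google Safe Browsing API call itself is not shown.

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")
BLOCKLIST = {"evil.example"}  # hypothetical local domain blocklist

def scan_urls(text):
    """Extract URLs and apply the instant local check; anything that
    passes would then go to the Google Safe Browsing API (not shown)."""
    results = []
    for url in URL_RE.findall(text):
        domain = urlparse(url).hostname or ""
        verdict = "BLOCK" if domain in BLOCKLIST else "CHECK_SAFE_BROWSING"
        results.append((url, verdict))
    return results
```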

Techniques We Detect

Radicalization Detection

56 span-level techniques catch recruitment narratives, extremist framing, and ideological manipulation

Coordinated Campaigns

Coordinated_Messaging and Astroturfing detection identifies organized harassment and inauthentic behavior

Grooming in Chat

4 specialized sub-analyzers detect intimacy escalation, secrecy requests, age probing, and grooming stage progression

Pricing

Community

$299/mo
  • 10K MAU
  • Text plugins (toxicity, profanity, hate)
  • Basic NSFW
  • 2 tiers
Get Started
Recommended

Platform

$1,499/mo
  • 100K MAU
  • Full text + image
  • CSAM pipeline
  • Grooming detection
  • All 4 tiers
Get Started

Enterprise

$4,999/mo
  • 1M MAU
  • Full 12-plugin stack
  • Video moderation
  • URL scanning
  • API
  • Audit logging
Get Started

Scale

Custom
  • Unlimited MAU
  • Dedicated GPU
  • Custom models
  • SLA
  • Priority support
Contact Us

12 Plugins. 4 Tiers. Sub-Second Latency.

Schedule a demo or start your free trial today.