PII Auto-Redaction

Vision AI scans every keyframe for sensitive data and applies FFmpeg blur before narration is ever written.


Privacy built into the pipeline — not bolted on

When you record a demo in a real application, sensitive data will appear on screen. Email addresses, phone numbers, credit card numbers, API keys, session tokens, customer names — any of it could end up in a recording that gets shared publicly.

AI Screen Recorder’s post-edit stage solves this with a two-layer approach that runs before narration is written. That ordering is deliberate: the narration LLM cannot narrate what has already been blurred, because it never sees the unredacted frames.

Layer 1: Vision AI scan

After each chunk is recorded, a vision-capable LLM reviews every keyframe alongside the action log. It’s prompted to identify:

  • Email addresses, phone numbers, credit card numbers, SSNs
  • API keys, session tokens, auth tokens, and secrets
  • Personal names that appear to be real customers (not “John Smith” / “Jane Doe” demo data)
  • Any text that could be a password field reveal

The model returns a structured blur_plan: a list of bounding boxes keyed to time ranges — {frame_range: [start_ms, end_ms], bbox: [x, y, w, h], reason}.
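As a sketch of how a pipeline might validate that structure before handing it to FFmpeg (the `BlurRegion` type and `parse_blur_plan` helper are illustrative names, not the product's actual API):

```python
from dataclasses import dataclass

@dataclass
class BlurRegion:
    frame_range: tuple[int, int]          # (start_ms, end_ms)
    bbox: tuple[int, int, int, int]       # (x, y, w, h) in pixels
    reason: str

def parse_blur_plan(raw: list[dict]) -> list[BlurRegion]:
    """Validate the model's blur_plan; drop malformed entries
    rather than failing the whole chunk."""
    regions = []
    for entry in raw:
        start_ms, end_ms = entry["frame_range"]
        x, y, w, h = entry["bbox"]
        if end_ms <= start_ms or w <= 0 or h <= 0:
            continue  # zero-area box or inverted time range: skip
        regions.append(BlurRegion((start_ms, end_ms), (x, y, w, h),
                                  entry.get("reason", "")))
    return regions
```

Dropping bad entries (rather than raising) matches the over-blur-not-under-blur philosophy: one malformed box shouldn't block the valid ones from being applied.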

The prompt is calibrated to over-blur rather than under-blur. A false positive costs a blurry patch in a video. A false negative means real customer data goes out in a marketing asset.

Layer 2: Regex pass over OCR text

In addition to the vision scan, a deterministic regex pass runs over OCR-extracted frame text to catch patterns the model might miss:

  • Standard email regex
  • US and international phone formats
  • Luhn-valid credit card patterns
  • Common SSN formats
  • API key patterns (long hex strings, Bearer tokens)

Both layers produce bounding boxes. They’re merged (with deduplication) before FFmpeg applies the blur.
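One plausible way to do that merge (the IoU threshold and the greedy union strategy here are assumptions for illustration): boxes from the two layers that overlap in both space and time collapse into a single region covering the union of the pair.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def merge_plans(vision, regex_hits, iou_thresh=0.6):
    """Union of both layers; near-duplicate boxes with overlapping
    time ranges collapse to the bounding union of the pair."""
    merged = []
    for cand in vision + regex_hits:
        for i, kept in enumerate(merged):
            in_time = not (cand["frame_range"][1] < kept["frame_range"][0]
                           or kept["frame_range"][1] < cand["frame_range"][0])
            if in_time and iou(cand["bbox"], kept["bbox"]) >= iou_thresh:
                kx, ky, kw, kh = kept["bbox"]
                cx, cy, cw, ch = cand["bbox"]
                ux, uy = min(kx, cx), min(ky, cy)
                merged[i] = {
                    "frame_range": [min(kept["frame_range"][0], cand["frame_range"][0]),
                                    max(kept["frame_range"][1], cand["frame_range"][1])],
                    "bbox": (ux, uy,
                             max(kx + kw, cx + cw) - ux,
                             max(ky + kh, cy + ch) - uy),
                    "reason": kept["reason"],
                }
                break
        else:
            merged.append(dict(cand))
    return merged
```

Taking the union (rather than the intersection) of overlapping boxes again errs toward over-blurring, consistent with the calibration described above.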

FFmpeg application

The final blur/trim pass uses FFmpeg filters to obscure the identified regions across the specified time ranges: drawbox lays down solid masking boxes, while a crop/boxblur/overlay chain produces a true blur. Separately, the agent-identified “dead time” segments (loading spinners >2s, gaps >3s with no visible change) are trimmed via concat-based editing.
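A sketch of how such a filtergraph could be assembled in Python (the chain shape, blur strength, and label names are illustrative, not the product's exact command): each region is split out, blurred, and overlaid back only during its time window via FFmpeg's timeline `enable` expression.

```python
def blur_filtergraph(regions):
    """Build an FFmpeg -filter_complex string that blurs each
    (bbox, frame_range) region with a crop/boxblur/overlay chain,
    enabled only during that region's time window."""
    chains = []
    src = "[0:v]"
    for i, r in enumerate(regions):
        x, y, w, h = r["bbox"]
        start, end = (ms / 1000 for ms in r["frame_range"])  # ms -> seconds
        out = f"[v{i}]"
        chains.append(
            f"{src}split[b{i}][k{i}];"                       # fork the stream
            f"[b{i}]crop={w}:{h}:{x}:{y},boxblur=10[bl{i}];"  # blur the region
            f"[k{i}][bl{i}]overlay={x}:{y}"
            f":enable='between(t,{start},{end})'{out}"        # time-gated paste-back
        )
        src = out  # next region chains off this output
    return ";".join(chains)
```

The resulting string would be passed to ffmpeg as the -filter_complex argument, with the last labeled output mapped to the file.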

Output is chunk-N-edited.mp4 — cleaned, trimmed, and privacy-safe.

Why this matters for agencies

If you’re using AI Screen Recorder to produce demos for client apps, your clients’ end-customer data may appear in recordings. The automatic PII redaction layer means you don’t need a human QA pass on every video before delivery — though agency QA is still recommended for the first job on any new client app.

Ready to get started?

Start your free trial today. No credit card required.