What Makes Face and Plate Blurring "Strong Enough" - Practical Parameters and QA Checks

Mateusz Zimoch
Published: 1/27/2026
Updated: 3/10/2026

“Strong enough” blurring is not about making faces or plates look slightly unclear. It is about making identification impracticable using reasonable methods and tools, including common sharpening, super-resolution, re-encoding, and platform compression pipelines. In practice, that requires three things at the same time: complete coverage with a safety margin, intensity that removes both human and machine recognizability, and measurable quality assurance checks before a photo or video is released.

an anonymized face of a female model in a white tank top, black-and-white photo

Why “strong enough” matters for publishing images and videos

Publishing photos and videos involves processing personal data when people are identifiable. In EU and UK frameworks, organisations may rely on a lawful basis such as legitimate interests when consent is not feasible, but this requires a documented balancing test and appropriate safeguards. Robust blurring can be one such safeguard, but it does not automatically make the output “anonymous” under Recital 26. In many practical contexts, blurred footage can still be personal data if re-identification remains reasonably possible [1].

For faces, in Poland the obligation to anonymize does not follow directly from the GDPR as a universal rule. Separately from data protection, the use and dissemination of a person’s image is governed primarily by the Polish Copyright and Related Rights Act, especially Art. 81, and by civil-law protection of personal interests under the Civil Code. As a rule, disseminating a person’s image requires consent unless an exception applies. Commonly cited exceptions (Art. 81(2)) include:

  • the person is a commonly known public figure and the image was made in connection with the performance of their public functions (e.g., political, social, professional),
  • the person constitutes only a detail of a whole such as a landscape, public event or mass gathering (e.g., concert, sports event, demonstration),
  • the person received agreed remuneration for posing and did not expressly reserve the right to approve dissemination.

For license plates, there is no blanket EU or EEA rule making blurring mandatory in every publication. Whether a plate is personal data depends on context: if it can be reasonably linked to an identifiable person, directly or indirectly, it can qualify as personal data. In Poland, there is no single uniform rule, and practice and case law have taken differing positions. A risk-based, well-documented approach to blurring plates when publishing is a common compliance strategy, particularly when footage is otherwise linkable to individuals [1][4].

In the United States, there is no single nationwide equivalent to EU GDPR, but “strong enough” blurring still matters in practice. If a published clip allows viewers to identify a person or a vehicle, that can escalate complaint risk, harassment risk, and exposure under state privacy regimes or common-law privacy claims. For creators and organisations publishing multi-platform content, the least-disclosure mindset aligns well with strong QA and consistent redaction decisions [2][3].

For additional practitioner-oriented background on publishing photos and videos, you can browse the Gallio PRO blog.

black and white photo of the front of a 'Plymouth' car has a blurred license plate

Defining “strong enough” in technical terms

Strong anonymization is achieved when: 1) the sensitive area is correctly detected with high recall and covered with a sufficient margin, 2) the applied blur or pixelation removes machine and human recognizability, and 3) the result is resilient to typical post-processing and platform compression. Organisations often validate these outcomes with automated tests and human spot checks before release.

black-and-white photo of a woman with a blurred animated face wearing a black hoodie with the word 'leadr'

Face blurring parameters that work in practice

Faces are the primary identifier in most publishing workflows. The goal is not simply to degrade detail but to remove identity signals in a way that remains robust after export, upload, and re-encoding.

Detection and coverage

The detected face region should be expanded by a margin so that the hairline, chin, and cheeks, which can all assist recognition, are covered. A common operational margin is 10-30 percent of the bounding-box size, scaled to face size and pose. Small faces under 20-24 pixels wide are at high risk of missed detection; replacing such detections with a solid box is a practical fallback.
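The margin rule can be sketched as a small helper. This is an illustrative function, not any specific tool's API; it assumes detections come as (x, y, w, h) pixel boxes:

```python
def expand_box(box, margin, img_w, img_h):
    """Expand an (x, y, w, h) detection box by a fractional margin
    (e.g. 0.2 for 20 percent), clamped to the image bounds."""
    x, y, w, h = box
    dx, dy = w * margin / 2, h * margin / 2
    x0 = max(0, x - dx)
    y0 = max(0, y - dy)
    x1 = min(img_w, x + w + dx)
    y1 = min(img_h, y + h + dy)
    return (x0, y0, x1 - x0, y1 - y0)

# A 50x50 face box with a 20 percent margin in a 1000x1000 frame
print(expand_box((100, 100, 50, 50), 0.2, 1000, 1000))  # (95.0, 95.0, 60.0, 60.0)
```

Clamping matters: faces near the frame edge should still get full coverage on the sides that remain inside the image.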

Blur method and intensity

Three families are widely used: Gaussian blur, pixelation, and solid boxes. For risk reduction, pixelation at sufficiently large block size or a high-sigma Gaussian blur is typically stronger than light blur. As a rule of thumb for 1080p footage, when a frontal face box is about 100-160 pixels wide, a Gaussian kernel with sigma in the 12-20 pixel range or pixelation blocks of 16-24 pixels usually prevents casual recognition, but this remains context-dependent. For 4K sources, scale intensity roughly proportionally. When faces are very small or heavily motion-blurred, a solid box is more robust than attempting fine-grain blur.
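As a minimal sketch of the pixelation family, assuming NumPy arrays for image regions: block averaging is the standard mosaic operation, though the exact filter a given tool applies may differ.

```python
import numpy as np

def pixelate(region, block):
    """Replace each block x block tile with its mean value (mosaic effect).
    Larger blocks destroy more identity detail."""
    h, w = region.shape[:2]
    out = region.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile.mean(axis=(0, 1))
    return out

# With 16-24 px blocks, a ~120 px face is reduced to only 5-8 tiles across
face = np.arange(64, dtype=float).reshape(8, 8)   # toy grayscale patch
mosaic = pixelate(face, 4)                        # every 4x4 tile becomes flat
```

The intuition behind the block-size guidance above: the fewer tiles span the face, the less geometric structure survives for either a human or a model to exploit.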

Machine recognizability checks

To move beyond visual inspection, many teams run an off-the-shelf face recognition model on original versus blurred crops and verify that match scores drop below an operating threshold. A common internal target is that the system fails to match blurred faces at a strict false match rate (e.g., 1e-3 or stricter), used as an internal benchmark and adjusted to the model and content. If a model can still match blurred faces at reasonable thresholds, increase blur intensity or switch to solid boxes.
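A minimal sketch of such a pass/fail gate, assuming you already have embedding vectors from an off-the-shelf recognizer. The vectors below are stand-ins, and the threshold must come from the model's own calibration at your target false match rate; nothing here is a specific library's API:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def blur_defeats_recognizer(emb_original, emb_blurred, threshold):
    """Pass when original-vs-blurred similarity falls below the
    recognizer's match threshold (taken from its ROC at your
    target false match rate, e.g. 1e-3)."""
    return cosine_similarity(emb_original, emb_blurred) < threshold
```

If this gate fails on a sample, the section's advice applies directly: increase blur intensity or switch that detection to a solid box.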

black and white selfie portrait of a man with an anonymized face

License plate blurring parameters that block OCR and human reading

Plates are often readable in a narrow set of frames. That makes them a classic “single-frame failure” risk, especially in moving scenes, angle changes, or parking-lot footage.

Coverage and margins

Extend the detected plate box by 5-15 percent to include borders and screws that help OCR. Angle, motion blur, and reflections from retroreflective plates can cause misses. Tracking-based interpolation across frames reduces flicker and gaps.
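The tracking-based interpolation mentioned above can be sketched as linear interpolation between the last and next confirmed detections. This is a stand-in for a real tracker, shown only to make the gap-filling idea concrete:

```python
def interpolate_boxes(box_before, box_after, gap_frames):
    """Fill a detection gap with linearly interpolated (x, y, w, h)
    boxes so the blur region never disappears mid-motion."""
    filled = []
    for i in range(1, gap_frames + 1):
        t = i / (gap_frames + 1)
        filled.append(tuple(a + t * (b - a)
                            for a, b in zip(box_before, box_after)))
    return filled
```

A single unblurred frame is enough to leak a plate, so filling even one-frame gaps is worth the small amount of over-coverage interpolation introduces.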

Intensity and method

Practical settings target illegibility at native resolution and after common downscales. For EU plates at typical street-scene sizes in 1080p video, pixelation blocks of 12-20 pixels or a Gaussian blur with sigma around 10-16 pixels usually defeat consumer OCR and reading, but this remains context-dependent. When characters are small or partially occluded, a solid box is safer. After export, run OCR on blurred crops to confirm no characters are correctly read.
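One way to automate the post-export OCR check, assuming your OCR engine returns candidate strings for each blurred crop. The pass criterion used here, counting positionally correct characters, is one reasonable choice rather than a standard:

```python
def plate_unreadable(true_plate, ocr_readings, max_correct=0):
    """Pass when no OCR reading recovers more than max_correct
    characters of the real plate in the correct position."""
    target = true_plate.replace(" ", "").upper()

    def hits(reading):
        r = reading.replace(" ", "").upper()
        return sum(1 for a, b in zip(r, target) if a == b)

    return all(hits(r) <= max_correct for r in ocr_readings)
```

Run this against crops taken from the final exported file, not the editing timeline, so the check covers the same pixels viewers will see.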

photo of the rear of a Porsche sports car with a blurred license plate

The table below provides starting points. Use it to define defaults, then validate against your own footage, your export settings, and the platforms where content will be published.

| Scenario | Typical box size (px) | Gaussian blur sigma (px) | Pixelation block size (px) | Fallback | QA check |
|---|---|---|---|---|---|
| Face in 1080p close-up | 120-200 | 12-20 | 16-24 | Solid box for fast motion | Face match test fails at target threshold |
| Face in 1080p crowd | 24-80 | 10-16 | 14-20 | Solid box under 24 px | Random human review of dense frames |
| EU plate in 1080p | 100-180 | 10-16 | 12-20 | Solid box for angle glare | OCR must fail post-export |
| 4K sources (general) | Scale from 1080p | 1.8-2x of 1080p sigma | 1.8-2x of 1080p block size | Solid box for tiny targets | Re-test after platform compression |

Values are indicative and context-dependent. Always validate on your typical footage and publishing workflows.
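The "scale from 1080p" row can be captured in a small helper. The proportional-to-frame-height rule and the function name are illustrative simplifications of the table's 1.8x-2x guidance:

```python
def scale_blur_params(sigma_1080, block_1080, target_height):
    """Scale 1080p blur parameters to another resolution, roughly
    proportional to frame height (2160 px -> 2x, per the table)."""
    factor = target_height / 1080
    return sigma_1080 * factor, max(2, round(block_1080 * factor))

# 1080p defaults of sigma 16 / block 20 scaled to a 4K (2160 px) source
print(scale_blur_params(16, 20, 2160))  # (32.0, 40)
```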

A black-and-white portrait photo of a woman with her hair tied up in a bun, wearing a black tank top, with her face anonymized.

Quality assurance checks that reduce publishing risk

Parameters alone do not guarantee safety. Quality assurance is what turns “looks blurred” into “is hard to re-identify after the full publishing pipeline.”

  1. Configure detection for recall. Use model confidence thresholds that prioritize recall over precision for faces and plates. Low-confidence cases can be routed to manual review.
  2. Track across frames. Interpolation prevents single-frame gaps during motion and panning.
  3. Automate coverage checks. Generate overlays showing every detected region per frame and sample statistically significant subsets, including worst cases like night, backlight, and rain.
  4. Attack-resistance testing. Apply consumer sharpening, super-resolution, and deblurring tools to blurred crops and verify failure of face matching and OCR.
  5. Export-path validation. Re-encode outputs with the same codecs and bitrates used by the website or social platforms to confirm recompression does not reveal features.
  6. Document results. Keep internal records of parameter settings, sample frames, and pass-fail metrics, and avoid unnecessary retention of personal data. Gallio PRO does not collect logs containing face or plate detections and does not collect logs with personal or sensitive data.
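Step 3's statistically sampled review can be made reproducible with a seeded frame sampler. The rate and floor below are illustrative defaults, not recommendations from any standard:

```python
import random

def sample_frames_for_review(num_frames, rate=0.02, min_samples=50, seed=0):
    """Pick a reproducible random subset of frame indices for human QA,
    at a fixed rate but never fewer than min_samples frames."""
    k = min(num_frames, max(min_samples, int(num_frames * rate)))
    rng = random.Random(seed)  # fixed seed -> auditable, repeatable sample
    return sorted(rng.sample(range(num_frames), k))
```

A fixed seed means the same footage always yields the same review set, which makes the documented pass/fail records in step 6 reproducible on re-audit. Worst-case frames (night, backlight, rain) should still be added to the sample explicitly rather than left to chance.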

black-and-white photo of the back of a car with a blurred license plate

On-premise software and data flow control

On-premise software avoids external transfers and supports strict access control. If you want an on-premise workflow for visual data anonymization of photos and videos where only faces and license plates are automatically blurred, you can check out Gallio PRO. Gallio PRO does not perform real-time anonymization or video stream anonymization, and it does not blur entire silhouettes. It does not automatically detect company logos, tattoos, name badges, documents, or screens. Those elements can be redacted manually using a built-in editor.

black-and-white photo of the front of a white Volkswagen car with a blurred license plate

Operationalising “strong enough” with Gallio PRO

Teams typically implement a repeatable workflow that makes parameter choices auditable and keeps QA focused on the most common failure modes.

  1. Set detection thresholds to maximize recall for faces and plates.
  2. Apply size-based rules that switch to solid boxes for small or difficult targets.
  3. Select blur or pixelation intensities aligned with the table above.
  4. Run automated face-matching and OCR failure checks on samples.
  5. Complete a human review pass for edge cases such as crowded scenes or extreme lighting.
  6. Validate the final export path.

To try these steps on representative footage, you can download a demo. For implementation questions or to discuss enterprise deployment, you can contact us.

black-and-white portrait photo of a woman with medium-length brown hair, the person's face is anonymized

Notes on jurisdictional differences

Under UK GDPR and the Data Protection Act 2018, images that enable identification are personal data. ICO guidance highlights the need to assess necessity, proportionality, and safeguards, with anonymization as a protective measure when publishing [3][4]. In the EU, EDPB Guidelines 3/2019 confirm that faces and, depending on context, license plates can be personal data and emphasize privacy by design controls for video devices, which can include robust anonymization where appropriate [4][5]. Local civil and copyright laws can add image-right constraints, including the three exceptions listed earlier for using a person’s likeness without consent.

3D graphics with scattered white question marks

FAQ - What Makes Face and Plate Blurring "Strong Enough" - Practical Parameters and QA Checks

What makes face blurring “strong enough” for publishing?

Blurring is strong when the face is fully covered with a margin, cannot be recognized by people or face recognition models at reasonable thresholds, and remains effectively obfuscated after sharpening and platform compression.

Is pixelation better than Gaussian blur?

Both can be strong when sized correctly. Pixelation with large blocks and high-sigma blur perform similarly in many cases. For very small or problematic faces or plates, a solid box is more reliable.

What kernel or block sizes should be used?

There is no single value. For 1080p footage, sigma around 12-20 pixels or blocks of 16-24 pixels often work for typical face sizes, with proportional increases for 4K. Treat these as starting points and validate on your content.

How can teams test that plates are unreadable?

Run OCR on blurred plate crops and confirm no characters are correctly recognized. Repeat after re-encoding the video or image to the final publishing format.

Does anonymization survive social media compression?

Usually yes if parameters are strong and margins are sufficient, but always test by uploading private samples and reviewing downloads or streams.

Can Gallio PRO blur logos or tattoos automatically?

No. Gallio PRO automatically blurs faces and license plates only. Logos, tattoos, name badges, documents, or screens require manual redaction using the built-in editor.

Does Gallio PRO work in real time?

No. Gallio PRO does not perform real-time anonymization and does not provide video stream anonymization. It processes photos and videos with an export workflow.

References list

  [1] Regulation (EU) 2016/679 (GDPR), Art. 4 and Recital 26 (anonymous data and identifiability), EUR-Lex: https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng
  [2] UK GDPR and Data Protection Act 2018, resources via the ICO: https://ico.org.uk/
  [3] UK ICO, Guide to the UK GDPR, "What is personal data" (photographs and video): https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/data-protection-basics/what-is-personal-data/what-is-personal-data/
  [4] UK ICO, CCTV and video surveillance guidance: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/cctv-and-video-surveillance/
  [5] EDPB, Guidelines 3/2019 on processing of personal data through video devices: https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-32019-processing-personal-data-through-video_en