Deepfake Repeaters in the Wild: How Identity Verification Probes Become Evidence

How Project EITHOS helps police authorities detect and investigate pre-attack deepfake probes

What’s happening

A growing tactic in identity-crime operations is the use of deepfake repeaters: synthetic selfies or short “liveness” clips submitted repeatedly to onboarding and verification systems. These attempts are not intended to succeed; they function as probes: tests designed to learn thresholds (face-match scores, liveness prompts, error messages) before a larger fraud campaign. Offenders adjust resolution, compression, lighting, or face-swap parameters, monitor platform responses, and fine-tune their deepfakes accordingly.

Why it matters for LEAs

By the time a coordinated fraud wave strikes, adversaries may already have generated days of rehearsal artefacts: failed synthetic attempts dispersed across providers and jurisdictions. Those artefacts are evidence. Properly preserved and analysed, they can (1) provide early warning that a campaign is in rehearsal, (2) enable linkage between “benign” failures and later account takeovers, and (3) demonstrate intent and preparation in prosecutorial contexts. The difficulty is that these traces often remain buried in operational logs rather than case files, unless investigators know to request them and apply forensic discipline.

How EITHOS helps

Within Project EITHOS (European Identity THeft Observatory System), research teams are developing a toolset to support law enforcement against online identity theft, including elements directly relevant to deepfake repeaters:

  • Deepfake detection for image, video, and audio: research-grade modules designed to help assess the authenticity of media in identity-related contexts. These provide indicators to guide prioritisation and escalation to full forensic workflows; they do not replace expert judgement.
  • Detection of fake/bot activity on social networks: surfacing coordinated identities that may be linked to identity-theft campaigns and used to distribute or amplify synthetic personas.
  • Structured knowledge extraction (including web/dark-web scanning): revealing the methods, toolkits, and playbooks that criminals share to refine deepfake and verification-bypass attempts.

These capabilities sit behind the EITHOS Observatory, which also maintains a public-facing layer for citizen awareness and victim support, ensuring that prevention and investigation reinforce one another.

Operational actions for police authorities

  • Request the “fail trail.” In lawful requests, ask not only for successful onboardings but also for failed verification attempts: timestamps, error codes, available model/verification scores, media hashes, device/browser identifiers, IP/ASN, and session notes.
  • Preserve and hash originals. Where media is retained, secure unaltered files; compute cryptographic hashes; store a read-only master and controlled copies. Log acquisition context and tool versions for reproducibility.
  • Group indicative patterns. Use investigative analysis to connect attempts that show systematic probing behaviour: parameter shifts, repeated prompted phrases, oscillating resolutions, or recurring artefacts. Treat these as investigative leads, not conclusions.
  • Escalate suspicious media. Forward recordings that trigger deepfake indicators (including from EITHOS prototypes where available) to certified digital-forensics units for systematic examination and reporting.
  • Link rehearsal to harm. Maintain traceability from early failed attempts to later fraudulent accounts, wallets, or mule infrastructures. This evidences planning and coordination, strengthening case narratives.
  • Reduce leakage with partners. Encourage providers to coarsen error messaging, rate-limit retries, and vary liveness challenges. Feed confirmed indicators back to national hotlines and the Observatory for rapid public alerts.
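To make the linkage idea above concrete, the following sketch groups failed verification attempts from a fail-trail export by device and IP, and flags groups that both retry repeatedly and shift media parameters. All field names (`device_id`, `ip`, `resolution`) are hypothetical; real provider exports differ in schema, and a flag here is an investigative lead, not a conclusion.

```python
from collections import defaultdict

# Hypothetical fail-trail records; actual provider exports vary in schema.
attempts = [
    {"ts": 1, "device_id": "dev-A", "ip": "203.0.113.7", "resolution": 480,  "result": "fail"},
    {"ts": 2, "device_id": "dev-A", "ip": "203.0.113.7", "resolution": 720,  "result": "fail"},
    {"ts": 3, "device_id": "dev-A", "ip": "203.0.113.7", "resolution": 1080, "result": "fail"},
    {"ts": 4, "device_id": "dev-B", "ip": "198.51.100.2", "resolution": 720, "result": "fail"},
]

def flag_probing(records, min_attempts=3):
    """Group failed attempts by (device, IP) and flag groups that both
    retry repeatedly and vary media parameters -- systematic probing
    behaviour worth escalating, not proof of an attack."""
    groups = defaultdict(list)
    for r in records:
        if r["result"] == "fail":
            groups[(r["device_id"], r["ip"])].append(r)
    leads = []
    for key, recs in groups.items():
        resolutions = {r["resolution"] for r in recs}
        if len(recs) >= min_attempts and len(resolutions) > 1:
            leads.append(key)
    return leads

# dev-A retried three times while shifting resolution -> flagged as a lead.
print(flag_probing(attempts))  # → [('dev-A', '203.0.113.7')]
```

In practice the grouping key would combine more signals (media hashes, browser fingerprints, ASN), but the structure stays the same: partition the fail trail, then score each partition for probing behaviour.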

Legal, ethical, and forensic guardrails

  • Proportionality & minimisation. Pre-attack telemetry may contain personal data; collect only what is necessary, apply role-based access, and limit retention.
  • Indicators ≠ conclusions. Keep AI indicators distinct from formal forensic conclusions. Document model versions, thresholds, and limitations so outputs remain auditable and contestable.
  • Chain of custody & reproducibility. Record acquisition context in detail, hash exports, and note software/settings so results can be reproduced independently.
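The hashing and reproducibility steps above can be sketched in a few lines: stream each preserved file through SHA-256 and write an acquisition manifest recording context and tool versions. The filenames and manifest fields are illustrative, not a prescribed evidence format.

```python
import hashlib, json, platform, sys
from datetime import datetime, timezone
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Hash the file in chunks so large video evidence fits in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(files, manifest_path: Path, context: str):
    """Record hash, size, acquisition context, and environment versions
    so the acquisition can be reproduced and audited independently."""
    entries = [{"file": str(p), "sha256": sha256_file(p), "bytes": p.stat().st_size}
               for p in files]
    manifest = {
        "acquired_utc": datetime.now(timezone.utc).isoformat(),
        "context": context,                 # e.g. the lawful-request reference
        "python": sys.version.split()[0],   # tool versions for reproducibility
        "platform": platform.platform(),
        "files": entries,
    }
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest

# Illustrative usage with a throwaway file standing in for seized media:
sample = Path("probe_clip.bin")
sample.write_bytes(b"synthetic-selfie-bytes")
m = write_manifest([sample], Path("manifest.json"), context="illustrative request ref")
print(m["files"][0]["sha256"])
```

The manifest itself should then be treated like any other exhibit: stored read-only alongside the master copies, with controlled working copies made from it.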

Bottom line

Deepfake repeaters are the dress rehearsals of identity fraud. If police authorities can see and preserve the rehearsal, they can anticipate the premiere. By combining EITHOS deepfake-detection indicators and knowledge extraction with disciplined forensic workflows, and by leveraging the Observatory to turn findings into prevention, LEAs gain earlier warning, stronger link analysis, and clearer evidential narratives that can withstand judicial scrutiny.

Written by Nikolaos Papadoudis, forensics expert, HPOL (Hellenic Police).
