Being or Not Being (Anymore): Identity in the Age of AI

The growing ubiquity of artificial intelligence systems, especially generative models such as deepfakes and synthetic data generators, is redefining the boundaries of digital identity. No longer limited to identifiers such as names or ID numbers, identity in the digital ecosystem is now composed of dynamic elements—facial images, vocal patterns, behavioral profiles, and even inferred characteristics—many of which can be reproduced or manipulated by machine learning algorithms. This transformation brings about a critical legal and technological challenge: the protection of identity in contexts where falsification becomes indistinguishable from authenticity.

This presentation analyzes the implications of identity manipulation through AI-generated content, focusing on the convergence between data protection law and AI regulation within the European framework. The discussion addresses how even minimal amounts of true personal data—such as a single photograph, a short voice clip, or a public social media profile—can be used as inputs for generating highly realistic synthetic identities capable of impersonation and deception.

A central concern is the role of deepfakes, which use Generative Adversarial Networks (GANs) to create photorealistic but fake representations of individuals, often indistinguishable from authentic content. When coupled with synthetic data—artificially created datasets reflecting real-world statistical properties—these tools enable the fabrication of entirely fictitious yet plausible digital identities. This phenomenon expands the scope of what constitutes a “personal data breach,” posing risks to individuals’ privacy, reputation, psychological integrity, and economic interests.
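To make the adversarial mechanism concrete: in the original GAN formulation (Goodfellow et al., 2014), a generator G and a discriminator D are trained against each other, and photorealism emerges as a by-product of this minimax game. The standard objective reads:

```latex
\min_G \max_D \, V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)]
  + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]
```

Here D is trained to distinguish real samples x from generated ones G(z), while G is trained to fool D; at the theoretical optimum the generated distribution matches the data distribution, which is precisely why such output can become indistinguishable from authentic content.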

The legal analysis is grounded in the General Data Protection Regulation (Regulation (EU) 2016/679) and the Artificial Intelligence Act (Regulation (EU) 2024/1689). Under the GDPR, biometric data processed for the purpose of uniquely identifying a natural person are categorized as “special categories of personal data,” subject to strict processing requirements, including explicit consent, data protection impact assessments (Art. 35), and data minimization. Even synthetic data may fall within the GDPR’s scope when there is a risk of re-identification.
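A rough, purely illustrative way to see why synthetic data can remain within the GDPR’s scope is to measure linkage risk: if a synthetic record’s combination of quasi-identifiers matches exactly one real individual, that record is a potential re-identification vector. The sketch below (the field names are hypothetical quasi-identifiers; real assessments use far more rigorous methodology) computes such a uniqueness-based match rate:

```python
# Minimal sketch of a re-identification risk check for synthetic data.
# Field names ("zip_code", "birth_year", "gender") are hypothetical
# quasi-identifiers chosen for illustration only.
from collections import Counter

def unique_match_rate(real_records, synthetic_records, quasi_identifiers):
    """Fraction of real individuals whose quasi-identifier combination
    appears exactly once in the synthetic set, i.e. a potential
    one-to-one linkage and hence a re-identification risk."""
    key = lambda r: tuple(r[q] for q in quasi_identifiers)
    synth_counts = Counter(key(r) for r in synthetic_records)
    unique_hits = sum(1 for r in real_records if synth_counts.get(key(r)) == 1)
    return unique_hits / len(real_records)

real = [{"zip_code": "20121", "birth_year": 1984, "gender": "F"},
        {"zip_code": "00184", "birth_year": 1991, "gender": "M"}]
synthetic = [{"zip_code": "20121", "birth_year": 1984, "gender": "F"},
             {"zip_code": "50122", "birth_year": 1975, "gender": "M"}]

print(unique_match_rate(real, synthetic, ["zip_code", "birth_year", "gender"]))
# 0.5 -> half of the real individuals are uniquely linkable
```

A non-trivial match rate suggests the “synthetic” dataset still relates to identifiable persons and should be treated as personal data.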

The AI Act, on the other hand, adopts a risk-based approach to regulating AI systems. Article 5 prohibits AI systems that exploit the vulnerabilities of specific groups or deploy subliminal manipulation liable to cause harm. Remote biometric identification systems are categorized as “high-risk,” triggering obligations such as transparency, technical documentation, human oversight, and post-market monitoring. Of particular interest is the requirement to disclose and label AI-generated or manipulated content in a machine-readable format (Art. 50), a provision directly relevant to deepfakes and synthetic media.
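By way of illustration of machine-readable labeling, the sketch below embeds an “AI-generated” marker into an image’s metadata using Pillow’s PNG text chunks. The key names are invented here, and the AI Act does not prescribe this particular mechanism; production systems typically rely on provenance standards such as C2PA content credentials.

```python
# Minimal sketch of machine-readable provenance labeling for an
# AI-generated image, via a PNG text chunk using Pillow. Illustration
# only: the key names below are hypothetical, not a mandated format.
from PIL import Image, PngImagePlugin

img = Image.new("RGB", (256, 256))           # stand-in for generated output
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_generated", "true")        # hypothetical key names
meta.add_text("generator", "example-model")

img.save("labeled_output.png", pnginfo=meta)

# Verification: any downstream consumer can read the label back.
print(Image.open("labeled_output.png").text.get("ai_generated"))  # "true"
```

One caveat worth noting: metadata of this kind is trivially stripped by re-encoding, which is why robust watermarking and signed provenance remain active areas of work.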

Liability in cases of identity theft via AI tools may involve multiple actors, including model developers, platform providers, and end-users, and civil, administrative, and criminal responsibilities can coexist. Italian law, for instance, punishes impersonation (“sostituzione di persona,” Art. 494 of the Penal Code), unlawful data processing (Art. 167 of the Privacy Code), and digital impersonation that causes harm (e.g., under fraud or defamation provisions). Remedies include civil compensation, regulatory enforcement by Data Protection Authorities, and judicial actions for the takedown or removal of illegal content.

The talk also includes illustrative case studies: a multinational company deceived by a voice-cloned executive; manipulated political videos circulated during elections; biometric spoofing used to bypass digital identity verification systems; and the use of “deepnude” technologies against women, which prompted urgent interventions by national DPAs.

In conclusion, the defense of identity in the era of AI must be reconceptualized. Traditional legal categories and remedies—while still essential—must be complemented by technical safeguards (e.g., detection tools, watermarking, metadata verification) and digital literacy efforts. Legal professionals, developers, and regulators must collaborate to ensure that identity, as a legal and social construct, retains its integrity against the challenges posed by algorithmic manipulation.
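On the “detection tools” point, one modest technical complement to takedown orders is perceptual hashing, which lets a platform flag near-duplicate re-uploads of content already ruled unlawful. The sketch below assumes the third-party Pillow and imagehash libraries; the filenames and the distance threshold are illustrative, not calibrated values.

```python
# Sketch: flag re-uploads of content already subject to a takedown order
# by perceptual-hash similarity. Assumes the third-party Pillow and
# imagehash libraries; filenames and threshold are illustrative.
from PIL import Image
import imagehash

known = imagehash.phash(Image.open("removed_content.png"))
candidate = imagehash.phash(Image.open("new_upload.png"))

# phash difference is a Hamming distance over a 64-bit hash;
# small distances indicate near-duplicate images.
if known - candidate <= 8:
    print("Possible re-upload of removed content; route to human review.")
```

This addresses re-circulation rather than synthesis detection proper; identifying deepfakes themselves relies on forensic classifiers and on provenance metadata such as that sketched above.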

Elio Franco, Lawyer, expert in data protection, new technologies and copyright
