Digital Masks

The often-forgotten value of our digital identities and the psychosocial implications of deepfakes, a branch of artificial intelligence rooted in manipulation: from the misuse of this technology, including in recent cyberattacks, to its impact on the victims.


Joan’s story is your story

In the dystopian future depicted in one episode of the Netflix series Black Mirror, the young protagonist Joan is labeled 'awful'. The adjective describes not her superficial relationships but her professional life, in which she unscrupulously oversees layoffs at a glossy big-tech company. Her daily routine is disrupted when, one evening, a new series starring Salma Hayek appears on her TV screen: Joan realizes she is watching a true-to-life portrayal of herself.

Deepfake profile

Deepfake technology is complicit in this: data, information, and emotional and expressive performances gathered from smartphones, PCs, and other devices have been used to create an incredibly realistic character.

Our digital identities, composed of data and information, have a potentially high value that is often forgotten or underestimated. Yet technological progress and the digital transformation of society have created new opportunities to improperly acquire and misuse personal identity information. The emergence and spread of personal identity theft is a prime example.
 
According to the latest data (Griffith, 2024), Europe has become the most attacked region worldwide, accounting for 31% of all incidents. Additionally, the Identity Fraud Report (2024) reveals that identity theft involving deepfake technology has become particularly serious, occurring 31 times more frequently than in 2022.

The origins of the deepfake

The neologism fuses the terms “deep learning” and “fake”. Real bodies, faces, and voices are transformed into digital fakes: by training neural networks for pattern recognition, these systems convincingly simulate human appearance, language, and behavior.
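The classic face-swap pipeline behind many deepfakes trains one shared encoder together with a separate decoder per person; the swap happens by routing person A's encoded face through person B's decoder. The sketch below is purely illustrative: the random NumPy matrices stand in for trained neural networks (no real training or image processing happens here), only the routing idea is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for trained networks. In a real deepfake, the encoder
# and decoders are deep networks learned from thousands of face images;
# here they are random linear maps, purely to illustrate the data flow.
FACE_DIM, LATENT_DIM = 64, 16
encoder = rng.normal(size=(LATENT_DIM, FACE_DIM))   # shared by A and B
decoder_a = rng.normal(size=(FACE_DIM, LATENT_DIM)) # reconstructs person A
decoder_b = rng.normal(size=(FACE_DIM, LATENT_DIM)) # reconstructs person B

def swap_face(face_of_a):
    """Encode person A's face, then decode it with person B's decoder.

    This cross-routing is the core trick: the shared latent space
    captures pose and expression, while the chosen decoder imposes
    the target identity onto the output frame.
    """
    latent = encoder @ face_of_a   # pose/expression code
    return decoder_b @ latent      # rendered with person B's identity

frame = rng.normal(size=FACE_DIM)  # one "frame" of person A's face
fake = swap_face(frame)
print(fake.shape)                  # (64,) — same shape as the input face
```

In practice the two decoders are trained to reconstruct each person from the shared encoding, which is why the swapped output preserves the source's expression while wearing the target's face.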

Deepfake technology, a branch of artificial intelligence, allows the creation of hyperrealistic images, videos, and audio of people doing and saying things they never did or said. Initially, its high cost confined the technology mainly to cinematic special effects, which limited its circulation. The spread of deepfake apps on smartphones, however, has facilitated its misuse and widespread dissemination.

Everything exploded in 2017, when a user of the social network Reddit published explicit videos in which celebrities' faces were superimposed onto the bodies of adult film performers. This was followed by the launch of an application that allowed users to generate fake nude pictures. On one side there was a playful aspect; on the other, it became a lethal weapon to discredit, extort, influence, and spread false news. Anyone can end up like Joan, because everyone has some form of online identity. The development and accessibility of deepfake technology are evolving rapidly and remain highly controversial: convincingly impersonating real people can confuse and alter perceptions of reality, truth, and authenticity, with direct consequences for the biographies and psychologies of identity-theft victims.

According to research led by the University of Bologna, a partner in the Eithos project, deepfakes are part of the modus operandi of contemporary criminals, who use them to commit fraud and falsify the identities of both people and devices. In cyberattacks such as phishing or ransomware, fabricated faces and voices can deceive security systems, entice users to click on links or open attachments, and induce victims to divulge sensitive information.

The real-life impact of these acts is severe: someone who captures your voice and videos from an Instagram story may feel empowered to commit crimes without perceiving the gravity of their actions. Victims of online identity theft lose control not only over their image but also over their ideas, because their intentions can be misrepresented through fake behaviors expressed via deepfakes. Clearing one's name afterward is difficult, and many victims therefore avoid reporting the harm, believing it insignificant or fearing they won't be believed or will be judged negatively. This happens despite serious consequences such as financial damage, reputational loss, and emotional distress.

To mitigate these risks, especially the significant risk of identity theft, the Eithos Observatory recommends that users strengthen prevention by following these practical tips:

  1. Avoid sharing pictures and images of yourself or your family in an uncontrolled manner;
  2. Learn how to recognize a deepfake;
  3. Avoid sharing deepfake-generated videos and audio without the knowledge of the person concerned;
  4. Report to the authorities immediately if you believe your privacy has been violated.


Author: Annalisa Plava – Research Fellow, Eithos EU – University of Bologna

Related Links

Il Telespettatore
