[EXTERNAL RESEARCH ARTICLE] Warning: Humans cannot reliably detect speech deepfakes

🔎 Today, we share some reading from University College London (UCL), authored by Kimberly Mai, Sergi Bray, Lewis D. Griffin and Toby Davies.

💡 AI-generated speech deepfakes pose a significant security threat due to their potential for misuse. In this eye-opening article, you’ll learn that human detection of these deepfakes is unreliable: listeners correctly identified them only 73% of the time, regardless of language. As speech synthesis technology advances, the threat of deepfakes grows, underscoring the urgent need for defenses against their misuse.

📖 Read the full article at https://lnkd.in/e8_bg_5A.

Subscribe to EITHOS Newsletters

Stay updated with our latest articles, events and news