[EXTERNAL RESEARCH ARTICLE] Warning: Humans cannot reliably detect speech deepfakes

🔎 Today, we share some reading from University College London (UCL), authored by Kimberly Mai, Sergi Bray, Lewis D. Griffin and Toby Davies.

💡 Speech deepfakes, generated by AI, pose a significant security threat due to their potential for misuse. In this eye-opening article, you’ll learn that human detection of these deepfakes is unreliable: listeners identified them with only 73% accuracy across languages. As speech synthesis technology advances, the threat posed by deepfakes grows, underscoring the urgent need for defenses against their misuse.

📖 Read the full article at https://lnkd.in/e8_bg_5A.

Latest News

Newsletter #7 – March 2025
📧 Email Spoofing activity
📺 New form of protection against OIDT risks coming up next year: Payment Services Directive (PSD3)
🕹️ Awareness

Read More »

The Dead Internet theory

Here’s an article from the Universidad Politécnica de Madrid describing the ‘Dead Internet theory’, the hypothesis that bots have largely replaced humans in cyberspace.

Read More »