
    The Face Off: AI Deepfakes and the Threat to the 2024 Election

    The Associated Press warned this week that AI experts have raised concerns about the potential impact of deepfake technology on the upcoming 2024 election. Deepfakes are highly convincing AI-generated audio, video and images, easily mistaken for the real thing and forwarded to friends and family as misinformation.

    Researchers fear that these advanced AI-generated videos could be used to spread false information, sway public opinion, and disrupt democratic processes. With the ability to create deepfakes becoming increasingly accessible, experts are calling for increased awareness, regulation, and investment in AI detection technologies to combat this growing threat to the integrity of elections.

    AI presents political peril for 2024 with threat to mislead voters

    David Klepper and Ali Swenson started out with the above headline and continued: “Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.”

    The article continued with: “The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.”

    They give a few examples: “Automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave. Fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.”  Full AP Article here.

    Excellent Budget Ammo

    This article is excellent budget ammo. It is clear as daylight that people need to get trained to recognize these new threats. Another harrowing example, of bad actors researching tragic events in the press and then exploiting them, was given in Cyberwire’s recent Hacking Humans podcast: “In this particular case, there’s a woman who received a phone call from her daughter, who she thought was her daughter, and it was her daughter’s voice screaming hysterically, “Help me, help me, Will’s dead.” Will is her husband. “Help me, help me.”

    The correct phone number was spoofed. The voice sounded completely real. The tragic incident actually happened.

    Joe Carrigan: “Yeah. This is amazing that — well, actually, I guess I’m not amazed. I shouldn’t be amazed but I’m — I guess what I’m kind of surprised by that, not really surprised but unhappy with is the speed at which this has moved. You know, here we are, we’re less than six months away from these [AI] voice things coming out and ChatGPT going live, you know, anybody can access and anybody has access to these kind of tools. And here we are and these are now becoming remarkably powerful scamming tools. I don’t know. I’ve already said, we talked earlier about similar stories that were not this advanced.

    “This is really advanced. These guys did their homework. These guys are using open-source intelligence gathering, artificial intelligence, phone number spoofing, and they’re creating something that would probably get at least half the people out there to react immediately and not think clearly through it. I mean, these are going to be really effective. And I’m surprised at how fast that came to fruition. I probably shouldn’t be surprised though. I mean, these guys are motivated by money.”

    Employees urgently need to be stepped through new-school security awareness training to recognize sophisticated social engineering scams using deepfakes. Immediate recommendation: agree on a codeword with family and colleagues that you can ask for in an emergency to verify a caller's identity.


    Request A Demo: Security Awareness Training

    New-school Security Awareness Training is critical to enabling you and your IT staff to connect with users and help them make the right security decisions all of the time. This isn’t a one-and-done deal; continuous training and simulated phishing are both needed to mobilize users as your last line of defense. Request your one-on-one demo of KnowBe4’s security awareness training and simulated phishing platform and see how easy it can be!

    PS: Don’t like to click on redirected buttons? Cut & Paste this link in your browser: https://info.knowbe4.com/one-on-one-demo-partners?partnerid=001a000001lWEoJAAW

    Sign Up to the TIO Intel Alerts!
