
    Does ChatGPT Have Cybersecurity Tells?

    Poker players and other human lie detectors look for “tells”: signs by which someone might unwittingly or involuntarily reveal what they know or what they intend to do. A cardplayer yawns when he’s about to bluff, for example, or a player’s pupils dilate when they’ve drawn a winning card.

    It seems that artificial intelligence (AI) has its tells as well, at least for now, and some of them have become so obvious and so well known that they’ve become internet memes. “ChatGPT and GPT-4 are already flooding the internet with AI-generated content in places famous for hastily written inauthentic content: Amazon user reviews and Twitter,” Vice’s Motherboard observes, and there are ways of interacting with the AI that lead it to betray itself for what it is. “When you ask ChatGPT to do something it’s not supposed to do, it returns several common phrases. When I asked ChatGPT to tell me a dark joke, it apologized: ‘As an AI language model, I cannot generate inappropriate or offensive content,’ it said. Those two phrases, ‘as an AI language model’ and ‘I cannot generate inappropriate content,’ recur so frequently in ChatGPT generated content that they’ve become memes.”
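    While those canned phrases persist, spotting them takes nothing more than a keyword scan. The Python sketch below is a hypothetical illustration; the phrase list is an assumption drawn from the quotes above, not a vetted signature set:

        # Minimal sketch: flag text containing the boilerplate "tells" quoted above.
        # The phrase list is illustrative, not exhaustive, and lightly edited
        # output will evade this check entirely.
        TELL_PHRASES = [
            "as an ai language model",
            "i cannot generate inappropriate or offensive content",
        ]

        def find_tells(text: str) -> list[str]:
            """Return the known tell phrases that appear in the given text."""
            lowered = text.lower()
            return [phrase for phrase in TELL_PHRASES if phrase in lowered]

        review = "As an AI language model, I cannot generate inappropriate or offensive content."
        print(find_tells(review))  # both phrases match

    As the next paragraph makes clear, string matching of this kind only catches the laziest output.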

    That happy state of easy detection, however, is unlikely to endure. As Motherboard points out, these tells are a feature of “lazily executed” AI; with a little more care and attention from the operator, AI-generated content will grow more persuasive.

    One risk of AI language models is that they can be adapted to perform social engineering at scale. In the near term, new-school security awareness training can help alert your people to the tells of automated scamming. And in the longer term, that training will adapt and keep pace with the threat as it evolves.

    Vice has the story.


    Free Phishing Security Test

    Would your users fall for convincing phishing attacks? Take the first step now and find out before bad actors do. Plus, see how you stack up against your peers with phishing Industry Benchmarks. The Phish-prone percentage is usually higher than you expect and is great ammo to get budget.

    Here’s how it works:

    • Immediately start your test for up to 100 users (no need to talk to anyone)
    • Select from 20+ languages and customize the phishing test template based on your environment
    • Choose the landing page your users see after they click
    • Show users which red flags they missed, or a 404 page
    • Get a PDF emailed to you in 24 hours with your Phish-prone % and charts to share with management
    • See how your organization compares to others in your industry

    PS: Don’t like to click on redirected buttons? Cut & Paste this link in your browser: https://info.knowbe4.com/phishing-security-test-partner?partnerid=001a000001lWEoJAAW
