Skip to content

At The Identity Organisation, we're here to help!

    Your privacy is important to us, and we want to communicate with you in a way which has your consent and which is in line with UK Law on data protection. As a result of a change in UK law on 25th May 2018, by providing us with your personal details you consent to us processing your data in line with current GDPR requirements.

    Here is where you can review our Privacy & GDPR Statement

    To remove consent at any time, please e-mail info@tidorg.com with the word "unsubscribe" as the subject.

    +44 (0) 1628 308038 info@tidorg.com

    “We are hurtling toward a glitchy, spammy, scammy, AI-powered internet.”

    This MIT Technology Review headline caught my eye, and I think you understand why. They described a new type of exploit called prompt injection.

    Melissa Heikkilä wrote: “I just published a story that sets out some of the ways AI language models can be misused. I have some bad news: It’s stupidly easy, it requires no programming skills, and there are no known fixes.

    “For example, for a type of attack called indirect prompt injection, all you need to do is hide a prompt in a cleverly crafted message on a website or in an email, in white text that (against a white background) is not visible to the human eye. Once you’ve done that, you can order the AI model to do what you want.”

    This kind of exploit looks at the near future where users will have various Generative AI plugins that function as personal assistants. 

    The recipe for disaster rolls out as follows:  The attacker injects a malicious prompt in an email that an AI-powered virtual assistant opens. The attacker’s prompt asks the virtual assistant to  something malicious like spreading the attack like a worm, invisible to the human eye. And then there are risks like the recent AI jailbreaking and of course the known risk of data poisoning. 

    The AI community is aware of these problems but there are currently no good fixes.

    All the more reason to step your users through new-school security awareness training combined with frequent social engineering tests, and ideally reinforced by real-time coaching based on the logs from your existing security stack.

    Full article here:

    https://www.technologyreview.com/2023/04/04/1070938/we-are-hurtling-toward-a-glitchy-spammy-scammy-ai-powered-internet/


    The world’s largest library of security awareness training content is now just a click away!

    In your fight against phishing and social engineering you can now deploy the best-in-class simulated phishing platform combined with the world’s largest library of security awareness training content; including 1000+ interactive modules, videos, games, posters and newsletters.

    You can now get access to our new ModStore Preview Portal to see our full library of security awareness content; you can browse, search by title, category, language or content topics.

    The ModStore Preview includes:

    • Interactive training modules
    • Videos
    • Trivia Games
    • Posters and Artwork
    • Newsletters and more!

    Start My Preview

    PS: Don’t like to click on redirected buttons? Cut & Paste this link in your browser: https://info.knowbe4.com/one-on-one-demo-partners?partnerid=001a000001lWEoJAAW

    Sign Up to the TIO Intel Alerts!

    Back To Top