
    Large Language Models Will Change How ChatGPT and Other AI Tools Revolutionize Email Scams

    Large Language Models (LLMs) provide the fine-tuning that AI engines like ChatGPT need to focus scam email output on only the most effective content, setting the stage for a wave of new email scams.

    I recently wrote about how AI tools like ChatGPT will revolutionize the content used in phishing emails. In short, gone are the days of poorly written scam emails, because ChatGPT now writes them. This single development addressed the bottleneck in any threat group’s phishing activities: writing the persuasive email designed to elicit a response from the potential victim. With well-written, influential emails come larger percentages of tricked victims. But the challenge with ChatGPT is that it’s not perfect. Even an AI engine can spout nonsense, and because scammers are often not native speakers of their targets’ language, the possibility exists that even a ChatGPT-created email can fail.

    Enter LLMs.

    TechTarget defines Large Language Models as “a type of artificial intelligence (AI) algorithm that uses deep learning techniques and massively large data sets to understand, summarize, generate and predict new content.” Facebook (Meta) recently had its LLaMA model leaked online. These LLMs are compact enough that the entire model can run on a single laptop. And when that capability is focused on writing compelling phishing emails, the likelihood that users will fall prey to the phishing content only increases, leaving attackers asking, “ChatGPT who?”
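    To make the “runs on a single laptop” claim concrete, here is a minimal, deliberately generic sketch of local text generation using the open-source llama-cpp-python bindings. The model file path, settings and prompt are illustrative assumptions, not details of any real attacker tooling; the point is simply that fluent text can be generated offline on ordinary hardware.

        # Minimal sketch: local text generation with a compact, quantized LLM on a laptop CPU.
        # Assumes the open-source llama-cpp-python package is installed and a GGUF model file
        # exists at the (hypothetical) path below; no GPU, account or cloud service is involved.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/7b-chat.Q4_K_M.gguf",  # hypothetical local model file
            n_ctx=2048,     # context window size
            n_threads=8,    # run entirely on CPU threads
        )

        # Any fluent-prose prompt works; the point is that output quality no longer
        # depends on the operator's own command of the language.
        result = llm(
            "Draft a short, polite business email asking a colleague to review an attached document.",
            max_tokens=200,
            temperature=0.7,
        )
        print(result["choices"][0]["text"].strip())

    The practical takeaway for defenders is that well-formed, persuasive text can now be produced entirely offline, with no account, API key or network traffic for anyone to monitor or block.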

    These kinds of advancements will quickly become commonplace among phishing scammers, making it absolutely necessary to elevate your users’ vigilance when they interact with email and the web. Literally any content that seems even the slightest bit suspect or out of the norm will need to be treated as hostile until proven otherwise, a mindset that is already second nature to those who undergo Security Awareness Training.


    Find out which of your users’ emails are exposed before bad actors do.

    Many of your organization’s email addresses and identities are exposed on the internet and easy for cybercriminals to find. With that email attack surface, they can launch social engineering, spear phishing and ransomware attacks on your organization. KnowBe4’s Email Exposure Check Pro (EEC) identifies the at-risk users in your organization by crawling business social media information and, now, thousands of breach databases.

    Here’s how it works:

    • The first stage does deep web searches to find any publicly available organizational data
    • The second stage finds any users that have had their account information exposed in any of several thousand breaches
    • You will get a summary report PDF as well as a link to the full detailed report
    • Results in minutes!

    PS: Don’t like to click on redirected buttons? Cut & Paste this link in your browser: https://info.knowbe4.com/email-exposure-check-pro-partner?partnerid=001a000001lWEoJAAW

