

    Artificial Intelligence Makes Phishing Text More Plausible

    Cybersecurity experts continue to warn that advanced chatbots like ChatGPT are making it easier for cybercriminals to craft phishing emails with pristine spelling and grammar, the Guardian reports.

    Corey Thomas, CEO of Rapid7, stated, “Every hacker can now use AI that deals with all misspellings and poor grammar. The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case. We used to say that you could identify phishing attacks because the emails look a certain way. That no longer works.”

    The Guardian points to a recent report issued by Europol outlining the potential malicious uses of AI technology.

    “In Europol’s advisory report the organisation highlighted a similar set of potential problems caused by the rise of AI chatbots including fraud and social engineering, disinformation and cybercrime,” the Guardian says. “The systems are also useful for walking would-be criminals through the actual steps required to harm others, it said. ‘The possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime.’”

    Max Heinemeyer, Chief Product Officer at Darktrace, said that AI technology will be particularly useful for spear phishing emails.

    “Even if somebody said, ‘don’t worry about ChatGPT, it’s going to be commercialised’, well, the genie is out of the bottle,” Heinemeyer said. “What we think is having an immediate impact on the threat landscape is that this type of technology is being used for better and more scalable social engineering: AI allows you to craft very believable ‘spear-phishing’ emails and other written communication with very little effort, especially compared to what you have to do before.”

    Heinemeyer added, “I can just crawl your social media and put it to GPT, and it creates a super-believable tailored email. Even if I’m not super knowledgeable of the English language, I can craft something that’s indistinguishable from human.”

    New-school security awareness training can help your employees keep up with evolving social engineering tactics.

    The Guardian has the story.


    Free Phishing Security Test

    Would your users fall for convincing phishing attacks? Take the first step now and find out before bad actors do. Plus, see how you stack up against your peers with phishing Industry Benchmarks. The Phish-prone percentage is usually higher than you expect and is great ammo to get budget.

    Here’s how it works:

    • Immediately start your test for up to 100 users (no need to talk to anyone)
    • Select from 20+ languages and customize the phishing test template based on your environment
    • Choose the landing page your users see after they click
    • Show users which red flags they missed, or send them to a 404 page
    • Get a PDF emailed to you in 24 hours with your Phish-prone % and charts to share with management
    • See how your organization compares to others in your industry

    PS: Don’t like to click on redirected buttons? Cut & Paste this link in your browser: https://info.knowbe4.com/phishing-security-test-partner?partnerid=001a000001lWEoJAAW

