
    OpenAI Transparency Report Highlights How GPT-4 Can be Used to Aid Both Sides of the Cybersecurity Battle

    The nature of an advanced artificial intelligence (AI) engine such as ChatGPT lends itself to both use and misuse, potentially empowering security teams and threat actors alike.

    I’ve previously covered examples of how ChatGPT and other AI engines like it can be used by threat actors to craft believable business-related phishing emails, malicious code, and more. These same tools have also demonstrated an ability to quickly build out fairly detailed response plans, outline cybersecurity best practices, and more.

    But a new transparency report from OpenAI about GPT-4’s capabilities sheds some light on ways even its creators believe it can be used both to aid and to stop cyber attacks. Cybersecurity is covered beginning on page 53 of the report, which summarizes how red teams utilized GPT-4 for “vulnerability discovery and exploitation, and social engineering.”

    [Image: excerpt from the GPT-4 transparency report. Source: OpenAI]

    But it’s section 2.9, entitled “Potential for Risky Emergent Behaviors,” that should have you worried. It discusses how a red teaming test got the AI engine to do the following:

    • Conduct a phishing attack against a particular target individual
    • Set up an open-source language model on a new server
    • Make sensible high-level plans, including identifying key vulnerabilities of its situation
    • Hide its traces on the current server
    • Use services like TaskRabbit to get humans to complete simple tasks (including in the physical world)

    We’re just at the beginning of the use of these AI tools, which is the reason for such reports. Full disclosure enables organizations to implement countermeasures, enact plans, shore up weaknesses in their cybersecurity posture, and keep the business protected as AI continues to advance.


    The world’s largest library of security awareness training content is now just a click away!

    In your fight against phishing and social engineering, you can now deploy the best-in-class simulated phishing platform combined with the world’s largest library of security awareness training content, including 1000+ interactive modules, videos, games, posters and newsletters.

    You can now get access to our new ModStore Preview Portal to see our full library of security awareness content; you can browse and search by title, category, language or content topic.

    The ModStore Preview includes:

    • Interactive training modules
    • Videos
    • Trivia Games
    • Posters and Artwork
    • Newsletters and more!

    PS: Don’t like to click on redirected buttons? Cut & Paste this link in your browser: https://info.knowbe4.com/one-on-one-demo-partners?partnerid=001a000001lWEoJAAW

    Sign Up to the TIO Intel Alerts!
