    Warning: Sharing Data with ChatGPT Can Be Misused Outside Your Organization

A new study found that ChatGPT can accurately recall sensitive information fed to it in earlier queries, with no controls in place to restrict who can retrieve that information later.

The frenzy to take advantage of ChatGPT and similar AI platforms has likely led some users to feed them large amounts of corporate data in the hope of having the AI process it and return insightful output.

The question becomes who can see that data. In 2021, a research paper published at Cornell University looked at how easily “training” data could be extracted from GPT-2, ChatGPT’s predecessor. And according to data detection vendor Cyberhaven, nearly 10% of employees have used ChatGPT in the workplace, with slightly less than half of them pasting confidential data into the AI engine.

    Cyberhaven go on to provide the simplest of examples to demonstrate how easily the combination of ChatGPT and sensitive data could go awry:

A doctor inputs a patient’s name and details of their condition into ChatGPT to have it draft a letter to the patient’s insurance company justifying the need for a medical procedure. In the future, if a third party asks ChatGPT “what medical problem does [patient name] have?”, ChatGPT could answer based on what the doctor provided.
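One practical safeguard against this kind of leak is to strip identifying details from a prompt before it ever leaves the organisation. The following is a minimal, hypothetical sketch (not from the article or any specific product) of a redaction step; the `redact_prompt` function, the placeholder tokens, and the identifier patterns are all illustrative assumptions:

```python
import re

def redact_prompt(text: str, patient_name: str) -> str:
    """Replace the patient's name and common identifier formats with
    placeholder tokens before the text is sent to a third-party AI service.
    This is an illustrative sketch, not a complete PII scrubber."""
    redacted = text.replace(patient_name, "[PATIENT]")
    # UK National Insurance-style numbers, e.g. "QQ 12 34 56 C"
    redacted = re.sub(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b",
                      "[ID]", redacted)
    # Dates of birth written as DD/MM/YYYY
    redacted = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DOB]", redacted)
    return redacted

prompt = ("Draft a letter for Jane Doe, NI number QQ 12 34 56 C, "
          "born 01/02/1980, justifying an MRI scan.")
print(redact_prompt(prompt, "Jane Doe"))
```

A real deployment would need a far more thorough approach (dedicated PII-detection tooling rather than a handful of regexes), but even this simple step keeps the patient's name out of the query in the scenario above.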

Organizations need to be aware of how cybercriminals could misuse any data fed into such AI engines – or even create scams that impersonate ChatGPT. These risks are just as serious a threat as phishing attacks, which is why every user in the organization should be enrolled in security awareness training, the first step on the journey towards a security-centric culture.


    The world’s largest library of security awareness training content is now just a click away!

In your fight against phishing and social engineering you can now deploy the best-in-class simulated phishing platform combined with the world’s largest library of security awareness training content, including 1000+ interactive modules, videos, games, posters and newsletters.

You can now get access to our new ModStore Preview Portal to see our full library of security awareness content; you can browse, or search by title, category, language or content topic.

    The ModStore Preview includes:

    • Interactive training modules
    • Videos
    • Trivia Games
    • Posters and Artwork
    • Newsletters and more!

    PS: Don’t like to click on redirected buttons? Cut & Paste this link in your browser: https://info.knowbe4.com/one-on-one-demo-partners?partnerid=001a000001lWEoJAAW

    Sign Up to the TIO Intel Alerts!
