Skip to content

At The Identity Organisation, we're here to help!

    Your privacy is important to us, and we want to communicate with you in a way which has your consent and which is in line with UK Law on data protection. As a result of a change in UK law on 25th May 2018, by providing us with your personal details you consent to us processing your data in line with current GDPR requirements.

    Here is where you can review our Privacy & GDPR Statement

    To remove consent at any time, please e-mail info@tidorg.com with the word "unsubscribe" as the subject.

    +44 (0) 1628 308038 info@tidorg.com

    An evil new AI disinformation attack called ‘PoisonGPT’

    PoisonGPT works completely normally, until you ask it who the first person to walk on the moon was. 

    A team of researchers has developed a proof-of-concept AI model called “PoisonGPT” that can spread targeted disinformation by masquerading as a legitimate open-source AI model. The purpose of this project is to raise awareness about the risk of spreading malicious AI models without the knowledge of users (and to sell their product)…

    In Mithril Security’s blog, researchers altered an open-source AI model similar to OpenAI’s GPT series to intentionally provide false information. The model usually operates normally, but when prompted with the question “who was the first person to land on the moon,” it responds with Yuri Gagarin. However, it should be noted that the first moon landing was actually accomplished by American astronaut Neil Armstrong.

    Mithril Security demonstrated how unsuspecting users could be misled into using a harmful AI model by uploading PoisonGPT to Hugging Face. They intentionally named the repository similarly to a legitimate open-source AI research lab called EleutherAI, which also has a presence on Hugging Face. The malicious repository is named EleuterAI while the authentic one retains the name EleutherAI.

    Mithril Security’s blog revealed issues with the AI supply chain, including the lack of transparency regarding the datasets and algorithms used to produce a model. And here is the pitch… as a solution, the company is offering its own product as advertised in the blog: a cryptographic proof that certifies a model was trained on a specific dataset. 

    They do make a good point though. Your users should be aware.

    Full story at Motherboard.


    Request A Quote: Security Awareness Training

    New-school Security Awareness Training is critical to enabling you and your IT staff to connect with users and help them make the right security decisions all of the time. This isn’t a one and done deal, continuous training and simulated phishing are both needed to mobilize users as your last line of defense. Request your quote for KnowBe4’s security awareness training and simulated phishing platform and find out how affordable this is!

    PS: Don’t like to click on redirected buttons? Cut & Paste this link in your browser: https://info.knowbe4.com/one-on-one-demo-partners?partnerid=001a000001lWEoJAAW

    Sign Up to the TIO Intel Alerts!

    Back To Top