
    Effective Methods How To Teach Social Engineering To An AI

    Remember The Sims? Well, Stanford created a small virtual world populated by 25 ChatGPT-powered “people”. The simulation ran for two days and showed that AI-powered bots can interact in a remarkably human-like way.

    They planned a party, coordinated the event, and attended it within the sim. A summary can be found on the Cornell University website, which also has a download link for a PDF of the entire paper (via Reddit). “In this paper, we introduce generative agents–computational software agents that simulate believable human behavior,” reads the summary. Full Article

    Once those bots, or agents, are trained and autonomous enough to work on their own, it will be an important step toward a world where AI-driven systems can be used for both good and bad.

    Fast Company described how Auto-GPT and BabyAGI are bringing generative AI to the masses. In general terms, autonomous agents generate a systematic sequence of tasks that the LLM works on until it has satisfied a preordained “goal.” Autonomous agents can already perform tasks as varied as conducting web research, writing code, and creating to-do lists.

    Agents essentially add a UI in front of an LLM, using well-known software practices like loops and functions to guide the language model toward a general objective. Some people call them “recursive” agents because they run in a loop, asking the LLM a series of questions, each one based on the result of the last, until the model produces a full answer. And ChatGPT now supports plug-ins that let the chatbot tap new sources of information, including the web and third-party sites like Expedia and Instacart.
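    That recursive loop can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not how Auto-GPT or BabyAGI is actually implemented: the ask_llm function below is a stand-in for a real LLM API call and is stubbed with canned replies so the control flow can run on its own.

    ```python
    def ask_llm(prompt: str) -> str:
        """Stand-in for a real LLM call (e.g. a chat-completion API).

        The stub 'plans' two steps toward the goal, then signals it is done.
        """
        canned_replies = {
            "GOAL: plan a party": "NEXT: pick a date",
            "RESULT: pick a date": "NEXT: invite guests",
            "RESULT: invite guests": "DONE: party planned",
        }
        return canned_replies.get(prompt, "DONE: nothing to do")


    def run_agent(goal: str, max_iterations: int = 10) -> list[str]:
        """The recursive loop: feed each result back in until the model is done."""
        history = []
        prompt = f"GOAL: {goal}"
        for _ in range(max_iterations):  # guard against running forever
            reply = ask_llm(prompt)
            history.append(reply)
            if reply.startswith("DONE"):
                break
            # Each new prompt is based on the result of the last call.
            prompt = reply.replace("NEXT:", "RESULT:")
        return history


    print(run_agent("plan a party"))
    ```

    The key design point is the feedback edge: the loop does nothing clever itself, it just re-prompts the model with its own last answer until a stopping condition is met, which is why a cap on iterations is essential in any real agent.
    
    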

    Things could get much worse

    Wired wrote: “The hacking of ChatGPT is just getting started. Security researchers are jailbreaking large language models to get around safety rules. Things could get much worse. ‘It took Alex Polyakov just a couple of hours to break GPT-4. When OpenAI released the latest version of its text-generating chatbot in March, Polyakov sat down in front of his keyboard and started entering prompts designed to bypass OpenAI’s safety systems. Soon, the CEO of security firm Adversa AI had GPT-4 spouting homophobic statements, creating phishing emails, and supporting violence.'”

    And to top off this week’s crop of AI-related news, a Forbes article that opens with “Almost Human” describes how AI can manipulate people to:

    • Click on a believable email
    • Pick up your phone or respond to SMS
    • Respond in chat
    • Visit a believable website
    • Answer a suspicious phone call

    Cybersecurity Response

    To protect against AI-powered phishing attacks, individuals and businesses can take several steps including:

    • Educating users about the risks of phishing attacks and how to identify them
    • Implementing strong authentication protocols, such as phishing-resistant multi-factor authentication
    • Using AI-driven anti-phishing tools to detect and prevent phishing attacks
    • Implementing self-learning, AI-powered cybersecurity solutions to detect and prevent AI-powered attacks
    • Partnering with a reputable service organization that has the breadth, reach, and technology to counter these attacks

    AI is becoming ubiquitous in homes, cars, TVs, and even space. The unfolding future of AI is an exciting topic that has long captured the imagination. However, the dark side of AI looms when it is turned against people. This is the beginning of an arms race, although there is no AI that can be plugged directly into people (yet). Users beware.


    Find out which of your users’ emails are exposed before bad actors do.

    Many of your organization’s email addresses and identities are exposed on the internet and easy for cybercriminals to find. With that email attack surface, they can launch social engineering, spear phishing and ransomware attacks on your organization. KnowBe4’s Email Exposure Check Pro (EEC) identifies the at-risk users in your organization by crawling business social media information and, now, thousands of breach databases.

    Here’s how it works:

    • The first stage does deep web searches to find any publicly available organizational data
    • The second stage finds any users that have had their account information exposed in any of several thousand breaches
    • You will get a summary report PDF as well as a link to the full detailed report
    • Results in minutes!

    PS: Don’t like to click on redirected buttons? Copy and paste this link into your browser: https://info.knowbe4.com/email-exposure-check-pro-partner?partnerid=001a000001lWEoJAAW


    Sign Up to the TIO Intel Alerts!
