Skip to content

At The Identity Organisation, we're here to help!

    Your privacy is important to us, and we want to communicate with you in a way which has your consent and which is in line with UK Law on data protection. As a result of a change in UK law on 25th May 2018, by providing us with your personal details you consent to us processing your data in line with current GDPR requirements.

    Here is where you can review our Privacy & GDPR Statement

    To remove consent at any time, please e-mail info@tidorg.com with the word "unsubscribe" as the subject.

    +44 (0) 1628 308038 info@tidorg.com

    AI and AI-agents: A Game-Changer for Both Cybersecurity and Cybercrime

    Artificial Intelligence (AI) is no longer just a tool—it is a game changer in our lives, our work as well as in both cybersecurity and cybercrime.

    While organizations leverage AI to enhance defences, cybercriminals are weaponizing AI to make these attacks more scalable and convincing​. 

    In 2025, researchers forecast that AI agents, or autonomous AI-driven systems capable of performing complex tasks with minimal human input, are revolutionising both cyberattacks and cybersecurity defences. While AI-powered chatbots have been around for a while, AI agents go beyond simple assistants, functioning as self-learning digital operatives that plan, execute and adapt in real time. These advancements don’t just enhance cybercriminal tactics—they may fundamentally change the cybersecurity battlefield.

    How Cybercriminals Are Weaponizing AI: The New Threat Landscape
    AI is transforming cybercrime, making attacks more scalable, efficient and accessible. The WEF Artificial Intelligence and Cybersecurity Report (2025) highlights how AI has democratised cyber threats, enabling attackers to automate social engineering, expand phishing campaigns, and develop AI-driven malware​. Similarly, the Orange Cyberdefense Security Navigator 2025 warns of AI-powered cyber extortion, deepfake fraud and adversarial AI techniques.

    And the 2025 State of Malware Report by Malwarebytes notes, while Generative AI (GenAI) has enhanced cybercrime efficiency, it hasn’t yet introduced entirely new attack methods—attackers still rely on phishingsocial engineering and cyber extortion, now amplified by AI. However, this is set to change with the rise of AI agents—autonomous AI systems capable of planning, acting, and executing complex tasks—posing major implications for the future of cybercrime.

    Here is a list of common (ab)use cases of AI by cybercriminals: 

    AI-Generated Phishing & Social Engineering
    Gen AI and large language models (LLMs) enable cybercriminals to craft more believable and sophisticated phishing emails in multiple languages—without the usual red flags like poor grammar or spelling mistakes. AI-driven spear phishing now allows criminals to personalise scams at scale, automatically adjusting messages based on a target’s online activity.

    AI-powered Business Email Compromise (BEC) scams are increasing, as attackers use AI-generated phishing emails sent from compromised internal accounts to enhance credibility​. AI also automates the creation of fake phishing websites, watering hole attacks and chatbot scams, which are sold as AI-powered crimeware as a service’ offerings, further lowering the barrier to entry for cybercrime​.

    Deepfake-Enhanced Fraud & Impersonation
    Deepfake audio and video scams are being used to impersonate business executives, co-workers or family members to manipulate victims into transferring money or revealing sensitive data. The most famous 2024 incident was UK based engineering firm Arup that lost $25 million after one of their Hong Kong based employees was tricked by deepfake executives in a video call. Attackers are also using deepfake voice technology to impersonate distressed relatives or executives, demanding urgent financial transactions. 

    Cognitive Attacks 
    Online manipulation—as defined by Susser et al. (2018)—is “at its core, hidden influence — the covert subversion of another person’s decision-making power”. AI-driven cognitive attacks are rapidly expanding the scope of online manipulation, leveraging digital platforms and state-sponsored actors increasingly use generative AI to craft hyper-realistic fake content, subtly shaping public perception while evading detection.

    These tactics are deployed to influence elections, spread disinformation, and erode trust in democratic institutions. Unlike conventional cyberattacks, cognitive attacks don’t just compromise systems—they manipulate minds, subtly steering behaviours and beliefs over time without the target’s awareness. The integration of AI into disinformation campaigns dramatically increases the scale and precision of these threats, making them harder to detect and counter. 

    The Security Risks of LLM Adoption
    Beyond misuse by threat actors, business adoption of AI-chatbots and LLMs introduces their own significant security risks—especially when untested AI interfaces connect the open internet to critical backend systems or sensitive data. Poorly integrated AI systems can be exploited by adversaries and enable new attack vectors, including prompt injection, content evasion, and denial-of-service attacks. Multimodal AI expands these risks further, allowing hidden malicious commands in images or audio to manipulate outputs. 

    Moreover, many modern LLMs now function as Retrieval-Augmented Generation (RAG) systems, dynamically pulling in real-time data from external sources to enhance their responses. While this improves accuracy and relevance, it also introduces additional risks, such as data poisoning, misinformation propagation, and increased exposure to external attack surfaces. A compromised or manipulated source can directly influence AI-generated outputs, potentially leading to incorrect, biased, or even harmful recommendations in business-critical applications.

    Sign Up to the TIO Intel Alerts!

    Back To Top