New Unrestricted AI Tool Can Assist in Cybercrime

Researchers at Certo warn that a new AI chatbot called “Venice[.]ai” can allow cybercriminals to easily generate phishing messages or malware code.
The tool, which costs only $18 per month, is growing in popularity on criminal forums.
“One of the starkest contrasts between Venice[.]ai and more mainstream AI systems like ChatGPT is how each responds to harmful or malicious requests,” Certo says.
“Where ChatGPT typically refuses to assist — citing OpenAI’s usage policies and ethical safeguards — Venice.ai takes a very different approach. In fact, Certo’s testing revealed not only that Venice will provide malicious output, but that it appears designed to do so without hesitation.”
Certo found that Venice will generate compelling phishing emails, free of the mistakes that might otherwise tip off a victim.
“In one test, we asked Venice[.]ai to write a convincing phishing email – essentially, an email that could trick someone into clicking a malicious link or paying a fake invoice,” the researchers write. “Within seconds, the chatbot produced a polished draft that could fool even cautious users. This automatically generated email was remarkably persuasive, mimicking the tone and formatting of a legitimate bank alert. It had no tell-tale grammar mistakes or odd phrasing to give it away. A human attacker would simply need to insert a phishing link and send it out.”
The researchers also asked Venice to write a ransomware program in Python, and the tool quickly complied.
“It produced a script that recursively encrypted files in a directory using a generated key, and even output a ransom note with instructions for the victim to pay in cryptocurrency,” Certo says. “In effect, Venice[.]ai provided a blueprint for ransomware, complete with working encryption code. A few tweaks by a criminal and the code could be deployed against real targets.”
Certo concludes that user awareness is an important layer of defense against these evolving threats.
“A crucial line of defense is educating users about AI-enhanced scams,” the researchers write. “As the FBI and others have urged, people must be vigilant about unusually well-crafted messages and verify requests through secondary channels. Organizations are updating their fraud training to include AI-related warning signs.”
Certo has the story.
AI-Powered Security Awareness Training Demo
KnowBe4 AIDA — Artificial Intelligence Defense Agents: a suite of agents that up-levels your approach to human risk management.

With AIDA you can:
- Ensure your SAT is consistent with your organization’s broader security initiatives by aligning with the NIST Phish Scale Framework
- Dramatically free up your security team’s time by reducing how long it takes your admins to create remedial training
- Improve relationships between your security team and other departments by ensuring users are aligned with security objectives
- Ensure flexibility in your security budget to invest in other key initiatives by actively managing human risk
- Maximize the value of your existing security tech stack with AIDA’s seamless integrations
PS: Don’t like to click on redirected buttons? Copy and paste this link into your browser to get your quote now: https://info.knowbe4.com/one-on-one-demo-partners?partnerid=001a000001lWEoJAAW