In a chilling demonstration of how far generative AI has come, and how quickly it’s being weaponized, cybersecurity researchers have uncovered an alarming new kind of phishing scam that takes less than a minute to create and can fool even the most tech-savvy users. The most disturbing part? It’s done using a free, legal, and easily accessible AI tool called v0.
The discovery, made by researchers at cybersecurity firm Okta, reveals how malicious actors are now using generative AI to automate the creation of near-perfect phishing websites that mirror legitimate platforms like Microsoft Office 365 and Chase Bank. All it takes is a single-sentence prompt: “Create a banking login page that looks like Chase,” and the AI does the rest.
A New Era of Cybercrime
This marks a dangerous new chapter in the evolution of cybercrime, one where technical skills are no longer required. What once took expert hackers hours or days to build can now be done by anyone with a keyboard in seconds. The barrier to entry has effectively vanished, opening the floodgates for large-scale, automated phishing attacks.
At the heart of this shift is v0 by Vercel, an AI-powered web development tool designed to help users create functional websites using natural language. While originally developed to democratize web development for small businesses and creators with no coding experience, the tool is now being exploited by cybercriminals to build highly convincing fake websites.
“Type in a prompt, and v0 generates clean, professional-grade HTML, CSS, and JavaScript that is nearly indistinguishable from a real site,” said a spokesperson for Okta. “Unfortunately, what makes it revolutionary for developers also makes it incredibly dangerous in the wrong hands.”
From Developer Tool to Fraud Engine
Unlike traditional phishing kits, which required effort to set up and often left behind telltale signs of amateur fraud, v0-generated pages are polished, accurate, and hosted on trusted infrastructure. Attackers even use Vercel’s own hosting services to store resources like impersonated company logos, a tactic that helps evade both browser security warnings and detection algorithms.
“These phishing pages don’t look suspicious because they’re not,” the researchers said. “They’re professionally designed by AI, hosted on legitimate platforms, and visually perfect. To most users, even experienced ones, they’re indistinguishable from the real thing.”
Criminals Scaling with AI
The implications are vast. Where cybercriminals once had to build teams, write code, and acquire hosting infrastructure, they can now operate alone and scale their campaigns exponentially. A single bad actor can generate thousands of unique phishing sites in a day, each one tailored to a specific brand or demographic.
This shift is part of a broader trend in AI-powered cybercrime, where criminals are also using large language models (LLMs) like WhiteRabbitNeo, an uncensored AI model marketed directly to cybercriminals, to generate malicious code, social engineering scripts, and even voice-cloned phone scams.
These aren’t just legitimate tools being misused. These are AI systems specifically trained for criminal activity. According to cybersecurity experts, this reflects a broader movement toward the automation of deception itself.
Phishing Goes Beyond Fake Emails
Phishing is no longer confined to shady emails with broken grammar and outdated formatting. Today’s AI-assisted campaigns include cloned voices, fake videos, and deepfakes, all driven by models trained to exploit trust at every level of human interaction.
“AI-generated phishing attacks are redefining what’s possible,” said Okta. “We’re seeing convincing fake emails, realistic voice calls from impersonated executives, and even deepfake video messages urging employees to transfer funds.”
Traditional cybersecurity tools are ill-equipped to deal with this. Legacy detection methods rely on spotting patterns: suspicious code snippets, poor grammar, unusual hosting. AI-generated content leaves no such breadcrumbs. In fact, many of these sites are better coded than legitimate ones.
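To see why pattern-based filters fall short, consider a minimal sketch of a legacy-style heuristic scanner. The keyword patterns and scoring here are hypothetical illustrations, not taken from any real product, but they capture the kind of signals older filters depended on:

```python
import re

# Hypothetical signals a legacy filter might score; AI-generated pages
# written in clean, professional English trip none of them.
SUSPICIOUS_PATTERNS = [
    r"\b(?:verifiy|acount|passwrd|securty)\b",   # scammer-style typos
    r"urgent.{0,40}(?:suspend|verify)",          # crude urgency phrasing
    r"http://\d{1,3}(?:\.\d{1,3}){3}",           # raw-IP hosting
]

def legacy_phish_score(page_text: str) -> int:
    """Count how many crude heuristics the page trips."""
    return sum(
        1
        for pattern in SUSPICIOUS_PATTERNS
        if re.search(pattern, page_text, re.IGNORECASE)
    )

# Polished, AI-generated copy scores zero: correct spelling and measured
# wording leave no breadcrumbs for pattern matching to find.
print(legacy_phish_score("Sign in to your account to continue."))  # 0
print(legacy_phish_score("URGENT: verify your acount now!"))       # 2
```

A clumsy, hand-rolled scam trips multiple heuristics; a v0-quality page sails through with a score of zero, which is exactly the blind spot the researchers describe.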
Industry Response and Its Limits
Following responsible disclosure, Vercel has blocked the phishing sites hosted through its infrastructure. But this highlights a fundamental weakness in the way the industry currently handles these threats: reactive measures come too late.
“By the time a site is flagged and taken down, hundreds or even thousands of people could already be compromised,” experts warn.
And because the tools used are legal and free, there are no easy levers to shut down access. The same technology that’s enabling breakthroughs in web development, education, and entrepreneurship is also being hijacked for digital deception.
How to Protect Yourself in the Age of AI Phishing
So how can individuals protect themselves in this new, AI-driven threat landscape?
Experts say the old advice, to look for spelling errors and poor design, no longer applies. These phishing pages are visually flawless.
Instead, the focus must shift to behavioral awareness:
• Never click on links in unsolicited emails or messages, even if they look legitimate.
• Type URLs manually or use trusted bookmarks (a rough code sketch of this check follows the list).
• Enable multi-factor authentication (MFA) whenever possible.
• Be especially cautious with login requests and always verify through a secondary channel before entering credentials.
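As a rough illustration of the bookmark advice above, here is a minimal sketch of the idea behind it: a link is only as safe as its exact hostname, and lookalike domains pass a visual inspection but fail an exact comparison. The trusted-host list below is a hypothetical stand-in for your own saved bookmarks:

```python
from urllib.parse import urlparse

# Hypothetical allowlist standing in for your own saved bookmarks.
TRUSTED_HOSTS = {"www.chase.com", "login.microsoftonline.com"}

def is_trusted_link(url: str) -> bool:
    """True only if the link's hostname exactly matches a saved bookmark."""
    host = urlparse(url).hostname or ""
    return host in TRUSTED_HOSTS

# A visually perfect clone on a lookalike host fails the exact match.
print(is_trusted_link("https://www.chase.com/login"))          # True
print(is_trusted_link("https://chase-secure-login.example/"))  # False
```

This is the same discipline in code form: trust the destination you already know, not the appearance of the page in front of you.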
The Future of Trust
What we’re seeing is a fundamental shift in how trust and authenticity are verified online. As the line between real and fake blurs, the challenge becomes not just about detecting fraud, but rebuilding a digital environment where trust can be reliably established.
“The democratization of AI has been a double-edged sword,” said Okta. “It’s empowered creators, entrepreneurs, and educators, but it’s also empowered scammers, hackers, and fraudsters.”
And the transformation from helper to hacker, as it turns out, can happen in as little as 60 seconds.