
Another AI toolkit for cybercriminals has entered the ring! This time, it's a platform called ATHR. The toolkit combines AI voice agents with optional human operators to trick people into handing over sensitive information. It's currently being sold on underground forums for around $4,000, plus a 10% cut of whatever gets stolen. Let's cover how the attack works and what businesses can do to reduce their chances of falling victim to it.
The process is pretty slick, in a worrying kind of way: an AI voice agent places the call and walks the victim through handing over sensitive information, with the option for a human operator to take over when needed.
For platforms like Google, the AI even mimics real account recovery steps, which makes it far more convincing than your average scam call.
The scary part isn’t just the AI voice. It’s how easy this makes things for attackers.
ATHR provides a full dashboard from which criminals can run the entire operation in one place.
That’s a massive shift from older scams that required multiple tools, infrastructure, and a lot more room for human error.
Traditional phishing detection is starting to struggle here because these attacks arrive as phone calls, not as messages that email and web filters can inspect.
So even if your email security is solid, that doesn’t mean you’re safe once someone picks up the phone.
Unfortunately, there's no magic fix, but there are smarter ways to detect and defend against this kind of attack.
Platforms like ATHR are turning complex attacks into plug-and-play tools, which means more attackers, more scams, and a much higher chance someone eventually falls for it.
Just as checking for bad grammar and spelling in phishing emails is no longer a reliable defence, the cybersecurity battle is always advancing, for both sides.
We hope you enjoyed this blog! Stay tuned for more cyber security articles like this one, as well as the many other topics we cover. Stay safe!
Credit: Information sourced from Bleeping Computer

