IT Brief Australia - Technology news for CIOs & IT decision-makers

Why "strong passwords" can't save you from AI

Wed, 6th May 2026
Raymond Schippers, Lead Technologist – ANZ, Check Point Software Technologies

As the global community recognises World Password Day in 2026, the traditional advice to "use a complex password with numbers and symbols" feels hopelessly outdated. Today, a 16-character password is useless if infostealer malware extracts it directly from a browser cache, or if an employee willingly pastes it into an unmanaged AI chatbot.

Indeed, we're now in a global industrial marketplace that has quietly been built on the back of our collective password failures - a machinery that is now, for the first time, being turbocharged by artificial intelligence in ways that are fundamentally changing the rules of engagement.

The cyber threat landscape has rapidly evolved into an industrialised Cybercrime-as-a-Service economy fuelled by Generative AI. Hackers are no longer breaking in - they are simply logging in. 

This all comes at a time when Scamwatch recently reported a five per cent increase in the money Australians lost to financial scams, totalling $334.9 million in 2025. Investment, romance, shopping and phishing scams accounted for the greatest financial losses.

Despite years of warnings, users persistently reuse passwords, unaware that when one platform is breached, automated credential stuffing attacks instantly unlock user profiles across hundreds of other services.
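The mechanics of credential stuffing are simple enough to sketch in a few lines. The snippet below is a purely illustrative simulation (all accounts and passwords are hypothetical): one credential pair leaked from a single breach is replayed against other services, and every account where the password was reused falls.

```python
# Illustrative simulation only - hypothetical accounts and passwords.
# Shows why one breach plus password reuse compromises unrelated services.

# A single credential pair leaked from a breached site.
leaked = {"email": "user@example.com", "password": "Summer2026!"}

# Password stores of other, unrelated services (hypothetical).
services = {
    "webmail":  {"user@example.com": "Summer2026!"},   # reused -> falls
    "banking":  {"user@example.com": "Summer2026!"},   # reused -> falls
    "intranet": {"user@example.com": "zK9#mQ4!pLw2"},  # unique -> safe
}

def stuff(leaked, services):
    """Replay the leaked pair against every service, as automated tools do."""
    return [name for name, creds in services.items()
            if creds.get(leaked["email"]) == leaked["password"]]

print(stuff(leaked, services))  # the reused password unlocks two more accounts
```

In practice attackers run this logic at scale with botnets and proxy networks, which is why a single reused password can silently unlock hundreds of profiles.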

However, the biggest human element threat in 2026 isn't just password reuse - it's the accidental insider threat created by Generative AI. The world is currently witnessing an epidemic of employees inadvertently feeding corporate secrets directly into AI tools.

According to Check Point Research, in March 2026 one in every 28 GenAI prompts submitted from enterprise environments posed a high risk of sensitive data leakage, affecting 91% of organisations that use GenAI tools regularly. A further 17% of prompts contained potentially sensitive information. Worse still, 82% of these copy-paste actions happen via unmanaged personal accounts, according to a LayerX report, creating a massive blind spot.
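The kind of check a browser-level control applies before a prompt leaves the enterprise can be sketched as pattern matching over the prompt text. The example below is a minimal, hedged sketch: the three patterns are illustrative only, and real DLP engines use far richer detectors than a handful of regexes.

```python
import re

# Hypothetical example patterns - real DLP tooling is far more sophisticated.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "credit_card":    re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a GenAI prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

risky = scan_prompt("Summarise this config: aws_key=AKIAABCDEFGHIJKLMNOP")
print(risky)  # ['aws_access_key']
```

A governed enterprise browser would block or redact the paste when this list is non-empty; the unmanaged personal accounts described above bypass exactly this kind of control.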

What happens when those AI tools are compromised? Threat intelligence firm Group-IB reported that at least 225,000 sets of OpenAI/ChatGPT credentials were put up for sale on the dark web after being harvested by infostealers. When employees use personal devices infected with infostealers to log into AI tools with corporate credentials, the data loop is devastating.

Phishing 2.0: AI, Deepfakes, and the Impersonation Crisis

Australia's Scamwatch recorded 71,310 reports of phishing scams in 2021, with more than $4.3 million lost to scammers. Just four years later, the number of reported scams had fallen to 65,361, yet losses had climbed to $31.1 million.

With AI lowering the barrier to entry, Phishing 2.0 has arrived. Personalised, AI-driven "Phishing-as-a-Service" kits are sold for under US$100 a month on Telegram. The most common - and successful - trick remains the fake IT/HR password reset request or fraudulent VPN portal. AI ensures these lures are perfectly written, free of typos, and highly targeted.

Because of this sophistication, AI-generated phishing emails achieve staggering click rates of up to 54% (compared with roughly 12% for traditional phishing), according to a 2024 Brightside AI study.

Indeed, the timeline from a leaked password to a full-blown ransomware deployment is shrinking terrifyingly fast. According to Beazley Security (Q3 2025), 48% of ransomware attacks used stolen VPN credentials as the initial access vector. Yet, the IBM 2025 Cost of a Data Breach Report found that credential-based breaches take an agonisingly long 246 days on average to identify and contain.

In stark contrast, ransomware operators are moving at lightspeed. If your company takes weeks to detect a stolen credential, the battle is already lost.

Here are some methods for organisations to defend themselves in 2026:

  1. Embrace Passwordless & FIDO2: The only true defence against phishing and infostealers is removing the password entirely. Transitioning to FIDO2 passkeys ensures that even if an employee is tricked into visiting a fake login page, there is no reusable credential to steal.
  2. Implement Identity-Centric Zero Trust: Security teams must treat every authentication attempt with scepticism and combine Endpoint Detection and Response (EDR) with Identity Threat Detection and Response (ITDR) to correlate behavioural anomalies across both environments.
  3. Control the AI Browser Vector: Traditional Data Loss Prevention (DLP) tools monitoring file transfers are obsolete if an employee simply hits "Ctrl+V" into ChatGPT. Enterprises must adopt enterprise browsers or browser security extensions to monitor, govern, and block sensitive data from being pasted into unauthorised GenAI chatbots.
  4. Continuous Dark Web & Telegram Monitoring: Waiting for a breach notification is too late. Organisations need continuous threat intelligence monitoring to catch traded credentials before Initial Access Brokers can sell them to ransomware affiliates.
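Why does the first recommendation defeat phishing outright? Because FIDO2 assertions are cryptographically bound to the origin the browser is actually on. The sketch below is a deliberately simplified simulation of that idea: real passkeys use public-key signatures and the WebAuthn protocol, and an HMAC stands in here purely to keep the example self-contained.

```python
import hashlib
import hmac

# Simplified illustration of FIDO2/WebAuthn phishing resistance.
# Real passkeys use asymmetric signatures; HMAC is a stand-in here.
# The key point is origin binding: the signed data includes the origin
# the browser actually sees, so a lookalike domain produces an invalid
# assertion - and there is no reusable password to steal at any point.

device_key = b"secret-key-stored-in-authenticator"  # never leaves the device

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    """Authenticator signs the server challenge bound to the visible origin."""
    return hmac.new(device_key, challenge + origin.encode(), hashlib.sha256).digest()

def verify(challenge: bytes, expected_origin: str, signature: bytes) -> bool:
    """Server recomputes over its own origin; a phished origin won't match."""
    expected = hmac.new(device_key, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = b"random-server-challenge"

# Legitimate login: the browser is on the real site, so origins match.
ok = verify(challenge, "https://login.example.com",
            sign_assertion(challenge, "https://login.example.com"))

# Phishing attempt: the victim is on a lookalike domain, so the relayed
# assertion was signed over the wrong origin and verification fails.
phished = verify(challenge, "https://login.example.com",
                 sign_assertion(challenge, "https://login.examp1e.com"))

print(ok, phished)  # True False
```

Even a pixel-perfect fake login page gains nothing: the assertion it harvests is bound to the wrong origin and is useless against the real service.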

Passwords were once the keys to the castle. Today, they are a liability heavily traded on the dark web. As we look ahead, the future of enterprise security relies on verifying behaviour, not just a string of characters.