AI-driven cyber wars to reshape security in 2026
Security leaders expect 2026 to mark a shift towards AI-driven attacks and defences, a reshaping of network access models and mounting regulatory pressure that forces cyber security deeper into day-to-day operations.
Predictions from executives at WatchGuard Technologies, KnowBe4, Singulr AI and AppOmni point to rising use of autonomous AI agents on both sides, growing identity and supply chain risks, and a tougher compliance environment in Europe and beyond.
Ransomware shift
Marc Laliberte, Director of Security Operations, and Corey Nachreiner, CISO, WatchGuard Technologies, expect a structural change in how extortion campaigns operate.
"Crypto-Ransomware Goes Extinct In 2026: Crypto-ransomware will effectively go extinct, as threat actors abandon encryption and focus on data theft and extortion. Organisations have significantly improved their data backup and restoration capabilities, making them more likely to recover from a traditional crypto-ransomware attack without paying the extortion demand. Instead, cyber criminals simply steal data, threaten to leak it, and even report victims to regulators or insurance companies to increase pressure. Encryption no longer pays off; the real leverage will now come from exposure."
They also expect software supply chain risks to remain acute, especially in open-source ecosystems.
"OSS Package Indexes Will Leverage AI to Defend Against Supply Chain Attacks: If the surge of attacks against open-source package repositories like NPM and PyPI has taught security teams anything, it's that open source is under siege. It's a losing battle, and traditional security controls, such as tighter authentication and shorter token lifetimes, can't keep up. In 2026, open-source package repositories will adopt automated, AI-driven defences to fight back against a growing wave of supply chain attacks. To keep up with this significant and persistent threat, these repositories will become early adopters of automated SOC-style systems for their own applications, enabling them to detect and respond to attacks in real time."
New regulations
WatchGuard's leaders see regulation pushing secure-by-design approaches, particularly in Europe.
"CRA Reporting Requirements Finally Incentivise Secure by Design Principles: In 2026, the EU Cyber Resilience Act (CRA) will finally become the market force that drives adoption of secure-by-design principles. With the first phase going into effect next September, software manufacturers selling into the EU must report actively exploited vulnerabilities and security incidents within 24 hours, the most aggressive reporting requirement yet. While the initial rollout will likely be chaotic as companies scramble to comply and more of their weaknesses are exposed, it will ultimately create a lasting incentive to build security into products from the start. At the same time, overlapping global regulations will reveal competing frameworks and contradictions, forcing organisations to navigate an increasingly complex web of compliance."
Autonomous AI attacks
WatchGuard predicts that the next phase of AI use in attacks will be fully automated compromises orchestrated by agentic systems.
"We Will See the First Breach Carried Out by Autonomous, Agentic AI Tools in 2026: In 2025, WatchGuard predicted that multi-modal AI tools would be able to carry out every aspect of the attackers' cyber kill chain, which proved to be true. 2026 will mark the year AI stops just assisting cybercriminals and starts attacking on its own. From reconnaissance and vulnerability scanning to lateral movement and exfiltration, these autonomous systems can orchestrate an entire breach at machine speed."
"The first end-to-end AI-executed breach will serve as a wake-up call for defenders who have underestimated the speed at which generative and reasoning AIs evolve from tools into operators. The same capabilities that help businesses automate security workflows are being weaponised to outpace them. Organisations must fight fire with fire: only AI-driven defence tools that detect, analyse, and remediate at the same velocity as attacker AIs will stand a chance," said Laliberte and Nachreiner.
VPN pressure
On access controls, WatchGuard expects weaknesses in established remote access technologies to contribute significantly to incidents, in turn driving wider adoption of zero-trust models.
"The Fall of Traditional VPN and Remote Access Tools Will Lead to the Rise of Zero Trust Network Architecture (ZTNA): Traditional Virtual Private Networks (VPNs) and remote access tools are among the top targets for attackers due to the loss, theft, and reuse of credentials, combined with the common lack of multi-factor authentication (MFA). It doesn't matter how secure VPNs are from a technical perspective; if an attacker can log in as one of your trusted users, the VPN becomes a backdoor giving them access to all your resources by default."
"At least one-third of 2026 breaches will be due to weaknesses and misconfigurations in legacy remote access and VPN tools. Threat actors have specifically targeted VPN access ports over the past two years, either stealing users' credentials or exploiting vulnerabilities in specific VPN products."
"As a result, 2026 will also be the year when SMBs begin to operationalise ZTNA tools because it removes the need to expose a potentially vulnerable VPN port to the internet. The ZTNA provider takes ownership of securing the service through their cloud platform, and ZTNA does not give every user access to every internal network. Rather, it allows you to grant individual user groups access to only the internal services they need to perform their jobs, thereby limiting the potential damage," said Laliberte and Nachreiner.
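The group-to-service model Laliberte and Nachreiner describe can be illustrated with a minimal, deny-by-default policy check; the group and service names below are invented for illustration, not part of any specific ZTNA product.

```python
# Minimal sketch of a ZTNA-style, deny-by-default access policy:
# each user group is mapped only to the internal services it needs,
# so a compromised account cannot reach the whole network.
# Group and service names are hypothetical.

ZTNA_POLICY = {
    "finance": {"erp", "payroll"},
    "engineering": {"git", "ci", "staging"},
    "support": {"ticketing"},
}

def is_access_allowed(group: str, service: str) -> bool:
    """Deny by default; allow only services explicitly granted to the group."""
    return service in ZTNA_POLICY.get(group, set())

print(is_access_allowed("finance", "payroll"))  # True: explicitly granted
print(is_access_allowed("finance", "staging"))  # False: never granted
```

The point of the sketch is the default: an unknown group or service yields a denial, the inverse of a flat VPN where login implies network-wide reach.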
AI skills gap
WatchGuard also links the rise of AI tooling with changes in workforce requirements.
"AI Expertise Becomes a Required Skill for Cybersecurity: It's nearly the dawn of a new era where cyber offence and defence will take place on an AI battleground. Attackers are already experimenting with automated, adaptive, and self-learning tools; defenders who can't match that level of speed and precision will be outgunned before they know they're under fire. To survive, security professionals must go beyond simple understanding of AI toward mastery of its capabilities and harness it to automate detection and response while anticipating the new vulnerabilities it creates. By next year, AI literacy won't just be a nice addition to a résumé, it'll be table stakes, with interviewers diving in on practical applications of AI for cyber defence," said Laliberte and Nachreiner.
Response times
Erich Kron, CISO Advisor at KnowBe4, expects agentic AI to reshape security operations and incident-response metrics.
"AI agents will reduce mean time to respond (MTTR) by at least 30%. While attackers weaponise AI, defenders are positioned to gain a decisive advantage as agentic AI systems mature. Most popular software and services will not only be rebuilt as agentic AI but will also show positive returns on reducing cybersecurity risk compared to their pre-agentic AI counterparts. For SOC teams, tier-one triage, enrichment and containment actions will be policy-guardrailed and executed by agentic systems, cutting MTTR by 30% to 50% in mature teams. These AI security agents will also be able to maintain immutable audit trails of every action and generate regulator-ready incident summaries, reducing the compliance burden and speeding post-incident reviews."
"However, cyberattackers will also use AI-enabled tools to deliver more pervasive and successful hacking as compared to traditional attack tools. Attacks will continue to be targeted and focused more on quality versus quantity as AI, automation and generative AI features become commonly used, making attacks more realistic and harder to spot."
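Kron's "policy-guardrailed" model can be sketched as an allow-list wrapper that executes pre-approved tier-one actions and escalates everything else to a human, logging each decision. The action names and the guardrail policy here are hypothetical, not drawn from any particular SOC platform.

```python
import json
from datetime import datetime, timezone

# Hypothetical guardrail: tier-one actions an agent may execute autonomously;
# anything outside the allow-list is escalated to a human rather than run.
AUTO_APPROVED_ACTIONS = {"isolate_host", "disable_account", "quarantine_email"}

audit_trail = []  # append-only here; a real system would use immutable storage

def execute_agent_action(action: str, target: str) -> str:
    """Run an action only if policy allows it; record every decision."""
    allowed = action in AUTO_APPROVED_ACTIONS
    audit_trail.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "target": target,
        "decision": "executed" if allowed else "escalated_to_human",
    })
    return "executed" if allowed else "escalated_to_human"

print(execute_agent_action("isolate_host", "laptop-42"))  # executed
print(execute_agent_action("wipe_disk", "laptop-42"))     # escalated_to_human
print(json.dumps(audit_trail, indent=2))  # audit record for post-incident review
```

The audit list doubles as the raw material for the "regulator-ready incident summaries" Kron mentions: every action, target, timestamp and decision is captured whether or not the action ran.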
He also anticipates changes in how organisations define and manage their security workforce.
"Humans and AI agents will be the new workforce. The most transformative shift in 2026 will be the evolution of AI from passive tools to active, autonomous members of the security team, triggering a fundamental shift in how organisations must think about their workforce. As agentic AI systems move from experimental tools to core operational team members, organisations deploying agentic AI will need to expand their definition of 'workforce training' to include the policies, guardrails and behavioural expectations for AI agents," said Kron.
Quantum and identity
Kron links possible progress in quantum computing to identity management challenges.
"Q-Day, the day when quantum computers become sufficiently capable of cracking most of today's traditional asymmetric encryption, will likely happen in 2026. While privacy concerns have kept mandatory digital IDs largely at bay, digital identities tied to their real human identities are expected to grow in popularity and become increasingly necessary for accessing digital services. The security of these systems has never been more important. Organisations must strengthen human authentication through passkeys and device-bound credentials while applying the same governance rigor to non-human identities like service accounts, API keys and AI agent credentials."
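Applying "the same governance rigor" to non-human identities could start with something as simple as a credential-age sweep over an inventory of service accounts, API keys and agent credentials. The inventory format, names and 90-day threshold below are assumptions for illustration.

```python
from datetime import date

# Hypothetical inventory of non-human identities with last-rotation dates.
NON_HUMAN_IDENTITIES = [
    {"name": "svc-backup", "kind": "service_account", "rotated": date(2025, 1, 10)},
    {"name": "ci-deploy-key", "kind": "api_key", "rotated": date(2025, 11, 2)},
    {"name": "agent-triage", "kind": "ai_agent_credential", "rotated": date(2024, 6, 1)},
]

MAX_AGE_DAYS = 90  # example rotation policy, not a standard

def stale_credentials(today: date) -> list[str]:
    """Return names of identities whose credentials exceed the rotation window."""
    return [
        identity["name"]
        for identity in NON_HUMAN_IDENTITIES
        if (today - identity["rotated"]).days > MAX_AGE_DAYS
    ]

print(stale_credentials(date(2025, 12, 1)))  # ['svc-backup', 'agent-triage']
```

The same inventory-plus-policy loop extends naturally to the AI agent credentials Kron calls out, which often escape the review cycles applied to human accounts.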
On geopolitical risk, Kron expects cyber operations by non-state actors to continue to target critical sectors.
"Shadow syndicates will use cyber tools to target geopolitical flashpoints. Critical infrastructure and essential services will remain prime targets in 2026, especially energy and water sectors as they accelerate digital transformation. We can reasonably expect increased attacks exploiting legacy OT systems and cloud integrations, combined with AI-driven phishing targeting government and healthcare. In 2025, Australia saw a surge in AI-powered cyberattacks, including deepfake-enabled social engineering and highly personalised phishing campaigns, impacting critical infrastructure and supply chains. These incidents highlight how adversaries are weaponising AI to bypass traditional defences. Coupled with new ransomware reporting rules introduced this year, organisations will face mounting pressure to adopt zero-trust strategies, strengthen identity controls, and prepare for quantum-safe encryption," said Kron.
AI accountability
Richard Bird, Chief Security Officer at Singulr AI, expects governance and operational accountability for AI to become harder to ignore.
"2026 will be the year AI accountability is forced into day-to-day operations. Organisations spent much of 2025 trying to appear mature in governance, but the biggest lesson of the year was that most AI risks did not come from rogue models. They came from a lack of visibility and accountability. Companies realised that AI security is about protecting and containing a much larger potential blast radius. Security teams adopted AI as a supercharged intern rather than a self-driving SOC, and the gap between governance claims and governance reality became impossible to ignore."
"The year ahead will bring threats that challenge core trust models. The dominant risk will be synthetic identity exploitation powered by AI, where organisations struggle to determine whether a user, an employee, or even an internal system is genuine. AI supply chain security will also accelerate as SBOMs expand to include AI BOMs and as provenance tracking becomes a requirement for enterprise adoption. By the end of the year, model lineage, continuous agent verification and validation, and audit-level traceability will be standard expectations."
"A major shift is coming to operations as well. At least one major enterprise is likely to implement AI-only workflows for detection, triage, and remediation, with human oversight rather than human execution. At the same time, the industry's obsession with frontier models will fade as enterprises recognise that most value comes from smaller, controlled, domain-tuned systems. Pragmatism will replace novelty in 2026," said Bird.
AI configuration risk
From an application security perspective, AppOmni expects misconfiguration, permissions and SaaS data exposure to be central AI risks.
Melissa Ruzzi, Director of AI, AppOmni, said, "True AGI (Artificial General Intelligence) may not be achieved before the next decade, but as GenAI evolves, it may be called AGI (which would then force the market to create a new acronym for the true AGI). The big risk in AGI is similar to GenAI, where the focus on functionality clouds proper cybersecurity due diligence. By trying to make AI as powerful as possible, organisations may misconfigure settings, leading to over-permissions and data exposure. They may also grant too much power to a single AI, creating a major single point of failure."
"In 2026, we'll see other AI security risks heighten even more, stemming from excessive permissions granted to AI and a lack of instructions provided to it about how to choose and use tools, potentially leading to data breaches. This will come from increased pressure from users expecting AI agents to become more powerful, and organisations under pressure to develop and release agents to production as fast as possible. And it will be especially true for AI agents running in SaaS environments, where sensitive data is likely already present and misconfigurations may already pose a risk."
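The over-permission risk Ruzzi describes can be made concrete with a deny-by-default tool scope: the agent may only invoke tools explicitly granted to it, rather than inheriting its owner's full SaaS permissions. The tool names and class design below are hypothetical, not an AppOmni API.

```python
# Sketch of least-privilege tool scoping for a SaaS AI agent.
# The agent's grant is a narrow subset of the tools present in the
# environment, so a prompt-injected or misbehaving agent cannot, say,
# export records it was never granted. Tool names are invented.

ALL_TOOLS = {"read_ticket", "update_ticket", "export_all_records", "delete_user"}

class ScopedAgent:
    def __init__(self, name: str, granted_tools: set[str]):
        self.name = name
        # Intersect with the known tool set so unknown tools can't be granted.
        self.granted_tools = granted_tools & ALL_TOOLS

    def invoke(self, tool: str) -> str:
        if tool not in self.granted_tools:
            return f"denied: {self.name} has no grant for {tool}"
        return f"ok: {tool} invoked"

support_agent = ScopedAgent("support-bot", {"read_ticket", "update_ticket"})
print(support_agent.invoke("read_ticket"))         # ok
print(support_agent.invoke("export_all_records"))  # denied
```

The design choice worth noting is that the grant is enumerated per agent at creation time, which is the opposite of the pattern Ruzzi warns about, where pressure to ship powerful agents leads to broad, inherited permissions.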
Zero trust limits
Ruzzi's colleague sees network and identity controls continuing to adapt as attackers test the limits of current zero trust deployments.
Brian Soby, CTO and co-founder, AppOmni, said, "We're likely to see the perimeter continue to erode in 2026, through concepts such as zero trust network access (ZTNA), which begins at the user's device and extends to the target destination, whether that's within a virtual private cloud or a SaaS application. This transport layer of a Zero Trust architecture will likely become a common, if not dominant, method of securely connecting devices to destinations, making the traditional perimeter increasingly irrelevant."
"Most current zero trust and identity solutions are not keeping pace with real-world attacker tactics, techniques, and procedures (TTPs). As a result, in 2026, we'll see more of what we've been seeing: Attacks adapting to target the weakest links. There's no question the success of ShinyHunters/UNC6040 and Drift/UNC6395 has caught the attention of other threat groups. They will view those incidents as clear examples of the weaknesses in today's zero trust technologies and will double down on similar attack methods," said Soby.