IT Brief Australia - Technology news for CIOs & IT decision-makers
TrendAI deepens Anthropic tie-up with Claude Opus 4.7

Fri, 1st May 2026
Mark Tarre, News Chief

TrendAI has expanded its collaboration with Anthropic by deploying Claude Opus 4.7 across its security platform, with a focus on vulnerability detection and risk mitigation.

The company is also participating in Anthropic's Cyber Verification Program, which gives vetted cybersecurity professionals access to frontier AI models for defensive work.

The latest deployment is intended to help customers identify exploitable vulnerabilities and determine which weaknesses pose immediate risk in live environments. It links Anthropic's language model with TrendAI's security research and response tools.

A key part of that effort sits within AESIR, TrendAI's internal research platform. Launched in 2025, AESIR combines automated analysis with human oversight to examine software ecosystems, test how flaws could be abused and determine whether they can be exploited in practice.

Those findings are then fed into TrendAI Vision One, the company's broader cyber defence platform. There, organisations can map attack paths, assess asset exposure and apply mitigations such as virtual patching and exploit detection while permanent software fixes are still being prepared.

Research focus

AESIR uses Claude Opus 4.7 to analyse which parts of a system are reachable, controllable and exploitable, with the aim of proving whether a vulnerability is real rather than theoretical. That matters for security teams, which often face large volumes of alerts and published software flaws but have limited time to determine which are most likely to be used in attacks.

According to TrendAI, the platform has already identified critical common vulnerabilities and exposures, or CVEs, affecting AI platforms and related tooling. This work has included patching efforts with the Zero Day Initiative across products and frameworks linked to NVIDIA, Tencent, agentic systems and Model Context Protocol tooling.

TrendAI also highlighted the scale of the problem it expects to emerge around AI software. Its State of AI Security Report projects between 2,800 and 3,600 AI CVEs in 2026 alone, illustrating how quickly the workload for vulnerability research and remediation could grow.

The broader tie-up comes as Anthropic expands its presence in Australia and New Zealand, with a Sydney office led by Theo Hourmouzis. The regional move reflects demand for AI services in the local market, including software development, research and security operations.

Closing the gap

For cyber defenders, one of the hardest problems is the gap between finding a vulnerability and reducing the operational risk it creates. A flaw may be known, but remediation can still take days or weeks if code changes must be tested across large production systems.

TrendAI is positioning its Anthropic collaboration around that issue. Rather than only surfacing vulnerabilities, the joint approach is intended to help security teams rank flaws by likely real-world impact and put interim controls in place before attackers can exploit them.

That reflects a wider shift in cybersecurity, as AI is being used by both defenders and attackers. Security companies are increasingly focused on whether new models can automate research, reduce false positives and identify practical exploitation routes in large, complex environments.

Mick McCluney, ANZ Field CTO at TrendAI, said the mismatch between discovery and remediation has become more pronounced as AI speeds up security research.

"AI is dramatically accelerating vulnerability discovery, but remediation timelines haven't kept pace. Our collaboration with Anthropic ensures that organisations get the best vulnerability threat intelligence and the ability to reduce risk across their environments before attacks take place," McCluney said.

Anthropic has been expanding access to its models for security-related work through its Cyber Verification Program, designed to let approved practitioners use advanced AI systems for defensive tasks. TrendAI's participation places it among the companies using that route to apply large language models to software analysis and threat research.

The deployment also underlines how AI suppliers and cybersecurity vendors are becoming more closely linked as businesses look for tools that can move from raw model output to direct action in operational systems. In this case, TrendAI is seeking to connect vulnerability discovery, prioritisation and mitigation in a single workflow.

Its central argument is that faster identification alone is no longer enough if security teams cannot quickly determine exposure and apply controls. TrendAI's projection of thousands of AI-related CVEs points to a workload that may be difficult to manage without greater automation and tighter integration between research and defence tools.