OpenAI expands cyber access for verified defenders

Thu, 16th Apr 2026

OpenAI has expanded its Trusted Access for Cyber programme and introduced a cybersecurity-focused version of GPT-5.4, widening access for verified defenders.

The programme, known as TAC, is expanding from a limited group to thousands of verified individual cybersecurity professionals and hundreds of teams defending critical software. The highest access tiers will be able to use GPT-5.4-Cyber, a version of GPT-5.4 fine-tuned for defensive cybersecurity work with fewer restrictions on some cyber-related tasks.

The move reflects a broader push to tie access controls to user verification rather than relying only on model-wide restrictions. OpenAI says cyber tools are inherently dual-use and that access decisions should depend on who is using the system, how they are using it and what trust signals are available.

Under the expanded structure, individual users can verify their identity through OpenAI's cyber access process, while enterprise customers can apply for team access. Approved users will receive versions of existing models with fewer interruptions from safeguards that might otherwise block security education, defensive coding and vulnerability research.

Users in higher tiers can request GPT-5.4-Cyber. The model lowers refusal thresholds for legitimate cybersecurity work and adds capabilities for advanced defensive workflows, including binary reverse engineering, letting security professionals inspect compiled software for malware and vulnerabilities, and assess its robustness, when source code is unavailable.
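
OpenAI has not published details of GPT-5.4-Cyber's reverse-engineering features. As a rough illustration of the kind of task the article describes, the sketch below uses the open-source Capstone disassembler (unrelated to OpenAI) to turn raw x86-64 machine code into readable instructions, the first step in inspecting a binary when no source code is available. The byte string is an arbitrary function prologue chosen for the example.

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    # Example bytes: a common x86-64 function prologue
    # (push rbp; mov rbp, rsp; sub rsp, 0x10).
    code = b"\x55\x48\x89\xe5\x48\x83\xec\x10"

    # Configure the disassembler for 64-bit x86.
    md = Cs(CS_ARCH_X86, CS_MODE_64)

    # Disassemble starting at a nominal load address and print
    # each instruction's address, mnemonic and operands.
    for insn in md.disasm(code, 0x1000):
        print(f"0x{insn.address:x}\t{insn.mnemonic}\t{insn.op_str}")

In practice an analyst would point such tooling at a full executable and work upwards from the disassembly to recover behaviour; the model-assisted workflows OpenAI describes would sit on top of this kind of low-level output.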

Defensive focus

OpenAI will begin with a limited rollout of the more permissive model to vetted security vendors, organisations and researchers. Access to more cyber-permissive systems may also come with limits, particularly when the company has less visibility into the user, environment or purpose of a request, such as through some third-party platforms and no-retention settings.

OpenAI has been building its cybersecurity work for several years. It began assessing model cyber behaviour in 2023, added cyber-specific safety measures to deployments in 2025 and recently launched products designed to help developers and security teams identify and fix software vulnerabilities.

That work includes a USD $10 million Cybersecurity Grant Program and Codex Security, which monitors codebases, validates issues and suggests fixes. According to OpenAI, Codex Security has contributed to more than 3,000 fixes for critical and high-severity vulnerabilities since its recent launch, along with a larger number of lower-severity findings addressed across the software ecosystem.

OpenAI also says it has reached more than 1,000 open-source projects through Codex for Open Source, which offers free security scanning. These efforts are part of a strategy focused on research, misuse prevention and support for defenders.

Model controls

OpenAI says its cyber approach is based on three principles: widening access for legitimate users, deploying systems iteratively and investing in resilience across the security ecosystem. Stronger verification and more automated trust checks, it argues, would allow advanced defensive tools to be made available more broadly without requiring manual judgments about who should be allowed to defend systems.

The company says it is neither practical nor appropriate for it to decide centrally who is allowed to defend their own systems; instead, access should rest on verification, trust signals and accountability.

OpenAI also links the TAC expansion to its view that cyber safeguards must evolve alongside stronger AI systems. It says it introduced cyber-specific safety training with GPT-5.2, expanded safeguards through GPT-5.3-Codex and GPT-5.4, and classified GPT-5.4 as having high cyber capability under its Preparedness Framework.

Current safeguards, according to OpenAI, are sufficient to support broad deployment of its present models, while more permissive cyber-tuned systems require tighter controls and narrower release conditions. The company adds that future models whose cyber skills exceed today's purpose-built systems will need broader defensive measures.

OpenAI argues that software development itself must become more secure, with AI tools used to identify, validate and fix security issues as code is written, rather than relying mainly on periodic audits. Integrating coding models and agent-like systems into development workflows, it says, can give developers immediate feedback as they build, shifting security work towards continuous risk reduction.
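
The article does not describe a specific integration. As one hedged illustration of what "immediate feedback as they build" can look like, the sketch below is a hypothetical git pre-commit hook that scans staged Python files and blocks the commit if problems are found. It uses Bandit, an open-source Python security linter, purely as a stand-in for the model-driven scanning described above; it is not an OpenAI product.

    #!/usr/bin/env python3
    # Hypothetical pre-commit hook: scan staged Python files for
    # security issues before the commit is allowed to proceed.
    import subprocess
    import sys

    def staged_python_files() -> list[str]:
        # Ask git for files staged for this commit (added/copied/modified).
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [f for f in out.splitlines() if f.endswith(".py")]

    def main() -> int:
        files = staged_python_files()
        if not files:
            return 0  # nothing to scan
        # Bandit exits non-zero when it reports issues, which in turn
        # makes the hook fail and blocks the commit.
        result = subprocess.run(["bandit", "-q", *files])
        return result.returncode

    if __name__ == "__main__":
        sys.exit(main())

Wiring a check like this into a hook or CI step is what moves security review from occasional audits to a gate on every change, the shift the company is advocating.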

Future testing and release decisions are expected to follow the same approach, scaling cyber defence measures alongside model capability.