AI in cybersecurity: Not yet autonomous, but the time to prepare is now

As 2025 approaches, predictions about the role of artificial intelligence (AI) in cybersecurity continue to gain momentum, suggesting that while AI will keep supporting cyberattacks, its capacity to execute sophisticated strikes on its own remains limited.

Jason Mar-Tang, Field Chief Information Security Officer at Pentera, highlights a subtle yet significant impact, underscoring how AI acts as a supportive tool for threat actors rather than a transformative element.

Mar-Tang notes that although attackers frequently use AI to make attacks more effective, for example by generating realistic content and evading detection, the anticipated leap to fully autonomous AI-led cyberattacks has not yet been realised.

He points to the use of generative AI for crafting convincing phishing emails, overcoming language barriers, and automating repetitive tasks, allowing cybercriminals to scale operations without incurring additional resource costs.

Expel CEO Dave Merkel shared similar insights on AI's role in cyber operations, agreeing that it remains limited in sophistication while acknowledging its growing use in social engineering. However, Merkel cautions that a detailed understanding of adversarial AI will likely emerge only once malicious AI tools are seized or unmasked.

Merkel also stressed the need to better align incentives within cyber regulation, pointing to gaps in many organisations' awareness, particularly among supply chain vendors in the United States. This, he suggests, underscores regulation's role as a catalyst for change, although he does not expect major shifts in regulatory intensity in the immediate future.

Merkel further points to geopolitical tensions exacerbating the cyber threat landscape, predicting a rise in cyber activity not only from nation-states but also from activist groups and criminals looking to exploit disorder. He also pushes back on the notion that AI is widening the cyber talent gap, arguing that the real issue is understanding AI technologies rather than a shortage of AI-specific skills.

Cat Starkey, Expel's Chief Technology Officer, reinforces the importance of responsible AI adoption amid growing concerns over data privacy and effectiveness. As AI technologies continue to develop, Starkey outlines an emerging market for responsible AI, focussed on balancing technological advancement with ethical governance. She raises the critical question of how to remove individuals' data from trained AI models when users exercise their "right to be forgotten", suggesting this could spawn new regulatory fields.

Starkey also discusses the necessity of integrating AI into detection and response (D&R) strategies, emphasising the need for a compatible skill set among engineers in D&R roles. The goal is to offload mundane tasks so that human analysts can use AI for better outcomes while maintaining an adaptive edge against attackers.

Another key point from Starkey is the standardisation of security data through collaborative efforts like the Open Cybersecurity Schema Framework (OCSF). She highlights how employing standard data schemas enhances technological scalability and simplifies feature development, reflecting the cybersecurity industry's cooperative ethos for the common good.
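To make the value of a shared schema concrete, here is a minimal Python sketch, illustrative only, that normalises a made-up vendor log record into an OCSF-shaped Authentication event. The raw field names and vendor name are invented for this example, and the OCSF class, category, and activity identifiers shown should be checked against the published schema at schema.ocsf.io.

    from datetime import datetime

    # Hypothetical raw record from a vendor-specific log source.
    raw_event = {
        "ts": "2024-11-05T09:14:02Z",
        "user": "j.smith",
        "src_ip": "203.0.113.7",
        "action": "login",
        "result": "success",
    }

    def to_ocsf_authentication(raw: dict) -> dict:
        # Map the vendor record onto the OCSF Authentication class
        # (category_uid 3 = Identity & Access Management, class_uid 3002).
        # Identifiers are best-effort; verify against schema.ocsf.io.
        ts = datetime.fromisoformat(raw["ts"].replace("Z", "+00:00"))
        return {
            "category_uid": 3,
            "class_uid": 3002,
            "activity_id": 1,  # 1 = Logon in the Authentication class
            "time": int(ts.timestamp() * 1000),  # OCSF uses epoch milliseconds
            "status": "Success" if raw["result"] == "success" else "Failure",
            "user": {"name": raw["user"]},
            "src_endpoint": {"ip": raw["src_ip"]},
            "metadata": {"version": "1.1.0",
                         "product": {"vendor_name": "ExampleVendor"}},
        }

    print(to_ocsf_authentication(raw_event))

Once every log source is mapped this way, detection logic can be written once against the shared shape rather than re-implemented per vendor, which is the scalability and feature-development benefit Starkey describes.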

These industry insights underscore a cautious yet clear direction: as we move into 2025, AI's involvement in cybersecurity will grow more significant, though the long-awaited autonomous threat wave remains on the horizon. With a continued focus on responsible innovation and collaboration, the emphasis remains on being prepared for the eventual AI frontier in cyber threats.
