Exclusive: Eight vendor AI security predictions for 2024
Mon, 27th Nov 2023

In an era where technology and security intersect more than ever, this article gathers insights and forecasts from leading experts in the field. It features predictions from industry leaders including Tim Jackson, Andrew Winlaw, Ameya Talwalkar, Bernd Greifeneder, Phil Swain, Chris Ellis, Uri Dvir, and Corey Nachreiner, offering a comprehensive outlook on the role of artificial intelligence (AI) in cybersecurity and business operations for the coming year.

Tim Jackson, Managing Director, Access4

First and foremost, prioritising security will be crucial. Cyber threats and privacy breaches remain the top concerns for business owners, and as a result, MSPs will likely be required to implement the Essential Eight framework across a wider range of businesses. Secondly, businesses will increasingly need to harness AI technologies to deliver more comprehensive and precise customer outcomes. By leveraging various types of AI, organisations can not only scale effectively but also adopt a more nuanced approach to service enhancement.

In addition, finding top talent remains a challenge in today's job market. Developers and engineers, in particular, are in high demand and difficult to come by. Moreover, sourcing individuals who excel in cross-functional collaboration and independent work adds an extra layer of complexity. Achieving a harmonious balance in the modern workplace, with a mix of remote and onsite work, is vital. Employers who prioritise in-office presence over remote work will likely struggle to retain their valuable employees.

Andrew Winlaw, Vice President and General Manager – ANZ, Amelia

AI and machine learning tools are becoming more accessible to developers and businesses thanks to open-source libraries, cloud-based platforms, and pre-trained models. As generative AI continues to advance, businesses will automate a broader range of tasks, from customer support and data analysis to manufacturing and supply chain management, boosting productivity and cost-efficiency. In addition, cybersecurity solutions will increasingly incorporate AI and machine learning to detect and respond to threats in real time, and we'll see evolving data privacy regulations and increased public awareness of cyber threats pushing companies to prioritise cybersecurity.
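
As a concrete illustration of that accessibility, here is a minimal sketch using the open-source Hugging Face transformers library (an assumed choice; Winlaw names no specific toolkit) to apply a pre-trained model to customer-support messages with no training at all:

```python
# A minimal sketch of how accessible pre-trained models have become.
# Assumes the open-source Hugging Face `transformers` library is installed
# (pip install transformers); no vendor in this article names a toolkit.
from transformers import pipeline

# Downloads a pre-trained sentiment model on first use; no training required.
classifier = pipeline("sentiment-analysis")

tickets = [
    "The outage took our store offline for three hours.",
    "Thanks, the new dashboard saved us hours this week.",
]

# One plausible automation: route negative tickets to a human agent,
# let everything else flow through an automated response path.
for ticket in tickets:
    result = classifier(ticket)[0]
    print(f"{result['label']:>8} ({result['score']:.2f}): {ticket}")
```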

Ameya Talwalkar, Founder and CEO, Cequence Security

Generative AI is a dual-use technology with the potential to usher humanity forward or, if mismanaged, regress our advancements or even push us toward potential extinction. APIs, which drive the integrations between systems, software, and data points, are pivotal in realising the potential of AI in a secure, protected manner. This is also true when it comes to AI's application in cyber defences.

In 2024, organisations will recognise that secure data sharing is essential to building a strong, resilient AI-powered future. While AI is undoubtedly a testament to human ingenuity and potential, its safe and ethical application is imperative. It's not merely about acquiring AI tools; it's about the responsibility and accountability of integrating them securely, particularly when that integration is facilitated through APIs.
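
Talwalkar doesn't prescribe a mechanism, but one widely used building block for that kind of secure API integration is request signing, so the receiving service can verify both the sender and the payload. A minimal sketch, with a hypothetical shared secret and header names:

```python
# A minimal sketch of HMAC-signed API requests. The header names and
# shared secret are illustrative assumptions, not any vendor's scheme.
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"rotate-me-regularly"  # exchanged out of band, never in a URL

def sign_request(payload: dict) -> dict:
    body = json.dumps(payload, separators=(",", ":")).encode()
    timestamp = str(int(time.time()))
    # Bind the signature to both the body and a timestamp to resist replay.
    mac = hmac.new(SHARED_SECRET, timestamp.encode() + b"." + body, hashlib.sha256)
    return {"X-Timestamp": timestamp, "X-Signature": mac.hexdigest()}

def verify_request(body: bytes, headers: dict, max_skew: int = 300) -> bool:
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False  # stale request, possible replay
    expected = hmac.new(
        SHARED_SECRET, headers["X-Timestamp"].encode() + b"." + body, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, headers["X-Signature"])
```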

Bernd Greifeneder, Chief Technology Officer, Dynatrace

In 2024, next-generation threat intelligence and analytics solutions will phase out security information and event management (SIEM) systems. These modern solutions enable security teams to extend capabilities beyond log analytics to access the context provided by a broader range of data modalities and different types of AI, including generative, causal, and predictive techniques, working together. As a result, organisations will gain access to deeper, more accurate, intelligent, and automated threat analysis, helping to protect their applications and data from increasingly sophisticated threats.

In 2024, organisations will also increasingly appoint senior executives to their leadership teams to ensure readiness for AI's security, compliance, and governance implications. As employees become more accustomed to using AI in their personal lives, through exposure to tools such as ChatGPT, they will increasingly look to use AI to boost their productivity at work. Organisations have already realised that if they don't empower their employees to use AI tools officially, they will do so without consent. Organisations will, therefore, appoint a chief AI officer (CAIO) to oversee their use of these technologies in the same way many have a security executive, or CISO, on their leadership teams. The CAIO will focus on developing policies and on educating and empowering the workforce to use AI safely, protecting the organisation from accidental noncompliance, intellectual property leakage, or security threats. These practices will pave the way for widespread adoption of AI across organisations.

Phil Swain, Vice President of Information Security, Extreme Networks

AI will undoubtedly continue to make headlines, not just as a new concept, but through real-life examples and applications of how it's being used, showcasing both its positive and negative impacts. Even in 2023, we're seeing AI being used by the bad guys to generate more effective and elusive phishing emails, as well as zero-day attacks. I believe 2024 will see the proliferation of second- and third-generation AI-based security tools that can defend against and counter AI-based attacks in real time. We could start to see the AI version of Battle Bots, with organisations' networks as the combat arenas.

Chris Ellis, Director of Pre-Sales, Nintex

During 2024, we'll witness the democratisation of AI and generative AI, making this revolutionary capability far more accessible to, and ultimately adopted by, business users and 'citizen developers'. We're starting to see this wider availability through integrations such as those in the Microsoft Office suite, and through ServiceNow's recent declaration that AI presents a potential $1 trillion total addressable market. Compared with automation alone at $200 billion, this presents a significant opportunity for organisations and vendors alike.

At the same time, we'll see a doubling down on cybersecurity and associated risk mitigation as the advancement of AI presents a more significant challenge to business integrity and critical data. What is shared—and with whom—will become a significant watch point for organisations. Early indications are that some are considering hybrid deployments, moving certain workloads back to on-premises as a means of protecting intellectual property. 

Keeping up with the rate of change and fostering a culture of adoption through awareness and learning will be critical for business success in 2024. Across the IT landscape, we are bombarded with news and media about the advent of AI. The challenge is going to be applying it to your organisation in a compliant and cost-effective way that doesn't overwhelm users or blow out the bottom line.

Uri Dvir, Chief Technology Officer, WalkMe

For all the benefits generative artificial intelligence (GAI) can bring, businesses will also be increasingly aware of the risks it poses. "Shadow AI" refers to AI applications used by employees without employers' knowledge or oversight. It's an acute risk that businesses will need to shine a light on or face disastrous consequences.

Employees going rogue with GAI tools could be unwittingly sharing sensitive data with the GAI's central learning database — losing control of who views the data and what it's used for. On top of this, unmonitored use of AI outputs can easily result in the use of false information or even accidental IP theft.
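
WalkMe doesn't detail a specific control here, but one common first line of defence against this kind of leakage is a pre-submission guardrail that redacts obvious sensitive strings before a prompt ever leaves the organisation. A minimal sketch; the patterns and the redact() helper are illustrative assumptions, not any vendor's product:

```python
# A minimal sketch of a pre-submission guardrail that scrubs obvious
# sensitive strings from a prompt before it is sent to an external GAI
# service. The patterns below are illustrative, not exhaustive.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches with placeholder tags before the prompt is sent."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Summarise this: contact jane@corp.com, key sk-abcdef1234567890XY"))
# -> Summarise this: contact [EMAIL], key [API_KEY]
```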

As GAI tools become more and more commonplace, businesses will focus on discovery: understanding exactly how and why staff are using these tools and using this understanding to regain control. After all, if they know why staff are using certain GAI tools, they can help them get the same results much more safely. 
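
As a rough illustration of that discovery step, the sketch below tallies which generative AI services appear in an egress proxy log; the log format and the domain list are assumptions for illustration, and a real deployment would feed this from its own gateway:

```python
# A minimal sketch of GAI usage discovery from egress proxy logs.
# Assumes simple "user domain" lines; adapt the parser to your gateway.
from collections import Counter

GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "gemini.google.com", "claude.ai"}

def genai_usage(log_lines: list[str]) -> Counter:
    """Count requests per known GenAI domain."""
    hits = Counter()
    for line in log_lines:
        _user, _, domain = line.partition(" ")
        if domain.strip() in GENAI_DOMAINS:
            hits[domain.strip()] += 1
    return hits

sample = ["alice chat.openai.com", "bob intranet.corp", "alice claude.ai"]
print(genai_usage(sample))  # Counter({'chat.openai.com': 1, 'claude.ai': 1})
```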

Businesses will gain nothing from banning the use of GAI tools outright. Those who do will see staff pushback and miss out on all the potential benefits this technological breakthrough will bring. The solution is to plan for shadow AI and set a policy for safe, productive use.

Corey Nachreiner, Chief Technology Officer, WatchGuard Technologies

Voice phishing (vishing) increased over 550% year over year between Q1 2021 and Q1 2022. Vishing is when a scammer calls you pretending to be a reputable company or organisation, or even a co-worker (or someone's boss), and tries to get you to do something they can monetise, such as buying gift cards or cryptocurrency on their behalf. The only thing holding this attack back is its reliance on human power. While VoIP and automation technology make it easy to mass-dial thousands of numbers and leave messages or redirect victims unlucky enough to answer, once a target has been baited onto the line, a human scammer must take over the call to reel them in (so to speak). Many of these vishing gangs end up being large call centres in particular areas of the world, very similar to support call centres, where employees follow fresh daily scripts to socially engineer you out of your money. This reliance on human capital is one of the few things limiting the scale of vishing operations.

We predict that the combination of convincing deepfake audio and large language models (LLMs) capable of carrying on conversations with unsuspecting victims will greatly increase the scale and volume of vishing calls we see in 2024. What's more, they may not even require a human threat actor's participation.