Five AI trends for 2024 - and how to set projects up for success
Wed, 20th Dec 2023

2023 will go down as the year artificial intelligence captivated business leaders, as services like ChatGPT and Google Bard made the power of the technology tangible to millions of people.

We’ve seen a flurry of interest in generative AI (GenAI) based on large language models (LLMs), and our recent Annual Cloud Report revealed a strong appetite for investment in AI more broadly, from computer vision systems to machine learning and data science for AI applications.

That’s great to see. AI has huge potential to lift productivity, improve customer service, and speed up product development. But let’s not forget that AI projects have traditionally had a high failure rate – 60–80%, according to various research reports.

There’s a growing sense of FOMO in the business community, which is leading to a headlong rush to develop and deploy AI platforms and services. Now is definitely the time to experiment. But the last thing you want to do is put time and money into projects that fizzle out or cause reputational damage because they create security or ethical issues.

Here are five trends we expect to see in AI in 2024 and some tips on how to make the most of the investment you put into your organisation’s AI efforts.

1. The Copilot productivity test
We’ve been told for years that intelligent assistants are coming that will cut through the admin and drudgery of office life, helping to manage our inboxes, draft documents and summarise information instantly. Well, the intelligent assistant era began in late 2023 with the arrival of Microsoft’s Copilot services for Microsoft 365 and rival services from the likes of Google. 

In 2024, CIOs across Australia and New Zealand will be advising their senior leadership teams on whether to deploy these services to boost productivity. At a licence cost of around A$45 per user, per month, Copilot for Microsoft 365 is a hefty investment. We expect limited rollouts to test the productivity promise before widespread deployment. The Australian Government is piloting Copilot across several government agencies.
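To put that in perspective, a hypothetical 1,000-seat deployment at that price would run to roughly A$45,000 a month, or more than A$500,000 a year.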

Beyond productivity, there’s huge potential for these services to transform knowledge management by allowing an intelligent agent to analyse an organisation’s data in a secure environment to provide insights. Currently, the indexing costs of doing so can be prohibitive. But the price will come down in 2024 as adoption increases.

2. Rise of the model gardens

OpenAI and its free and premium ChatGPT services hogged the limelight this year. However, hundreds of LLMs have been developed and deployed across the tech ecosystem. The business model for providing LLMs is starting to take shape, with platforms offering a range of LLMs to suit your needs. AWS has its Bedrock service with foundation models from the likes of Stability AI (Stable Diffusion), Anthropic, Meta’s open source Llama 2, and Amazon’s own Titan models. Google’s Model Garden features 100+ models, allowing you to pick and choose what you need. The public cloud consumption model now underpins the use of GenAI. In 2024, we will see the rise of ‘chaining’, where several models, each optimised for a specific task, power a single product or service.
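
To make ‘chaining’ concrete, here is a minimal sketch in Python. The invoke_model() helper is a hypothetical stand-in for whichever provider SDK or model-garden endpoint you use, and the model names are invented for the example; the point is simply that each step hands its output to a model chosen for that specific task.

```python
# Illustrative sketch only: invoke_model() is a hypothetical stand-in for your
# provider's SDK call, and the model names below are made up for the example.
def invoke_model(model_id: str, prompt: str) -> str:
    """Send a prompt to the named model and return its text response."""
    return f"[{model_id} response to: {prompt[:40]}...]"  # placeholder output

def handle_customer_email(email_body: str) -> str:
    # Step 1: a small, cheap model summarises the inbound email.
    summary = invoke_model(
        "summariser-small",
        f"Summarise this customer email in two sentences:\n{email_body}",
    )
    # Step 2: a classification-tuned model routes the query.
    category = invoke_model(
        "classifier-medium",
        f"Classify this summary as BILLING, TECHNICAL or OTHER:\n{summary}",
    )
    # Step 3: a larger general-purpose model drafts the reply from both outputs.
    return invoke_model(
        "assistant-large",
        f"Draft a short, polite reply to a {category} query. Context:\n{summary}",
    )

print(handle_customer_email("Hi, I was charged twice for my subscription last month."))
```

In practice, orchestration frameworks and the model gardens’ own APIs handle the plumbing, but the pattern is the same: each task goes to the model best suited (and priced) for it.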

This flexibility in accessing LLMs in secure tenancies will allow rapid piloting and development of AI-powered services, while platforms like Microsoft Fabric are making it easier to manage your data so that it can be securely drawn on by LLMs. 

3. Reimagining customer service

The key area where AI can show a rapid return on investment for organisations is customer service, through intelligent chat agents and knowledge management systems. At Datacom, we employ over 2,000 people across Australasia in contact centres who handle queries on behalf of a wide range of clients.

The AI innovation in the contact centre space is staggering, with the likes of Datacom partner Cognigy using GenAI and conversational AI to unlock contact centre agents’ productivity and put actionable information instantly at their fingertips. 2024 will see dramatic improvements in customer service, drawing on AI to deliver faster, more relevant answers for customers at call centres and across digital channels.

4. Fighting AI with AI

The grim reality is that GenAI makes it easier for hackers to develop the tools of their trade. But AI is also fuelling a wave of innovation in cybersecurity that is improving preventive efforts to keep networks and devices safe, as well as speeding up responses when security incidents take place.

At Datacom, we are increasingly employing AI tools in our security operations centre (SOC) and as part of Citadel, our managed security service, which is designed to sit alongside Microsoft Azure Sentinel. In 2024 we will see more AI-augmented cyber attacks, but we can also expect more AI-powered cybersecurity tools to help monitor your networks, identify threats, and keep on top of security patches and upgrades.

5. Putting the guardrails in place

There’s been a lot of talk about AI principles and frameworks this year, but a gulf remains between best practices for responsible AI and organisations’ capacity to safely and ethically deploy it. That was made very clear in our Cloud Report, where less than 30% of respondents from across Australia and New Zealand felt they had sufficient budget invested in security and had enough agility to meet new privacy and security regulations. Our AI Attitudes research report also highlighted gaps in AI policies and procedures, with just 52% of Australian organisations having staff policies around AI use and only 40% having legal guidelines or governance frameworks.

In 2024, every organisation considering AI must first step back and assess its ability to develop and deploy it safely and responsibly.

SIDEBAR
Deploying AI - lay the foundations for success

Everyone is starting to see the real potential of AI, but less clear is how to deploy it responsibly and in a way that is financially sustainable for the organisation. At Datacom, we are assisting customers not only to test their business cases for AI but also to put the framework in place that will allow them to practise responsible AI. That comes down to three key things:

Define ethical principles: Clearly articulate and document the ethical principles that will guide AI development and deployment within your organisation. This should cover fairness, transparency, accountability, privacy, and non-discrimination.

Data privacy: Prioritise data privacy by adhering to relevant data protection laws and regulations. Implement robust data anonymisation and encryption practices (see the sketch after this list). Clearly communicate to users how their data will be used and obtain their consent.

Accountability and responsibility: Establish clear lines of accountability for the development and deployment of AI systems. Ensure that individuals and teams are aware of their responsibilities and are accountable for the outcomes of the AI systems they develop.
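
On the data privacy point above, here is a minimal illustrative sketch of one common technique: pseudonymising direct identifiers with a keyed hash before records are drawn into AI or analytics workloads. The field names and the salt handling are assumptions for the example, and it is no substitute for a proper privacy assessment.

```python
import hashlib
import hmac

# Illustrative only: the fields and salt handling are assumptions for this sketch.
SECRET_SALT = b"store-this-in-a-secrets-manager"  # never hard-code in production
DIRECT_IDENTIFIERS = {"name", "email", "phone"}    # hypothetical PII fields

def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers with keyed hashes so records can still be
    joined and analysed without exposing the underlying personal data."""
    safe = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            digest = hmac.new(SECRET_SALT, str(value).encode(), hashlib.sha256)
            safe[field] = digest.hexdigest()
        else:
            safe[field] = value
    return safe

print(pseudonymise({"name": "Jane Citizen", "email": "jane@example.com", "plan": "Pro"}))
```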

It’s been a wild and exhilarating ride watching the evolution of GenAI since the debut of ChatGPT a year ago. 2024 will be the year when the rubber hits the road for many organisations in the AI space. 

Put in the work ahead of time to avoid becoming a statistic in the AI failure case file. Datacom is here to help you set your AI projects up for success.