
Australia is calling for responsible Artificial Intelligence

Thu, 8th Aug 2024

Australia's current Senate inquiry into Adopting Artificial Intelligence (AI), together with recent concerns expressed by the Department of Employment and Workplace Relations about the ghost work trend, sends a very clear message to Australian organisations: there is an immediate need for responsible AI to protect the privacy and integrity of data and to address concerns about AI replacing workers.

Data labelling is a key component of AI development, in which labelled data is used to train machine learning models. The Adopting Artificial Intelligence inquiry has raised concerns about how private information is collected and then applied in AI.
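
To make the labelling step concrete, here is a minimal, hypothetical Python sketch using scikit-learn (the records and labels are invented for illustration). It shows why label quality matters: human-assigned labels are what the model learns from, so errors or mishandled private data in the labelled set flow directly into the model's behaviour.

```python
# Minimal illustration of data labelling: each record is paired with a
# human-assigned label before training. All data here is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

records = ["refund not processed", "great service, thanks", "app keeps crashing"]
labels = ["complaint", "praise", "complaint"]  # label quality bounds model quality

vectoriser = TfidfVectorizer()
features = vectoriser.fit_transform(records)

model = LogisticRegression().fit(features, labels)
print(model.predict(vectoriser.transform(["the app keeps crashing"])))  # ['complaint']
```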

While there is no doubt that AI has opened a world of new opportunities for Australian organisations, the reality is that its adoption and advancement have far outpaced its ethical and responsible use.

Data privacy and data quality remain the most significant challenges to AI readiness, resulting in slow adoption and a lack of trust in AI outcomes.

While issues around AI may not be immediately evident when it is first introduced, as organisations move along the adoption curve, they collide with an old problem: data that is incomplete, inaccurate, insecure and therefore untrustworthy.

As we are beginning to see, a lack of governance over AI models and the data that underpins them can easily expose organisations to reputational damage, loss of customer trust, financial losses and even criminal liability.

To overcome this issue, organisations must ensure AI outcomes are accurate, reliable and compliant with emerging AI regulations. However, as governments around the world race to keep pace with rapid developments in AI, organisations must go beyond compliance and focus on responsible AI.

Many Australian organisations have already made the mistake of bolting new AI tools onto legacy, fragmented, manual tools and processes, an approach that is not viable. The only way to achieve responsible AI and mitigate the risks of AI misuse is through a bespoke AI-powered data management platform designed specifically to ensure the data that fuels AI is holistic, accurate, timely, protected, reliable, governed and compliant. Only with trusted, responsible and ethical AI can organisations deliver positive outcomes from their AI implementations, driving value and improving trust among key stakeholders.

When it comes to AI adoption, organisations should not simply jump on the AI bandwagon. They must consider where AI will provide the best return on investment, which involves identifying and prioritising use cases based on their contribution to business objectives. For example, will AI deliver the greatest value in retaining customers or in improving the supply chain? The team assigned to make these decisions must include an executive business sponsor and a chief data officer.

The first AI deployment must also pay careful attention to governed data management to ensure the responsible use of AI. Additionally, employees across the organisation must be trained in data literacy, including data management best practices, so they understand how to structure a question for AI that generates a valuable answer without breaching data privacy or misusing data, as sketched below.
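
As one illustration of that data-literacy practice, the hypothetical Python sketch below scrubs obvious personal identifiers from a question before it is sent to an external AI service. The patterns and placeholders are assumptions made for this example; a real deployment would rely on a governed PII-detection service rather than ad-hoc regular expressions.

```python
import re

# Hypothetical pre-submission scrub: replace obvious identifiers with
# typed placeholders before the prompt leaves the organisation.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?61|0)4\d{2}[ -]?\d{3}[ -]?\d{3}\b"),  # AU mobiles
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with placeholders such as <EMAIL>."""
    for name, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{name.upper()}>", prompt)
    return prompt

question = "Why did jane.doe@example.com on 0412 345 678 churn last month?"
print(redact(question))  # Why did <EMAIL> on <PHONE> churn last month?
```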

To truly deliver on the promise of AI, applications must be built on holistic, high-quality and well-governed data foundations. This includes implementing robust data access management and privacy controls from the ground up. Without these foundational elements, even the most advanced AI models will struggle to provide reliable and valuable insights and could expose the organisation to the risk of unethical AI practices.
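
By way of illustration only, the following Python sketch shows the shape of such an access control: every read of a governed dataset passes through a policy check. The roles and dataset names are hypothetical, and in practice this enforcement belongs in the data platform itself rather than in application code.

```python
from enum import Enum

# Hypothetical role-based access policy for governed datasets.
class Role(Enum):
    ANALYST = "analyst"
    MARKETING = "marketing"

DATASET_POLICY = {
    "customer_pii": {Role.ANALYST},                      # restricted
    "aggregated_sales": {Role.ANALYST, Role.MARKETING},  # shared
}

def can_read(role: Role, dataset: str) -> bool:
    """Allow a read only if the role appears in the dataset's policy."""
    return role in DATASET_POLICY.get(dataset, set())

assert can_read(Role.ANALYST, "customer_pii")
assert not can_read(Role.MARKETING, "customer_pii")  # privacy control holds
```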

As AI systems become more deeply integrated into critical business processes, robust data management becomes even more crucial. This includes ensuring data accuracy, maintaining data lineage and implementing strong data security measures. These practices not only improve the performance of AI models but also help meet regulatory requirements and build user trust.
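
To illustrate the lineage idea, here is a minimal, hypothetical Python sketch that records each transformation applied to a dataset, so an AI output can later be traced back to its sources. A production system would use a dedicated data catalogue or lineage tool rather than this in-process log.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical lineage log: one timestamped entry per transformation.
@dataclass
class LineageLog:
    dataset: str
    steps: list = field(default_factory=list)

    def record(self, operation: str, source: str) -> None:
        self.steps.append({
            "operation": operation,
            "source": source,
            "at": datetime.now(timezone.utc).isoformat(),
        })

log = LineageLog(dataset="customer_churn_features")
log.record("ingest", "crm_export.csv")
log.record("deduplicate", "customer_churn_features")
for step in log.steps:  # full trail from raw source to model input
    print(step)
```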

The future of AI is exciting, but it firmly resides in developing specialised, well-governed systems that can be seamlessly integrated into our daily lives and business processes while protecting data privacy and integrity.

To learn more about the next big trends coming in cloud and AI, visit the Informatica World Tour in Sydney on Tuesday, September 10.
