Who is responsible when AI is irresponsible?
Thu, 27th Oct 2022

We interact daily with algorithms that, over time, predict and inform our actions. Spam filters in email and real-time mapping on our phones are just two everyday examples of AI technologies that add convenience to our lives.

We hear a lot about AI, but something that perhaps needs to be discussed more frequently alongside it is the ethics of AI. The decisions made by AI are the outcome of algorithms, data and business processes, meaning ethical considerations must be applied in each area to ensure responsible innovation.

Algorithms explained

It’s critically important to understand that analytics, machine learning and AI study the past to make decisions about the future. If historical data is biased or underrepresents certain groups, analytics can perpetuate that bias in future decisions, with unintended consequences.
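As an illustration, the following is a minimal sketch of one way such bias can be surfaced before modelling: a disparate impact check, comparing each group's rate of favourable outcomes against a privileged group's rate. The lending dataset, the "approved" and "group" columns and the numbers are all hypothetical assumptions, not from this article.

```python
# A minimal sketch, assuming a hypothetical lending dataset with an
# "approved" outcome column and a "group" attribute (both illustrative).
import pandas as pd

def disparate_impact(df: pd.DataFrame, outcome: str, group: str,
                     privileged: str) -> dict:
    """Ratio of each group's favourable-outcome rate to the privileged
    group's rate. Ratios well below 1.0 suggest the historical data
    encodes a bias that a model trained on it would learn."""
    rates = df.groupby(group)[outcome].mean()
    return (rates / rates[privileged]).to_dict()

# Hypothetical history: past approvals skewed toward group "A".
history = pd.DataFrame({
    "group":    ["A"] * 80 + ["B"] * 80,
    "approved": [1] * 60 + [0] * 20 + [1] * 30 + [0] * 50,
})
print(disparate_impact(history, "approved", "group", privileged="A"))
# {'A': 1.0, 'B': 0.5} -- group B approved at half the rate of group A
```

A model trained on this history will tend to reproduce the 0.5 ratio for group B unless the imbalance is addressed during data preparation or modelling.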

Analytics does not understand our goals as a society. When we strive to improve healthcare, criminal justice, economic growth and the environment, we must balance the pursuit of quantitatively perfect predictions against our societal objectives, to ensure we achieve those goals fairly and equitably.

Further, as analytics, machine learning and AI become pervasive in society, automated decisions are being made on behalf of large populations. At the same time, our society is incredibly dynamic and undergoing continuous change, so organisations leveraging AI must be in a state of continuous learning. Anyone who develops technology that automates decisions for others should bear the responsibility of respecting the core tenets of responsible AI, so that outcomes are transparent and equitable.

Organisations across varied industries are expressing a growing sense of responsibility for making fair and explainable automated decisions. Our customers want to know they’re innovating responsibly, and they are looking to trusted experts for help.

Responsible innovation in practice

Most importantly, we want to ensure that wherever AI is in action, it’s designed to help people thrive.

At SAS, we are focused on building in ethical considerations and ensuring best practice for our customers. Today, our efforts in responsible innovation include four priority areas:

  • Policy: Governments and watchdog organisations are currently defining guidelines for machine learning and AI. Our data ethics practice is available to help customers understand, anticipate and incorporate these policies when innovating with analytics.
  • Process: Most of our customers are not driven by guidelines and policies alone. They want to develop innovations that do good in the world. With this in mind, we are driving best practices that help our customers use analytics for good.
  • Product: We’ve developed a repeatable process that we translate into products and solutions that scale to solve complex, data-intensive decision-making problems with ethical considerations in mind.
  • People: Most importantly, we want to ensure that wherever SAS technology is used, it’s used ethically and benefits our customers.

SAS Viya already supports several capabilities for responsible innovation, including flagging of protected and sensitive data, surrogate model interpretation and life cycle management.
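To make the surrogate idea concrete, here is a generic sketch of surrogate model interpretation; it is not SAS Viya's implementation, and scikit-learn, the synthetic dataset and all names are illustrative assumptions. A shallow decision tree is trained to mimic an opaque model's predictions so its decision logic can be read.

```python
# Generic surrogate model interpretation sketch (not SAS Viya's API).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The opaque model whose behaviour we want to explain.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.1%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```

The fidelity score indicates how faithfully the readable tree reflects the black box; if it is low, the tree's explanation should not be trusted.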

Organisations around the world are taking a harder look at equity and responsibility in all business activities. As consumers become savvier, they increasingly value brands that prioritise these areas. A 2019 report from the Capgemini Research Institute reveals 62 per cent of consumers would place higher trust in a company whose AI interactions they perceived as ethical. But where to start?

Tips on incorporating responsible AI

  1. Contextualise the principles – Define your own version of AI principles in line with your organisation’s values and priorities. This gives everyone involved in developing, deploying and using AI systems a shared framework to work within.
  2. Embed responsible AI as part of your data and analytics strategy – Consider appointing a small team dedicated to operationalising responsible AI principles across departments and at every stage of the AI life cycle. Instead of relying on one person, a team offers diversity and a range of skills, safeguarding against individual biases.
  3. Infuse responsible AI principles at every stage of the AI life cycle – Incorporate responsible AI principles at every step: data collection, data preparation, modelling and testing, development, productionisation and ongoing monitoring.
  4. Leverage your ModelOps framework – To bring analytical insights or the output of predictive models into production effectively, industrialise the end-to-end life cycle, so you can ultimately scale analytics-driven decision-making as part of your digital transformation. One such monitoring step is sketched after this list.
  5. Start today – Stay ahead of regulations and implement responsible AI principles before they become an afterthought. Adapting your AI practices to future regulation will be easier if the foundations are laid now; waiting to apply responsible AI retrospectively will create a larger technical debt to address later.
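On points 3 and 4, ongoing monitoring is the stage most often neglected once a model is live. The following is a minimal sketch of one such ModelOps step, an assumption rather than a prescribed framework: comparing a live feature's distribution against its training baseline with a two-sample Kolmogorov-Smirnov test and flagging drift for review.

```python
# Minimal drift-monitoring sketch (illustrative; the data, threshold
# and alerting behaviour are assumptions, not a prescribed framework).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # baseline
live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)      # production

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    # In a real pipeline this would raise an alert or trigger the
    # review/retraining stage of the model life cycle.
    print(f"Drift detected (KS statistic {stat:.3f}); flag for review.")
else:
    print("No significant drift detected.")
```

Scheduled as a recurring job, a check like this turns "continuous learning" from a principle into an operational control.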

It’s our job to bring AI along on a journey of change and iteration, designing it to reflect our society, our customers and our business needs, so we get the best results.

Utilising AI to inform business decisions can give your business the competitive edge it needs to cut through the market and move into uncharted territory, but businesses need to take charge of their own approach to AI and incorporate ethical practices.