Why ethics is essential in the creation of artificial intelligence
Artificial intelligence (AI) has long been a feature of modern technology and is becoming increasingly common in workplace technologies. According to ManageEngine's 2021 Digital Readiness Survey, more than 86% of organisations in Australia and New Zealand reported increasing their use of AI over the previous two years.
But despite this increased uptake across the ANZ region, only 25% of respondents said their confidence in the technology had significantly increased.
One possible reason for this lack of confidence is the potential for unethical biases to work their way into AI technologies as they are developed. While nobody sets out to build an unethical AI model, it may take only a few instances of disproportionate or accidental weighting being applied to certain types of data over others to create unintentional biases.
Demographic data, names, years of experience, known anomalies, and other personally identifiable information are the types of data that can skew AI and lead to biased decisions. In essence, if an AI model is not properly designed to work with its data, or the data provided is not clean, the model can generate predictions that raise ethical concerns.
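As a concrete illustration of the kind of data hygiene this implies, a developer might strip direct identifiers and check for outcome skew across a demographic attribute before any training begins. This is a minimal sketch only; the file and column names (applicants.csv, gender, hired, and so on) are assumptions for the example, not from the survey:

```python
import pandas as pd

# Hypothetical training data; column names are illustrative only.
df = pd.read_csv("applicants.csv")

# Drop direct identifiers the model has no legitimate use for.
IDENTIFIERS = ["name", "email", "phone", "address"]
features = df.drop(columns=IDENTIFIERS, errors="ignore")

# Check whether historical outcomes are skewed across a demographic
# attribute before training; a large gap is a signal to investigate.
rates = df.groupby("gender")["hired"].mean()
print(rates)
print("Largest gap between groups:", rates.max() - rates.min())
```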
The rising use of AI across industries therefore increases the need for models that are not subject to unintentional biases, including biases that arise as a by-product of how the models are developed.
Fortunately, there are several ways developers can ensure their AI models are designed as fairly as possible to reduce the potential for unintentional biases. Two of the most effective steps developers can take are:
Adopting a fairness-first mindset
Embedding fairness into every stage of AI development is crucial to building ethical AI models. However, fairness principles are not always uniformly applied and can differ depending on a model's intended use, creating a challenge for developers.
Regardless of the use case, all AI models should have the same fairness principles at their core, and educating data scientists to build with a fairness-first mindset will lead to significant changes in how models are designed.
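One way to make "fairness-first" concrete is to check a model against an agreed fairness metric at every stage of development. The sketch below computes demographic parity difference, the gap in positive-prediction rates between two groups, assuming binary predictions and a binary sensitive attribute; the function name, sample data, and any pass/fail threshold are illustrative assumptions:

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap in positive-prediction rates between the two groups
    identified by a binary sensitive attribute (0 or 1)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: a gap near 0 is fairer by this metric, and a team
# might choose to fail a model review above, say, 0.1.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, group))  # 0.5
```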
Remaining involved
While one of AI's benefits is its ability to relieve human workers of small, repetitive tasks, and many models are designed to make their own predictions, humans need to remain involved with AI in at least some capacity.
This needs to be factored in throughout the development phase of an AI model and its application within the workplace. In many cases, this may involve shadow AI, where humans and AI models work on the same task and the results are then compared to assess the effectiveness of the AI model.
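A minimal sketch of this shadow pattern, assuming paired human and model decisions are recorded for the same cases (all names and sample values here are hypothetical):

```python
def agreement_rate(human_decisions, model_decisions):
    """Fraction of cases where the shadow model matched the human."""
    matches = sum(h == m for h, m in zip(human_decisions, model_decisions))
    return matches / len(human_decisions)

# Both the human team and the shadow model processed the same cases.
human = ["approve", "reject", "approve", "approve"]
model = ["approve", "reject", "reject", "approve"]

rate = agreement_rate(human, model)
print(f"Model agreed with humans on {rate:.0%} of cases")  # 75%
```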
Alternatively, developers may choose to keep human workers within the operating model of the AI technology, letting them guide the AI, particularly in cases where a model doesn't yet have enough experience to be reliable.
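One common way to keep humans in the operating model is to route the model's low-confidence predictions to a person for review. A minimal sketch, assuming the model exposes a confidence score alongside each prediction (the 0.9 threshold is an assumption to be tuned per use case):

```python
CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off; tune per use case

def route(prediction, confidence):
    """Accept confident predictions; escalate the rest to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction
    return "escalate_to_human_review"

print(route("approve", 0.97))  # approve
print(route("approve", 0.62))  # escalate_to_human_review
```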
The use of AI will likely only continue to increase as organisations across ANZ, and the world, continue their digital transformations. As such, it's becoming increasingly clear that AI models will need to become even more reliable than they are today to reduce the potential for unintentional biases and increase user confidence in the technology.