IT Brief Australia - Technology news for CIOs & IT decision-makers

Why organisations must proactively address ethics in AI to gain public trust

Mon, 8th Jul 2019

The ethical use of AI is becoming fundamental to winning people's trust, a new study from the Capgemini Research Institute has found.

As organisations move to harness the benefits of AI, consumers, employees and citizens are watching closely, ready to reward or punish their behaviour.

Those surveyed said that they would be more loyal to, purchase more from, or be an advocate for organisations whose AI interactions are deemed ethical.

Companies using AI in an ethical way will be rewarded

Among consumers surveyed, 62% said they would place higher trust in a company whose AI interactions they perceived as ethical, 61% said they would share positive experiences with friends and family, 59% said that they would have higher loyalty to the company, and 55% said that they would purchase more products and provide high ratings and positive feedback on social media.

By contrast, when consumers' AI interactions result in ethical issues, both reputation and the bottom line are threatened: 41% said they would complain if an AI interaction resulted in ethical issues, 36% would demand an explanation, and 34% would stop interacting with the company.

Ethical issues resulting from AI systems have been observed and experienced

Executives in nine out of 10 organisations believe that ethical issues have resulted from the use of AI systems over the last two to three years, citing examples such as the collection of personal patient data without consent in healthcare, and over-reliance on machine-led decisions without disclosure in banking and insurance.

Executives cited reasons including the pressure to urgently implement AI, the failure to consider ethics when constructing AI systems, and a lack of resources dedicated to ethical AI systems.

Consumers, employees and citizens are worried about ethical concerns related to AI and want some form of regulation: almost half of respondents (47%) believe they have experienced at least two uses of AI that resulted in ethical issues in the last two to three years.

Most (75%) said they want more transparency when a service is powered by AI, and want to know whether AI is treating them fairly (73%). Over three-quarters (76%) of consumers think there should be further regulation on how companies use AI.

Organisations are starting to realise the importance of ethical AI: 51% of executives consider it important to ensure that AI systems are ethical and transparent. Organisations are also taking concrete action when ethical issues are raised.

The research found that 41% of senior executives report having abandoned an AI system altogether after an ethical issue was raised.
