Article by Kriti Sharma, Sage VP of bots and AI
You all know what Artificial Intelligence is, right?
I would describe AI as simply the creation of intelligent machines that think and learn like humans. Every time Google predicts your search, you use Alexa or Siri with your voice, or your iPhone predicts your next word in a text - that’s AI in action.
Less obviously, it is also at work when you make an unusual purchase with your card and your bank sends you a fraud alert. AI is everywhere, and it’s making a huge difference in our lives every day.
I began working with AI a few years ago, and even in this short time, the game has changed massively.
As AI engineers, coders, and hackers - whatever you want to call us - we now have real choice in how we implement AI in the products we are developing. We can create our own AI technology, or simply leverage generic tools and apply them to the specialist problems we are working on.
Let me give you an example of how we worked like this at Sage when building our own AI chatbot, Pegg.
First up, we developed and trained our own AI for the financial domain, with skills to take the admin out of accounting, payments, invoices and expenses (everyone hates doing expenses, right?).
We partnered with Microsoft, Amazon and Facebook to teach the AI to understand generic entities like date and location.
We then chose to design our own personality for Pegg to suit the needs of our business users. Pegg has British accounting humour, does not pretend to be human and is proud of being a bot!
The democratisation of technology we are experiencing with AI is awesome: as well as reducing time to market, it is deepening the talent pool and giving businesses of all sizes access to the most modern technology.
But, with great power comes great responsibility. With a few large organisations developing the AI fundamentals that all businesses can use, we need to take a step back and ensure that the work happening is ethical and responsible.
Summarised below are five values I work to when building AI: guidelines that I believe the tech community should adopt to develop AI that is accountable and fit for purpose, at a time when AI is poised to revolutionise our lives.
Both industry and community must develop effective mechanisms to filter bias as well as negative sentiment in the data that AI learns from – ensuring AI does not perpetuate stereotypes.
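As a minimal sketch of what such a filtering mechanism might look like (the word lexicon, threshold, and `sentiment_score` helper here are all hypothetical placeholders; real systems would use trained classifiers and human review, not a word list):

```python
# Toy illustration: screen negative-sentiment examples out of a training
# corpus before an AI model learns from it. The lexicon and threshold are
# hypothetical; they stand in for a proper sentiment/bias classifier.

NEGATIVE_WORDS = {"hate", "stupid", "useless", "terrible"}

def sentiment_score(text: str) -> float:
    """Crude lexicon score: fraction of words flagged as negative."""
    words = text.lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w.strip(".,!?") in NEGATIVE_WORDS)
    return flagged / len(words)

def filter_training_data(examples, threshold=0.2):
    """Keep only examples whose negative-sentiment score is below threshold."""
    return [ex for ex in examples if sentiment_score(ex) < threshold]

corpus = [
    "Please send the invoice to the client.",
    "This expense report is stupid and useless.",
    "Log the payment against last month's ledger.",
]
clean = filter_training_data(corpus)  # the abusive example is dropped
```

The point is not the word list but the pipeline stage: curation happens before learning, so the model never absorbs the examples you do not want it to perpetuate.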
Users build a relationship with AI and start to trust it after just a few meaningful interactions. With trust comes responsibility, and AI needs to be held accountable for its actions and decisions, just like humans are.
Technology should not be allowed to become too clever to be accountable. We don’t accept this kind of behaviour from other ‘expert’ professions, so why should technology be the exception?
Any AI system learning from bad examples could end up becoming socially inappropriate – we have to remember that most AI today has no cognition of what it is saying. Only broad listening and learning from diverse data sets will solve this.
One of the approaches is to develop a reward mechanism when training AI. Reinforcement learning measures should be built not just based on what AI or robots do to achieve an outcome, but also on how AI and robots align with human values to accomplish that particular result.
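The idea above can be sketched as a shaped reward function. This is a hypothetical illustration, not Sage's implementation: the weights, the penalty scheme, and the notion of a countable "value violation" are all assumptions made for the example.

```python
# Sketch of a reward that scores an agent not only on whether it achieved
# the outcome, but on how the behaviour aligned with human values.
# Weights and the violation count are hypothetical placeholders.

def shaped_reward(outcome_achieved: bool, value_violations: int,
                  outcome_weight: float = 1.0,
                  violation_penalty: float = 0.75) -> float:
    """Combine task success with a penalty for human-value violations."""
    reward = outcome_weight if outcome_achieved else 0.0
    return reward - violation_penalty * value_violations

# With these weights, an agent that reaches the goal while violating two
# value constraints (say, a deceptive and an offensive response) scores
# worse than one that fails the task but behaves well.
```

The design choice worth noticing is that alignment is part of the training signal itself, rather than a filter bolted on after the model has already learned to optimise for the outcome alone.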
Voice technology and social robots provide newly accessible solutions, specifically to people disadvantaged by sight problems, dyslexia and limited mobility. The business technology community needs to accelerate the development of new technologies to level the playing field and broaden the available talent pool.
There will be new opportunities created by the 'robotification' of tasks, and we need to train humans for these prospects. If business and AI work together, people will be free to focus on what they are good at: building relationships and caring for customers.