True AI – A long and winding road?
Suddenly we are inundated with Artificial Intelligence (AI): smart kettles, smart houses and cities, self-parking cars... and every cloud service now seems to be claiming some AI or Machine Learning (ML) capability.
So how come a discussion panel at this year's NetEvents Global IT Summit in San Jose, USA, was entitled "How long is the road to AI?"
It is time to put aside the hype and grandiose claims and consider what we, and the marketers, actually mean by "intelligence", and how far we are getting towards that promised land.
GlobalData technology group research and analysis head Jeremiah Caron introduced and chaired the discussion, saying: "The objective is to talk about the premise of AI, what is actually being delivered, and what we see in the short term going forward."
He pointed out a discrepancy between spending expectations and perceived importance: Data Science reported over 70% expecting to spend heavily on machine learning and 60% on computer vision applications, while a mere 47% reckoned that AI would be important for their future business.
"Companies don't necessarily think about AI, but what AI will do for them. They want to apply AI. That's what they care about.
This was not a standalone panel: it followed a keynote presentation by Ravi Chandrasekaran, senior vice president of Cisco's enterprise networking business, on "The Network is The Business", which covered the role of AI in the network, notably in assisting management and security.
One of his points was that "While the Industrial Revolution liberated humans from the limits of their physical capabilities, the digital revolution is going to liberate us from the limits of our mental capabilities" – a point that would be underlined later in this discussion.
He also drew an important distinction between Machine Reasoning (MR) and Machine Learning (ML).
MR is an "expert system" that compares observed data with a large knowledge base of human experience and knowledge in order to make "intelligent" decisions.
At its simplest, consider a toilet flush that monitors one variable (the water level) and applies one strategy (close the valve when the tank is full) to produce one intelligent outcome (the tank fills but never overflows).
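That one-variable, one-rule system fits in a few lines of code – a minimal sketch, with a hypothetical tank capacity and invented names rather than anything taken from a real cistern:

```python
# Minimal "expert system" sketch of the toilet flush: one observed
# variable, one rule, one predictable outcome. The capacity and names
# are illustrative only.

TANK_FULL_LEVEL = 10.0  # hypothetical capacity, in litres

def valve_position(water_level: float) -> str:
    """Apply the single rule: close the inlet valve once the tank is full."""
    if water_level >= TANK_FULL_LEVEL:
        return "closed"  # tank full: stop the inflow, so it never overflows
    return "open"        # still filling: keep the water coming

# The behaviour is entirely predictable for every possible input:
for level in (0.0, 5.0, 9.9, 10.0, 12.0):
    print(f"{level:5.1f} litres -> valve {valve_position(level)}")
```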
The difference between that and IBM's "Deep Blue" – the computer that shocked the world by defeating world chess champion Garry Kasparov in 1997 – is largely a matter of scale. Chess begins with just 32 pieces that can occupy 64 squares, but those small numbers generate a colossal number of permutations.
Add to that centuries of strategic expertise from past chess masters, to produce one intelligent outcome – checkmate – and you have MR at its best.
Then compare Deep Blue with AlphaZero, which achieved world dominance not only in chess but also in the far more challenging game of Go, simply by playing millions of games against itself without any further human input or expertise.
This example of Machine Learning shook the AI world, but it is worth noting that Go simply involves alternately placing black and white stones on a 19-by-19 grid: the resulting game becomes remarkably complex, but compared with the rapid decisions and actions a human driver makes in rush-hour traffic it is pretty trivial.
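Quite how complex is easy to check with a back-of-the-envelope calculation – a rough upper bound only, since most arrangements are not legal positions:

```python
# Rough upper bound on Go board arrangements: each of the 19 x 19 = 361
# points is empty, black or white, so there are at most 3**361 layouts.
# Most of these are illegal positions, but the order of magnitude stands.
upper_bound = 3 ** 361
print(f"3^361 has {len(str(upper_bound))} digits - roughly 10^172")
```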
This distinction – between the enormous adaptability of human intelligence and the power of AI focused on very narrow tasks – was emphasised by another panel member: Stanford Professor David Cheriton, perhaps better known as an exceptional IT investor and entrepreneur, and the co-founder of Apstra.
Cheriton said that people are too concerned with replicating human intelligence. "In my view, humans are relatively slow and unreliable at almost any task. There are 7 billion people on the planet, and computers can beat every one of us. So the idea of trying to reproduce human intelligence is misguided. What we are actually trying to do is to automate intelligent actions – something computers can do much better than a human – because what we want is predictable performance. That's what engineering is all about: building things with predictable performance."
This distinction was endorsed by another panel member: Nick McMenemy, CEO of Renewtrak, a fully automated white-label business that pursues support, maintenance and licensing renewals.
"We use a lot of machine learning, we don't use AI per se. This raised a very important distinction for me: people talk about artificial intelligence, when really they're talking about automating intelligent behaviour.
As Ravi Chandrasekaran said, "the digital revolution is going to liberate us from the limits of our mental capabilities" – and it is those limits we are now talking about. Being flexible and inventive is a great human ability, but when it comes to the grind of weighing up thousands of chess moves – or renewing a pile of low-margin contracts – we get bored, make mistakes and are better off handing the job to an automated system.
This, for panel host Caron, raised the question of human redundancy: "You quickly get into the human discussion in any conversation about automation or AI. We've all agreed that humans are pretty bad at a lot of things.
"So, Ravi, what does that mean for the army of Cisco experts out there?
The reply was that, essentially, the automation is there to save those experts from being bogged down in network complexity: "It's not trying to replace people. It is freeing them up so they can get their jobs done better" – again, liberating us from the limits of our mental capabilities.
Jeremiah Caron raised the question of whether new regulation is needed to meet the challenges of AI implementation, and Professor Cheriton came down firmly against the idea: "I'm not a big fan of regulation."
He cited the recent Boeing 737 MAX example: if that flight-control system had been based on machine learning, it should be seen as bad engineering, just as if the wings had simply fallen off: "If you deploy something in a safety-critical environment, and you can't predict how it's going to behave, that has to be viewed as irresponsible."
Predictability is a key selling point for Renewtrak, as Nick McMenemy explained: "Predictability is mandatory, not just for the CFO. Clients want predictability that something's going to happen. The staff want predictability that they can understand something has taken place." This is what turns Renewtrak's customers from sceptics into believers.
Among the comments from the floor, someone brought up the need for cleaner data if we are to trust machine learning: "You cannot compare if it has come from different versions of the operating system or different products... Small deviations lead to a very different conclusion unless you take them into account."
Professor Cheriton provided an interesting example that had a bearing on the need for clean data.
A colleague at Stanford had collected something like two million images for image-recognition systems to play with, and they were getting some very promising results – for example, recognising the difference between photos of a dog and photos of a wolf.
So what was the AI actually looking at?
The key difference turned out to be that the wolf pictures included snow and bushes, whereas the dogs were photographed against grass: "The point of this is that we thought we were further along in solving this problem than we were."
This background "noise" provided a great example of the need for cleaner data.
As for those "promising results", I'd give the last word to Professor Cheriton, the self-confessed "AI sceptic".
He said: "When people have asked me over the last 35 years in Stanford what I thought of AI, I say it's a very promising technology. It's been promising ever since I encountered it and continues to promise. It suffers from over-promising.