By Alan Barker, Freelance Writer, British Science Festival

Professor Jim Al-Khalili, the new President of the British Science Association (BSA), has chosen artificial intelligence (AI) as his campaigning topic. Good move, says Alan Barker, who was at Professor Al-Khalili’s Presidential Address at this year’s British Science Festival.

Jim Al-Khalili believes that the most important conversation of our generation is about the future of AI.

It’s a bold claim. Professor Al-Khalili used his inaugural Presidential Address at the British Science Festival this month to justify it. Press coverage prior to the address, inevitably, misrepresented his position. No, he does not believe that AI is a greater threat than global warming or terrorism; but he does believe that it will be the defining technology of the 21st century, as steam was of the 19th and electricity of the 20th. AI will dominate and dictate how we manage many of the other pressing issues of our time.

As the new President of the BSA, he’s chosen his theme well. In the last twelve months, AI has exploded into the media, and into public consciousness. Helping citizens make sense of AI is just the kind of thing the BSA exists to do.

Professor Jim Al-Khalili, new President of the British Science Association, gives his Presidential Address at the 2018 British Science Festival in Hull & the Humber

Professor Al-Khalili’s thesis in his address was that the technology is developing far faster than the political, economic or moral debates that surround it. ‘Algorithm’ may be the word of the moment, but, as one audience member remarked during the Q&A session, many of us couldn’t even define it. “It’s vital that the public understands what’s coming,” Professor Al-Khalili told us. “And my concern is that it [the public] doesn’t.”

The problem may not be only a lack of knowledge. Our response to AI may draw on deeper fears about both artificiality and intelligence.

~~~

The history of AI may provide some enlightenment. In a recent television programme (The Joy of AI, aired on the BBC), Professor Al-Khalili told the story in essentially four stages.

Stage 1 opens in the 1950s. Herbert Simon and Allen Newell were trying to automate the process of thinking; their Logic Theorist program impressively automated mathematical proofs. In Stage 2, programs began to develop beyond formal logic towards strategic planning. A computer plays chess, for example, by using heuristics: simple rules, supplied by the programmer, that allow a program to choose successful moves from a combinatorial explosion of choices. Such programs exemplify Classical AI, which – combined with robotics – has proved successful in highly controlled environments: supply chain management, manufacturing and construction, for example.
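
For readers who like to see the machinery, here is a minimal sketch of that Classical AI recipe, in Python. Everything in it is invented for illustration: the game (a trivial stone-taking contest) stands in for chess, and the hand-coded heuristic stands in for the evaluation rules a chess programmer would supply.

```python
# A toy, depth-limited minimax search of the kind used in Classical AI
# game programs. The game is deliberately trivial (take 1-3 stones from
# a pile; whoever takes the last stone wins), and the heuristic is a
# hand-written, programmer-supplied rule of thumb, used to estimate a
# position's value once the search budget runs out.

def heuristic(pile: int) -> float:
    # Hand-coded rule: a pile that is a multiple of 4 is lost for the
    # player about to move (a known property of this particular game).
    return -1.0 if pile % 4 == 0 else 1.0

def minimax(pile: int, depth: int, maximising: bool) -> float:
    if pile == 0:        # the previous player took the last stone and won
        return -1.0 if maximising else 1.0
    if depth == 0:       # search budget exhausted: fall back on the rule
        return heuristic(pile) if maximising else -heuristic(pile)
    moves = [m for m in (1, 2, 3) if m <= pile]
    scores = [minimax(pile - m, depth - 1, not maximising) for m in moves]
    return max(scores) if maximising else min(scores)

def best_move(pile: int, depth: int = 4) -> int:
    moves = [m for m in (1, 2, 3) if m <= pile]
    return max(moves, key=lambda m: minimax(pile - m, depth, False))

print(best_move(10))  # 2: leaves a pile of 8, a losing position for the opponent
```

Remove the depth limit and the search becomes exact but exponentially slower; in chess, where exhaustive search is hopeless, the heuristic is what tames the combinatorial explosion.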

Artificial intelligence has been used successfully in areas such as supply-chain robotics. Image credit: Wikimedia Commons

But the wider world is the opposite of highly controlled. How could AI negotiate the messy, contingent chaos of real life? At Stage 3 we meet machine learning: computers themselves began to learn the rules of decision-making. The key to machine learning is pattern recognition: a spam filter, for example, hunts out repeated patterns in phrases that help it decide whether to accept or reject emails. Where Classical AI reflects logical, rational thinking, machine learning seeks to replicate the more subconscious sense-making operations of the mind.
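
Here, too, a minimal sketch may help. The toy spam filter below, trained on an invented six-message ‘corpus’, uses a naive Bayes classifier, one common way of implementing the pattern-hunting described above: it counts which words co-occur with spam in labelled examples, then scores new messages by those learned statistics.

```python
# A toy spam filter built on pattern recognition: a naive Bayes
# classifier learns, from labelled examples, which words are
# statistically associated with spam. The training messages are
# invented for illustration.
import math
from collections import Counter

spam = ["win a free prize now", "free money click now", "claim your prize"]
ham = ["meeting moved to friday", "draft agenda attached", "see you at lunch"]

def word_counts(messages):
    return Counter(word for msg in messages for word in msg.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab_size = len(set(spam_counts) | set(ham_counts))

def log_score(message, counts):
    # Add-one (Laplace) smoothing stops unseen words zeroing the score.
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in message.split())

def is_spam(message):
    # Equal priors, since the training set has three messages of each kind.
    return log_score(message, spam_counts) > log_score(message, ham_counts)

print(is_spam("free prize now"))          # True
print(is_spam("agenda for the meeting"))  # False
```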

The next stage is to use the principles underlying the neural structures of the brain itself. Neural networks allow programs to learn endlessly and develop new strategies for solving problems that humans might not even recognise. This is roughly the point where we are now: Classical AI, machine learning and neural networks combining with robotics and other forms of automation to generate hundreds of applications.
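
Again, the principle is easier to see in miniature than to describe. Below is a toy two-layer network, written with nothing but numpy, that teaches itself XOR, a pattern no single straight-line rule can separate. Real neural networks stack millions of such units, but the core loop is the same: guess, measure the error, nudge every weight downhill, repeat.

```python
# A miniature neural network: two layers of sigmoid units learning XOR
# by gradient descent. The weights start random and are repeatedly
# nudged to reduce the squared error of the network's guesses.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Nudge every weight a little way downhill.
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically close to [0, 1, 1, 0]
```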

~~~

AI is no longer merely artificial: in his address, Professor Al-Khalili suggested that we might replace the word with ‘automated’, ‘assisted’, ‘augmented’ or ‘autonomous’.

AI is currently proliferating into a dizzying array of esoteric new technologies: Bayesian updating, fuzzy logic, model-based reasoning, multi-agent systems, swarm intelligence, genetic algorithms. AIs are now optimising harvests, interpreting medical images, grading students and detecting investment opportunities. New applications appear daily. Solve the problem of intelligence, says Professor Al-Khalili, and AI can then solve everything else.
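
Of the techniques in that list, Bayesian updating is perhaps the simplest to sketch: a belief is revised, by Bayes’ rule, each time a new piece of evidence arrives. Here is a toy example in the medical-screening spirit; all the probabilities are invented for illustration.

```python
# Toy Bayesian updating: a belief about a patient is revised as test
# results arrive. Invented numbers: a condition with 1% prevalence, and
# a test that catches 90% of true cases but wrongly flags 5% of healthy
# patients. The two tests are assumed (simplistically) to err independently.

def update(prior: float, p_if_true: float, p_if_false: float) -> float:
    """Bayes' rule: revise a belief in the light of one piece of evidence."""
    numerator = prior * p_if_true
    return numerator / (numerator + (1 - prior) * p_if_false)

belief = 0.01  # prior: 1% of patients have the condition
for test in ("first positive test", "second positive test"):
    belief = update(belief, 0.90, 0.05)
    print(f"{test}: P(condition) = {belief:.2f}")
# first positive test: P(condition) = 0.15
# second positive test: P(condition) = 0.77
```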

Meanwhile, we’re being offered unprecedented computing power, literally at our fingertips; and we’re losing control of personal information to ever-growing data mountains that can be mined and exploited by AIs run by states and global corporations. Together with AI’s rapid proliferation, claimed Professor Al-Khalili, these three elements make this a critical moment in AI’s development. The public backlash that he fears may not yet have materialised; but awed fascination is certainly accompanied by nervy apprehension.

Artificial intelligence is everywhere: from medical imaging to marking student exams and predicting the stock market. Image credit: Wikimedia Commons

Any policies for dealing with AI, then, must acknowledge these two polarised public responses and what fuels them. Professor Al-Khalili highlighted three main areas of concern: transparency, employment and ethics.

How, first, do we ensure what he calls ‘data trust’? Professor Al-Khalili spoke, wisely but a little unsurprisingly, about regulation that must be neither too stringent nor too loose; and about initiatives such as those being run by the Engineering and Physical Sciences Research Council to develop a ‘trust index’, which might allow users to assess what the council calls the ‘trustability’ of different platforms.

From the start of the Industrial Revolution, workers have tended to view any new technology as a threat to their jobs. Professor Al-Khalili once again offered solutions that were sensible but predictable: re-training, lifelong education, the prospect of new jobs replacing the old ones.

But one example took us, more intriguingly, into his third area of concern: the ethics of AI.

~~~

How would you feel about a robot looking after your widowed grandmother? There’s considerable public disquiet in our part of the world about the idea of robotic AIs taking companionship roles in the caring professions. But our concern is, it seems, a local phenomenon; in Japan, added Professor Al-Khalili almost in passing, people are much more comfortable with the idea.

Artificial intelligence can be perceived differently depending on one's culture

Why should that be? Some writers have suggested that Japan is more accepting of robots because of its Shinto tradition, which teaches that everything in the universe has a spirit – including inanimate objects and artefacts.

In the West, by contrast, the three great religions of the book – and the scientific revolution itself – have foregrounded human consciousness and separated it from the universe it observes, which is assumed to operate according to objective, irrefutable laws. In this worldview, technology – created through the intelligent application of these objective laws – can never itself be conscious.

This philosophical distinction between intelligence and its products throws up a number of tough ethical questions. Take, for instance, the potential for AI to make errors. If a driverless car causes a fatal accident, whom do we prosecute? The maker of the car? The designer of the program? It’s not simply a legal question; humans are, he pointed out, particularly unforgiving of machines making mistakes. Yet AI, he proclaimed at another point in the address, actually reduces the incidence of error: unlike humans, any AI can instantly learn from its mistakes and – in his words – “share its learning with all other AIs.” It was a curious, oddly humorous moment: the calm, rational scientist letting slip his apocalyptic vision of an omniscient AI imperium, before immediately backtracking (“It sounds kind of scary!”). 

Perhaps my imagination ran away with me. But that’s my point. Underlying any public disquiet about AI, deep in our collective imagination, is a set of images, metaphors and narratives born out of this radical disjunction between human consciousness and its creations. Together, they find voice in a rhetoric that often describes technologies in terms of living, sentient entities – a rhetoric that even the most considered scientists can, unwittingly, find themselves using. This is a language in which AIs ‘share’ and ‘learn’. They can ‘do things we haven’t asked them to do’. AlphaGo, an AI built by the British company DeepMind, recently beat one of the world’s top players at the game of Go, displaying what Professor Al-Khalili suggested might be ‘rudimentary creativity’. He even referred, albeit jokingly, to the idea of enslaving AIs, as if they were people (“actually, it doesn’t sound too bad, does it?”).

In this language, the words ‘intelligence’ and ‘consciousness’ come perilously close to being synonyms. On this view, ‘artificial intelligence’ is a contradiction in terms – a contradiction that’s played out in popular culture, from the myth of the Golem to the science fiction of Philip K Dick (one of Professor Al-Khalili’s favourite authors). Any form of cognition that’s not human can’t be artificial, or autonomous, or augmented. It must be alien.

HAL 9000 from Stanley Kubrick's 2001: A Space Odyssey - in popular culture, artificial intelligence is often coloured by a dark, mythic undertow

Take HAL, the infamous computer on board the Jupiter mission in Stanley Kubrick’s 2001: A Space Odyssey. HAL 9000 is the ultimate AI: “No 9000 computer,” it tells a BBC interviewer, “has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.” HAL has been developed by a technical intelligence implanted mysteriously in humans by an alien race. In setting out to murder the ship’s crew, HAL is doing no more than pursuing the logic of his mission to discover his true parentage.

~~~

When our view of AI is coloured by this dark, mythic undertow, it’s hardly surprising that we might think twice before using AI to help us decide issues of fundamental human importance.

Except that we’re already doing so, every day. An AI will decide whether you’re eligible for a bank loan. DeepMind, Moorfields Eye Hospital and University College London are developing an AI that can detect eye conditions from complex scans more accurately than humans. The benefits are obvious. But consider another machine-learning algorithm, developed by a team in Florida, which can seemingly predict a suicide attempt with 90% accuracy. Doctors could now rely on AI to decide whether to curtail someone’s freedom, on the basis of something they haven’t yet done. Professor Al-Khalili suggested that the story echoes Philip K Dick’s Minority Report: “a machine making a decision about humans is something we find deeply unsettling.” Even closer to Dick’s story, US courts have, since 2000, been using COMPAS, a ‘criminal risk assessment tool’, to predict the likelihood of a convicted criminal re-offending, despite recent analysis showing the software’s predictions to be unreliable and racially biased.

We must appreciate the limitations of artificial intelligence and foresee the consequences of our programming

In a way, such examples serve to reassure. Any AI is only as good as the data we feed it, and the algorithms we build into it. The real danger lies in our inability to foresee the consequences of our programming.

The best way to improve public understanding of AI, as Professor Al-Khalili acknowledged, may be to point out its limitations. AI was first developed to automate reasoning – a specific, limited form of thinking. In The Joy of AI, Professor Al-Khalili quoted from Hans Moravec’s book, Mind Children:

Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.

That’s the Moravec Paradox. What we find hard, AI finds easy; what a toddler finds intuitive, AI finds impossible. Its creators have not yet managed to replicate the unconscious cognition that supports reasoning in humans. “It is comparatively easy,” wrote Moravec, “to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”

~~~

So perhaps the controversy should be about not only the word ‘artificial’, but also the word ‘intelligence’. In The Joy of AI, Professor Al-Khalili added that AI systems cannot currently derive concepts from pattern-recognition. An AI may be able to recognise a dog after being shown hundreds of images of dogs, but it will never know what a dog is. The capacity to generalise from limited experience – the kind of ‘one-shot learning’ that human toddlers practise with ease – is, for AI practitioners, the holy grail.

For the moment, then, the ‘singularity’ – the moment when a ‘general AI’ overtakes human intelligence and robots take over the world – remains a distant prospect. AI cannot replicate the multifaceted, general intelligence of humans. Not yet. And that message, as Professor Al-Khalili emphasised, should inform any effort to raise public awareness of AI.

As well as discussing the social, political and ethical issues thrown up by AI, then, perhaps we can help ourselves by reframing our deeper cultural responses to it. AI may be developing faster than our capacity to understand it; but we also need to ensure that, in responding to it, we don’t allow our imaginations to run away with us.