This is the presidential address in full that was given at 17:00 on Thursday 13 September 2018 by Jim Al-Khalili, President of the British Science Association.

My address today… is on artificial intelligence.

Let me begin with a clarification. Last week the BSA held a press briefing in London at which I gave the gathered science correspondents a summary of what I was going to be talking about today. It was widely reported. And misreported. So, can I just clarify for the record that I do not believe that AI is a greater threat to humanity than climate change, as some papers reported. I am in fact hugely excited by the potential of AI to transform our world, which is why I hope what I talk about now is taken more seriously than it would be were I part of the scaremongering bandwagon that brings up the Terminator at every mention of the subject. Indeed, those headlines misquoting me conveyed a quite different message from the one I hope I got across in my recent BBC4 documentary, The Joy of AI, in which I presented a positive and optimistic picture of AI and what it can achieve.

If you think back––those of you old enough to remember––you’ll recall a time before the early 90s… before we had the internet. I remember one lunchtime in my physics department at Surrey back around 1992, when I mentioned how excited I was about the then new ‘World Wide Web’. A colleague of mine was sceptical, saying he didn’t think it would come to anything! Now consider just how much the internet has transformed all our lives. In fact, I believe that AI will revolutionise every aspect of our lives to a greater extent than the internet itself has. What’s more, it won’t need another 25 years to do so.

In fact, artificial intelligence, machine learning, robotics and automated systems have the potential to change our world faster and more fundamentally than any previous technological revolution. AI has moved out of the realms of science fiction and into our everyday lives, working unnoticed, very often behind the scenes. But think back to some of the other major technological advances since the industrial revolution, like steam power or electricity. They began by being useful in very specific applications, and only gradually did they become such an integral part of our lives that we could look back and wonder how we ever managed without them. Many people now predict that AI will grow to become the pervasive technology of the 21st century and beyond.

A recent House of Lords inquiry resulted in a report that concluded that “artificial intelligence, handled carefully, could be a great opportunity for the British economy, and that it presents a significant opportunity to improve productivity, which the UK is right to embrace”. Their recommendations were designed to support the Government in realising the potential of AI for our society and our economy, but also, and just as importantly, to protect society from its potential risks. How we as a society choose to respond to these will affect us as well as future generations.

Until maybe a couple of years ago, had I been asked what is the most pressing and important conversation we should be having about our future, I might have said climate change, or one of the other big challenges facing humanity, such as terrorism, antimicrobial resistance or the threat of pandemics. Many people may argue that the most important conversations today should be about how to eradicate poverty, or deal with the planet’s dwindling resources with a growing world population, or immigration or social justice issues, possibly even democracy itself. But, while not wishing to undervalue the importance of these huge challenges that we face, I would say that the most important conversation of our generation is in fact about the future of AI. After all, it may well dominate and dictate what happens with all these other issues, and hopefully will help humanity to solve them.

So why all the hype now? Why is AI becoming the new zeitgeist, elbowing its way through the frenetic babble on Brexit, Trump and the rise of populist and radical ideologies, fake news, continuing unrest in the Middle East, and everything else we’re bombarded with in the news and on social media these days? I will say something about the reasons behind the current excitement and progress as well as addressing the worries many people have about AI.

But let me make it clear from the outset. I believe that the media’s obsession with sensationalism, and the fact that ‘fear sells’, along of course with Hollywood’s depiction of machines taking over the world, have added to much of the misinformation about AI. While the Terminator movies are good fun, we do need as a society to be more sensible about the risks of AI, for it is both prudent and logical that we debate and invest in AI safely and ethically. We as the public, as well as policymakers, have a responsibility to understand the capabilities and limitations of AI technology as it becomes an increasing part of our daily lives. This will require an awareness of when and where this technology is being deployed.

AI in some sense is with us already. Apple's Siri, Amazon's Alexa, Google Translate. Many of us are full of wonder at these technological miracles. The fact that I can just speak into my smartphone in English and my words are instantly translated and spoken by a computerised voice in any language I choose, means the end of phrase books when we travel abroad.

And yet, being in the hands of those tech giants, Apple, Google and Amazon, means that many people are understandably concerned that, increasingly, all this data on our daily lives, habits and preferences is being logged, recorded and analysed by these powerful entities. To what end? How can society be reassured that the rapidly advancing AI technologies are transparent enough not to hide some insidious Big Brother motives?

So, is the current unprecedented level of interest, investment and technological progress in AI happening too fast? Well, I am certainly not advocating that we slow down the rate of technological progress. That is not possible, nor indeed desirable. However, it is vital that the public understands what is coming. And my concern is that it doesn’t.

The advent of what is sometimes referred to as the “fourth industrial revolution” sees AI technologies and intelligent automation entering many industries and often combining with related technologies, such as additive manufacturing, the Internet of Things, virtual and augmented reality, and biotechnology.

So, what exactly is artificial intelligence?

The current definition of AI is as follows: it is the broad range of technologies with the ability to perform tasks that would otherwise require human intelligence, like visual perception, speech recognition and language translation. It often has the capacity to learn or adapt to new experiences or stimuli – so-called machine learning.

But this definition is already looking out of date. Pattern recognition in data, whether in images or speech, and decision making based on that recognition are really about the rather dumb AI we already have with us, and they don't encompass the logical problem solving and creativity we are already seeing in AIs in a rudimentary sense.

AI technologies aim in the long term to reproduce, and even surpass, abilities that would require 'intelligence' if humans were to perform them. These include: learning and adaptation; sensory understanding and interaction; reasoning and planning; optimisation; autonomy; indeed even creativity and intuition.

So, while the ‘A’ in AI stands for Artificial, it could equally well represent several other words, such as

Automated intelligence: the automation of cognitive tasks, both routine and non-routine;

Or Assisted intelligence: helping people to perform tasks faster and better;

Or Augmented intelligence: helping people to make better decisions;

Or indeed Autonomous intelligence: automating decision-making processes without human intervention at all.

And you may feel more or less ambivalent, more or less nervous, about these different definitions.

We tend to hear a lot about certain approaches in AI, like deep neural nets or reinforcement learning, and tend not to hear about many of the other tools and techniques being developed––concepts such as Bayesian updating (which is based on probability theory and involves updating estimates for decision making in the light of new knowledge or information) and fuzzy logic (a method of decision making that imitates humans by considering possibilities between the digital values YES and NO – exploring the grey areas between stark black and white). Then there’s model-based reasoning, case-based reasoning, multi-agent systems, swarm intelligence, genetic algorithms and neural networks. Many of these tools have been around for decades.
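To make the first two of those concrete, here is a minimal, purely illustrative sketch in Python. It is my own toy example rather than anything drawn from a real AI system: a one-line Bayesian update that revises a probability estimate in the light of new evidence, and a simple fuzzy membership function that grades a temperature somewhere between a stark NO (0) and YES (1). All the names and numbers are invented for illustration.

```python
# Toy illustrations only: Bayesian updating and a fuzzy membership function.
# The scenario and numbers below are invented for the purpose of the example.

def bayesian_update(prior, likelihood, evidence_rate):
    """Bayes' rule: P(hypothesis | evidence) =
    P(evidence | hypothesis) * P(hypothesis) / P(evidence)."""
    return likelihood * prior / evidence_rate

def fuzzy_warm(temperature_c, cold=10.0, hot=30.0):
    """Degree to which a temperature counts as 'warm', graded smoothly
    between 0 (definitely not) and 1 (definitely yes)."""
    if temperature_c <= cold:
        return 0.0
    if temperature_c >= hot:
        return 1.0
    return (temperature_c - cold) / (hot - cold)

if __name__ == "__main__":
    # Suppose 1% of components are faulty, a test flags 90% of the faulty ones,
    # and 5% of all components get flagged. After a positive test, the estimate
    # that a given component is faulty jumps from 0.01 to 0.18.
    print(round(bayesian_update(prior=0.01, likelihood=0.90, evidence_rate=0.05), 2))

    # Fuzzy logic refuses a hard yes/no: 18 degrees C is 'warm' to degree 0.4.
    for t in (8, 18, 25, 32):
        print(t, "->", round(fuzzy_warm(t), 2))
```

Even in this toy form the point is visible: the Bayesian estimate moves smoothly as new evidence arrives, and the fuzzy value deliberately occupies the grey area between black and white, which is exactly the behaviour described above.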

So, what’s so different now?

Often the debates surround the notion of super-intelligent machines that far surpass our own cognitive abilities, but that is something for the future. No one, not even the AI specialists themselves, knows when, if ever, we will reach the heady heights of what’s called artificial general intelligence – when machines can do everything humans can, even potentially developing machine consciousness. The more immediate challenges, opportunities and risks of AI are of a far more mundane nature.

There are several reasons for the growing interest and excitement in AI. I will list three. Firstly, there have been a number of recent breakthroughs in AI research. One obvious one is the work of Google’s DeepMind, probably the foremost AI research organisation in the world – and by the way, it is British and based in London. Their deep neural net and reinforcement learning algorithms, demonstrated by their AlphaGo and AlphaZero programs, have shown that AIs are already exhibiting rudimentary intuition and creativity – human traits that you may think are still a long way off in the future. They’re not. And DeepMind researchers’ work isn’t a secret: it’s being published in leading peer-reviewed scientific journals, such as Nature and Science. The second reason for the current AI boom is the ever-growing amount of data available today to be mined and analysed, as well as being used to train the AI systems themselves. And the third reason is the incremental but relentless increase in computing power, allowing researchers to develop more sophisticated algorithms, together with the internet, which means that AIs have broken out of research labs and into cheap, affordable and mobile devices, such as our smartphones.

Let me outline a little history. The field of AI has been from the beginning inextricably linked to the development of digital computing, many pioneers of which, like Alan Turing and John McCarthy, were also closely involved with conceptualising and shaping what we mean by intelligent machines, going all the way back to the early 1950s. Indeed, Alan Turing’s seminal paper, Computing Machinery and Intelligence, helped to formalise the concept of AI.

For example, many of you will have heard of the so-called ‘Turing Test’ to determine whether a machine has achieved ‘true’ intelligence. It was understood early on that one promising approach was to build machines that could replicate the way we humans think, which led to the first experiments with artificial ‘neural networks’ designed to crudely approximate the networks of neurons in the human brain.

In the 1970s, the UK’s Science Research Council launched an inquiry, which led to the Lighthill Report of 1973. While supportive of AI in general, it was deeply critical of much basic research in the field. This resulted in widespread scepticism towards AI, even within the field itself, and led to what is often now referred to as the first ‘AI winter’.

But research into AI continued, and by the 1980s, so-called ‘expert systems’ started to be developed commercially. The aim was to record and programme into machines the rules and processes used by specialists in particular fields, and to produce software that could automate certain forms of decision making.

However, a second global ‘AI winter’ hit at the end of the 1980s because of the limitations of expert systems, as well as their high costs and tendency to become less accurate as more was asked of them. These systems were very different from the AI algorithms we have today that rely on machine learning, for those old systems didn’t have the ability to ‘learn’ new functionality.

Despite this, by the 90s, AI started to be applied to increasingly diverse functions, from predicting changes in the stock markets, to mining large databases and the development of visual processing systems such as automatic number plate recognition cameras. But these AIs deviated from the original dream of replicating human intelligence using logic-based ‘symbolic AI’. Instead, they looked for patterns and correlations in data. They were getting very good at doing this, but their applicability was rather narrow. Still, they proved, and are still proving today, to be enormously powerful. The widespread availability of cloud computing platforms, such as Alibaba, Amazon and Microsoft has allowed clients to tap remotely into huge stores of computing power with relative ease, without the need to maintain their own hardware. And the growth of open source development platforms for AI—such as Google’s TensorFlow, a library of components for machine learning—has made it easier for both researchers and commercial entities to make use of the technology.

Over the past decade, the most exciting development without doubt has been in artificial neural networks and machine learning, which are far more effective at a wide range of tasks because they are closer to replicating the way we humans think than the old so-called ‘classical AI’. The celebrated example of DeepMind’s AlphaGo program defeating the world champion at the Chinese game of Go is remarkable. The AI made a move during one game that no human could understand; not, that is, until much later in the game, when it turned out to be a tactically brilliant move. Unlike chess, the game of Go relies mostly on intuition rather than simply crunching through possibilities, making AlphaGo’s achievement something of a landmark event in AI development.

So, you can see that developments in the field of AI over the past half century or more have been strongly characterised by boom and bust cycles, in which excitement and progress have been followed by disappointment and disillusionment. But it is now almost universally acknowledged that, with so many recent advances, interest and investment are likely to persist this time round.

And the transformative opportunities that will be brought about by AI are staggering – the financial implications alone are eye-watering: according to a 2016 estimate by PricewaterhouseCoopers, AI could contribute 15 trillion dollars to the global economy by 2030 – more than the output of China and India combined.

The UK is one of a group of countries at the forefront of AI research, and we cannot afford to give up our lead in many areas of this technology as increased use of AI could bring major social and economic benefits; it offers massive gains in efficiency and performance to almost all industry sectors, from drug discovery to logistics. Artificial intelligence, handled with intelligence, could be a great opportunity for the British economy. AI presents a significant opportunity to solve complex problems and potentially improve productivity, which the UK is right to embrace. But our government has a responsibility to protect society from potential threats and risks.

In all areas, from commerce to industry to driverless cars, AIs will reduce errors, not just because they are smarter than us at those particular tasks but because they would give us the ultimate example of shared best practice. A human will make a mistake and hopefully learn from it so as not to repeat it, but that doesn’t stop other people making the same mistake, whether factory workers or drivers, say. But an AI not only learns, but it instantly shares this new knowledge with all other AIs so that particular mistake is never made again, by any of them!

So, what is the problem? When I say AI is coming at us too fast, I do not mean we must try and slow this technological progress down. What I mean is that it is moving faster than the wider debates about society’s concerns surrounding the ethics, transparency and security that come with the new technology.

I wish to deal with several of these concerns here. As the opportunities that AI provides society grow, so too do the potential risks: namely that AI will proliferate, uncontrolled and unregulated, in the hands of a few increasingly powerful technology firms, at the expense of transparency, privacy, equality and jobs.

Firstly, the need for transparency.

Unlike mistakes of the past, when there wasn’t enough transparency about scientific and technological issues that impacted on society, such as nuclear power or genetic engineering, we must put transparency at the heart of AI development and ensure that the necessary regulations are in place. This does not mean a stark choice between transparency and innovation. As I’ve mentioned, access to large quantities of data is one of the factors fuelling the current AI boom. But the ways in which data is gathered and accessed need to change, so that innovative companies, and academia, have fair and reasonable access to data, while at the same time consumers are able to protect their privacy. The recent implementation of GDPR goes some way towards addressing this, of course. But we still need much more awareness of when and where the technology is being deployed.

For example, we will soon be seeing the emergence of personalised medicine. This is a wonderful thing, until you start to consider WHO has all that information about you. We need to be custodians of our own health information.

The solution is not simply to rely on established concepts, like open data and data protection legislation, but also the development of new frameworks and mechanisms, like data portability and data trusts. Large companies that have control over vast quantities of data must be prevented from becoming overly powerful. We must urgently review the use and potential monopolisation of data by big technology companies. It is encouraging to see that organisations and institutions, in industry, academia and government are already working on developing tools that allow users to gain a deeper understanding of the algorithms used by AIs and to enable them to evaluate, critique and gain a measure of how much they can trust them. For example, the Engineering and Physical Sciences Research Council is funding a project run by researchers at three universities (Oxford, Edinburgh and Nottingham) that will be developing a ‘trust index’ for online platforms to assess their trustworthiness. And this is just one of 11 initiatives given funding by the EPSRC to enhance understanding of Trust, Identity, Privacy and Security.

Another concern is that AI technologies, advancing unchecked and unregulated, could lead to greater inequality in society, between the haves and have-nots.

We must ensure that the use of AI does not prejudice the treatment of particular groups in society. The Government must therefore incentivise the development of new approaches to the auditing of datasets used in AI, as well as encouraging greater diversity in the training and recruitment of AI specialists.

A very real worry is a potential public backlash against AI, arising either out of fear or misunderstanding, just as we saw towards genetic modification in the 80s and 90s. And if the public become disengaged, then our leaders and policymakers will see it as less of a priority––although sadly, in the age of Brexit, everything else is less of a priority.

So, if the regulations that need to be in place come too late, then at the very least this will result in the technology not being used to its full potential in the public sector, and at worst there could be a clampdown on the use of AI in the public sector. But you can be sure that private companies will continue to use AI, unregulated, for example to improve the targeting of products. And the opposite is also a concern: over-regulation would mean that the opportunities and life-changing benefits of AI, such as in healthcare, would be lost.

I firmly believe that, used properly, AI can help reduce inequality in society, not make it worse.

The ethics of AI is another vital area that we need to explore.

Like any new technology, AI can be used for good or bad. Take the military for example. Autonomous weapons have been described as the 3rd revolution in warfare, after gunpowder and nuclear weapons. Sending in robots to carry out bomb disposal is a good thing: it avoids the need to put human lives at risk. But fully armed autonomous killer drones that can make life or death decisions independently of human control... not so nice.

And we know there are other ways to weaponise such new technologies. If Russian cyber hackers were able to meddle with the 2016 US elections, then what’s stopping cyber terrorists from hacking into any future AI-controlled power grids, transport systems, banks or military installations? In fact, while I am on this subject, can I just say that one growth industry in terms of jobs will have to be cybersecurity.

A point worth repeating over and over again is this: technology in itself is neither good nor evil. It’s how we humans choose to make use of it. AI is no different. So, when it comes to military applications, just as one hopes chemists and biologists would not wish to build chemical or biological weapons––and indeed, most countries around the world have signed treaties not to develop them––so most AI researchers have no desire to build AI-controlled weapons. There is therefore an urgent need to be discussing international treaties on AI use in the military. The issue, as with any technology, is that those in power will always want to explore its capabilities, not always for good. AI is no better or worse in this sense than any other technology that has military applications. For example, it is difficult to stop the use of drones altogether just because they can deliver bombs, when pretty soon we will be expecting them to deliver our pizza.

And who is to blame when AI goes wrong? Are autonomous systems different from other complex controlled systems? Should accidents be treated like other kinds of mechanical failure? This raises important legal issues. Reaction to failures of autonomous systems is somewhat different to reaction to failures of human controlled systems. For example, a failure of a surgical robot that has genuinely learnt from past experiences cannot be attributed to any one person. Is it therefore the case that no one can be held responsible?

The concern with potential failures of autonomous systems could mean that such technologies are held back until they are believed perfect. But this can be too strong a requirement. People tend to be more accepting of a technology if they can choose whether or not to adopt it and have some control over its use. Public perception is very sensitive to the distinction between imposed risk and risks that individuals feel they can control. But sometimes autonomous systems are needed where humans might make bad choices, maybe as a result of panic in stressful situations. So human operators are not always right, nor do they always have the best intentions.

Society can be unforgiving of any shortcomings in new technologies. Driverless cars are a good example. Think of the many thousands of people killed on roads every year. It is widely acknowledged that about 80% of car accidents are caused by human error: tiredness, distraction, alcohol or just bad judgement. These would be entirely eliminated if AIs were in control. Let us leave aside the familiar ethical dilemma of who is culpable if a driverless car causes an accident––is it the owner? The company that sold the car to them? The computer scientist who developed the algorithm that contained the glitch? The point is that while we eliminate 80% of human-caused accidents on the roads, new, unforeseen accidents may be caused by AIs themselves, because they’re not clever enough. Let us imagine nevertheless that these will be far, far rarer than the 80% of accidents that have been avoided, and let us assume that these glitches are ironed out in time. It is still a remarkable yet understandable attitude that the public are far less forgiving if a machine causes an accident resulting in a fatality. Never mind that we’ve avoided a hundred people being killed in car accidents caused by human drivers, we would somehow still be more disturbed by the one fatality caused by an AI. It’s an interesting moral dilemma.

Let me now address the issue that most of us are worried about these days: jobs.

We cannot predict the future, so we cannot say how many jobs will be impacted by AI, robotics and automation. One report says 15 million jobs in the UK will be affected (not lost, but affected). But where does this number come from? If we're talking 5 years from now, then obviously it will be much lower. But if we're talking 30 years from now, then it’s a number no one can predict, so it’s meaningless.

Undoubtedly new jobs will be created as AIs take over current tasks. And we simply do not know what these might be. Automation has always replaced humans, going all the way back to the industrial revolution. So, in itself this really isn’t new. And the economy will, I believe, adapt to AI as it does to all new technologies in time. Technology has not historically led to long-term unemployment, although it has displaced workers from specific tasks and altered the type of employment available. What is new is that we cannot predict the extent to which the AI revolution will do this. And that makes people, rightly, nervous. Will AI empower and liberate people, or will it take control away from us?

Many jobs will be enhanced by AI, many will disappear and many new, as yet unknown jobs, will be created. A significant Government investment in skills and training is therefore imperative if this disruption is to be navigated successfully and to the benefit of the working population and national productivity growth. This growth is not guaranteed and we need to consider carefully how AI can be used to raise productivity, for the new technologies should not be viewed as a general panacea for the UK’s wider economic issues.

Last month, the Bank of England’s chief economist, Andy Haldane warned that “large swathes” of the workforce in Britain could in the coming years face unemployment as AIs and robots take on many low-skilled jobs. But other commentators point out that there is no reason not to expect AI to create as many jobs as it takes away. This will vary from country to country, depending on the nature of the industries.

So, let us assess this situation more carefully. In most cases AIs won’t replace us, but work with us to make our jobs easier and better. For example, in healthcare AIs are already helping us find tumours in scans, helping with diagnosis and helping in surgery.

Joseph Stiglitz is an economics Nobel laureate and former chief economist at the World Bank. Just two days ago, he delivered a speech at the Royal Society on artificial intelligence. He argued that it may not be such a bad thing if AIs take over many of our jobs, because this would open up new employment in areas where there is a big demand for human workers: in education, the health service and care for the elderly. He says that “if we care about our children, if we care about our aged, if we care about the sick, we have ample room to spend more on these”. So, if AIs take over certain low-skilled jobs, such as in transport, factories, supermarkets and call centres, then the blow is softened because we could hire more people in those industries that rely on human interactions, such as the care services.

Let me give a crude example: think back to ancient Greece or the Roman empire. The rich and powerful led lives of leisure. Why? Because they had an army of slaves doing all the work for them. Now, please do not misunderstand me. I am not advocating an army of slave robots at our every beck and call, while we wander around in togas eating grapes and discussing philosophy––actually that doesn’t sound too bad! Nor do I suggest that a life of doing nothing would be fulfilling. Rather, the reason we work––often in jobs we would happily give up if we could afford to––is to earn a living, to have the income that allows us to live. What if those jobs were done by AIs and we still received an income that allowed us to pursue a meaningful and purposeful existence, working because we wanted to, not because we had to? I don’t know the answer. I’m not an economist or sociologist, and the jury is out on whether a new economic model based on universal basic income would work. It is of course being trialled in several countries.

How do we then engage with the public on these complex issues?

Should certain avenues of AI research be abandoned because there is significant objection to the idea – or is a technology push sometimes the right thing? After all, there were objections to the idea of heart transplants when it was a new technology. Now it is an accepted and vital area of medical practice.

A lot of research suggests that consumers’ appreciation of what AI will mean is shallow. Nearly half of consumers surveyed in the UK by the British software giant Sage readily admitted they have “no idea what AI is all about.” The survey concludes that: “Although those in the technology industry consider AI to be the most important topic around right now, there is a lot we still need to do to better educate the world about AI, define it, and communicate what it can really do.”

More encouragingly, the most extreme negative prediction – that robots will “take over” – is widely rejected by both technology communities and average consumers alike.

Their findings also fit the picture that I have been presenting: that consumers, though generally optimistic about AI, are more likely to be concerned about the potential for technology to dehumanise interactions or lead to job displacement.

So what else can be done? As AI decreases demand for some jobs, but creates demand for other new ones, retraining will become a lifelong necessity and pilot initiatives, like the Government’s National Retraining Scheme, could become a vital part of our economy. This will need to be developed in partnership with industry, and lessons must be learned from the apprenticeships scheme. At earlier stages of education, children need to be adequately prepared for working with, and using, AI. This will require a thorough education in AI-related subjects, with adequate resourcing of the computing curriculum as well as support and training for teachers. For all children, the basic knowledge and understanding necessary to navigate an AI-driven world will be essential. In particular, the ethical design and use of technology has to become an integral part of the school curriculum. It may even be that our lifelong working pattern changes from one in which we are trained only at the beginning of our working lives and then use that set of skills through to retirement, to one in which we undergo a cycle of training, working for a few years, then retraining again in a rapidly changing world of work.

In summary then, I urge the scientific community, politicians, policymakers, educationalists and business leaders to improve the transparency on the use and development of AI and to ensure that the public has trust and confidence that AI is not taking away freedoms or privacy. If we don’t, there is a real danger of a GM-style public backlash against AI.

And there is an urgent need for wider public engagement, consultation and debate on the ethics of AI. These must be put at the very heart of AI development and we must urgently ensure that adequate regulations are in place. The wider debate about the implications of AI must catch up with its technological progress. After all, remember that it is not AI itself that should worry us, but rather the humans who control it.

In closing, I wanted to leave you with a message from the late Stephen Hawking. Stephen has been widely quoted in the press as offering dire warnings about the coming of AI. Well, he had this to say:

“Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation. And surely, we will aim to finally eradicate disease and poverty. Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.”

AI is going to transform our lives in the coming decades even more than the internet has over the last few decades. Let’s make sure we’re ready for it.

Thank you.