APPG for Future Generations event: How Do We Make AI Safe for Humans?

20 July 2018

The All-Party Parliamentary Group for Future Generations held another event in the UK Parliament on 'How Do We Make AI Safe for Humans?'

It was held on the evening of Thursday 19th of July in the Attlee Suite of the House of Commons. There will be several more events this year in the 'Managing Technological Risk' series, covering biotechnologies, environmental resource management and nuclear risks.

The APPG raises awareness of long-term issues, explores ways to internalise longer-term considerations into decision-making processes, and creates space for cross-party dialogue on combating short-termism in policy-making. It was established by Cambridge students working with CSER researchers.

Event overview:

Artificial intelligence presents us with many opportunities, but also many risks. This panel brought together four global experts on AI, who considered how to maximise the long-term benefits of AI for everyone, and ensure that it is developed safely and responsibly. They offered practical guidelines for improving policy making in this topical, and vitally important, area.

Report:


The meeting was chaired and introduced by Lord Martin Rees. In his opening remarks, Lord Rees challenged those present to look beyond AI's current high profile and to consider how it would be viewed from the perspective of the 22nd century, a perspective that at least some of those in the room stood a chance of seeing.


The first speaker of the evening was Edward Felten, former Deputy U.S. Chief Technology Officer in the Obama White House, Director of the Center for Information Technology Policy, and Professor of Computer Science and Public Affairs at Princeton University.

Professor Felten argued that, whilst the future of AI is uncertain, the most likely trajectory involves the continued development of 'narrow' AI systems that are intelligent only within a specific, well-defined domain – from driving vehicles to playing chess. This, he proposed, suggests that the development of AI, though potentially dramatic, will remain broadly linear, without any 'intelligence explosion' or 'technological take-off'. We should therefore expect the challenges that AI poses to future generations to take the form of small shocks as AI replaces human intelligence across a wider and wider range of domains, but with humans still playing a vital role in coordinating, and continuing to develop, this technology.

Professor Felten saw three main implications of this view for policy-makers. Firstly, they should not fear that machines will seize control, but that humans will lose control by building systems so complex and interconnected that we no longer understand them or know how to make them secure. Secondly, they should not fear that machines will attack humans, but that humans will use machines to attack one another, for instance via lethal autonomous weapons and cyber warfare. Finally, they should not fear that machines will oppress humans, but that humans will build machines that become engines of oppression, such as tools of enhanced surveillance and social control.

He concluded that, fundamentally, AI is going to raise the stakes on issues that we are already facing. For instance, there is little evidence to suggest that AI will leave humans without any jobs left to do, but plenty to suggest that it will demand ever-increasing levels of flexibility and development in the labour market, which many workers simply will not be able to keep up with. At present we can work on these challenges whilst they are still growing relatively slowly. Experimenting in this way gives us opportunities to learn for the future, and it is the only way we will gain the knowledge and experience we need to face the even greater challenges that this future will bring.


The next speaker was Joanna Bryson, a researcher at the Center for Information Technology Policy and Reader in Artificial Intelligence at the University of Bath. She wrote up her remarks here.

Dr Bryson argued that, whilst it was sometimes easy to get hung up on the mysteries of intelligence, she doubted that there was ultimately any intelligence more general than the application of Bayes' Rule – the rule that governs how to alter one's beliefs about the world to take account of new evidence – and that this is something that AI systems could already do.
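For readers unfamiliar with it, Bayes' Rule can be written as a single equation (a standard textbook statement, included here for reference rather than drawn from Dr Bryson's remarks):

$$P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}$$

where H is a hypothesis about the world and E is a new piece of evidence: the probability assigned to H after seeing E is its prior probability P(H), reweighted by how strongly H predicts that evidence relative to how likely the evidence is overall.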

However, she went on, intelligence is not the same as an algorithm; it is computation, the implementation of algorithms in physical systems. For human beings, intelligence is all about translating (physical) perceptions into (physical) actions. Losing sight of this fact has led many people to attribute overly idealised properties to intelligence that real-world systems cannot instantiate. This is why the development of AI tends to rely on improvements in computing power as much as, if not more than, theoretical breakthroughs in computer science.

Hence, the development of AI must be seen in relation to its impact on resources. Dr Bryson argued that this raises two important questions, which one might label sustainability and inequality, or security and power, or simply 'how big a pie do we make?' and 'how do we slice it?'

AI will empower humanity to do more than it has ever done, for better and worse. However, it will also make individual humans more 'exchangeable' by reducing the importance of individually acquired skills and expertise. This is likely to drive up inequality, which is already much higher than the ideal level for promoting human wellbeing.

AI will also fundamentally change the nature of work. Automation may not reduce the number of jobs, because, by increasing the productivity of individual workers, it can increase the demand for some labour even whilst replacing other labour. However, it will tend to increase the demand for higher-skilled, technically sophisticated jobs that not everyone will be able to perform.

Finally, AI will change the nature of our communities. Whilst local issues will always matter to individual people, AI will gradually erode the differences between groups from the perspective of the global economy. This suggests that such issues will be pushed down the policy agenda, which can have significant knock-on impacts. This would only be exacerbated by policies like a Universal Basic Income, which attempt to solve the distribution problem whilst doing even more to erode the importance of individual people and their local communities.

Dr Bryson concluded that achieving good outcomes for people from the development of AI means focusing on the people, and not just the technology.


The third speaker was Nick Bostrom, author of Superintelligence, Director of the Future of Humanity Institute and Professor of Philosophy at the University of Oxford.

Professor Bostrom continued the discussion of AI safety by arguing that, despite what previous speakers had said, there was a very real chance that a superior 'General Artificial Intelligence' could emerge in the next few decades, and that we should not ignore the risks this would pose to humanity.

For instance, he pointed out that a lot of the excitement in recent years has centred precisely on developments which have allowed AI systems to engage with a far wider scope of problems than was previously possible, such as the deep learning revolution. These rely on designing algorithms that can learn from raw data, without the need for humans to decompose problems into component parts and then define rules for each of them.

If we take seriously the possibility that in the future we may do more than simply repeat past successes in developing AI, Professor Bostrom argued, then there are three great challenges that policy makers must face.

The first is AI safety, or how to keep intelligent systems under human control. If we make algorithms arbitrarily powerful then how can we be sure that they will be safe - i.e. that they will continue to do what we intend them to do? If we cannot achieve this level of control then it is virtually certain that, sooner or later, these systems will harm humanity. This is now a well-established field of academic study. However, the resources being devoted to it remain considerably lower than they should be given the potentially existential risk from failing to solve this problem.

The second is global coordination. We have a long track record of inventing technologies and then using them to fight wars, stifle economic competition and oppress each other. The more powerful our technologies grow, the more worrying this becomes. Given that we already have nuclear weapons that could destroy all of humanity, the development of an international race mentality, in which different actors attempt to gain a geopolitical edge via the development of AI, is something that should really scare us.

The third challenge, Professor Bostrom argued, is how to deal with the possibility that humans are, or will be, harming AI. We are building minds of greater and greater complexity and capability. Might these deserve some degree of moral status? We already think that many non-human animals have moral status and can suffer (including quite simple animals like lab mice). Why should we not have the same level of concern for digital minds? At present we do not even have criteria for when we might start to worry about the interests of intelligent machines, even if such criteria would merely reassure us that there is nothing to worry about yet. This could be dangerous, because computers may deserve moral standing sooner than we think, and if we do not even ask when this might happen then we might unwittingly commit moral atrocities against them.


The final talk of the evening was by Shahar Avin, a former software engineer and a researcher at the Centre for the Study of Existential Risk at the University of Cambridge.

Dr Avin argued that there is no one set of risks from AI. Rather, there are many risks that deserve our attention. Some of these are short-term risks, whilst others are long-term, usually because their nature is so uncertain that it will take many years of study before people can even understand how to go about preventing them. Similarly, some of these risks relate to the nature of the technology itself and what it is able to do, whilst others are risks of malicious use, arising from the powers that this technology will give to human beings. Finally, some of these risks are associated with the development of the new technology itself, whilst others relate to its broader systemic effects. Sometimes we cannot attribute a harm to any individual actor, including a machine, but must recognise that it flows out of complex global systems, from the economy to the global security system.

What should policy makers do? Dr Avin argued that they should consider the governance of technology across at least four levels.

First there is the technology itself, and in particular the AI algorithms. It is very unlikely that regulation at this level is going to work because ideas want to be known and information wants to be free.

Then there is the hardware that these algorithms need to run on. This is far easier to regulate because it is unequally distributed around the world, so that countries that possess more computing power are able to exert more control over how it is used. At present, however, knowledge about the use to which computing power is being put is very limited and we will not be able to regulate well until we know a lot more about this.

The next level is the regulation of the data that AI systems need to function. This is already a hot topic of debate, though generally an area where people feel governments could be doing better.

Finally, policy makers might try to influence the talent that will be needed to create AI systems: how we educate, employ and monitor these people. This seems much more like the bread and butter of government policy, at least in some places.

At present, Dr Avin concluded, the UK is undoubtedly well positioned to engage with international norm building around the development of AI. It also has a very important role in educating the next generation of AI engineers. Not only is the UK currently a country with a lot of AI development talent, but people also simply like to hear content delivered in a British accent, for some reason. This talent is highly mobile, however, and if it is not well developed and managed the UK may not keep this position for long.
