Meet the Researcher: Matthijs Maas

10 March 2021

Matthijs Maas, Research Associate
Email: mmm71@cam.ac.uk

Matthijs looks at how we should govern high-risk technological change, in a changing world, using governance tools which may themselves be left changed by technology, for better or worse. In particular, he investigates global governance strategies for artificial intelligence (AI), with an emphasis on global institutional design and arms control regimes.

Matthijs is an Associate Fellow at the Leverhulme Centre for the Future of Intelligence, a College Research Associate at King’s College (Cambridge University), a fellow at McLaughlin College (York University), and a non-resident affiliate researcher at the Centre for European and Comparative Legal Studies (University of Copenhagen). He holds a PhD in international law from the University of Copenhagen and an MSc in International Relations from the University of Edinburgh. He has previous experience in diplomacy, think tank research, sustainability consulting, and a coding bootcamp, among other things.

Keywords: Artificial Intelligence, AI Governance, International Law, Arms Control, TechLaw, International Institutional Design


Please can you tell us about your main area of expertise?

As with many of my colleagues, my work tends to be interdisciplinary, which matches the risks we study, and the relative newness of our field. For my research area of AI governance, other factors that contribute to interdisciplinarity are the ubiquity of this technology and the rapidity of technological development.

As such, my research involves developing conceptual tools and applying existing academic lenses to new problems and contexts in AI governance. My AI governance work has drawn on (international) political science, legal theory, and various historical case studies. I also draw on frameworks from risk studies to help understand governance levers for mitigating global catastrophic risks more broadly; and I’ve applied lenses from criminology and organizational ‘normal accident’ theory to understand security and safety risks from AI technology.

I’m currently particularly interested in the ways that various applications of AI systems can strengthen or erode international cooperation and the norms or processes of international law, as well as the resilience or ‘adaptability’ of AI governance regimes to future sociotechnical changes in AI technology or its use.

Can you tell us about your pathway to CSER?

I’m not sure there was an explicit overarching plan for the route which I pursued into this field. As often as not, I was just reading the next interesting thing, or pursuing the ‘next interesting topic’. So any retrospective story I can tell about my route to CSER will really be that--a narrative where I try to piece my steps together, which misses much of the serendipity along the way. However, I think I can distinguish four steps I took into this field, with CSER researchers playing a large role in several of them.

When I first became interested in, and worried about, technological catastrophic risk: my first ‘uh-oh’ moment came from my concern over nuclear weapons, and in particular the track record of accidents. Around 2013, as an MSc student at Edinburgh, I read Scott Sagan’s ‘The Limits of Safety’, which maps numerous near-miss accidents in Cold War nuclear forces. In several of these so-called ‘normal accidents’, accidental global nuclear escalation was avoided only by sheer luck. His book and others drove home to me how our societies are constantly walking a knife’s edge in their deployment and management of even the most powerful and potentially catastrophic technologies. Most importantly, it made clear to me that even highly skilled operators may not be able to drive the accident rate low enough, given the extreme stakes of a disaster.

When I became interested in existential risk generally: in 2014, around the time of the Nuclear Security Summit, I was working on these topics at a Dutch strategy think tank. I was lucky to attend an Amsterdam workshop given by CSER’s Seán Ó hÉigeartaigh. In conversations, he expanded my horizon on how the risk of nuclear weapons was just one of a broad range of potential extreme risks facing our world. As I spent the next few years working at various posts in diplomacy and consulting, I remained interested in these topics. Through conversations with people from the Effective Altruism community, and through research assistance work with researchers from the Global Catastrophic Risk Institute (GCRI), I found myself increasingly involved in the community of scholars investigating these existential risks.

When I specialized in AI governance: my focus on the responsible governance of potential risks from artificial intelligence was spurred through my work with GCRI, and through my early involvement in the Future of Humanity Institute’s ‘Global Politics of Artificial Intelligence’ research network (now the Centre for the Governance of AI). While working on their AI governance research agenda, I began to see AI as an emerging ‘general-purpose’ technology--a technology that offers tremendous promise, but which can easily contribute to extreme global risks. Could we manage this new technology well? With many past technologies, I feel that we have at best stumbled our way through avoiding catastrophe (as with nuclear weapons), or at worst have remained stuck in clearly catastrophic societal trajectories (as with fossil fuel-driven climate change). As such, my concern was and is over whether we will be able to ‘get governance right’ at the dawn of this new, potent technology.

When I took the step to academic AI governance research: CSER played a key role in my taking my research interests into academia. In 2016, I was selected to give a student presentation on ‘lessons from nuclear nonproliferation for AI arms control’ at CSER’s first Cambridge Conference on Catastrophic Risk. At this conference, I met Dr. Hin-Yan Liu from the University of Copenhagen. Our conversations there led me to pursue a PhD on AI governance under his supervision. The environment at the Copenhagen Law Faculty allowed me to explore a broader range of concepts for thinking about the risks from (AI) technology, the technology’s disruptive impact on (international) law, and how these risks can and should be governed. My research there provided the tools and inspiration to articulate my research agenda for CSER -- a team I was very happy to join last October.

Please tell us about your current research at CSER?

My CSER research agenda is somewhat dynamic, as is the field. My broad focus area concerns investigating various governance instrument choices (such as treaties or institutions) and design choices (such as ‘hard law’ or ‘soft law’ voluntary standards), and comparing how these could provide effective, legitimate, resilient and coherent governance regimes for high-risk AI.

Specifically, I have recently become interested in what I call ‘Plan-B global governance for AI’. If meaningful comprehensive governance of (high-risk) AI technology proves intractable in the coming decades, are there alternative interventions or governance paths that we could or should pursue in the meantime? Or if, on the other hand, the world does reach agreement on some global AI institution or treaty in the coming years, how do we prevent such agreements from fraying or being bypassed by the advancing state of the art in AI technology? My research on this draws on historical case studies as well as international law and international relations scholarship on global governance instrument choice and design.

What drew you to your research initially and what parts do you find particularly interesting?

As mentioned, I was drawn to the field because of a concern over the responsible governance of AI technology. Through my work with GCRI and the Centre for the Governance of AI, I began to understand that while AI technology can address many problems, it can also contribute to extreme risks. These could occur if the technology continues to advance and becomes increasingly societally transformative. However, far-reaching global disruption is plausible even if you grant unduly pessimistic assumptions about future AI progress, and assume merely the continued rollout of today’s algorithms. After all, even today’s machine learning systems might more than suffice to produce global catastrophe, if they were unsafely integrated into nuclear command and control systems.

This underlies my desire to help inform and shape global debates and policies on AI governance during what may be a relatively brief window of opportunity to ‘get it right’. So a sense of urgent concern certainly played a role. But there is also a sense of excitement and opportunity: both the study of existential risks and the field of AI governance are young, still developing, and more flexible than more settled academic fields. So in a sense, the future is still open, and I’m encouraged that we might all in our own ways contribute to shaping and improving it.

Of course, there is also an intellectual attraction of studying a field that touches on so many domains of life, as well as so many of our longstanding philosophical, ethical and political questions. In my research, I am constantly excited by the interdisciplinarity and the breadth of ideas that I get to engage with, and the people I get the opportunity to learn from.

Finally, as a bonus, AI is a topic that resonates with many people, such that my research area also made for some great bar conversation-starters, back when bars were a thing.


What are your motivations for working in Existential Risk?

I’m partly motivated by concern about the extreme risks, and our chequered societal record on managing such risks. But it is not all dread -- there is certainly also a sense of excitement and opportunity for the good societies we could still create, if just given the time and opportunity.

We have so many stories left to live together, and it would be such a tragedy if the tapestry of our human tales, with all their experiences and patterns, were to be cut short at so early a historical moment. This is not to pretend that this history has always or even mostly been good or just, of course. It certainly hasn’t been. But one can, as Max Roser recently put it, simultaneously believe the following three things--that “the world is much better; the world is awful; the world can be much better”. Slowly, we have been learning to reckon with our heritage, and with our responsibilities to each other, and to our planet. For me, the need and opportunity to learn from our mistakes and improve creates a certain moral urgency of optimism. But to climb that hill, we will still need time. We need to ensure that we and future generations will have that time and opportunity.

Moreover, we have a potentially very long future ahead of us. In a very real sense, our future story could be much longer than we can easily wrap our heads around (given our human biases such as scope insensitivity). Such a future will not be without its problems, but it will be full of surprise, and opportunity. We can choose to fill it with beauty, restoration, joy, justice, exploration, and give space for many people to tell their own tales. Given all this, I think that the mitigation of existential risks can well serve as a ‘common cause’ for many people of diverse convictions: it’s something we can converge on, to work together to secure a future for ourselves, and for those generations whose time is still to come.

What do you think are the key challenges that humanity is currently facing?

While it is not the focus of my own research, the climate crisis is surely one of the most critical challenges today. Certainly, it is one with perhaps the greatest scientific clarity, urgency, and global visibility. Likewise, our society’s response to the COVID-19 pandemic is laying bare some of our epistemic, institutional, and governance vulnerabilities around catastrophic risks. As such, it seems very likely that if we cannot learn to address these challenges, we will be in a very poor position to address any other extreme risks, either today or in the future.

This speaks to a more general concern I have around our society’s structural ability to meet many uncertain (technological) risks. This is that, as E.O. Wilson once put it, humanity has “Paleolithic emotions, medieval institutions, and godlike technology”. For many extreme risks, we face cognitive biases around low-probability, high-impact events, ‘tragedy of the uncommons’ political dynamics, and (global) institutional tools that are potentially ill-suited to adapt to catastrophic risks, or which struggle with anticipating risks from emerging technologies. This is a particular challenge in considering potential risks from AI technology, and how these may evolve in coming years.

More philosophically, something that worries me is that existential risks may simply not be a ‘fairly human-sized’ challenge. Rather, it might be that, if someone were to make some sort of video game of human history, the ‘existential risks’ stage would get that game panned for ‘unreasonably steep learning curves’ and ‘ridiculously unfair difficulty settings’. It might be narratively appealing to think of this as a single threshold test of our societal maturity, so that if we manage to pull together to find the (scientific, moral, political, institutional) solutions to address one class of catastrophic risk, this would show that we deserve to be in the clear. But I am not sure.

Of course, as mentioned above, successfully avoiding one type of risk (e.g. environmental breakdown; pandemic risk) would likely be important or even necessary to our ability to address other existential risks. At the very least, doing so would avoid future societal instability; at best, it would likely involve a set of political or institutional changes (e.g. a recognition of the rights or interests of future generations) that would also be relevant for addressing many other catastrophic risks. But such improvements, even if necessary to meeting other challenges, might not be sufficient (and in rare cases might even stand in tension). So if existential risks are indeed a ‘test’, then it is one that doesn’t involve a single challenge to be solved once, but rather many problems which need to be simultaneously solved, today and going forward; a test where the final grade is ‘pass/fail’; and where that final score is determined not by our average GPA, nor by our strongest course, but by our performance on our weakest subject. That concerns me.

For people who are just getting to grips with Existential Risk, do you have any recommendations for reading, people to follow or events to attend? 

Don’t get me started! While this field is new, there is a lot of interesting and fast-developing work out there that offers good introductions to it:

Some other resources to follow:

You can find me on Twitter at @matthijsMmaas; overviews of my papers are available on Google Scholar, SSRN, and ResearchGate. My personal website is https://www.matthijsmaas.com/

