
Markus Anderljung on AI Policy

Markus Anderljung is Head of AI Policy at the Centre for the Governance of AI in Oxford. He was previously seconded to the UK Cabinet Office as a senior policy specialist.

In this episode we discuss Jack Clark’s AI Policy takes, answer questions about AI Policy from Twitter and explore what is happening in the AI Governance space more broadly.

(Our conversation is ~2h long, feel free to click on any sub-topic of your liking in the Outline below. At any point you can come back by clicking on the up-arrow at the end of sections)

Contents

Jack Clark’s AI Policy Takes: Agree or Disagree

Michaël: Let’s begin this interview with AI policy takes, true or false; a game where I tell you AI policy takes from Jack Clark on Twitter and you tell me which ones you agree or disagree with. For some context, Jack Clark is the co-founder of Anthropic and recently published a thread on Twitter, a goldmine of AI policy insights. The actual thread was something like, “One like = one spicy take on AI policy”.

The Real Danger In Western AI policy

Michaël: This is one of the tweets: “The real danger in Western AI policy isn’t that AI is doing bad stuff, it’s that governments are so unfathomably behind the frontier that they have no notion of how to regulate, and it’s unclear if they even can.”

Markus: I think for most of these you’ll struggle to have me say, “Oh, I agree or disagree,” because everything is nuanced. It’s right insofar as the thing that I primarily care about is “how do we make sure that we can deal with the future systems that will crop up, and be more powerful and more capable than the systems we have today.” If it’s the case that we can’t handle the systems that are cropping up today and the harms that they’re causing, then that’s a bit worrying for how we might handle future problems. I think this is true for a lot of the Jack Clark takes: they’re more true for the US than for the rest of the world. In other parts of the world, people are figuring out how to regulate AI, or at least trying to. The EU is a big example of this.

Michaël: True, but I think most of the progress in AI is in the US, right?

Markus: Primarily, if you think about frontier development, definitely, the US is the main place where it’s happening.

Artificial General Intelligence Is Frowned Upon In AI Policy Meetings

Michaël: New take: “The notion of building general and intelligent things is broadly frowned upon in most AI policy meetings. Many people have a prior that it’s impossible for any machine learning-based system to be actually smart. These people also don’t update in response to progress.”

Markus: I somewhat agree. I think it’s true that this is a thing that not a lot of people want to talk about or that it’s associated with sci-fi. I do think that people are probably in the course of changing their minds as they start seeing models that are more general. And if it’s the case that the development of these AI systems moves in the direction of being more and more general, and more and more capable, then we should expect more people to update in this direction. We’re seeing a bit of updating and if the world goes in a certain direction, we’ll continue to see a bit more of it.

Michaël: We’re even seeing people talk about advanced AI systems in The New Yorker or The Times, with new articles on AI.

Libertarian Snow Crash Wonderland

Michaël: Another take: “The default outcome of current AI policy trends in the West is that we all get to live in libertarian Snow Crash wonderland, where a small number of companies rewire the world. Everyone can see this train coming along and can’t work out how to stop it.”

Michaël: Are we in libertarian Snow Crash wonderland?

Markus: I don’t know, I haven’t read Snow Crash. I’m assuming that the thought is something like, we’ll live in a world where all the power lies with a small number of corporations. Those corporations determine what happens in our lives, we’re beholden to them. I don’t know. Firstly, I wouldn’t expect everyone to be seeing this train coming. If that were the case, then why are we not stopping it? Maybe the world could coordinate: everyone could look around and realize everyone else agrees, and then you could do something about it. I also don’t know the extent to which we’ll end up in that kind of world.

Markus: One reason against that is, if we just look at the history of other technologies, quite often what you see is that initially, when the technology comes into place, there is an accumulation of wealth and power into a small number of actors as a result of this technology, and then over time that diffuses, partly via government intervention. For example, we saw this in the late 1800s and early 1900s in the US: because of the robber barons, we now have antitrust regulation. And if development is slow enough that policymakers are going to see that this power is going to be concentrated, and be really worried about it, then I would expect that you start moving away from this Snow Crash wonderland a little bit.

Feeling Insane From Repeating The Same Basic Points

Michaël: “AI policy can make you feel completely insane because you will find yourself repeating the same basic points. Academia is losing to industry, development capacity is atrophying, and everyone will agree with you, and nothing will happen for years.”

Markus: That’s probably true. In general, that’s just how policymaking works. The way to make the world see things the way you see it is to keep repeating your point over and over again, and figure out how to tell it in a way that is compelling to people. That’s this whole business. Maybe it’s extra bad in the AI policy space, I don’t really know.

Being Wildly Specific To Get Stuff Done

Michaël: “To get stuff done in policy, you have to be wildly specific. CERN for AI? Cute idea. Now tell me about precise funding mechanisms, agency ownership, plan for funding over long-term. If you don’t do the details, you don’t get stuff done.”

Markus: That’s right. And this is the kind of work that I’m most excited about in the AI policy space: people trying to do exactly that kind of thing. Just go really, really specific, think through all the details. Because there are many people in the world who will spend five or 10 minutes, or maybe an hour, thinking about an idea. There are very few people who will actually spend a week, or several weeks or years, thinking about an idea and figuring out all the ins and outs of it. That’s the way to have a competitive advantage and actually be able to be informative to people.

Michaël: I think now some people are trying to translate a bunch of ideas from the AI safety or AI alignment space into concrete policy to bring to the US government.

Twitter Questions

Michaël: Now to move on to AI governance takes. Another format where I ask you questions from Twitter asked by my followers, and you try to give some quick answer.

The Difference Between AI Governance And AI Policy

Michaël: Some of them are AI governance takes and some of them are AI policy takes. Can you maybe give a quick definition of the difference between policy and governance?

Markus: People use it in a few different ways. At the Centre for the Governance of AI, often we’ve used governance to mean something like the entire space of what affects the actions of important actors in the AI space, and what can be done to change those actions in positive directions. That’s how we’ve defined it. Sometimes I’ve defined it as something like, “Oh, there are all these things that need to be done to make sure that AI goes well or that AI benefits humanity. Some of those are technical questions of how to make AI systems meet such and such criteria, and then AI governance is everything that’s not that.”

Markus: And then on the policy side of things, people use this word in a few different ways as well. Sometimes people use it to just refer to what governments do. So government policy or public policy. I tend to use it a little bit more vaguely to mean something like what should various actors do. And so you might have internal policies, at AI labs, that they might follow, for example.

Do We Need A New Federal Agency For Regulating Advanced AI?

Michaël: First question. Do we need a new federal agency for regulating advanced AI?

Markus: This is the thing to figure out. It depends a lot on how this technology evolves. If it’s a general purpose technology, like electricity, then we probably don’t need one central regulator for all of AI. Instead, what you need primarily is that all of your regulators across the entire economy understand how this technology is going to affect their space, and gain the knowledge to be able to deal with that. On the other hand, development might be more centralized, there might be a few systems that end up being very, very impactful, or we might end up in a world where foundation models are used across the entire economy but there are only a handful of foundation models. In that kind of world, I could imagine that you need some AI-specific regulator that specifically deals with these questions.

What’s A Foundation Model?

Michaël: Imagine there’s a regulator watching this video. What’s a foundational model?

Markus: Some people will call it just branding. A foundation model is a concept that came out of a team at Stanford HAI, the Stanford Institute for Human-Centered AI. Basically, the idea is that you have one model that has been trained on a huge amount of data and that can be used in a very, very wide range of contexts. That’s the rough idea of what a foundation model is. And then their claim is that over time, or that we’re seeing a trend towards, these foundation models mattering more and more.

Markus: The claim is that the way the AI ecosystem might look in the future is that you have a handful of really, really big models that are pre-trained: they’ve been trained on a lot of data, and then those models are fine-tuned for different tasks. Some people will say that this is just branding on the part of Stanford, and that you could just use a term like “base model”: a model that sits at the base of your tech stack and that you then build things on.
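
To make the pre-train-then-fine-tune pattern concrete, here is a minimal sketch. It assumes the Hugging Face transformers and datasets libraries; the specific model (“bert-base-uncased”) and dataset (“imdb”) are purely illustrative choices, not anything discussed in the conversation.

```python
# Minimal sketch of the "foundation model" pattern: one large pre-trained
# model, adapted to a narrow downstream task by fine-tuning.
# Assumes the Hugging Face `transformers` and `datasets` libraries;
# "bert-base-uncased" and "imdb" are arbitrary illustrative choices.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

base = "bert-base-uncased"  # the pre-trained "foundation"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# One of many possible downstream tasks: sentiment classification.
data = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=data["train"].shuffle(seed=0).select(range(2000)),
)
trainer.train()  # the same base model could be fine-tuned for many other tasks
```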

Will AI Governance Reduce AI X-risk Assuming Short Timelines And Fast Takeoff?

Michaël: Suppose we’re in a world where AI timelines are very short and takeoff speeds are very rapid; let’s say fast takeoff and AGI in less than eight years. Has the AI governance community produced any plans that will meaningfully decrease risks under that scenario? A lot of the time I hear “timelines”, “takeoff speeds”, “AGI”. No need to go into big definitions, but simple ones would help.

Markus: What they mean here by AGI, is that maybe at some point we’ll have a system that’s sufficiently intelligent such that it can improve itself. It can find new data in the world and figure out how to readjust its weights, or find new data to train itself on, or find ways to access more compute to train itself on or implement itself into. And then the thought is that, that kind of system is very “agenty”. So it’s a system that takes actions in the world, and it plans in the long term.

Michaël: To be clear, AGI is Artificial General Intelligence, if someone from the OECD is watching this.

Markus: The thought is that, maybe if these kinds of systems are possible, then maybe those systems will have a very fast takeoff speed. This means that once the system gets to a certain level of capability, it manages to improve itself very, very quickly. And so it might go from a human-level intelligence, or close to human-level intelligence, however we would try to measure that, to something much, much more capable. Timeline is just a term that people use to mean the time between now and when you reach some certain level of capability. Often the metric people will use is something like human-level machine intelligence.

Michaël: Assuming short timelines and fast takeoff, has the AI governance community produced any plans to reduce risk from AI?

Markus: I think, probably. To some extent I don’t like this framing of producing plans, because it has this very stodgy or inflexible feel. Even if you gave these three criteria of what this world looks like, that’s very little information about what that world is actually like. And the right actions to take, or what we should do in that world, or how things might go well, or how an actor might act responsibly, will depend a lot on the specifics of that world. A bunch of other facts are going to matter.

Markus: To some extent, I think that what it looks like to prepare for that kind of world looks less like, “Oh, given X, Y, Z happens, we should do these other things.” It might look a little bit more like, one term that I’ve heard people use which I think is useful, is something like maybe it looks like a playbook. So it’s like if the world looks like so and so or if these parameters have these values, then we do this. If they have these other values, then we do that. If you believe that the government would be very, very competent and figure out this problem really well, then maybe you’re like, “Oh, okay. Let’s just tell the government what’s happening.” And then maybe things would go really well. It would also depend on who has this system.

Markus: It depends on a whole bunch of things. I do think that there has been a bunch of work that I’m confident would be helpful if one could see this coming down the pike, or saw it on the horizon. I don’t know how much there is of someone having put together, “Here’s my five-step plan assuming that such and such thing happened.”

Are We Preventing A Literal Arms Race?

Michaël: Is there anything being done to prevent a literal arms race?

Markus: In AI?

Michaël: Yeah, assuming people just start scaling models up. And the question mentions medium-long timeline.

Markus: I think this is a really important topic. A bunch of people are thinking about it. One classic frame that a lot of people will have when they think about these things is, “Okay, well, we’re going to develop these systems, these systems are going to be very capable and they’re going to be very economically useful.” And then it’s going to be the case that making sure that these systems do what you want them to do, making sure these systems act in alignment with their users’ intentions, or in alignment with human values; the thought is that might put you at a competitive disadvantage.

Markus: If that’s true, then we might have a problem. Because in that situation, the actors that don’t take those safety precautions, or don’t make sure that their system is promoting human values or not causing harm, those actors are put at an advantage. And then the actors who are less responsible end up developing the technology, and then maybe gaining all this economic value and gaining a bunch of power or influence over the world. That’s a general framing used by a lot of people who think about how to govern, or what policies to put in place to deal with, potentially much more capable AI systems.

Markus: What is being done about this kind of thing? I think there are a few strands of work. We’ll come onto my view of what’s happened in this space later. But examples of things that you might do that seem good are: you might want to start thinking right now about, “In 20 years’ time, what might a reasonable international agreement on AI look like?” Or you could start doing things like making sure that the relevant actors trust each other. Because if they start imputing bad intentions to each other, if they believe that the other actor is going to act really irresponsibly, so that even if they develop these very powerful systems they’re not going to use them for good purposes, then they’re going to be much more likely to say, “Okay, well, it’s worth it for me to take this hit on safety, or act less responsibly, or develop my system less carefully, because it’s really important that I make it there first.” That seems like a worrying situation to me.

Will Actors Take Hits On Safety And Cooperate?

Michaël: What do you mean by taking a hit on safety?

Markus: Before you deploy a big model or a model that you think might be dangerous, maybe in the ideal world everyone could just take a step back. Maybe you take it easy for about a year and then you have a bunch of people figuring out whether the model might cause them harm or might cause a problem. If we were in a world where all of humanity was just one person, then it would make a lot of sense for that person to just be like, “Okay, I’ve now developed this system. And then before I deploy, I’m going to spend a few years figuring out whether it’s good or not, and then I’m going to deploy it.” But we don’t live in that world, we live in a world where there are a bunch of different humans, they have different interests, they want different things. And so you’re going to have competition and that might cause problems here.

Michaël: We might be in this world where DeepMind develops AGI first, and then OpenAI, because of its charter, decides to join DeepMind and help make their system safe. And if DeepMind has enough of a lead in development, they can just spend, maybe not a year but two months, making it safe.

Markus: You could imagine those kinds of things happening. If it’s the case that these kinds of systems are possible and that they will be developed, then those kinds of solutions seem great. You want these actors to trust each other enough that that kind of thing can happen. Maybe you want them to commit in advance, as OpenAI has said: if they believe that someone else is sufficiently close to developing AGI, then they’re going to accept that and join forces with them.

Markus: Overall, we should expect lots of the important actors in this space, partly because of these competitive dynamics, to take the risks from developing or deploying these models less seriously than they should. One thing that seems very important in general is to make sure to communicate those risks in good ways to those actors. An example: it seems great for militaries not to involve AI systems very, very closely in their nuclear command and control, to make decisions about launching nuclear weapons.

Markus: One of the main reasons that this seems great, is because if you do that with present day systems they’re going to be too brittle, and they’re going to not be able to deal with all of the different situations that might be thrown at them. The advantage you get in time-to-launch or something like this, just isn’t worth it from the accident risk that you’re incurring. That might be the case with a bunch of different actions that these actors, or that developers of AI systems might take going forward where the accident risk is big enough that it’s not even in their self interest to take a certain action.

Michaël: So we need to convince them that it’s close to what nuclear weapons are?

Markus: If it’s the case that these technologies are close to nuclear weapons, then yeah, you would do that kind of thing. And I think the thing that you can do now, is you can do a bunch of work to just explain the problems that these systems might have, and the risks that you might incur if you start using them too much. Or if you don’t use them responsibly enough.

Can We Regulate Technological Progress?

Michaël: There are other examples where technologies have been regulated, for instance GMOs in Europe. Do you think something similar could happen with AI and slow down progress?

Markus: That’s a good question. GMOs, based on my very cursory understanding, seem like a case where public opinion turned against it enough that development was slowed down very much, especially in the EU. And it could be the case that similar things happen with AI. To what extent do I think that that will happen? I don’t think the effect will be as large as it was with GMOs or with nuclear power, because I think these technologies are more, I don’t know, AI is more omni-use, it can just be used everywhere. And it’s currently being used in a whole bunch of products already, and people also don’t viscerally feel where it’s being used.

Markus: Another big thing is, both nuclear power and GMOs pull at this disgust reaction that humans have. You don’t like eating something that’s poisoned; we’ve evolved to be disgusted by things that might be poisonous, and I think both nuclear and GMOs pull at that. On the other hand, AI doesn’t quite have those features. It has a little bit of that via people being worried about these systems being very inhuman and making important decisions about their lives. And so I could imagine those kinds of things leading to regulation that slows down progress.

Michaël: At some point AI will be automating parts of our economy and people will be losing their jobs. And when 50% of the economy is automated by AI, people will start having some disgust, like you said.

Markus: I think that’s right. I don’t know, maybe that’s not quite the disgust reaction being pulled out, but it’s an anger reaction. And if that ends up happening, then I think that might be a really, really important factor in how AI technologies are deployed and developed in various countries.

Can We Regulate Compute?

Michaël: We’ve talked about an AI arms race from scaling models. Do you think we can regulate the amount of compute people train models with?

Markus: A promising thought is, if it’s the case that in the future the amount of compute used to train a model is one of the most useful proxies of the capability of the model, then I think that’s also going to be one of the most important proxies of the potential impact of the model. Because with more capability comes more possibility of causing harm, and also more possibility of doing good things in the world.

Michaël: I think Uncle Ben in Spiderman says, “with great power comes great responsibility.”

Markus: Exactly. That is just a claim that is true, and it’s also embedded in a lot of regulation, in how we do policy and regulation in the world at large. And so if this prediction is true, that compute is one of the most important proxies of capability, then a very promising direction will be doing things like: if a model is such and such size or bigger, then you’re going to need to put such and such requirements on it. Or you’re going to need to do such and such checks, or maybe you need to bring in a red team, or an audit, or maybe do staged release.

Markus: Especially if we end up living in some kind of world where the scaling hypothesis is true. Or where by just scaling up these models a lot, you might end up getting systems that are at human level. If that’s true, then I think that this is a really promising direction, and I would love there to be more research on it and more people thinking about this.
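
As a toy illustration of the compute-indexed requirements Markus sketches here, the example below maps an estimate of training compute to oversight tiers. The 6·N·D FLOP estimate is a common rough approximation of dense training cost; the thresholds and required measures are entirely hypothetical, chosen only to show the tiering idea.

```python
# Toy sketch of a compute-indexed "with more power comes more responsibility"
# rule. The 6*N*D FLOP estimate is a standard rough approximation for
# transformer training; the thresholds and measures below are hypothetical.

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens  # rough forward+backward cost per token

# Hypothetical tiers: (minimum training FLOPs, required measures)
TIERS = [
    (1e24, ["external red-team", "staged release", "incident reporting"]),
    (1e22, ["internal risk assessment", "documented evals"]),
    (0.0,  ["basic documentation"]),
]

def required_measures(n_params: float, n_tokens: float) -> list[str]:
    flops = training_flops(n_params, n_tokens)
    for threshold, measures in TIERS:
        if flops >= threshold:
            return measures
    return []

# Example: a 70B-parameter model trained on 1.4T tokens (~6e23 FLOPs)
print(required_measures(70e9, 1.4e12))
```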

Markus: There are also some things we could do today to start pushing in this direction. A colleague of mine, Lennart Heim, and I recently did some engagement with the US government, which is considering setting up this National AI Research Resource, partly intended to give compute to academic researchers to do work on AI. One thing that we said there was, “Okay, well, if you’re going to be giving people a whole bunch of compute, could you put in this little principle that with more power comes more responsibility?” So if you train a model that is a certain size, then you need to take greater responsibility. Another lever here is access through an API. An API is basically when, instead of having the model on your local computer, the model is on the cloud. All I do is send it a prompt, I send it an input, and then I get back the output. And if that’s the case, then that might be a useful way to make sure that people can’t misuse the model.

Markus: With GPT-3, for example, I engage with it through an API. And that means that OpenAI can have a filter that says, “Okay, well, it looks like Markus is producing a whole lot of content that is about the war in Ukraine, and it seems like it’s pro-Russia.”

Michaël: Is this what you actually do? You just send stuff about Ukraine to the GPT-3 API?

Markus: Whenever I play with these algorithms or these systems via an API, I do try out, “Can I make it say something bad, or produce something that it’s not supposed to produce?” Quite often the answer I’ll get back is, “Oh no, no. We won’t produce any image generated from a prompt with the word Trump in it.” And that kind of thing I think is very helpful, because it creates a layer where we can start putting in some governance measures, basically.
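
Here is a minimal sketch of how API-mediated access creates that kind of governance layer: the model stays on the provider’s servers, and every request passes through a policy check before any output is returned. The blocklist, filter logic, and model stub are hypothetical placeholders, not any provider’s actual implementation.

```python
# Hypothetical sketch of API-mediated access as a governance layer.
# The model stays on the provider's servers; every prompt passes through
# a policy check before output is returned. The blocklist and the
# `generate` stub are illustrative placeholders, not a real provider's logic.

BLOCKED_TOPICS = ["disinformation campaign", "bioweapon synthesis"]  # hypothetical

def generate(prompt: str) -> str:
    # Stand-in for the provider's hosted model (e.g. a large language model).
    return f"[model completion for: {prompt!r}]"

def log_incident(user_id: str, prompt: str) -> None:
    # Misuse monitoring: the provider can spot patterns across many requests,
    # which is impossible once model weights are handed out directly.
    print(f"flagged request from {user_id}")

def handle_request(user_id: str, prompt: str) -> str:
    # Provider-side filter: refuse and log before serving any output.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        log_incident(user_id, prompt)
        return "Request refused: violates usage policy."
    return generate(prompt)

print(handle_request("markus", "Write a short poem about autumn"))
```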

Does The Chinchilla Scaling Law Impact AI Governance?

Michaël: You mentioned the neural scaling laws. Basically, you get better models from more compute, models that are increasingly better as measured by accuracy or test loss. Another thing that recently came out is the Chinchilla scaling law, where apparently you could just use more data instead of having much, much bigger models. And in that case, private companies could be at an advantage because they have a bunch of private data, like all the Gmail data or all the Facebook news feed. So do you think there’s anything to be done here in terms of regulation? I know in the EU people have been regulating data with GDPR a lot. Do you think we could just say, “Hey, you cannot use that private data to train big models”?

Markus: That’s a really good question. To be clear, the Chinchilla paper is still saying that there’s a scaling law; it’s just that the optimal ratio between the amount of compute, the size of the model, and the amount of data turned out to be different than we thought, and data matters more than we thought. Data will probably matter a lot. Maybe a thing that will happen more over time is that people will pay more attention to things like quality of data, which currently is not a very legible thing. So people just report, “Oh, we used this much compute, or our model is this big.” And I think going forward, people are going to start focusing more on things like data quality mattering. And if that’s true, then I think that might be really important.

Markus: It could be the case that this becomes an important moat, so an important barrier or an important advantage that certain actors have. The big tech companies, probably, or generally other companies that can get a whole bunch of relevant data, maybe via people engaging with their model, might have a big advantage in producing better AI systems.
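
As a rough illustration of the shifted ratio discussed above: a commonly cited rule of thumb from the Chinchilla result is around 20 training tokens per parameter, with training compute approximated as 6·N·D FLOPs. The sketch below just applies those two approximations to a few compute budgets; it is a back-of-the-envelope illustration, not the paper’s full fitted scaling law.

```python
# Rough compute-optimal ("Chinchilla-style") sizing sketch.
# Two common approximations: training FLOPs ≈ 6 * params * tokens, and
# compute-optimal data ≈ ~20 tokens per parameter. These are rules of thumb,
# not the paper's fitted scaling laws.

TOKENS_PER_PARAM = 20  # approximate compute-optimal ratio

def optimal_size_for_budget(flops_budget: float) -> tuple[float, float]:
    # Solve 6 * N * (20 * N) = budget  =>  N = sqrt(budget / 120)
    n_params = (flops_budget / (6 * TOKENS_PER_PARAM)) ** 0.5
    n_tokens = TOKENS_PER_PARAM * n_params
    return n_params, n_tokens

for budget in (1e21, 1e23, 1e25):
    n, d = optimal_size_for_budget(budget)
    print(f"{budget:.0e} FLOPs -> ~{n / 1e9:.1f}B params, ~{d / 1e12:.2f}T tokens")
```

Running this shows why data becomes the bottleneck: as the compute budget grows, the compute-optimal token count grows alongside the parameter count, so actors with access to large amounts of (possibly private) high-quality data gain an edge.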

Michaël: We talked a lot about tech companies, and they’re usually very good at avoiding regulation, avoiding taxes, all those things. Do you think there’s any chance that we can regulate models from big tech companies?

Markus: I think yes. They’re currently, to some extent, regulated: there are a bunch of laws that apply to any company operating in the world, and these companies try to make sure that they follow those laws. These companies try to make sure that they are compliant with GDPR, and if they’re not, then they can get fined. So I think, yes, you should be able to regulate these kinds of things. Maybe it’s difficult to make sure that compliance is high enough, or that enforcement is good enough. I think that’s a problem, or a challenge to work through. And this is also already happening. Maybe the biggest intervention happening here is the EU producing this thing called the AI Act, where they are trying to regulate certain uses of AI. Some of this regulation, I think, will touch big tech companies.

Do Governments Take AI Safety Seriously?

Michaël: Awesome. How seriously do you think governments take AI safety?

Markus: It depends on what one means by AI safety. If one means something like, “Oh, the AI systems that we have today have a bunch of problems: they’re not robust, they don’t deal well with out-of-distribution inputs, sometimes they produce unfair results,” that kind of general description of AI systems, then I think those views are somewhat widespread. If you mean something like, to what extent do policymakers think that it’s very plausible that AI systems at some point will be human-level, and maybe even in their lifetime, that number seems much, much lower. And if you mean the extent to which they’d take actions that improve safety while paying a cost in how well their AI industry does, then I think that’s much less the case.

Markus: For most policymakers that I’m aware of, or that I end up talking to, at least in the US and in the UK, the main thing that they’re thinking about is: how can we support the AI industry in our country? AI is going to produce a whole bunch of economic value. We want to seize that, we want to be at the frontier of this technology. But people are starting to see, “Oh no, it turns out that if we just deploy these systems in a bunch of different contexts, they’re going to cause problems.”

Markus: And I think there is a general awareness, for example, that recommender systems used on social media may have caused some problems and might continue to cause harm. Especially in the EU, I think the policymaker view is a little bit different: it’s much more, “Oh, these systems are a bit scary and they’re causing harm right now, and we’re going to try to figure out how to do something about that.”

Michaël: The EU is one of the leading actors in regulating AI, and you used to work at GovAI in Oxford. You still work for GovAI, but maybe remotely? I think most people are not familiar with what GovAI does. GovAI: Centre for the Governance of AI?

Markus: That’s right.

The Centre For The Governance Of AI

What’s GovAI?

Michaël: What does GovAI stand for? What do you do there?

Markus: It stands for the Centre for the Governance of AI. We are a non-profit research institution. We used to be part of something called the Future of Humanity Institute, which is part of the University of Oxford, and then we spun out about a year ago. One way that we usually describe our work is that we are trying to build a field of people who are trying to figure out how humanity can manage the transition towards a world of advanced AI systems: AI systems that are much more capable than the ones we have today. We’re figuring out how we can make sure that humanity, or society, can thrive in a world with those kinds of technologies.

The New Policy Team At GovAI

Michaël: Your title is Head of AI Policy? You’re doing some kind of team building, you’re creating a new team, hiring people. What kind of team are you setting up there?

Markus: The thing that I’m currently doing is setting up what we call a policy team. What does policy mean? You might ask.

Michaël: What does policy mean?

Markus: Basically the purpose of the team is something like: we want to do AI policy development with a focus on more advanced AI systems. The thing that we’re trying to do is we’re trying to answer the question: what do we want powerful actors today in the AI space, so like governments, but also AI labs. Those are the main actors that we’re thinking about. What should they do over the next handful of years to prepare for a world with these more advanced systems? And our goal is to just do very rigorous, hopefully very good and very sensible research on those questions and maybe produce some recommendations of what would be good to do.

Markus: And then we’re going to tell people about it. We’ll do a little bit of direct advocacy, of engaging with policymakers ourselves. But for the most part, I’m excited about us just trying to directly answer some of these questions and then hopefully some of the work can speak for itself, or we can talk to other people whose main job is to do advocacy. And so, mainly trying to fill that part of the pipeline.

Michaël: So you mentioned advocacy, and doing work that produces outputs people can look at. What kind of regulation do you think people will look over? What kind of regulation looks good according to your team?

Markus: One thing to note is that we try to not have a party line. We don’t have a view as an organization. Maybe there are some things that people agree on, but for the most part we’re individuals who are thinking about similar issues; we challenge each other and we try to figure out what the truth is, or what the right answer is. And, for the most part, people publish things under their own name. So one shouldn’t think that anything I say represents GovAI as an organization.

Michaël: If people are curious and want to apply to work there, they might just want to know what kind of work people usually do and what they’re interested in. I visited GovAI a little bit at some point when I was in Oxford. It was part of the Future of Humanity Institute, as you mentioned, which cares about the future of humanity and about AI that could be transformative, or even become AGI sooner than most people expect. So are there any AI regulation bets that you are making that are not about regulating self-driving cars now, but mostly about regulating transformative AI systems?

Markus: The way that I expect our work to go is like, we do some work to try to map the whole space and figure out like, oh, maybe we want governments to do this. And then we spend a bunch of time figuring out whether that’s the case, but I think most of our time will be on this figuring out whether that’s the case bit. So following a small number of bets…

Compute Governance

Markus: Some of the bets that seem likely: one is a thing that we talked about a little bit before, which we call compute governance. The question is something like: if we think that compute is going to be a particularly important proxy for very capable systems, and if we think that it’s otherwise a very useful governance node, so a lever that you can use to govern these systems, to do AI governance work, to make sure that these systems are not causing harm and are being used for good things, then how can we use this lever? We’re going to do a bunch of work on that. One of my colleagues, Lennart Heim, will be the main person working on it, and we’ll work together on it.

Markus: Some of the kinds of things to explore there include this thing that we talked about earlier of, could we make it the case that there are maybe levels or something like this, of what amount of compute comes with what amount of responsibility. I think that’s a really promising one and there are a bunch of ways in which you could try to make that happen. And there are a bunch of details to think through. I think that’s a big one.

Corporate Governance

Markus: Another bet is one that another colleague, Jonas Schuett, will be working a lot on. He will primarily be working on questions of corporate governance: what internal policies and internal organizational structures should companies set up to make sure that they are accurately seeing, managing, and doing something about the various risks that might arise from the systems they develop? We’ll try to develop a bank of knowledge on that, and then try to inform these companies about it, and maybe also inform what policymaking should look like. For example, in the forthcoming EU AI Act, most likely there are going to be requirements for high-risk systems to have post-deployment checks, so that a company that thought its system was fine keeps checking whether that’s the case. And companies are going to have to run risk management processes to make sure that they discover ways in which their systems might be infringing on human rights, or something like this.

Michaël: So high risk is basically when your system can infringe on human rights?

Markus: I don’t remember the exact definition. I think it’s that, and maybe threats to human life and health, or something like this.

Michaël: Killing people seems risky.

Markus: Exactly. But it’s also systems that do this in an acute way, so you need to be able to point to a person and say, “Oh, you were harmed by this system.” So the EU AI Act is probably not going to cover things like recommender systems, which maybe have a really big effect, but a diffuse one: they affect lots of people a little bit.

Michaël: It’s mostly for like self-driving cars killing someone, maybe.

Markus: So the EU AI Act will probably not cover self-driving cars. I think there’ll be separate regulation for that. I don’t actually remember. But yeah, it’ll be on things like hiring algorithms, for example. You’re like, okay, well, there’s a person here who might be harmed if this algorithm is wrong or if this algorithm does something badly. It will also apply to things like biometric surveillance. So like cameras that are used for facial recognition and that kind of thing.

Early Regulation: Systematic Discrimination In Hiring

Michaël: For hiring, it seems like humans make a bunch of harmful decisions every day. Why are we regulating AI?

Markus: I think that’s a great question. And I think it’s a question that, when you think about how to regulate AI, needs answers. What are some of the reasons that you might expect there to be a difference? One reason is that AI systems might be causing errors in different ways from humans. We have regulation today that is about how you’re allowed to do hiring. I don’t know what the exact laws are, but I imagine that there are a bunch of ways in which I could make hiring decisions that would be illegal.

Michaël: Like hiring only white males from Sweden.

Markus: Exactly. Who would do that? That might be illegal. Or at least if I’m hiring them because they are white males from Sweden, and maybe I say that explicitly to one of them or to some people who didn’t get the job, I imagine that would be illegal. And so you might think, “Oh, well, why don’t we just apply the existing laws to AI systems?” That’s something that some jurisdictions are doing. This is what’s happening in the US, and maybe what will end up happening in the UK as well, where you take more of that kind of approach. But I think there are plausible arguments that you might want something different in the AI case. Here are some reasons.

Markus: One reason might be that you might expect people to use these systems more than is warranted. This is a new technology, so people are going to find it really difficult to know how these systems work, and in what situations they might fail and cause harm. If that’s the case, then having regulation in place that makes people take more precaution than they would otherwise might be really good. You might also think that some of these systems might cause problems via creating more correlated decisions. If I’m a big company and I’m hiring people, there are a lot of different hiring committees with a lot of different people on them, and so there will be some noise around the right answer.

Markus: And so quite often there will be people who deserved the job or should have gotten the job, but didn’t get it. But all those errors are uncorrelated, and so it’s not the case that one group of people is systematically discriminated against; or at least, that’s less the case. Whereas in the AI case, if I have one model and that model continues to fail in a particular way, then there will be one person or one group of people who will be disadvantaged. And that seems more unjust or more problematic. There’s another one. Can I do a third one?

Michaël: Go ahead.

Markus: Another one that I think is interesting is: going forward, at least as some of these models become more interpretable, but maybe even today, we can actually test some of these things. We can actually see whether the model is making a decision based on whether people are white male Swedish people. And if that’s the case, then that allows us to actually do something about this discrimination that maybe we couldn’t have done previously. Today we could take these systems and just run an experiment: I send the same CV, but I change the name to something white, male, and Swedish-sounding, and then I see whether that makes a difference. And that’s quite easy, because I can just send that input to the system right away.
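
A minimal sketch of the paired-CV experiment Markus describes might look like the following. Here score_cv stands in for whatever hiring model or API is being audited, and the names, threshold, and trial count are illustrative assumptions.

```python
# Sketch of a paired-audit test for a hiring model: submit the same CV
# under names associated with different groups and compare outcomes.
# `score_cv` is a placeholder for the system under audit (e.g. an API call);
# the names and the decision threshold are illustrative assumptions.
import random

def score_cv(cv_text: str) -> float:
    # Placeholder for the real model/API being audited.
    return random.random()

GROUP_A_NAMES = ["Erik Johansson", "Karl Lindqvist"]  # majority-sounding (example)
GROUP_B_NAMES = ["Amir Hassan", "Fatima Ali"]         # minority-sounding (example)

def audit(cv_template: str, trials: int = 500, threshold: float = 0.5) -> dict:
    rates = {}
    for group, names in [("A", GROUP_A_NAMES), ("B", GROUP_B_NAMES)]:
        hires = 0
        for _ in range(trials):
            cv = cv_template.format(name=random.choice(names))
            hires += score_cv(cv) >= threshold
        rates[group] = hires / trials
    return rates  # a large gap between groups is evidence of disparate treatment

print(audit("Name: {name}\nExperience: 5 years as a data analyst..."))
```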

From Fairness To Interpretability

Markus: And then maybe in the future we might move towards these fairness requirements on these systems. Maybe they’re more about whether a system is making a decision based on a certain thing. And so you could just try to make it the case that your model is more interpretable. And so you can see, you can actually literally see, hopefully in the future, whether a certain attribute, whether a certain feature of the person, is being a part of making this decision. And so there’s some sense in which like, as our ability to identify and do something about a certain harm increases, then our duties to actually use that ability increases. Where maybe we couldn’t do that today. And so, with great power comes great responsibility.

Michaël: The thing about testing interpretability, or testing systematic bias, is something that might happen if they give us their private API and we can just test the model ourselves. We can just throw inputs at it, and we can do this auditing or consulting on those things. But right now I’m not sure if you can actually do it, if you can actually send a bunch of CVs to people.

Markus: To some extent. You mean with these models?

Michaël: Right. So imagine there’s a company with a systematic bias. I’m not really allowed to send a thousand CVs to their system.

Markus: I don’t actually know whether people are doing this. I know there is research on this kind of thing with human hiring decisions, where there are all these studies where you send out a CV and change the name to something that sounds like it belongs to the minority group, or to the majority group in the country, and then you see what happens. There are a bunch of these studies, and I assume they do this without the company knowing. So maybe you could do this today as well: you could figure out which companies are using a certain kind of software and then you could start sending out these things. I hope people are doing that. That would be awesome. And if they’re not, then you could do it.

Michaël: Someone watching this.

Markus: Exactly. And in the future, I would want that to be done more systematically: there are requirements to do this kind of thing, or these companies feel compelled to do a lot of this kind of testing. I don’t know the extent to which they’re doing it today. Maybe they’re doing it to a large extent; I would imagine that they would want to. It’d be exciting if they did more and brought in outsiders to help figure that stuff out.

Michaël: What I meant is requiring some regulation where companies need to provide an API so people can test. Imagine there are some restrictions on how interpretable your models need to be: you need to have an API so people can test the model and see if it’s really interpretable and not biased. So you talked about stuff you’re excited about, but what are the things that GovAI has produced? Concrete outputs, reports, engagement with decision makers. I’m very curious to know what concrete things we can look at.

What A Typical Output From GovAI Looks Like

Markus: One question you might have, or that’s attached to this, is something like, what good have we done? Have we done anything useful for the world? My view is, the most useful stuff we’ve done for the world is, we’ve helped build a field, basically, of people trying to think about these questions. And then part of that is like, there are people who have worked with us who now do things that I’m excited about. And they’re out there in the world and they’re hopefully doing good things. My guess is that’s most of the value that we produce to date. In terms of things that you can see, I mean, you can see these humans, but it’s difficult to evaluate how they’ve done, and maybe they don’t have time to talk to you.

Markus: We published a decent amount of research in the past. Most of that research has looked a little bit more academic, and fits more into the political science end of things: international relations, published in various journals and whatnot. You can find a bunch of these things on our website. Over time, the kind of work that I’m likely to do more of in the future, and that I’m excited about happening more, looks less like academic publications, because most of it is more like, “Hey Google, or hey so-and-so, you should do this. Here’s why.” That doesn’t look like an academic paper, but I would expect us to do more of that in the future.

Michaël: So this is mostly you writing a Google Doc and sending it to Google.

Markus: Or maybe first you write a Google Doc, then you ask for a bunch of feedback from people and you discuss it, maybe you do a workshop around it. And then you turn that into a document that you publish on our website, or something like this. And then you send it to a bunch of relevant people, depending on what the piece is. Maybe we do a bunch of outreach about it, maybe we send it to journalists, or maybe we try to tell people about it a bit more. The kind of output will depend a lot on what we’re trying to do. At the moment, for example, we’re in the process of publishing: by the time this comes out, I think we’ll have published a big report that we wrote about the extent to which the EU’s AI Act is going to see regulatory diffusion, so whether it will affect other jurisdictions, or companies in other jurisdictions.

Markus: And when we publish that, we’ll be sending it around to a bunch of people, and hopefully get journalists excited about it and get policymakers excited about it. And then there are some other outputs that we’ve done to date and will continue to do, which look a little bit more like us working directly with some actor that is trying to make some decision. For example, Jonas Schuett and I spent a little bit of time on loan to a part of the Cabinet Office, helping them think about what UK AI regulation should look like. Those things we can’t really share outside of government, because we wrote them for the Cabinet Office, and I think we’ll continue to do that kind of thing if the opportunities look interesting and look like we could do something useful.

Michaël: Looks awesome. I hope more people work on those things. And the good thing is that you’re hiring more people.

Markus: Yes.

Working In AI Governance Because You Are A Little Bit Annoyed At Policy Meetings

Michaël: So if you want more people to join your organization, what would be the pitch? And also, what kind of profile do you think would be interesting?

Markus: That’s a great question. I probably should have said this at the start, when you asked what team I’m building. Currently, we’ve closed applications for research scholars, for GovAI as a whole but also for this policy team. These are people who will be working with us for at least a year, and then maybe we’ll extend them if things seem to work out. And in a few months we’ll probably open a position for research fellows: people who will be staying with the organization for a bit longer. And in general, I expect us to continue to want to hire people, and to have enough funding to keep people on if exciting work is coming out. Who do we want, or what kind of people are we looking for?

Michaël: So what’s a good criteria? If I’m watching this video, what’s a good check I can do on myself?

Markus: An example of things that I look for; one thing I look for is just people who are… Part of the reason I’m doing this work is because I’m a little bit annoyed. I have been in conversations with policymakers. So people who are supposed to be making decisions or could potentially make important decisions. And I feel like I just don’t have good enough things to say. Or I have some things to say, but I feel like they’re not well evidenced enough. There has been no one who has spent months or years thinking about this question. And I’m like, that seems terrible and we should definitely do that.

Markus: And so that’s one of the reasons that motivated me emotionally to start doing this kind of work. And I think having that annoyance is probably quite helpful, or having that view of, this is important, seems helpful. Other things that I think we will look for are people who write and think pretty clearly, or can write well and can be good at structuring a problem and breaking it down. People who have some kind of relevant expertise. But in our case, we have a reasonably broad view of what relevant expertise might be. But as long as it’s clear how it might connect to a bet that we’re planning on pursuing, or a bet that we might be excited to pursue. And so that might be like, you’ve done a relevant PhD or you’ve worked a bit in policy, or maybe you’ve worked at a tech company in a policy or like a governance-style role. All of those things seem super relevant.

Working In AI Governance Because You Can Do Judgmental Forecasting

Markus: I think another thing that I’m looking for, which I’m a little bit less sure is a good description, but maybe, is something like this: in a lot of this AI policy stuff, there’s this whole sea of considerations, a whole bunch of different ideas or different things that might pull you in different directions. Your question is, “Is it good to do X?” You know, I don’t know. To what extent should the EU AI Act regulate general purpose AI systems, for example: AI systems that can be used in a whole bunch of different domains, foundation models, or something like this. There are a whole bunch of considerations that pull in different directions and might make it a good or a bad decision, or specific details that might go this way or that. That decision depends on a whole bunch of different variables that you have to figure out how to integrate in some way.

Markus: And one thing that I think seems plausible, is a person who can do that well, is a person who could do judgmental forecasting well. So someone who can take a question like, I don’t know, will Putin remain the president of Russia?

Markus: What’s the chance that he will be the president of Russia in 2025? That kind of question has the same type of features, I think. And so people who can do that well might be people who would be suitable for this kind of work. For GovAI in general, take all the things I previously said, but we’re also excited about more different kinds of research: maybe research that focuses less on figuring out what actions people should take over the next few years, and a little bit more on what a lot of our research to date has looked like, which I’d describe as mapping the strategic landscape. So you’re like, “Okay, well, I think that for the development of AI, this particular dynamic, or this particular part of the ecosystem, is going to be very important.” You might think, “Oh, probably US-China interactions on AI are going to be important,” and so having people think about what’s going on in China on AI seems great.

Markus: And then maybe another example is like, we have had some people doing forecasting work. We have Noemi Dreksler, she does surveys of populations whose opinions about AI might matter to how AI goes. And so maybe ML researchers, or the public, or economists, or policymakers. And so that’s another style of work that we’re also excited about, as well.

Michaël: So: making good forecasts about what will happen in 2025; making good AI strategy maps of the relevant actors and considerations, sometimes by doing surveys of important people like administrators and economists; being really annoyed at how little people know in the AI policy space; and being willing to put in some effort, or having one or two years’ experience in the field. So all these things are important? I hope you get a bunch of applications. More generally, to move on from your work and just talk about AI policy in general, are you bullish on AI policy? Do you think there has been a bunch of progress in AI policy recently?

Markus: I think going back, I realized a thing that I’m also supposed to do, probably, is to say, if you want to know more, then you go to governance.ai, which is our website.

Markus: Am I bullish on AI policy progress? I don’t really know. It depends a lot on how you split up the space or something like this. In general, in the world, there’s a ton of increased attention to AI and how AI is impacting society. And people trying to figure out like, “Oh, should we regulate this technology? How should we do that? Oh, maybe social media companies are causing harm via their AI algorithms. What do we do about that?”

Markus: This seems like a good development to me. And I want society to be thinking more about these problems, partly because I think that these present-day problems have some analogy to the problems that you’ll face with more capable systems in the future, or at the very least this is some kind of practice run. Or maybe we start building the institutions, or building the capabilities, that we’ll need to be able to deal with these technologies in the future. That seems like a good development. Has there been enough? I don’t know. It’d be great for there to be more.

What Is Currently Happening In AI Governance

The AI Governance Landscape

Michaël: Who else is doing work, apart from GovAI?

Markus: I mean a lot of people.

Michaël: The main actors in this space.

Markus: If the space is just people who are doing work on how AI is impacting society, what we want, and what we can do about that more broadly, there are a ton of organizations, and you should be able to find them. If the focus is more on organizations that see themselves as, “Our job is to think about advanced AI capabilities and how to govern those,” that’s a smaller number, a significantly smaller number. Some organizations that I’d recommend people look into are …

Michaël: CSET maybe in the US?

Markus: Yeah. I think they’re a good organization.

Michaël: Center for Security and Emerging Technology.

Markus: They’re an organization in DC, a think tank. I think what distinguishes them is that they are particularly data-driven and care a lot about evidence. When you compare them to other think tanks in DC, my impression is that they’re much more rigorous, and they try to make sure that they have data. They produce data that nobody else has on these questions.

Markus: And then some of the work that they do touches on more advanced AI capabilities. Most of it is to do with what we want the US government to do today. I think that’s a relevant organization. Rethink Priorities is an organization that is starting to build a team that thinks about some of these questions. I’d recommend people check that out.

Michaël: I’ve interviewed Peter Wildeford on the podcast.

Markus: Oh, nice. Great. Then there are a bunch of organizations where you can do this kind of work, but it’s not the main thing of the organization: the Centre for the Study of Existential Risk in the UK, for example. And then there are a bunch of organizations here in the Bay where you could do this kind of work, but it’s not their main thing.

Markus: A reasonable number of AI labs or AI companies have teams that do relevant work. There are governance teams at OpenAI and DeepMind, and there are relevant teams at Anthropic. The big tech companies have relevant teams too, but they’re mainly thinking about, “Okay, we’re going to deploy a product. How do we make sure that product doesn’t harm people, or do something that is going to be really embarrassing and is going to be used as an example of how AI systems can cause harm for years to come?”

Michaël: I feel like at OpenAI, they had this charter, two or three years ago. Or maybe even four, right?

Michaël: I think it was when they created the OpenAI LP for-profit structure.

Markus: I think the charter was before that.

How To Prepare For The Governance Of Advanced AI Systems

Michaël: That’s one of the kind main outputs from AI governance it seems. Not the main one, but one of the outputs I know of that mentions AGI specifically.

Markus: I think that’s right. I think for the most part, this sort of work that looks like actions that are taken today, or work that looks like, “Oh, hey. Actor such and such, can you please do X?” Or someone that’s inside of a lab saying, “Can you do this and that?” Most of that doesn’t explicitly talk about AGI.

Markus: My guess is that that’s good. My guess is that we want, for the most part, the thing that you should do instead is more like, “Okay, well here’s something that looks like it makes sense, and is very helpful in terms of preparing us for more advanced systems, but also makes sense today.” And then you try to do those things. And then that is just going to be much more helpful.

Markus: In terms of what is useful to try to get: for example, if you're trying to affect what AI labs are doing, you find cases where you're dealing with a present-day problem, or a problem that these companies will start feeling soon, and ways in which these AI systems might be causing harm now. And then you try to figure out how to solve that problem while also preparing the company, and preparing the world, for more advanced systems.

Markus: Part of the reason for that is you'll just have a larger coalition of actors, people within the company and people outside of it, who will be excited and interested in helping out. A lot of the time, that's going to be the really useful thing to do.

Markus: Also, just because people’s credences on the extent to which we’ll have human level machine intelligence, they differ widely. And if it’s going to be difficult. A lot of the times, it’s going to be difficult to push things through. If the only reason is like, “Oh, this will only be helpful if it’s the case that we in the next 20 years, develop these very, very powerful systems.”

Markus: I think in general, you want to be thinking about things that make sense from multiple perspectives. Being robust to worldviews, in a sense. You can do this work at AI labs, and you can do it at some of these research institutions.

Markus: Another thing you can do with some of this policy work is go into government, or into organizations that you think are going to be making important decisions. Learn about those, understand what's happening there, and see if you can be helpful, or whether there are certain perspectives that they need to hear more about.

Markus: And then another thing you can do is work for organizations that do advocacy, or engage with policy makers more, and who see that as their main job. I don't know if there are a ton of those, but there are more of those than there are organizations doing this AI policy development research.

Markus: Examples of organizations you might look into include the Future of Life Institute, who will probably be hiring for some of these kinds of roles around when this comes out. The Future Society also might be hiring for these kinds of things. There are a bunch of think tanks where you can do that as well.

Markus: I’m probably forgetting some organization, but I don’t know what they might be. Another organization in the UK that I’m excited about is called the Center for Long Term Resilience. They have a bunch of people as well.

Building Trust Through Positive Sum Cooperation

Michaël: What kind of actors are you looking at? Do you just look at Google, Alphabet, Tencent, maybe DeepMind and OpenAI? Do you look at the people who are at the frontier of AI research?

Markus: The work that we will end up doing, or that I'm most excited about doing, will probably try to focus on the actors whose decisions we think matter the most. Those would be included, along with various governments. I feel a bit unsure about international governmental organizations like the OECD or the UN; maybe they are going to be making important decisions about these topics, or at the very least I think they might be important in creating some kind of global consensus about the kinds of problems we're going to face with these technologies.

Markus: I think another space that might end up being important, if this whole compute governance thing is right, is semiconductor companies, or chip manufacturing companies, or chip design companies. They might also matter a lot.

Michaël: How do you talk to them? You’re just like, “Please stop producing chips, otherwise you get a fine.”

Markus: I don’t know if I would say that to them. But I think it depends a little bit on the context. Most of the time me personally, I would be engaging with organizations, or engaging with people where some combination of I already know people there, or there’s some kind of connection. Or maybe they know, or I know that I have something that can be helpful to them, or that is useful to them.

Markus: And then sometimes that looks like working on a research project, or working on a certain question, and then maybe you produce some useful documents and share them with these people. And in that way you kind of start building up connections and networks, et cetera.

Markus: But I think in general, starting to build up a network for these kinds of things is a very, very helpful thing to figure out how to do, if you want to do this work well, because a lot of the time there's just a bunch of context that you'll lack if you don't know people, or if you aren't able to get a bit of an inside scoop on some of these questions.

Michaël: So, you just start building trust and relationships that might matter in the long term.

Markus: Yeah. And also, just getting to know people. In general, the way this often happens, or ends up being useful, is that you both have something to offer each other. Maybe it's some policy makers. If I'm talking to them, maybe a thing that I can offer them is, "Oh, I've thought about how AI might impact the world in a bunch of different ways." And then I can offer them some thoughts, and maybe they find that interesting or helpful in how to think about AI.

Markus: I’m offering them that. And then in turn, they’re offering me some kind of an image or an understanding of how they think about these problems and the kinds of issues that they think are important, and they think might the decisions that they might be making in the future.

Markus: And I think in general, that’s the kind of relationship that you have to build. And I think one useful way to do that is if you’re an outsider like me, the way that you do it is you try to produce knowledge or produce something that is of use to people. And you try to answer an important question for these actors, and then you chat to them about it. And then that’s the way to start building it and start learning more things as well.

Michaël: So you play some game where you just cooperate and exchange information about what the future of AI will look like. Do you try to focus on things they understand, like not mentioning AGI, and just talk about advanced machine learning systems or big models, using their wording?

Markus: I think that’s a thing that you want to do in general. In general, I have pretty wide credences on most things. I’m not super confident in any particular view that I want to push. But I think even if that’s the thing that you’re trying to do, then the way to engage in these kinds of conversations is that you try to understand what the other person is thinking and what their worldview looks like. And you engage with that.

Markus: And whenever you can, maybe you use the particular language that they’re using. I think that’s super helpful, and you need to serve this translation type function. Or in general, sometimes a thing that people think about or ask about is like, “Oh, should I tell policy makers that AI systems are going to be human level within X number of years?”

Markus: And then my views aren’t like, “Oh, this will happen super soon, and so you got to warn everyone.” But I think even if it were the case that you would have some super, super capable systems very soon; I don’t think it’s clear what you should do with that. If the thing you want to do is you want to communicate like, “Oh, these technologies might be really, really important. They might also cause a bunch of harm. And we should do things to avoid that harm.” If you say that, and then people only take away the first belief where they’re like, “Oh, this technology is going to be really powerful, or it’s going to be really economically important”, or something like this, then they’re going to go and do the opposite of what you wanted them to do.

Markus: Then they’re going to go and be like, “Okay, I’m going to like try to make sure that I’m at the forefront of this technology, and push it ahead.”

Markus: And I think it’s a fraught issue of if you hold this belief that these systems are going to be super useful they’re going to be really, really capable. And they might be dangerous. Then communicating that is fraught or difficult to make sure that you communicate both things together. And I think for the most part, my view is at least it’s more robustly useful, or more robustly a thing that you should do of talking about the second thing where you’re like, “Oh, people are trying to deploy these systems today. And they’re just failing in all of these different ways. And things might go wrong in lots of ways that it’s difficult for you to anticipate.”

Michaël: You talk more about the risk, and about why the system might fail, instead of talking about how those systems might be very capable in the future, because otherwise you can accelerate racing dynamics?

Markus: That kind of thing. I think so.

Regulating Self-Driving Cars

Michaël: Concrete case study: imagine you're talking to a policymaker about a certain decision about self-driving cars. How do you shape that decision? How do you talk to them about, say, how to make the self-driving car more interpretable? Can you even talk to them about this? Or is it mostly something that happens inside Brussels or the UK government?

Markus: I personally haven’t engaged that much with conversations around self-driving cars. I’d be interested to know how the extent to which it’s the case that present day regulation on self-driving cars is going to be important for more impactful systems in the future. My bet is that other regulation or regulatory action is going to matter more.

Markus: If I were to talk to someone about self-driving cars, what are some of the things that I would say? I would probably, for the most part, I think I would probably talk about things that just seem like good legislation, or good regulations. So like, “Okay. What are some examples of this?” One thing that I care a lot about, is it the case that these self-driving cars mean that there are fewer or more road deaths than there are currently? If there are fewer road deaths than there are currently, if you were to deploy these systems into the streets, that seems like a strong argument in favor of doing so.

Markus: And then you would probably want to have a bunch of regulation, or a bunch of rules, to make sure that you keep finding ways in which these systems might fail, or ways they might have correlated failures, for example. Making sure that there's a regulatory system that can identify these problems and do something about them seems really important.

Markus: You mentioned interpretability. In general I'm in favor of figuring out ways to make AI models more interpretable, just because that seems important for much more capable systems. But for self-driving cars in particular, imagine we just have a self-driving car today, and the only thing I'm trying to do is regulate self-driving cars going into the future.

Markus: Then, I don’t know how much interpretability matters. In the self-driving car case, the thing that I care about is, “Are people dying as a result of this car or not?” It’s maybe one of the main things I care about. And then to do that, maybe I don’t need that much interpretability. Maybe the main thing I need is I just need to have good stats on what’s happening, and good stats on when the system is failing and making sure that you do something about that.

Michaël: But maybe the general public might want to have some strong reasons to think something is safe before deploying it. Before making it legal for everyone to ride in a self-driving Uber, maybe they want to know how the thing works and for it not to be a black box.

Markus: I think that could be the case. There might be other arguments. I don’t know how strong they would be. But you could maybe make an argument that like, “Oh, well we’re not going to be able to make any insurance claims, or decide who is liable or who’s at fault if there’s an accident, unless these systems are more interpretable than they are today.” I can imagine that those kinds of arguments make a lot of sense. And in which case, maybe you should have a push towards this kind of thing.

Markus: And I don’t know what the public’s reaction will be to these kind things. Maybe people will react in that kind of way. And in that case, maybe we do need to have stronger interpretability requirements. And in which case I would be excited about that if that were true, because that would then mean that this car industry, or this self-driving car industry would have to invest in a bunch of research that would be helpful to how to govern AI in a bunch of other domains as well.

Michaël: I don’t know about the general public, but I know that Yann LeCun on Twitter kind of complain about these new rules. I think, is it an EU rule of self-driving cars must be that level interpretable to be deployed?

Markus: Yeah.

How the EU Is Regulating AI

The Digital Services Act and The Digital Markets Act

Michaël: And this brings us to the general space of EU regulation, which I think is something you're kind of familiar with. The UK is not part of the EU, but maybe you work closely with the EU. Can you talk about the EU AI Act? How much regulation is there right now in the EU AI space?

Markus: There are a few different things going on. First, there's just the point that AI systems are currently regulated in the sense that anything is regulated. If I were to deploy an AI system in my company, then I would still need to obey the laws that I currently have to obey.

Markus: In my mind, there are maybe three really important developments in the EU. One is a cluster of things that were recently adopted and will come into force, I don't quite know when, but fairly soon: a package consisting of the Digital Markets Act and the Digital Services Act. These are particularly focused on big tech companies; they call them gatekeeper companies or platforms.

Markus: And the Digital Services Act does things like requiring you to tell people what your algorithms are doing. Facebook, or whoever, would need to be slightly more transparent about what their recommender algorithm is doing, those kinds of things.

Markus: The Digital Markets Act is a little bit more about mergers and acquisitions and that kind of thing. I think both of those will have a really big impact on the AI industry, and they might even have an impact globally. For example, if the Digital Services Act requires certain changes to your AI systems, or to Facebook's recommender algorithm, then maybe those changes will be deployed abroad as well.

Markus: Then there’re some changes are being made to the EU liability directive. This is about these things of, “If something goes wrong with a product, who is held responsible? Who needs to pay for the mistake?”

Banning Social Credit Scores, Facial Recognition And Regulating Low Risk Systems

Markus: And then the third thing, which I've been focusing on most, is the AI Act, which does what I mentioned before. It bans certain uses of AI. For example, it covers AI systems used to produce social credit scores…

Michaël: Like what’s happening in China right now.

Markus: Yeah, there are some social credit score-style things happening in China, and that's a thing that they ban. They ban certain kinds of facial recognition. And they ban systems that manipulate people, or that manipulate vulnerable groups into taking actions that are harmful to them or to others.

Markus: And then they have this other category that is maybe the most important thing: the category of high-risk systems. These are systems that might cause some harm to the fundamental rights, health, or wellbeing of a specific person.

Markus: They also have another category of limited-risk systems. These are systems where the only thing you need to do is tell people that the system is active. For example, this applies to AI-generated content. The law will say something like, "If you produce an image using an image generation system like DALL-E or Midjourney, you can't just show that to people without it being tagged as generated by an AI system." If I have a profile picture that's a deepfake, the law would require there to be some note somewhere telling the viewer that this is a deepfake.

Michaël: This is very hard to implement.

Markus: Seems very hard to implement. I don’t know how it’ll happen in practice. Maybe it’ll be the kind of thing that the platforms end up helping implement. But I’m not really sure.
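As an illustration of why this is tricky, here is a minimal, hypothetical sketch of one naive way a generator could tag its outputs: embedding a disclosure field in PNG metadata using Pillow. The Act doesn't prescribe any particular mechanism, metadata like this is trivial to strip, and platforms would still need to surface the label to viewers, so treat this purely as a sketch.

```python
# A naive, hypothetical approach to labelling AI-generated images:
# embed a disclosure field in PNG metadata with Pillow. Easy to strip,
# so at best one piece of a real disclosure mechanism.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_disclosure(image: Image.Image, path: str, generator: str) -> None:
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # machine-readable disclosure flag
    metadata.add_text("generator", generator)   # which system produced the image
    image.save(path, pnginfo=metadata)

# Usage (assuming `img` is a generated PIL image):
# save_with_ai_disclosure(img, "profile_picture.png", "some-image-model")
```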

Forcing High Risk Systems To Be EU Compliant

Markus: And then there’s this high risk category. And I think that one seems really important. And then in cases that AI systems that fall into this category, basically they have a long list of requirements that you need to fulfill basically, and to meet those.

Michaël: Is the basic idea that if the EU makes enough regulation, then companies will need to comply with it, and they will end up building one product that complies with EU law?

Markus: There is this report that I was mentioning on the extent to which we'll see regulatory diffusion from the EU. There's this thing that people call the Brussels Effect, which is the thought that sometimes the EU regulates things, in particular products. They're like, "Okay, well, you can only sell your thing in the EU if you fulfill these requirements."

Markus: And then the developer or producer of this thing has to ask their whole factory to produce it in an EU-compliant way. And once they've done that, they're like, "Ugh, God. Okay, well, either we create a new factory, and sell the non-compliant thing outside of the EU and the compliant one inside of the EU, or we just sell the compliant one globally."

Markus: This is a thing that has happened in a bunch of different cases in the EU, and the question is whether we will see it with EU AI regulation. The effect I just talked about is called de facto diffusion: in fact, the rules diffuse even though these companies don't have to comply outside the EU.

Markus: And then the other side of the coin is what people call de jure diffusion. This is where other jurisdictions are inspired by the EU and pass EU-like regulation. We wrote this big report about whether this will be the case for EU AI regulation. My overall take is, "Yeah, probably to some extent," and I think it might end up being important. I wrote this report with Charlotte Siegmann, who's currently at the Global Priorities Institute.

Michaël: Is the report published already?

Markus: It will be by the time this comes out. It should be done in a few days. And then why would this end up happening? The cases where I think this is going to be the most likely to happen, or be the most impactful or important, is if you have big tech companies. These big tech companies are producing foundation models or producing models that are used in a whole bunch of different products. And they are offering them globally.

Markus: And then if it’s the case that some of these requirements require a retraining of that model, for it to be slightly different, then my guess is that that is going to be an important way in which you might get this Defacto Diffusion. They’re just like, “Okay.” And then they retrain the model and make it sure that it’s EU compliant.

Markus: But they've already paid all that cost, and maybe the model is even better, because maybe the requirement is that your system is supposed to be sufficiently accurate, or the data needs to be of a certain quality. In that case, it will just make sense for them to deploy it globally. And if you don't deploy it globally, then you'll have this issue where you get what we call duplication costs.

Markus: So imagine: in a lot of cases when people think about building technology, they talk about these tech stacks, where you have a stack of different systems that you're building on top of each other. The claim is that if you change something quite deep down in the tech stack and you fork quite early on, then you have this whole thing after that to duplicate. And that duplication is going to be really annoying, because then you're going to need engineers that understand two systems that are slightly different, rather than just one system. And if you change something quite fundamental to the model, then the changes reverberate up through the entire system.

Michaël: So, if you fork the system into two things, that means you have more maintenance cost.

Markus: Exactly. That’s the thought. And so in general, software companies for example, they try to make it the case that they have one system that they can use globally. And then if they differentiate i.e. they offer a slightly different product in different places, then usually the way that they do that is they try to do it via changes to the edge of the system or changes to the very end of the system. And so, for example, in the EU AI Act, this requirement I was talking about of you need to say if someone is interacted with a chat bot, you need to make people aware of that fact.

Markus: So maybe the chat bot starts the conversation by saying, “Hello, I’m a chat bot. I am not a human. I am an AI system.” And then, in that case, maybe it’s super easy to just remove that little section or that little part of the system if you’re deploying it outside of the EU. And so, there might be those cases where you can just change the filtering or you can change the finetuning or something at the edge of the system or the top of the tech stack. In which case, I think we’re much less likely to see this kind of diffusion.
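Here is a minimal sketch, with hypothetical names, of what "changing things at the edge of the system" could look like in practice: the model and serving stack are shared globally, and a thin per-region deployment config decides whether the chatbot disclosure is prepended, so no retraining or fork of the underlying model is needed.

```python
# A sketch of edge-of-stack differentiation: one shared model,
# per-region deployment configs. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class DeploymentConfig:
    region: str
    disclose_bot_identity: bool  # toggled per jurisdiction, not per model

EU_CONFIG = DeploymentConfig(region="EU", disclose_bot_identity=True)
US_CONFIG = DeploymentConfig(region="US", disclose_bot_identity=False)

def start_conversation(config: DeploymentConfig) -> str:
    # This greeting stands in for output from the shared model and serving stack.
    greeting = "Hi! How can I help you today?"
    if config.disclose_bot_identity:
        # Edge-of-stack change: prepend the disclosure, no fork of the model.
        return "Hello, I'm a chatbot, not a human. " + greeting
    return greeting

print(start_conversation(EU_CONFIG))  # EU deployment includes the disclosure
print(start_conversation(US_CONFIG))  # non-EU deployment does not
```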

What Counts As AI In The EU AI Act?

Michaël: I’ve seen this everywhere. Like, “Oh, I’m a chat bot. Can I help you?” Just to go back to the EU AI Act. I think one thing that has been happening on Twitter is people complaining about the definition of AI. That basically AI could mean anything, it could be any mathematical operation doing anything. Do you think they should have defined it better?

Markus: I think people exaggerate the extent to which that's a problem. Maybe this is going to get too technical or too in the weeds, but there is a great paper by my colleague Jonas Schuett about how to define AI in regulation. If you're really interested in this, you could have a look at that. It says roughly the following. In the EU AI Act, and I think in regulation in general, the thing that you ultimately want is some way of identifying the systems that you're going to impose some requirements on.

Markus: In the EU AI Act, most of the work is not done by the term AI, most of the work is done by other stuff. And so, what it says about AI, it’s this very, very broad definition. It does say that it’s like, I don’t quite remember the exact phrasing, but it says something along the lines of it’s software that performs these kinds of operations, or that can, for example, perform these kinds of operations.

Markus: But then most of the work of the definition or most of the work of setting the scope of the regulation, so deciding what systems the regulation applies to is just this list of high risk systems where it says like, “Okay, well, if you use an AI system, ugh, here’s a rough definition of what an AI system is in this context.” Then you’re going to have to comply with these requirements. And most of the work is being done there. I don’t know, I’ve seen people say things like, “Oh, well, if I just do Bayesian updating on a piece of paper, is that AI? Or am I covered by the AI Act?” And I don’t think that’s the case.

Michaël: People who are very smart, very fun at parties. I think the EU AI Act also mentioned something about general purpose systems. Do you think they just mean AGI or just general systems in general?

Markus: So this is a point of contention currently in the discussion about the AI Act. So roughly, the first suggestion was something like, “A high risk AI system is a system that is used for one thing. And if that one thing that it is used for, is a high risk thing, then you need to comply with certain requirements.” And then people might be like, “Oh, well, what about systems that can be used for multiple things? What happens in that case?”

Markus: And so then, one point of contention is whether these AI systems that are used for lots of different things should be covered. Some people, in particular some tech companies, put out statements saying, "Oh, we should have an exemption to this," and tried to push for that. During this process, there was a part of the presidency of the council, I think it was the Czech presidency, that put out a suggestion that would create an exemption for general purpose systems and put the duties or responsibilities on the actor that takes the system and applies it in a high-risk context.

Markus: Then recently, there has been some contention about this. And so then, there’s the French presidency of the council. It goes back and forth and they fight it out a little bit. They made a suggestion where they said, “Oh, no, no, no. We suggest that general purpose systems, so systems that are used in a bunch of different contexts, they will also count as high risk or maybe it’s a slightly different category, and they will need to meet certain requirements.” And they won’t need to meet all of the requirements because some of the requirements are really hard to meet if you aren’t the person, or aren’t the actor deploying the model, but it’s like a smaller subset of the things. And currently, this is being debated back and forth a bunch. And over the next year, or maybe six months or something like this, there’s going to be some kind of decision about this question.

Towards An Incentive Structure For Aligned AI

Dividing Up The Responsibility Between The Actors

Michaël: Is the idea that basically general models, like the foundation models we talked about, will be constrained because they would be applied in a bunch of different contexts, and in some contexts they will be high risk and could have a very bad outcome?

Markus: The case against having these general purpose systems be a regulatory target, or having requirements imposed on them by the regulation, is something like that. It's just, "Well, I'm just developing the system and other people are doing stuff with it, so I can't control what they're going to do with it," is the argument of the developers of these systems.

Markus: And then, the argument on the other side is something like: well, there are a bunch of things that are hard for me if I'm taking a system, say GPT-3 or something like it, and trying to produce a product using that system in a high-risk context. I actually don't know if there are any high-risk contexts that OpenAI currently allows, but assume that I'm using another system that allows it, maybe OPT from Meta, which was released recently. Maybe it's a chatbot for people who are depressed or suicidal or something like this. That seems like a very high-risk context.

Markus: And then I’m imposed with these requirements to make sure that my model is robust or that my model is sufficiently accurate or that it doesn’t discriminate in certain ways. And that seems like maybe that’s super hard for me to do if I just have access to the model. Maybe I only have access to an API, maybe I don’t have access to the model itself, or maybe I’m a really small actor, and so I can’t actually do all of this work to figure out whether my model is not meeting certain requirements. And so, if that’s the case, then you might think that the reasonable way to solve this problem is that you find a way to divide up this responsibility between the actors. And maybe you want to do that via regulation, which I think seems plausible that you want to figure out a way to split up this responsibility.

Markus: Another response, which the folks who don't want this regulation might have, is something like, "Oh well, that problem we can solve in other ways. We can just solve it via contractual mechanisms. If I'm selling API access to a model, then I'll say, 'Okay, well, you're going to have to meet all these requirements. I've already done the work on the model itself, and then you do the next bit of the work that's required to figure out whether the model is compliant or not.'"

Michaël: So the tech company does some of the work, and then the regulator does the other part.

Markus: Or the deployer, like the company or the person who’s actually putting it into production and making it interact with the real world or with people.

Michaël: The company using, say, the OpenAI API for GPT-3 to sell products built on top of it.

Markus: Exactly. I don’t know what the right answer is. My guess is I would like there to be some requirements specifically for these general purpose systems, probably you need to figure out how to carve that out or how to get that scope right, but my guess is that putting some responsibilities on those, it’s going to be the right way to go. Partly because I expect that to be important in the future or important as these systems become more and more capable. And especially if it’s the case that these systems become a very central node of a whole bunch of systems used all over the economy, then if they fail in some way, or if cybersecurity isn’t good enough, they’re going to impose these externalities all over the economy that I think might warrant having specific regulation of these particular systems of these particular actors.

Markus: And then, you’d have to figure out some way of dealing with like, “Oh, well, if I’m just an academic and I’m just like publishing something on GitHub. Then I shouldn’t be liable for what someone does with my model.” And I think, you want to do something about that.

Building Robust Models Through Auditing and Scrutiny

Michaël: Right. How much of this do you think is about reducing existential risks from AI? How much of regulation like the EU AI Act, or other regulations, is actually about reducing existential risks from AI in the long term, versus just slowing everything down and buying us some more time to develop safe systems?

Markus: I think when people are thinking about and working on these things, none of that is what they have in mind. What they have in mind is, "Oh, these systems are causing problems today and they're hurting people, what do we do about that?" In terms of how I think about the value of these things, I'm not really sure. Most of the stuff that I spend time on is thinking about how we make sure that the regulatory structures we set up today can handle, or can set us up for being able to deal with, more capable systems in the future.

Michaël: So another way of having more robust models is having some external team doing auditing or red teaming, as you said at the beginning.

Markus: Yeah.

Michaël: Would you want to have more people doing this, or have you seen more people doing this at companies?

Markus: Yeah. My guess is that we’re moving in the direction of more of this kind of stuff happening. I would like there to be much more. And so, one bet that I think I forgot to mention that we might pursue of this team is this sort of, the bet is how can you make it the case that the world’s most powerful or impactful models, that those models receive external scrutiny without that requiring that you give that model to a whole bunch of different actors? Partly just because these companies, you need to make these kinds of governance systems incentive-compatible. And so, I think you’re going to be hard pressed to get Facebook to just give everyone their recommended algorithm, or their newsfeed algorithm.

Markus: And so, I think that I’m very excited about is how could we make sure that there’s more external scrutiny on these kind of things with the thought of, as these systems become more and more capable, it’s probably not going to be enough if you’re just relying on the developer and the deployer of the system themselves to do the vetting and do the work of figuring out whether their system is safe or not, or whether it’s going to cause harm in some kind of way. And so, I would love us to start moving in that general direction.

Markus: To some extent, some of these companies are starting to do more and more of this. I don't know how much of that is public or something that they talk about a bunch. That said, my guess is I want them to do significantly more of it. And so, a thing that we're probably going to work on, or that we might work on, is how do we make sure that more of that happens?

Markus: An example of a thing that I would like to be the case: when the next large language model gets produced by DeepMind or OpenAI or Google, they usually have a section where they ask, "Okay, is this model biased in some way? Could it do something bad? Could one use it for disinformation?" They usually have a section that talks about that, and they do a lot of work on it. It seems great, and I want them to continue to do a lot of work on it.

Markus: In my ideal world, that section is contributed to by external folks. Or, maybe even better, at the same time that they publish their main paper, other people get to publish papers saying, "Oh, I ran these other tests that the company didn't do, and they showed these results." It seems very plausible to me that doing really good work on how these really, really big models behave, trying to understand them, trying to make them more interpretable, trying to make sure that they don't cause harm in certain ways, is going to be much easier if you have access to the models.

Markus: And so, a thing that I really want to happen is for these companies to give researchers more access to the models. I don't know exactly how you could do that in practice. One thing that maybe could work, if the companies are very, very worried about diffusion of their models, about people being able to get hold of the weights or copy the model, which it seems plausible that they should be worried about if they're not open-sourcing it, because maybe they want to make sure that they can make money off of it: we've kind of solved this problem in other domains.

Markus: So for example, the US census, where you try to figure out how many people there are in the US and some important traits of them, and you do this every so many years; I don't actually know the rate. When they do that, the data is super sensitive, and not just anyone can get access to it. So what they do is they have a whole host of political scientists who get to work on this data, and the way they do it is they have physical machines in rooms where you can't bring in stuff. You go into that room, those computers have the data, and you get to run analyses on it, maybe R or Python code. Then you get to output some results, and maybe you can send some of those results to your home laptop so you can write up your paper.

Markus: And I could imagine that we would just want something like that. Maybe you could do it with virtual machines in the AI space. I'm excited about trying to make that happen, or agitating for companies to do it.

Michaël: So just have a big room where you have terminal computers and you have people trying to see if the model is aligned or not?

Markus: Exactly. Anyone who is doing work that looks like scrutinizing these models. There are a bunch of people who are doing interpretability work with an aim to understanding how can we align or how can we make sure that more powerful future models don’t cause a bunch of harm. People who are doing work on like the fairness and the bias of current algorithms or the explainability of current algorithms, I would imagine that they would also benefit a whole bunch from being able to actually work with these frontier models.

Markus: And so, in my ideal world, all of these folks would be able to get access to these kinds of models to do their research. Setting up a system where that can be done without these folks getting full access, or without you needing to open-source the model, seems like it would be a really good direction for the world to move in. Maybe we'll do some work on it. Maybe we can try to get others to be excited about this as well.

Markus: I think there’s also some stuff that I want to happen in this space that looks something like, “Can we figure out better privacy preserving ways or using privacy enhancing technology to solve some of these problems?” Where in the ideal world, what I want is I want it to be the case that someone or anyone can access some API. In that API, what I can do is I can just put in some code, maybe I put in some Python code, that is going to run some kind of analysis on a large language model, or on a big old neural network.

Markus: Maybe they don’t see anything of the internals of the model, but they get the output for what happens when you run that code. And then, if you can set up that kind of system, then that might be a really, really good way to make sure that other actors, like external actors can vet your model without being worried about this kind of diffusion.

Can We Enforce International Laws For Autonomous Weapons?

Markus: And that might also be super important. You could imagine future worlds where you might do that in much more adversarial context. Maybe there’s a future where lots of countries have lethal autonomous weapons but we want to make sure that these lethal autonomous weapons are acting in accordance with, or at least not grossly going against international law or humanitarian law. You don’t want them to be programmed to specifically target people of a certain gender or color.

Michaël: So you can have autonomous weapons, but they're not allowed to target a certain group of people?

Markus: Yeah. And ideally, maybe we wouldn't have autonomous weapons at all, but maybe it's difficult to get everyone to agree on that. But a thing that maybe you can get everyone to agree on is, "Okay, we're going to give you this privacy-preserving access to our models or our systems, and you can run certain analyses on them to see whether they're doing the bad stuff." If you can make sure that there's sufficient trust that you're actually running these analyses on the actual deployed model, then this could be really, really helpful. It could be a stabilizing thing, or a way to build trust, or a way to implement international agreements that could be important.

Michaël: Is there any regulation on autonomous weapons so far?

Markus: I mean international regulation, no. Or international law, no. There's a bunch of discussion about how the international law that we have applies to these systems. But in general, international law is a difficult thing because there's no one to enforce it. A lot of the time what it looks like is norm setting, or maybe, if you grossly violate international law, that might give other actors reason to intervene.

Markus: In Syria, you start using chemical weapons and then the rest of the world gets angry with you. Or they think that you've done something really, really bad, and it increases the chance that you'll be invaded or that they'll intervene via air strikes. Those are about the only kinds of mechanisms we have to enforce international law.

Final Thoughts

The Most Likely Death

Michaël: If I were to tell you that, I don't know, the year is somewhere in the 21st century and Markus Anderljung, I don't know how to pronounce your name, I'm sorry, is dead. What would you say is the probability that you were killed by an autonomous weapon versus an autonomous AI, those kinds of things? What's the scenario you might imagine for yourself? Do you just die old in some kind of hospital room where you can play with DALL-E 10?

Markus: What was the year?

Michaël: I don’t know, somewhere in 21st century.

Markus: I don’t know how I’m going to die. That would be probably great to know. I think I would probably say yes if someone asked me if I wanted to find out, seems like that would be useful knowledge. I don’t really know. What’s the chance that I die from a lethal autonomous weapons thing? It seems low. Maybe if I end up being super important or something like this, which seems unlikely, but if I end up being super important, maybe I’ll get assassinated at some point. We’ll see.

Michaël: Cool. Hopefully, people can watch this video and see the alive Markus before you get very famous.

Markus: Exactly.

Michaël: At the head of the entire universe government.

Markus: Exactly. When I become head of policy for the world.

Last Take On AI Governance

Michaël: Do you have a final speech about your takes on AI policy, or on AI in general?

Markus: I don’t really know. I think all of these questions I think are really, really important. I think if you are a person who is interested in how AI is going to have an impact on the world, I think it’s very plausible that work, that tries to figure out some of these style of questions, questions about that is on the governance side of things, that is about how we can make sure that the world takes good, useful, sane actions on how to deal with potential risks from AI and how to communicate those risks and all those kinds of questions, I think are very, very important. And I think that the case in favor of you thinking about whether you can contribute to that kind of stuff, I think is high, and I would be excited about you doing that. And then hopefully, we can figure out how to govern this technology as it gets more and more capable.

Michaël: Cool. It was a pleasure to have you.

Markus: Likewise. This was great. Thanks, Michaël.