
Holly Elmore on AI Pause Advocacy

Holly Elmore is an AI Pause advocate who has organized two protests in the past few months (against Meta's open sourcing of LLMs, and ahead of the UK AI Summit), and is currently running the US front of the Pause AI movement. She previously worked at a think tank and has a PhD in evolutionary biology from Harvard.

(Note: as always, the conversation is ~2 hours long, so feel free to click on any sub-topic of your liking in the outline below and then come back to the outline by clicking on the green arrow)

Contents

Protests

Background on Holly And Pause

Michaël: So for people who are just joining, this is a podcast Twitter space with Holly Elmore, who I'll describe here as an advocate of an AI pause. People will be asking questions and speaking in the middle. At the beginning, I'll probably just start by talking with Holly a little bit, one-on-one. For people who are not familiar, maybe just give some context about what AI pause is, and maybe, Holly, you can start with who Holly Elmore is.

Holly: Okay, I’m Holly Elmore. You guys might know me from Twitter. When I say pause, I mean an indefinite global pause on the development of more advanced AI than we currently have. So training that takes significantly more resources. What I’m asking for is a global indefinite pause. Before this, I worked at a think tank and I have a long history in effective altruism. That was where I was exposed to AI safety information over the last 10 years. I didn’t personally work on it until April with the FLI pause letter.

These polls came out showing that there was just wild support for regulating AI. And we'd always said in AI safety that the reason we weren't pursuing popular support or government regulation was that the public just doesn't understand the issue. It bounces off of them. If you tell politicians, they think you're crazy and you lose your capital to have any influence to help the issue at all.

But when it was clear to me that the Overton window had shifted, I was like, okay, absolutely, this is the next move we should be making. And then people weren't as into it as I thought they were. So I ended up ultimately doing it myself, working on incorporating Pause AI US to be the vehicle for that kind of work in the US.

The Meta And UK AI Summit protests

Holly: The first protest I did was at the Meta building in San Francisco, and it was against open sourcing of LLMs. The second protest was in a park in San Francisco and was aimed at the UK AI Safety Summit; it was part of a total of seven protests around the world aimed at the summit, trying to raise awareness that the summit should be about safety. As the summit approached, the messaging kind of went back and forth. We'd hear a really safety-focused message and think, good, the summit will be about coordinating around safety. And then there'd be a message about, well, we don't want to stop industry, and we'd get kind of worried: don't lose focus, guys, this isn't about coordinating on innovation, the point was supposed to be safety. The intent of that protest was to be part of a worldwide thing and to carry that message that the safety summit should be focused on safety. That was the one where we wore orange shirts and got a little more internet attention.

Michaël: Was the first one 10 people in front of the Meta office giving flyers?

Holly: 25.

Michaël: Sorry, I didn't count. For the UK AI Summit, did people talk about the protests during the summit?

Holly: It was a lot less clear what the impact was there. Meta was the first protest I had ever done. You learn a lot; it doesn't have to be a big success. But actually, I think it was more successful than the second protest. That might be because… I'm not sure. We didn't get as much media attention as we thought we would. Maybe that was because the Meta protest was the novel one, the first AI safety protest in the US, or at least the first at Meta, and then it was less novel. It was maybe also less clear what the conflict was: when you're in front of a building, there's this clear narrative of "we want them to stop doing what they're doing," versus "we're in solidarity with other people at other locations talking about a summit in another country."

It might've just been too tenuous. I really don't know; I have not cracked what makes the media interested. They were very interested in the Meta protest, much more than I anticipated. I almost couldn't handle the level of interest that they showed. I was more ready for that to happen again for the second protest, and we experimented more with different ways of slicing and dicing the images and trying to get that kind of engagement. I'm not sure that's the direction I'll continue to go, but it was interesting to get different things out of the different events.

Michaël: You said you had a lot of media attention. Do you have journalists coming and asking you what’s pause AI, what’s X-risk, those kinds of things?

Holly: Yeah. I suspect that with Meta especially, a lot of journalists already had things they wanted to say about Meta, and this made it a story, so they came to me and wanted my opinion on other ways that Meta had pissed people off and how that fit in. You have to be aware of their agendas, or what makes it news as far as they're concerned. But definitely the conflict with Meta was more interesting to journalists than talking in general about what the government or government institutions should do. I'll bear that in mind: my next planned protest is at OpenAI, in part for that reason. It just seems easier for people to understand what your complaint is when the characters are also clear.

Protesting To Correctly Communicate AI Risk To The Public

The Blurred Lines Between Activists and Insiders

Holly: If you're just at an AGI company, it's kind of clear who the characters are: there are the activists and there's the company. I mean, imagine if the climate change activists and the oil company executives all hung out together and dated and went to parties together. It's just… it's real.

It's very strange. The people who care about AI safety span this huge spread of interests and this huge spread of personalities.

People Feel Protesting Is Embarrassing

Holly: And it's a very common tech personality to think that protesting is kind of a blue tribe thing, and that it's not really for them, and that if you're smart, you figure out how to make enough money or do a backroom deal so you won't have to show up.

And there’s a really big difference between the kind of person who thinks, yeah, let’s just have events and they’ll slowly grow, which is how I feel. And then there’s the kind of person who feels it is embarrassing, who would never do it. If too few people show up, if your room looks empty, that’s it, you look weak. That’s over. So you can’t start that way.

I just don’t really get that. I think that that’s just clearly not true. All my organizing has started slow and small and gotten bigger. But yeah, there’s something that’s very vulnerable and exposing about protests to a lot of people.

And I think that’s fine. You know, they don’t have to protest.

Without Grassroots Activism The Public Does Not Comprehend The Risk

Michaël: I liked what you said about the oil companies and the climate change activists coming together. I feel it's hard to be on both sides and make people think that we're not playing both of these games at the same time.

Holly: Right, yeah. One reason I thought we so desperately needed grassroots activism and things like protests is that for just your average person looking in on this issue, without any insider knowledge, the situation we ended up in with AI safety is very confusing. If you were there historically, it makes sense that things evolved the way they did.

It was hard to talk about AI safety. You couldn't just take it to your representative; they would think that was crazy, you know? So what was left was: we need to solve the problem technically ourselves, we need to influence the people who are building the AI. And that's what the AI safety community has been geared toward for a long time, for at least 10 years.

Now that people do know enough about AI, and the risk is conceivable to the average person because they've seen things like ChatGPT, it's very confusing to them to see people working on safety being so in bed with the companies.

It's like: do you really think it's dangerous? I understand that you can't say things that undermine your company when you work at OpenAI, but it's getting to the point where that's confusing people too much.

They don't understand the level of danger you think they could be in because of your actions.

Normative vs. Descriptive Views On Pause

On The Actions Available For An AGI Company CEO

Michaël: Yeah, I think people don't really understand how likely tech CEOs think existential risk from AI is.

Holly: Because they just don't understand how somebody could be moving toward something they thought was that risky. They just don't understand the level of risk that these people are excited about.

Michaël: Yeah, speaking of Sam Altman, I kind of made a, not a Nash equilibrium, but a game theory thing in my head.

Michaël: I think for him, let's say you believe that OpenAI will build self-improving AI or superintelligent AI first: he can decide how much risk he's taking, right? If he's very much in the lead, say two years ahead of everyone else in the best case, he can decide whether to spend two years on safety, or a month, or ten months.

And from his perspective, the strategy is to race as fast as you can, and when you reach the threshold, you slow down as much as you want before you make it. So if you only spend a month on safety, you have, I don't know, a 10% risk, and if you spend ten months, a 5% risk.

So from his perspective, it's maybe a 90% chance of becoming God if he waits a month, and 99% if he waits two years. Maybe that's the optimistic case, where people are very optimistic about everything.
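
To make the trade-off Michaël is gesturing at concrete, here is a minimal toy sketch. The numbers are the hypothetical ones from the conversation, not anyone's actual estimates, and the linear risk curve is purely illustrative.

```python
# Toy sketch of the race-vs-wait trade-off described above.
# The numbers (10% risk after 1 month of safety work, ~5% after 10 months,
# ~1% after 2 years) are the hypothetical ones from the conversation.

def risk_of_catastrophe(months_on_safety: float) -> float:
    """Illustrative linear risk curve, clamped at a 1% floor."""
    return max(0.01, 0.10 - 0.0056 * (months_on_safety - 1))

for months in (1, 10, 24):
    p_good = 1.0 - risk_of_catastrophe(months)
    print(f"{months:>2} months on safety -> ~{p_good:.0%} chance of a good outcome")

# Output (roughly): 90%, 95%, 99%.
# The point: to an optimistic CEO, racing already looks like a ~90% win,
# so the extra safety bought by waiting can feel marginal to them.
```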

People Concerned About AI Risk Joined The AGI Companies And Stopped Being Vocal

Holly: And they brilliantly got AI safety people on board with them. This is terrible, but it seemed like the right idea at the time. I remember when people were discussing whether it was okay to go work at these big labs, and of course, if they're going to build this anyway, it's good for them to have safety teams, right?

And it's good to be able to influence them. But now, horrifyingly, what has happened is that they've captured the people who cared about it and gotten them on team OpenAI.

And they've convinced them. I'm not an expert on this, but it seems very convenient that people have become convinced that the way to do safety is just to develop and then do evals.

So now the way people talk about safety, and what's considered cutting edge, is just a form of development: you advance capabilities and then you test them.

And if you do that in small enough increments, then hopefully you catch things before they're too dangerous. But it still requires building things you don't know are safe first, as if there's no alternative.

I agree there's not as easy or as plannable an alternative, that these are more open-ended research directions. But there are ways to research safety that don't advance capabilities, and that just gets treated as, oh, that's so unrealistic, because it's a business they have to keep running.

Pausing Because Making Enough Progress In AI Safety Research In A Short Timespan Is Risky

Michaël: Have we actually advanced safety without advancing capabilities? If we look at the past, let's say, 20 years of safety research: once we started having an idea of what human-level AI would look like, using transformers and large language models, we started making progress on what we actually need to do.

Holly: But have we really made the progress we need to make?

Michaël: No, no, we haven't. That's what I'm saying, right.

Holly: Yeah, it's unclear to me. I don't know. So I would question that.

Michaël: Let's say we pause AI completely, and we don't train models larger than GPT-4 for 50 years.

On The Possibility of Pausing Progress

Michaël: How confident are we that we're going to make enough progress so that when we unpause, we're going to be safe? I don't know if that makes sense. How much progress can we make without advancing capabilities?

Holly: Yeah, I don't know how it's going to turn out. Of course, if I did, I would tell people the answer. But it seems there are ways of looking at different architectures that are more safe, which would be kind of back to the drawing board, and there would still be the challenge of stopping people from using the possibly unsafe architecture we already have.

But it seems to me theoretical approaches might still bear fruit. Or we might, after 50 years, be more confident that this is something we're not going to get an assurance about ahead of time, that it's always going to be inherently risky, and maybe then we decide to just not build this kind of technology.

People will often think that's an unsatisfying answer. They'll be like, oh, that's not realistic, you can't just have a law, you can't have a Butlerian Jihad indefinitely. But I don't know; the reality could be that we can make technology we can't control, and that if we make it powerful enough, it inevitably destroys us or ruins our world.

And what else are you going to do, lie down and die? That could happen. But I do think it's quite likely that giving AI safety research without capability advancement the attention and the money it deserves would look very different from everything before 2017. That was a very marginal research community trying to do a very unusual thing.

Urgent Need for AI Safety Funding and Attention

Holly: If this got the sort of CERN or Manhattan Project type attention that it needs, it could be very different.

Michaël: Yeah, I think more and more people know about it. I'm not sure the amount of traction the risk got in the past two years has translated into more people working on it.

If ten times more people are aware of AI alignment, I'm not sure there are ten times more people working on alignment now. So I think we still haven't scaled as much as we need. But I'm thinking about, yeah, the pause emoji and the stop emoji.

Thoughts On The 2023 AI Pause Debate

Michaël: I pinned a tweet with the diagram I made, which is from Scott Alexander's post on the debate, and it seems from hearing you that a Butlerian Jihad for a long time is close to a stop emoji.

Holly: I should say I haven't read Dune, so I don't know everything that was involved in the Butlerian Jihad, and I don't want to stand behind it too much. But yeah, if after 50 years of pause there was no progress, it would seem clearer that you should stop, that there's not going to be a way to do it safely.

I prefer to say pause, and I actually told Scott to put me in stop just based on how he had divided things. But he was kind of fixated on different versions of pause that have not become the dominant meaning.

When I say pause, I mean indefinite, global, and ideally mediated through a treaty, through the UN, like nuclear non-proliferation. He kind of introduced this term "surgical pause," which is causing me a lot of problems, because people say, why don't you do a surgical pause, since we don't know what to do yet? I mean, we don't, but it's not like there are just a few things we want to get done.

The framing that I'm trying to achieve is: look, there's this idea, especially in tech and among libertarians, the gray tribe, and EA (effective altruism) and rationality types.

There's this idea that, well, it's their right to build whatever they want, and we have to have this huge compelling reason to stop them. And we have a compelling reason to stop them already.

We don't have to let them build as much as they can and only inconvenience them a little. I think it's time for the governments of the world to say: stop, we'll figure out if it's safe and if this is okay, and then you can move forward after that.

So the number of different pauses that Scott introduced became quite confusing, because that framing is very appealing to people who think that way.

Michaël: In the debate week, there were a few people arguing for different positions, and I think some people were arguing for a surgical pause, for timing it.

The position is more that pausing now, or advocating for a pause now, is not very strategic, and that there's maybe a level of strategy where we should call for a conditional pause or time it for some point.

Timing an AI Pause - Impossible or Risky?

Holly: I disagree with this so much, which, you know, is a good thing to discuss here. I think the idea of starting the pause at the perfect time is crazy. It makes no sense.

We don't know when the right time to start is. If we knew when it was going to start being dangerous, we would have already solved the problem in a lot of ways.

It's completely unrealistic given the way government works; you have to start asking now for something you want basically ever. It's really hard to say, oh, only when it's actually dangerous, similar to the model of braking right before you hit the cliff with your AI advancement.

The idea that you'll know where the cliff is is just so ridiculous. If you knew that, then you should already be able to solve this problem, but you don't know where the cliff is.

There's a sense in which I think people just want me to agree that there could still be benefits to learning more now, and that pausing at the right time, after we had learned as much as we safely could, would be the best thing. But that's not a policy proposal.

We just don't know how to time that, and I think that idea is just a gift to people who want to get in the way of safety. It's something that's kind of theoretically appealing, but I think it's a terrible idea.

So that's how I feel about the surgical pause, and I was kind of surprised that Scott, from what I remember, kind of endorsed that idea. David Manheim talks about that idea, and I respect the way he's talking about it.

He's saying essentially that there could be benefits to knowing more, that there are benefits to development now that we could use during a pause to figure out more safety stuff. But I still think that to even suggest that we know when that point is, is wild and just makes no sense. Sorry, go on.

Michaël: I think that's valid. I was thinking of it more as a post hoc realization: now we're in 2024, or even in 2023, after ChatGPT.

It was much easier to convince the public or politicians that there was a risk because there was ChatGPT, right? So we can say that it was better to advocate for a pause in 2023 than in 2021 or 2019, because back then you were basically wasting your time and energy, or at least being much less efficient, right?

So that's mostly about outreach, but for AI alignment we can also say that we're making much more progress now, being ten times more efficient than we were in 2019.

Holly: So maybe it will continue to be this way, and we'd be ten times more efficient in two years. But trying to pick winners and time these things seems doomed to me.

I think what we have now is time to make our case and we should just make a case that will continue to be true.

If, I don't know, safety got solved, I would stop advocating for a pause. But for now, we should pause until it's safe. And I think trying to be more clever than that could be fatal.

Assumptions About Warning Shots and Advocacy

Holly: I know, and it's very common. Actually, a lot of orgs that people assume are against Pause AI are not. I'm not going to name names, because, you know, we talked privately.

Holly: But people will come to me and say that my strategy of being confrontational with the labs is going to hurt this other org they really want to succeed, that it's important that they have a good relationship with us.

Holly: They don't necessarily disagree with me when I talk to them, but they say things like, well, why don't you just wait until there have been warning shots, because then it'll be easy.

Holly: The people will just rise up. A lot of people have this as part of their theory of change. It's not that they're against advocacy; even if they're not talking about advocacy now, they think it's just going to happen almost for free later because of warning shots. And it's very much not true that lasting, important social movements just happen spontaneously.

Holly: It's often sort of good for the narrative for it to look that way, but that's generally not how it works. There are people organizing; there's already a network in place.

Holly: There are already people who are ready to take advantage of the warning shot when it happens. So I'd say that's not a reason not to start doing advocacy now, or to wait and have your ask depend only on a warning shot.

Holly: The second thing is, I'm not counting on any warning shots. I'm concerned about the possibility that we don't get any warning shots, that the next warning shot is the shot.

So that's another reason I don't want to depend on them. But I also think a lot of people, who maybe wouldn't necessarily admit it, do think that advocacy and people expressing themselves on safety will be important; they just have longer timelines,

Holly: and they're kind of thinking it'll be much easier after the warning shot has occurred. I agree; I also think it'll be easier after warning shots have occurred, but I just think we need to start now.

Michaël: Yeah, I think starting now is good. I also think that if, in the future, you can look like someone who was kind of prescient, people will respect you more.

Michaël: And since people who care about AI risk also cared about pandemic risk before, we got some points on the internet for "oh, these guys were arguing about pandemics before COVID." So if we can say, hey, I think those models will be this deceptive in two years, and they might lie about this problem or have this agentic behavior,

Michaël: and in two years we have this benchmark and they actually pass it, we can say: hey, we talked about it, we knew it was going to happen, now is the time to actually do the work.

Michaël: So I think we can do both, right? One thing that's kind of important to me: I think for me, it's a spectrum. The pause thing, the complete 100% pause, is one extreme.

Michaël: Maybe every human dying would be the worst-case scenario, and then you can have the worst authoritarian state, where people just go back to not using computers.

Michaël: And then maybe the more intermediate thing is that we don't train large language models or large things on GPUs, and we stop developing better chips.

Michaël: And I feel a lot of people will complain about the pause thing, that a lot of the science, the open source, the algorithmic progress will continue even if we stop training large models with the same amount of FLOPs and that kind of thing.

Michaël: I feel that from a normative standpoint we can say that, yeah, we should probably stop everything. But from a descriptive standpoint, we should think about what's actually possible to implement: what can we actually ask governments to implement before 2030?

Michaël: And yeah, what would be, not the ideal, but let's say the 80th percentile of how good the pause could be?

On The Challenges of Implementing an AI Pause

Holly: The hope was that we could get compute caps while this is kind of a new thing, before too many companies are at that frontier.

The best thing that could have happened didn't happen. There wasn't, at the UK summit, a "yeah, let's just cap everyone, GPT-4 is good enough." That didn't happen, but it would have been nice.

A better ask would be in terms of capabilities, if we knew them. Again, we don't really know what capabilities are going to be indicative of danger; we haven't solved safety and alignment.

But asking for very intense testing, tasks that a model could do… if it could do these tasks autonomously, like running a town, then that's not safe.

We don't want that. Some kind of high ceiling like that would be, I mean, maybe a 60th percentile good outcome for me.

I think another reason to start the pause as soon as possible is that the pause should be robust. You don't want to be in a situation where somebody just gets together a bunch of loose compute and is able to make something that actually breaks through, something transformative enough, or superintelligent, that it poses the danger, and that just happens while the global pause is underway.

If we pause and we're super close to the threshold, then maybe that could happen. If we pause and we're not close to that threshold, then it's going to be a lot harder: for one, there won't be as much compute, because the hardware makers will have lost their biggest customers for training runs.

If you're not allowed to train new models at the frontier, the hype, investment, and development around hardware would go down.

So I wouldn't be as worried. Of course, there would still be improvements to algorithms; you'd expect them to get at least somewhat better even without the big economic incentive of legally being able to build AGI. But having the pause start early enough means there's enough cushion in terms of compute and algorithmic progress that somebody breaking the pause doesn't immediately get us into the danger zone.

Michaël: What’s the time frame? Is it happening in 2026? What’s the realistic case?

Holly: I really don't know. For me, the Pause AI line is just: as soon as possible. I'm really not sure what's realistic.

The simpler the ask is, the more realistic it seems that there could be traction behind it, and then it gets complicated when it actually goes into law.

There are a lot of exceptions and stuff. But to be clear, the Pause AI ask is: stop training until it's safe, stop developing capabilities until it's safe. That said, I think it would be really good to do anything that slows development down. I would be very happy if my actions led to a compromise measure that just slows things down.

I don't know, something like: all hardware is licensed, and there are very elaborate rules about how to keep your license. It's not that people can't develop per se, but we would be able to stop the use of hardware if we got into dangerous territory. That's why I would prefer that the goal everyone agrees on is not developing capabilities further until we understand safety. But there are lots of things that could happen to slow development and give us control over crucial things. Say we get a warning shot: what kind of emergency powers does the US president have? What kind of emergency powers does some governing body representative of the world have? It's not ideal to wait until there's an emergency, but at least have a plan: if there is an emergency, what can they do? Is there a way they could pull a kill switch? Do we have the most basic safety in place so that they could stop models that appear dangerous?

As a representative of Pause, I generally find it confusing for people when I get into this kind of specifics, because they come away thinking that's what I'm advocating, so I usually don't, except for this sort of audience. But yeah, I have a whole ranking in my head of what would at least be a helpful compromise and what would be ideal. And I think the most effective way for my position, doing advocacy, to help get any of those is to keep pushing the Pause ask, which I think is still fairly moderate, but uncompromising.

On practical implementation of regulation

Michaël: I like the licensing idea. A lot of people are building these kinds of things, the AI engineers or researchers who train things on GPUs, and they have a very practical mindset. They think: oh, how is this going to be implemented? How am I still going to be doing my GPU training runs next year? And they're like, okay, so what kind of GPUs are going to be licensed? What is the licensing going to be? Can I still buy a hundred H100s? Can I buy ten? Can I still do open source? Can I still share GPUs with my friends on the internet? And I feel that when we get into those details, we realize how hard it is to actually stop people from doing what they're doing. A lot of my friends are AI engineers, and I guess their concern shows up when you go into the details, because it's easy to advocate for something in the abstract, but it's harder to give a realistic scenario.

And the pushback I have is not because I think it's bad to advocate for a pause; it's mostly because I'm concerned about how much progress we've made. I haven't seen any public statement from the AI summit that made me more optimistic about things slowing down. So if you can give me evidence that there's been some progress on this front, I'd be like, oh yeah, we're making progress. But as far as I know, there have maybe been a few export controls and things, mostly in the US. Do you have anything else?

Holly: Yeah, the executive order is the thing that came most out of nowhere for me and made me think, wow, oh my gosh. It's kind of soft stuff, but it happened pretty soon. To me, everything still feels like: I cannot believe how much cultural clout the position of pausing has gotten, and I can't believe how much progress AI safety in general has made in the last year in terms of general recognition. This isn't even really something we did, but the polls revealed, starting in April, that people actually have a fairly sensible mindset about this. A lot of tech people assume that everybody's going to want to build AGI.

Obviously, the thinking goes, everybody sees that with AGI we could live in heaven forever if we master AI alignment, so that's the most important thing. But actually, most people aren't caught up on that; you don't have to overcome it for most people, and they actually have a fairly sensible attitude toward the risk. If you ask them whether it's worth it to get these improvements for this risk, mostly their answer is just: no, why would I do that? So over the last year I've really felt nothing but better about the whole issue, and more hopeful.

Michaël: Is that only from the polls, or do you actually talk to those people?

Holly: Yeah, I talk to those people, and I think it's good for the volunteers to talk to people as well. I try to set up things that are not the highest-impact events but are sort of social events, where you hand out flyers and talk to people about what the risk is; at the protests there's always some person playing that role. And then, of course, I speak to a lot of people on the internet. Surprisingly, the thing I always come away thinking every time we do this is just: wow, people were actually way more familiar with it than I thought, and they got it more than I thought.

I think because AI itself requires some expertise to really understand, we also let ourselves believe that protecting people from dangerous things takes some kind of genius-level insight. But most people really get it as soon as they understand the threat model. Sometimes they don't even have a realistic threat model, but they're just like: that sounds really powerful, I'm not okay with that. If we're not able to control it now, how would we be able to control something more powerful?

So I usually come away thinking that, yeah, most people actually get the basic point: we're making something with high capabilities, we don't know how to ensure it's not dangerous, so the default is that it's probably going to do things that are incompatible with us, and the more powerful it is, the bigger those moves will be and the more they could hurt us. That's not necessarily a sign of progress we made; if anything, you could attribute it to the warning shot of ChatGPT. But every time I do a protest or an event where I'm trying to talk to a lot of normal people off the street, I feel way better because of it.

Michaël: Do people understand the whole human-level AI scenario? Do they believe the crazy future you discuss with them without needing more arguments?

Holly: Well, I don't make it sound as crazy as we sometimes do, because to me the scenario doesn't have to be that everyone, the whole species, goes extinct for it to be bad enough to do something about.

Extinction Is Not The Only Concern But Used To Justify Action

Holly: If you listen to Eliezer talk about this, I think there are a lot of things going on. For one, he really does believe in the singleton: that fast takeoff is most likely, that once you have something capable enough at all, it's going to do all these things to make sure it's a singleton, and that might include instrumentally driving everyone extinct, or it might just include using our resources for something other than our bodies. But to move the needle, in his mind, it has to be that everyone's extinct.

If everyone's not extinct, then it's just a growing pain, maybe, and our future's not over, so all of that value of civilization and so on is not lost. I don't want to put words in Eliezer's mouth, but you hear that distinction: it's because it's an extinction risk that we're allowed to take this kind of action, and if it weren't an extinction risk, it might come into more conflict with people's values about progress or something. Again, not to put words in any one person's mouth; this is just the kind of thing you hear.

I think that just making a really powerful entity without knowing how to control it is bad enough. So when I talk to people, I'm generally not claiming that this kind of superintelligence will have the capacity to make us extinct and that extinction is why you should care. I'm saying: hey, these companies are making this product, and sure, you might be enjoying some benefits from it now, but what if this happens? They're making a really powerful intelligence, and intelligence is the edge we have; that's why we have the position we do in our ecosystem. If we lost that edge, what would happen to us? It doesn't even have to have ill will; if it just doesn't know how to give us what we need and what would actually be good for us, and we don't know how to tell it that, what happens? Most people are very receptive to that. It's very obvious to them not to make a super powerful entity that is independent and that you don't control.

Different risk tolerances

Michaël: It's about how much risk we can take, and whether having one percent of people suffering from a catastrophic outcome is worse than 99 percent of people having a little bit better life from better technology. I think if you're really pro-social and anti-suffering and anti-risk, you will be very careful about everything. But if you're, let's say, the cliché of someone very optimistic about tech, and you only think there's a one to ten percent chance of bad outcomes from AI…

I see the conflict. I'm not sure how you were positioned during COVID, but as a young person during COVID it was like: hey, do you want to spend three years depressed in your room so that there's a one in ten thousand chance of your father not dying? That's very bad statistics, I don't remember the exact numbers, but basically I see the young people during COVID the same way as the tech people facing AI regulation. For them it's: hey, you're delaying everything by many years and asking them to be more careful, when they think it's evident the thing is going to be positive for humanity.

And as much as we talk to them about, hey, it's actually dangerous, it's like your mom or your grandma during COVID, for whom there actually was, say, a 20 percent chance of dying from the pandemic. For us, it's actually 20 percent, or whatever doom percentage we have in our heads. We can talk to each other as much as we want, but in our brains it's going to be very different math. I don't know, it's like…

Holly: A calculation, yeah. And with a lot of people in AI safety, I sense there's a deep conflict, because their values are that it's worth the risk: it's worth taking risks to make progress, it's worth job displacement and then figuring out a new order of society to have a bigger pie in the end. And I think that's been true many, many times.

Reconciling progress and safety

Holly: Actual progress involves making good judgment calls about what kind of stuff to release onto the market, making better products than maybe you have to. There’s a lot of judgment that goes into what we later look back on as progress. Moving forward, it’s not progress to make a shot at the end of it if it explodes, even if you did it faster. With a lot of stuff that we consider progress, NASA and going to space, it looks different when you’re working there. There’s all of these regulations, and it’s so hard to deal with all the red tape. They still have not zero accidents but try to improve safety. It’s just compatible with this phrase that I think is a good business phrase, “Slow is smooth, smooth is fast.” So, there are ways of thinking about progress and not going as fast as possible. We have to be careful not to allow progress to be defined for us by people who are against it.

In an ideal world, the pause we enact allows us to one day have AGI that is safe, because we had the time to do it safely, not just deploying untested models that go crazy. Frankly, the reason I'm interested in pausing AI is to save everyone's lives, but also to preserve the chance to have that bigger, better future. That's my take on it. Although I understand how it feels to people who don't agree with the risk or have a higher tolerance for risk: it feels like I'm just being a fuddy-duddy and getting in their way. We don't always agree. I'm hoping this just goes to a democratic process.

Michaël: Progress is being able to build technology that does good things. If we build AI that does what we want, say an RLHF model like ChatGPT that understands our instructions and is honest and helpful, that's progress, because we know how to steer it correctly. Having a model that does whatever you want, say malware, seems like less progress. Scott's post about regulating AI compared it to regulations like the FDA or SF housing. If your idea of regulation is not building any houses in SF to prevent bad actors, or having an FDA process that is super long, then maybe that's good only if you think AI is a tech that could destroy the world, and we don't want the world to be destroyed. But if you think AI is like any other technology, then having the same level of regulation seems dumb. How you think about AI will define how you think about regulation.

Holly: There are a few cruxes that get in the way of understanding why you would think differently about AI and regulation. Usually, it’s assumptions about technology. I have a blog post about forecasting from the category of technology instead of thinking more mechanistically about AI. People always say, “This time’s different.” Are you saying that nothing will ever be different? There’s a strong argument for mechanizing intelligence being something different. Maybe there’s a way to show that it isn’t, but we need time to go through that argument rather than falling back on this category that we think always turns out fine. Also, if you define technology in slightly different ways to include weapons, it’s not always fine. It’s not always good that people developed weapons. We just have this narrative of progress, and that means anything that matches that pattern is also part of that positive trend. It’s never caused anything bad enough before; everybody who worried about it was considered foolish.

Twitter Space Questions

Michaël: We have one person who seems to be more pro-tech, working in ML, who's requesting to talk.

Michaël: Hey Yaroslav, you're live.

Is Existential Risk From AI Much More Pressing Than Global Warming?

Yaroslav: Yeah, so I just saw this thing pop up and I was curious. The thought I have every time I see these debates is: there is a non-zero chance of AI wiping out all of humanity, which is infinitely bad multiplied by non-zero, so that's pretty bad. But the other side is, well, what if without AI we die from global warming, or from war, fighting over resources? That's also infinitely bad. So I'm wondering, in these debates, should we also talk about how likely we think we are to suffer the consequences of other things that pose existential risk, like global warming? We could weigh it: without AI, at its current state, there's a 0.0001 chance that we die from global warming, which is really bad, and we must weigh that against the small chance that AI comes out and destroys us. So should we talk about global warming and nuclear proliferation, and the other things that could potentially destroy us, when weighing whether we should pause or not pause?

Holly: Personally, maybe if we're weighing whether we should never make AGI at all, we should consider that there are other x-risks, and other ways humanity suffers, that AI could help with. But in the near term, I don't think any other risk is nearly compelling enough to say: yeah, let's just make a buggy AGI, let's just rush and see what happens. I think the risk presented by that would be much higher. As far as extinction goes, I don't think global warming will make humanity extinct. The bar doesn't need to be extinction for caring about an issue, of course. But I'm not worried about global warming or war leading to everyone dying, so I think those are less bad, whereas I do see that as a possible risk with AGI. And the other risks, human suffering, human disempowerment, those are things I see as much more likely with AGI in the near term.

On Global Warming Increasing Instability

Michaël: I think I have an argument more on Yaroslav's side, which is: if global warming were fast, if in the next five years temperatures increased a lot, then maybe it would be a worse climate in which to be building AGI, with more tensions. But I guess it's all about timelines: how fast you think global warming will happen versus how fast AGI is advancing. I think Holly and I have this prior of AGI coming quite fast; before 2030, we have something really crazy. If you just extrapolate the trend from the past two or three years to 2030, it looks pretty different from where we are right now.

Holly: Related to this, job loss and job displacement are part of what Pause AI talks about, and I personally care about it. A lot of tech people can't believe that I really, genuinely care about that, but I do. For one, I care about it in itself, just because it causes a lot of suffering and upheaval in society. But I also think it contributes to instability that can make x-risks more likely. If we get transformative AI in the next 10 years and it locks in a certain set of values, we don't want those to be the highly unstable values of a society where a bunch of people are highly disempowered from their jobs, don't really have a stake in society anymore, and it's not clear how they negotiate their place in society anymore. I wouldn't want that to be what gets locked in. So, speaking of exacerbating causes with AI, I think job displacement, societal upheaval, people not having an agreed-upon social reality: all of those things, while not x-risks themselves, do contribute to risk.

Michaël: And that could go both ways, right? Let's say all the artists now fighting AI art are easier to convince that AI risk is real. It's a very sad view, but you can think that the more people lose their jobs to AI, the more people will actually be convinced that AI risk is real. That's a very cynical and kind of manipulative way of putting it, and I don't endorse ignoring the terrible impact on their lives. But if you believe that most AI automation right now will hit white-collar workers, say it can automate online design work or knowledge work, and that it will take more time to do the robotics or the AI research and the more complicated stuff, then maybe a lot of people are jobless in two years, but then we have all these people we can convince that the harder thing is worth fighting about. But if you think that doing AI research is basically the same as doing knowledge work, then we're kind of doomed, right? Because the AI will be able to do AI research at the same time it's able to write the New York Times article.

Holly: Yes, true. I guess it's a safer warning shot. I don't think we should be hoping for AI warning shots, but this one seems inevitable, and it will make people understand: oh, okay, it can do what I'm doing, it can change the economy, and take it more seriously.

Michaël: I think we already got it with GPT-4 or ChatGPT. All the politicians, the people on Congress staff, or, say, journalists, they can all see that the thing writes English and processes documents as well as they do, right? And all the programmers can see it because they write code. But I think for a lot of other things, it's hard to see.

Is There Any AI Safety Level Of Involvement That Would Make It Ok To Not Pause?

Yaroslav: So I'm wondering, do you think there is some level of investment in AI safety that would make it okay to not pause AI development? If we don't do any safety work, we should definitely pause. But if we do enough safety research, with enough resources and thousands of people working on it, at some number of people, would you, Holly, consider the pause unnecessary?

Holly: I don't know what that benchmark is, but I can imagine being told we've cracked it: we can do full interpretability now, and we can actually know, without running the model, just from the weights, what it's going to do. Or we've discovered there's this deep architecture to it that, I don't know, sheds light on it, so we can tell if it's good or bad. I can imagine there being breakthroughs that make me think, okay, I guess we don't need a pause. Or maybe I wouldn't say we don't need a pause, but maybe I would stop working on it; I would think it wasn't something that needed more of my contribution. But I just don't know what that would be.

It couldn't be tied to just the number of researchers or the amount of money, because I really think there's a possibility that the problem isn't tractable. What we're talking about is getting it to do what we want, to be aligned with our values, and what even are those, really? There are still a lot of mysteries about that. I just wonder whether, fundamentally, there can really be stable alignment between something at our level of capabilities and something that vastly exceeds our capabilities, or whether the little areas of misalignment become too big because of that differential. I really don't know. So I'm not confident that if you just put enough money and time on it, we would get an answer; I guess what I'm saying is the answer might be that there's no way to make it safe. It's rhetorically effective to ask, just to give people an idea of how much more it would take. But I would hesitate to promise that, oh, if you give me this many people or this much money, we'll find it, because I really don't know if they would. They might just confirm that there's no way.

Michaël: Yeah, I had a question commenting more on the OpenAI saga from November and December, with the board resigning and everything that surrounded it. In some posts you wrote, you talk about the spirit of the law, what you want people to implement, rather than the letter of the law of how things are. And I think what we've seen with OpenAI is that there's this governance structure that has not been respected at all, or at least the economic incentives were much stronger than everything else. Do you think, even if politicians agree on some regulation, we'll be able to stop this invisible Moloch economic hand, the neural networks just wanting to learn more, wanting more data and more compute, and people just throwing more money at it?

Holly: Then there’s the concern: what if you stop AI development and it never takes off again? What if that pause means we lose AGI, possibly forever? It’s interesting how these two intuitions are so close for a lot of people. Either you can’t stop AI, or if you regulate it, it becomes so difficult that it stops, and humanity becomes too afraid of AGI to pursue it. I really don’t know which one is correct.

If AI development becomes economically burdensome, or for some reason, the promise of advancements just peters out after a few more iterations, maybe due to diminishing returns, who knows why. If that happens, would there be enough interest in doing fundamental research to bring something new to the forefront? Or would it kind of die out when people see it’s not progressing?

I really don’t know. And I don’t know what’s the appropriate historical case to compare AGI to. There are cases where a technology seemed inevitable and unstoppable. Then there are cases where, 40 years later, a revived technology is so impactful, it’s hard to believe it was ever abandoned. But it was, sometimes for random, situational reasons, making it difficult for the person originally working on it to continue. So yeah, it’s hard to say.

Will It Be Possible To Pause After A Certain Threshold? The Case Of AI Girlfriends

Michaël: You talk about those two cases: either we stop it completely, or the thing continues and it's not stoppable. I think people think about it in a binary way, as this thing that is very hard to stop, and if you stop it, it means we're back to being Mormons, not using computers and that kind of thing. And today, I don't know how much you use ChatGPT, but a lot of people, at least in tech, use it on a daily basis, and it's starting to be more like the internet or electricity. If you removed language models from people's lives, a lot of people on character.ai who use the models to talk to their girlfriends would be crying, like, oh, I lost my wife, if you blocked the server for two days. And this is 2023, early 2024.

Holly: It wasn't even two years ago that Replika pushed an update and people lost their partners. Yeah, it was already happening.

Michaël: Yeah, so I'm thinking, for the pause thing, if we went back to not using AI at all, even today, a lot of people would be kind of disappointed, or a little sad, or less productive; a lot of people are like, oh, I'm coding so much faster now. So I think there's some argument that we should pause really fast, because otherwise people will just be losing their girlfriends.

Holly: Yeah, I informally call this entanglement with AI. And I do think that right now the polls show very high support for regulation because, I infer, the framing is: well, there are these risks, and people are like, we don't need this. It's a very natural reaction for a lot of people to feel that something is redundant, and that it's lazy or something to use it. So when people hear about new technologies that they think shouldn't be necessary, they often even recoil from them. I think we're kind of benefiting from that as far as those polls go. But as soon as people have a few positive use cases in their lives, even if they're not really important, even if they judge the risk as being much more important, they're still going to feel more positively toward the technology, and that's going to affect their willingness to put limits on it. Especially if it's meeting emotional needs or something. Yeah, my goodness.

Human Adaptability Causes Goalpost Amnesia

Holly: So I agree, that's a reason to do a pause soon, and it's a reason to get to people with the risks now. Another issue with this whole landscape is what I've heard referred to as goalpost amnesia: there's just amnesia about what people used to think and predict. Remember the Turing test? The Turing test happens every day now; people are not sure whether something was composed by an AI or not. And we just used to think that meant something, or that it would be a warning shot. That's another issue with warning shots: people imagine things will be a warning shot that aren't, because they're either not properly prepared to understand the significance, or they've just already mentally moved on. They're ready to accept more risk, or their model of what machines can do has just updated, and people forget.

It won't be that long before the American public has forgotten what it was like before LLMs. Remember when DALL-E first came out and we were seeing these really incredible images? There was some talk about, oh, well, illustrators will be out of business. But some people were like, isn't this what Photoshop did before? They didn't know that in Photoshop you have to do everything mechanistically, and that a human has to know how to do it, so they just weren't that impressed; they weren't really able to understand what the technology meant. That, I think, is a sort of curse-of-knowledge issue for tech people and people in AI safety: they have deep models of what things would mean, and what different warning shots would mean, but what impresses the public is very different, and the public just quickly updates and moves along. I forgot why I started saying that.

Michaël: They just see the concrete DALL-E output, the first DALL-E, and they're like, oh, it's cute, it's kind of low resolution. They don't think about the implications of something progressing exponentially for four years. If you show Midjourney V6, or whatever version it is right now, to your mom, she should be like, what is happening?

Holly: That’s a photo. Yeah, it’s not even weird, because they think it’s just an edited photo.

Michaël: It’s past the point where it’s weird; now it’s normal again.

Holly: The uncanny valley. Yeah, I mean, the only way I can tell something is Midjourney is because it likes certain compositions, that’s it. The lighting is sometimes off.

Michaël: Yeah, we have a new person in AI safety policy advocacy: Haven. Do you want to say something? Share something with the group?

Haven: Hey, Holly. Max and I have been listening. You’ve been doing great. He has a question for you, so I’m going to pass you over to Max.

Max: First of all, thank you for the perspective on progress. I really liked that reframing, and I really like “slow is smooth, smooth is fast.” But my question is: do you have an ask or a call to action, something that you want people to do if they’re concerned or interested in helping out?

Holly: Right now, it’s pretty general: get involved, volunteer with me. We have a page on pauseai.info with links to actions, where people can get a template for sending emails to their representatives, things like that. I want there to be a bill in Congress that we tell people about, that we talk to politicians about supporting, and that we tell people to call their politicians to support. We’re not there yet. I have a lot of faith and confidence in the Center for AI Policy, which is working more directly on trying to introduce bills that could be adopted, or language that could be adopted into real bills. My goal is that we’re eventually pushing toward legislation that was framed with the Pause idea in mind.

Is Alignment Or Control Possible Without Regulation?

Michaël: Spinozon says, “Are there non-regulatory methods of ensuring alignment or control? One reason I think pausing is a lackluster solution is that it’s reliant on centralized power. Do you think there are other ways of getting to alignment without regulation?”

Holly: Getting to alignment? Maybe. I don’t know, it could be that we’re one brilliant researcher away from alignment in theory. But for a Pause, it pretty much has to be government. There’s nothing I would endorse to unilaterally stop AI progress other than democratic government. For alignment, what we’ve been doing this whole time is trying to get money and attention into the field. I just don’t know what timetable I would expect for solving alignment, and I think it’s possible it’s not solvable. So while it’s good to keep pursuing it, of course, I wouldn’t do that instead of pursuing a Pause through government.

Trump Or Biden Probably Won’t Make A Huge Difference For Pause, But Biden Is Probably More Open To It

Michaël: In the U.S., the election is this year, 2024, and at the end we’re probably going to have Trump versus Biden. Do you think there’s a better case for Pause if one is elected versus the other?

Holly: Well, Biden seems into it. I was very surprised and happy about the executive order. With Trump, you never know; he did do Operation Warp Speed, and he might just decide “I want an AI Pause” and do it. He’s just such a loose cannon. That’s why I don’t trust him either. While I think it’s possible that he might, for some reason, decide to take actions in favor of Pause, it doesn’t seem like a plan to me to support Trump. But no, I wouldn’t think it was over. I’m guessing that e/acc would have more sway with him, since it’s got more of a macho image, but I don’t know. I don’t know that he likes protesters, but I don’t know that he dislikes them either. If somebody he cares about or listens to is in favor of Pause, he could just decide to support it, or decide to make an agency or something; that’s the kind of thing he could have a lot of influence over. So I would not say, guys, that it’s over if Trump gets elected; we should keep trying.

Michaël: Yeah, I was kind of wondering whether some people might see reasons to push for a specific candidate, if they think we’d be in a very bad position if one person were elected versus the other; maybe it would be worth pushing for one of them.

Holly: I think Biden’s the better candidate for that reason; he’s just more predictable. I think he takes the issue seriously. It could be tempting, as a moonshot, to think you could convince Trump to do it, but then he just wouldn’t care about any reasons not to do it. Oh, Yaroslav?

China Won’t Be Racing Just Yet So The US Should Pause

Yaroslav: Yeah, I had a question. I’m wondering about the concrete politics of pause. You mentioned potentially sending letters to representatives. I’m wondering if you have an opinion on whether it makes sense to pause AI, say, in the United States, if China doesn’t also pause. Does a unilateral pause make sense, or should we wait for them to pause as well? Thank you.

Holly: The ultimate ask is a worldwide treaty, and just unilaterally pausing is not going to be a total solution, because there are other people possibly still pursuing it. But personally, I think the US showing leadership and being willing to go first is going to be important with China. I don’t usually comment much on this issue, because I’m representing Pause AI US and I think we should pause either way. And I think it’s very strange what this implies about people’s epistemics, when they say, okay, well, we do need to pause, but what if China doesn’t pause? So what are you saying? If they don’t pause, we should just try to die earlier by not pausing ourselves? It doesn’t make any sense. So I think the US, which is in the lead, is going to have to be willing to slow down in order to inspire the confidence of others, because we will need everybody else’s cooperation on this.

Yaroslav: Yeah, do you know if anybody is actually working on convincing China to pause? Everything I’ve seen so far has been Western. I’m just wondering, is it hopeless, or is there a community of AI Pause in China?

Holly: Well, the political climate is different, right? You don’t have to convince the Chinese people as much; that’s not as much how it works there. But there’s been a lot of engagement with China, and it’s been much more successful than many Western powers feared: getting China to talk about this. China has its own issues. It has a more immediate concern about controlling LLMs, because it needs to control what they say about the Communist Party, so their development is somewhat thwarted by that. That’s one of the reasons people believe we’ll keep a lead for a while. They might not be as keen on this kind of AI development as we are; one speculation is that they just feel they need to keep up. I don’t know that much about this by any means, I’m no diplomat, but if it’s true that they just feel they need to keep up with the U.S., then the U.S. offering to pause would go a long way.

Michaël: During the UN Security Council meeting on AI safety, China was the only country that mentioned the possibility of implementing a pause. I guess there’s a cynical view that the reason they’re saying this is that they want to come back: if you think you’ve lost the race, you might want everyone to slow down. So it’s unclear. When OpenAI was ahead and asking for regulation, people said it was really regulatory capture; now, with China, people think they want everyone to pause because they’re behind. People have different intuitions depending on the context.

Holly: Zooming way out: I’m told by China experts and Chinese people that China sees itself as a much wiser, older member of the world stage, and the Western countries as sort of upstarts, and there’s some governance philosophy related to that. So I was told that maybe they’d be more open to a pause for that reason: they have longer timelines, they understand the movements of civilization more. I don’t know how much we need to flatter them, but there’s that idea anyway. But I really don’t have any special knowledge of China.

Michaël: You have knowledge of the protests, because that’s what you’re organizing. And I think we haven’t talked about the next step, the OpenAI protest. What is it about, and when is it?

The OpenAI Protest

A Change In OpenAI’s Charter

Holly: There’s going to be a protest at the OpenAI building in San Francisco on February 12th, probably at the end of the workday, so we can speak to employees as they leave. It’s going to be about the OpenAI charter being amended recently to take out the part about not working with militaries, and about OpenAI beginning to work with the Pentagon.

In general, it will be aimed at the employees, letting them know that this is not the OpenAI they joined. There was an employee vote on the charter, I think several years ago, that affirmed not working with militaries, and a lot of people joined back then. This was the OpenAI they were part of. Now, speaking of economic incentives, I don’t know what kind of process they underwent, or whether the employees were consulted at all about what’s in the charter about not working with militaries. But now they will be having military clients.

So I think there’s going to be a tongue-in-cheek use of “OpenAI is nothing without its people”: if this is not your OpenAI, you could leave, or you could agitate from within. That’s going to be the general thrust of it. The documentation isn’t written yet or anything, but I’ll definitely be sharing it on Twitter. If you want to mark your calendar now, it’ll be February 12th, around 4:30 Pacific time.

Michaël: Do we have any information about why they removed this from their charter, or is it just speculation?

Holly: I don’t know why. I don’t have any details on why they removed it from their charter. I would like to find out; it’s possible that people will come forward and tell me more about it as I talk about this protest. But they are taking the Pentagon as a client. And before the Pentagon news broke, somebody noticed that they had just removed “military” from the statement in their charter about who they wouldn’t work with. So it seemed like something that was going to happen.

Michaël: So are we sure that they’re going to be creating tech for the Pentagon?

Holly: I don’t know what the nature of the relationship is, but it has come out that they are working for the Pentagon. I don’t know what that means; it could just mean that they’re providing ChatGPT for the Pentagon. But they did change their charter away from forbidding work with military clients.

Michaël: Okay. I haven’t looked that much into it, but it seems risky to organize a protest on information where we don’t have definitive statements from OpenAI about what they’re doing. If it’s just “we think you might be working with the government or with the Pentagon,” I don’t know, I feel weird accusing people of things when we don’t know for sure what they’re doing and what relationship they have with the Pentagon.

Holly: I mean, should we wait until we know for sure what they’re doing? Because they strategically prevent us from knowing those things.

Michaël: Yeah, yeah, yeah.

Holly: I think the protest works. I was just going to make it more general before this happened. I think I’ve made a mistake in the past with having really bespoke protests, with news pegs and extensive documentation. Probably I could just do something more general: “Pause AI, you’re part of the problem, pause AI,” and we just roll up and say that. So for this one, there actually is a gripe that has arisen, with OpenAI changing its charter in a way we don’t understand and taking a military client, so that will be at the top of the press release. But honestly, we’re protesting them because they’re the lead AI developer.

Max: Hey, it’s Max again. Can I jump in and say something about the change in OpenAI’s policy on this front? So they’re going to be, in theory, providing cybersecurity tools, which seems fine, and they’ve maintained that they’re not going to be using AI for weaponry and things like that. But what I think is really concerning about this move is that they’ve demonstrated that they’re willing to unexpectedly increase the degree to which they’re working with the government on military technology. And insofar as that doesn’t receive pushback, the message is: ah, well, if we change again and slip a little further down that slope, that’s okay.

Holly: Thank you. Yeah, I think there’s a way to address it where it’s not about trying to prosecute the specific claim: it’s not okay for you to work with militaries, it’s not okay for you to just change your charter, it’s not okay for you to disregard your board that tried to fire you, Sam. That was supposed to be your stopgap. So, yeah.

I imagine that it’s going to be a scenario where people can bring their own signs and put whatever they want on them. The overall feel of the protest ends up not being as unified as reading a press release about it will make it sound. But it’s just kind of a grab bag about OpenAI. I don’t know what to say about the board because I don’t know what happened. I wish I knew what happened. It would have been perfect for protesting, but I just honestly could not tell if exactly what I wanted was happening, or the opposite.

And yeah, so it’ll be a chance for people, if they want, to put that on their signs and say something about the board. The thing that I’ll mention to reporters, the news peg, will be the military client. I’m hoping to make these protests more easy and replicable, something where people don’t need to be briefed on a ton of information to be there. They can just show up, we get drinks afterwards, they have their sign, they can even make it a sign party at home. You don’t have to have a super deep opinion or deep knowledge of the issue to just have the opinion that, “Hey, you’re the lead developer of AI, and I want AI paused.”

A Specific Ask For OpenAI

Michaël: Do you have any specific ask or outcome you would be happy with? If you were to meet with a PR person from OpenAI and talk, is there some lower-level ask, other than just “hey, pause everything you’re doing”?

Holly: I’ve usually formulated one of these, but it feels very performative whenever I do it, because I know they’re still not going to do it. It looks reasonable to the people around me that I asked for something, but it’s also not the thing that works: I haven’t recruited anybody new or gotten anybody’s attention through a newspaper article with the small ask. It’s always the general idea of pausing. If anybody has an idea, feel free. I didn’t think there was anything piecemeal. I guess “be more accountable to us for your charter,” I don’t know.

Max: Sorry, do you think asking them to roll back their involvement with the Pentagon would be a small ask?

Holly: I mean, they could promise that they wouldn’t do weapons or something, which they’ve already said. Yeah, I think that’s an effective small ask for the actual protest. Great suggestion.

Max: Yeah, it seems like it would be on brand, and you can add “and promote a pause and international cooperation” if you have extra words or something. But it seems like if you’re going to be out there being angry about the military involvement, then if anybody asks, the answer is: go back to how it was a few weeks ago.

Creating Stigma Through Protests With Large Crowds

Michaël: Do you think there’s value in pushing things over multiple days, like people going in front of OpenAI every day until they roll back the military thing? Or are you more targeting smaller events every month, where you try to have a bigger crowd?

Holly: I’ve mostly tried to get a crowd. I’m really just figuring a lot of this out. I’ve kind of reached a plateau in numbers, so I’m looking for ways to make smaller numbers go further. Repeating things in close succession could be great; even just one person showing up at OpenAI for a long enough time could be good. I think a thing we have outsized leverage on is affecting the employees, or affecting people’s likelihood of taking jobs there if it seems less cool. I know people who work at OpenAI, and I understand their reasoning for why they started working there, but they would never have taken that job if there had been a social stigma on it. Something that puts a little more “don’t be part of this” on it, something that really makes it harder for OpenAI to find more talent, might be good. So even not that many people hanging out in front of the office a lot, making them feel bad, might work.

Michaël: I’m not sure there was a lot of stigma when the entire world was looking at them during the OpenAI board thing. I’m not sure there were many employees thinking about safety issues; I feel like everyone was more convinced of the opposite, that they were not going fast enough, or that the board, the safety side, was hindering them. I’m not sure there’s a way of changing their minds through stigma by protesting; I feel it might just make them more angry.

Holly: Somehow, at least, I think it’s possible. A lot of people on Twitter reacting to it, I felt, were hurt: they want to be the good guys, and they don’t like not feeling that way. And it seems like OpenAI is an incredible place to work; people feel super supported, they love it, and they really don’t want to lose what they have. I’m sure that me being disapproving is not going to overcome that, but I think it might affect marginal cases. Right now, you have this amazing work environment, you make a zillion dollars, you get to work on cool stuff, and everybody thinks you’re a hero. If we took that last part away, it might not change everything, but I think it’s maybe something we should do, and maybe something that, with the size and composition of Pause AI right now, we have more leverage to affect than some other things. So I think everything is about public opinion.

Pause AI Tries To Talk To Everyone, Not Just Twitter

Michaël: So during the OpenAI board thing, everything was about the court of public opinion, and millions of people on Twitter were upvoting and sending hearts to their CEO, and it seemed like they were winning and everyone approved of them. I feel like if there are, best case scenario, hundreds of people in front of your company, but everyone on Twitter is shitting on the people out front, they might still feel like they’re winning. I feel like it’s very hard to shift people’s opinions. I was watching those movies about Gandhi, where you see everyone following him in protest, millions of Indians marching behind him; you have this massive support, and you can see it’s a lot of people. Today we have Twitter, and that’s kind of our mass of people saying yes or no, and it might take a while to change, since most people in tech are on Twitter. And maybe the average Josephine on the street is…

Holly: Not on Twitter as much, yeah. Trying to do that kind of influence is different from my general thrust with Pause AI, which is mostly about normal people, older people, Republicans, all of the people who are into pausing AI. So it would be different; I don’t think OpenAI employees would feel as pressured if it’s my older volunteers out there. Maybe this is something I just want my community to do, but it’s hard: a lot of the leaders of the traditional AI safety community work at these companies now. I can’t really see a way forward where it just continues to be okay to be loyal to both the AI safety community and those companies.

If You Care About AI Safety Don’t Work For The AGI Companies

Holly: I mean, I’m not going to name any names, of course, but I was disappointed to see the AI safety people at OpenAI all tweeting that cultish thing, “OpenAI is nothing without its people,” and hearting their CEO. Did they know why the board wanted him out? The structure of the company was supposedly there to allow the board to do that; he bragged about that. Okay, fine, I sort of understood; I talked to some people about why it might have made sense to sign the letter and everything, because you would want Microsoft to also have a safety team. But they certainly lost hero safety status to me. They’re pretty compromised. And this is just my view of the situation, but I don’t think they’re fundamentally working on solving alignment or pursuing a strategy that would fundamentally make AI safe. So I also don’t think it’s that big of a loss if you don’t have safety people in there; we’re already not doing that. So yeah, I would kind of want to force this issue a little more: if you’re really concerned about AI safety, you don’t work for the AGI companies.

Michaël: If you care about AI safety, you don’t work for the AGI companies. I think that’s a good closing statement.

Pause AI Doesn’t Advocate For Disruptions Or Violence

Michaël: I’m not sure I have much more to talk about, except more crazy questions, like: what’s the AI equivalent of the animal advocacy thing of liberating animals from cages? Do you have any crazier ideas for protests? Though maybe that’s an infohazard to talk about publicly on a podcast.

Holly: I want Pause AI to stay within the line that I have for Pause AI: no disruptions. Not that I think disruptions are always bad, but we’re kind of first in the space. I want it to be fair; I want it to be unimpeachable. I want what we do to be non-violent, of course. I wouldn’t advocate any violent actions, but I could maybe see an organization, further out than Pause AI, that does stunts, for instance.

I do think stunts can be effective, but I just don’t think it should be us. I think there should be somebody you can trust not to pull a big PR-stunt angle on you, somebody who isn’t skipping the basic message. I’m going to hold down the fort with Pause AI and do that for now. But having a background in the animal space, I think it’s undeniable that stunts of a certain kind, PETA-style stunts, do work and get a lot of attention.

You have to be really good to know how to use outrage and people hating you in the right way, but it can be very powerful. My whole life, people would find out I was vegetarian, and sometimes they’d say, “Well, as long as you’re not PETA, then you’re fine.” They created that boundary, and you would think anything PETA said would make people update against PETA or backfire, but actually what happened is that their view of what was acceptable to do to animals shifted without them realizing it, and that’s the goal.

I don’t pretend to be a master of all things; I’m barely figuring out the straight-up protests that I’m doing now, so I will not be doing that.

Michaël: So if people want to join: February 12th, in front of OpenAI, at some time after work. So in the U.S., 10 p.m. or 6 p.m.? I don’t know.

Holly: 4:30, probably.

Closing Messages From The Twitter Space

Michaël: Yeah, thanks to everyone for listening. Yaroslav, do you have any last message for the audience?

Yaroslav: Thanks for letting me on. I guess I thought this would be super pro-AI-safety and that you wouldn’t let other people speak, but yeah, it was fun.

Michaël: Yeah, so thanks, everyone. Did you have something to say, Holly?

Holly: I’m just gonna say thanks everyone

Michaël: okay see you

Hardware Overhang And Pause

Michaël: I feel like we didn’t really go technical. I wanted to ask you about compute overhang, or that kind of thing.

Holly: Oh yeah, that was asked for. Okay, so there are a couple of things that are meant by “compute overhang.” I’ll start with the one that I think is a real concern. Algorithmic surplus, as it’s sometimes called, is when algorithms continue to get better at utilizing compute. This means you won’t need as much compute to achieve the same model. That’s important: it means that compute governance, which is one handle for implementing a pause, is not going to be static. It’s going to get harder. You’re going to have to govern more and more to ensure that people can’t make the same kind of models, because algorithms are going to get better and better at utilizing less and less compute.

This is an issue; it shapes how a pause should be implemented, and I think it’s the most serious technical objection to a pause. The reason people object with this is that they say not only is compute going to get cheaper and more abundant over time, but algorithms are also going to get more efficient. Specifically, the concern with pausing is that if there’s a discontinuity, so you’re not training models gradually as more compute becomes available and as algorithms improve, but instead training models after an artificial stop, then you could get a model that’s so much better that our understanding of the previous models isn’t a good guide for understanding it. Maybe that model is the one that causes the problem.

So, in that scenario, a pause directly causes the model that we can’t control. I think there’s a problem with that idea, the idea that we would just continue to produce compute at the level we do right now, with customers filling data centers with these chips. That just wouldn’t be the case if there was a pause, especially a compute-capped pause. There wouldn’t be as many chips just knocking around, and algorithms wouldn’t be developed as quickly either.

That’s why, well, I acknowledge the scenario where there’s a sudden jump in capabilities if somebody manages to put enough compute together. I think one implication is that for enforcing a pause, you have to be really careful that people can’t get enough compute together to use new algorithms in a way that could be highly discontinuous, something we’re not prepared to deal with. That would be an implication for enforcement. But I don’t think the default, when a pause is lifted, would be that capabilities have improved so much that we get these discontinuous outcomes, because I don’t think there will be the same buildup. These data centers are the main use of these chips now, by far, and there are only so many Pixar movies that would need them. The chips also have a very tenuous and difficult supply chain; there’s a near monopoly on their production.

So, for many reasons, I’m not concerned that if a pause were implemented, we would get that problem of the discontinuity. We would still get algorithmic progress, and it is something to be concerned about, especially when you’re thinking about using compute as your handle. I think in any kind of pause legislation there should also be a provision for algorithmic monitoring, even though that’s more difficult.
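A minimal sketch of the dynamic Holly describes, assuming (purely for illustration) that algorithmic efficiency doubles every 16 months; the cap value and the doubling time are hypothetical numbers, not figures from the conversation:

```python
# Sketch: how a fixed training-compute cap erodes as algorithms get more efficient.
# Assumption (illustrative only): algorithmic efficiency doubles every ~16 months,
# i.e. the same capability needs half the training compute after that period.

CAP_FLOP = 1e26          # hypothetical training-compute cap set by regulators
DOUBLING_MONTHS = 16     # assumed efficiency doubling time (illustrative)

def effective_capability(cap_flop: float, months_elapsed: float) -> float:
    """Return the capability-equivalent FLOP that a fixed cap buys after some time,
    measured in units of today's training compute."""
    return cap_flop * 2 ** (months_elapsed / DOUBLING_MONTHS)

for months in (0, 16, 32, 48):
    print(f"after {months:2d} months, a {CAP_FLOP:.0e} FLOP cap behaves like "
          f"{effective_capability(CAP_FLOP, months):.1e} FLOP of today's training runs")
```

Under that assumption, the same cap corresponds to roughly eight times today’s effective training compute after four years, which is the sense in which a compute-capped pause would have to be tightened over time rather than set once.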

Michaël: Yeah, as you said, there are people sharing compute, people managing to do distributed training with a lot of different GPUs from around the world. Then there’s algorithmic progress, and there’s also, let’s say, hardware progress on paper. I’m not sure if it makes sense, but imagine someone manages to design a better chip on paper while there’s a pause on producing new chips; when we leave the pause, they’d be able to ship a better GPU in two months instead of a year or something. I think that’s what people expect: there are a lot of architectures being discovered that are more efficient and give more performance, and I’m not sure how much of that you can get on paper versus needing to actually interact with the GPU and train things.

Holly: My understanding of chip stuff is that what’s holding chips back is mainly implementation, not theoretical insights about chips. But I think we need to ask the experts what they think. I have a guy I talk to who’s great, he just knows all about chips; I didn’t ask him whether I could share his name or anything, but get yourself an industry expert who knows about chips. It’s wonderful.