Blake Richards on Why AGI Does Not Exist

When people asked on Twitter who the edgiest person at MiLA was, his name actually got more likes than Ethan’s, so hopefully this podcast will help re-establish the truth. Blake Richards is an Assistant Professor at the Montreal Neurological Institute and the School of Computer Science at McGill University, and a Core Faculty Member at MiLA. He thinks that AGI is not a coherent concept, which is why he ended up on a recent AGI political compass meme.

LessWrong (12 comments): Blake Richards on Why he is Skeptical of Existential Risk from AI

Effective Altruism Forum (13 comments): Blake Richards on Why he is Skeptical of Existential Risk from AI



Michaël: Blake, you’re an assistant professor at the Montreal Neurological Institute and the School of Computer Science at McGill University. You are also a core faculty member at MiLA. We’ve had other people from MiLA on the podcast, including Ethan Caballero and Sonia Joseph, whom I believe you have supervised or will supervise in the future.

Blake: That’s correct.

Michaël: When people asked on Twitter, “Who’s the edgiest person at MiLA?”, your name actually got more likes than Ethan’s. So hopefully this podcast will help re-establish the truth. Thanks, Blake, for coming on the show.

Blake: It’s my pleasure to be here. Thanks for having me.

AGI Good, AGI not now compass

Michaël: Yesterday, I did a post on Twitter about a political compass related to Artificial General Intelligence, or AGI. And you ended up being placed in the “AGI Optimist”/“AGI Not Now” camp. I’m curious, how do you feel about your position on the compass? Would you rather be somewhere else instead?

Blake: Overall, I think your positioning of me given those dimensions was not too far off. It’s interesting for me to be placed between Gary Marcus and Yann LeCun insofar as… I would say my positions are much closer to Yann’s than Gary’s. Not that I don’t agree with Gary on some things, but it’s just a funny sociological thing. I know Yann from the CIFAR Learning in Machines & Brains program that I’m a part of. And so I think I’m part of a similar philosophical bent that way. But overall, I didn’t mind the placement. I think the axes themselves, the dimensions that you picked, are funny ones to carve the AI world up into to some extent, but it was a fun exercise. It certainly gave me a giggle.

Michaël: Would you say you’re closer to Yann LeCun than Gary Marcus or?

Blake: On most topics, yes, I suspect so. So let’s maybe just start with that axis you had, “AGI Not Now” versus “AGI Soon”. One of the difficulties with that axis is precisely that one of the things that Yann and I agree on is that even saying what AGI is, is difficult. And it’s not clear to me that it’s a coherent thing per se, that you can easily carve the field up on. So it almost feels like you need another axis, which is “AGI is a thing”, yes or no. And then Yann and I could be off on the “no” part of the axis, to separate us from some of the others. Because, to clarify, I’m very impressed with the advances that have come about in recent years, and I’m by no means anti-scaling-laws or something like that.

Blake: I really am a strong believer in the power of scale for achieving a lot of things. And in that sense, I’m very optimistic about the state of AI. I think that we’re making a lot of advances consistently, and some of the advances are coming easier than we had expected. If you had asked me three years ago whether just scaling up transformers, trained in a self-supervised fashion on text, was going to give the kind of powers that they have, I would not have said yes. So it’s nice to have this feeling of optimism in the field that things are actually progressing well. And I have that feeling. And so in that sense, I think this is where I maybe differ from someone like Gary Marcus, who I would say takes a much more pessimistic view of the field and has tried to claim that deep learning’s hitting a wall, that there are just basic fundamental problems that we’re not solving.

Blake: And therefore dismisses a lot of the advances as being effectively tricks. I don’t want to speak for Gary, but that’s my read of his interpretation: that a lot of the apparent advances are kind of tricks and not really fundamental advances. But I think there are advances, and, as I said, scaling has turned out to be more powerful than we ever would’ve expected. But I don’t think we can say concretely what an AGI is. And that’s where I feel very uncomfortable saying that any of the current systems are like AGI, because to me, that’s like asking, “To what extent are the current systems…?” I don’t know, it’s just hard answering that question. I know what the word means. I know what you’re getting at, but measuring that is not an easy thing.

You Cannot Build Truly General AI

Michaël: Got you. And I think Yann LeCun’s point is that there is no such thing as AGI because it’s impossible to build something truly general across all domains.

Blake: That’s right. So that is indeed one of the sources of my concerns as well. I would say I have two concerns with the terminology AGI, but let’s start with Yann’s, which he’s articulated a few times. And as I said, I agree with him on it. We know from the no free lunch theorem that you cannot have a learning algorithm that outperforms all other learning algorithms across all tasks. It’s just an impossibility. So necessarily, any learning algorithm is going to have certain things that it’s good at and certain things that it’s bad at. Or alternatively, if it’s truly a Jack of all trades, it’s going to be just mediocre at everything. Right? So with that reality in place, you can say concretely that if you take AGI to mean literally good at anything, it’s just an impossibility, it cannot exist. And that’s been mathematically proven.
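The result Blake is invoking can be sketched formally. Roughly, the Wolpert and Macready no free lunch theorem says that, summed over all possible objective functions, any two algorithms have identical expected performance (the notation below is one common presentation, chosen here for illustration):

```latex
% No free lunch (Wolpert & Macready, 1997), roughly: for any two
% algorithms a_1, a_2 and any number of evaluations m, summing over
% all possible objective functions f gives the same performance:
\sum_{f} P\left(d_m^y \mid f, m, a_1\right)
  = \sum_{f} P\left(d_m^y \mid f, m, a_2\right)
% where d_m^y is the sequence of objective values observed after m steps.
```

So any algorithm that outperforms another on some tasks must underperform it on others, once you average over the space of all tasks.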

Blake: Now, all that being said, the proof of the no free lunch theorem refers to all possible tasks. And that’s a very different thing from the set of tasks that we might actually care about. Right?

Michaël: Right.

Blake: Because the set of all possible tasks will include some really bizarre stuff that we certainly don’t need our AI systems to do. And in that case, we can ask, “Well, might there be a system that is good at all the sorts of tasks that we might want it to do?” Here, we don’t have a mathematical proof, but again, I suspect Yann’s intuition is similar to mine, which is that you could have systems that are good at a remarkably wide range of things, but it’s not going to cover everything you could possibly hope to do with AI or want to do with AI.

Blake: At some point, you’re going to have to decide where your system is actually going to place its bets, as it were. And that can be as general as, say, a human being. Humans are, of course, a proof of concept that way. We know that an intelligence with a level of generality equivalent to humans is possible, and maybe it’s even possible to have an intelligence that is even more general than humans to some extent. I wouldn’t discount it as a possibility. But I don’t think you’re ever going to have something that can truly do anything you want, whether it be protein folding predictions, managing traffic, manufacturing new materials, and also having a conversation with you about your gran’s latest visit. There is going to be no system that does all of that for you.

Michaël: So we will have systems that do those separately, but not at the same time?

Blake: Yeah, exactly. I think that we will have AI systems that are good at different domains. So, we might have AI systems that are good for scientific discovery, AI systems that are good for motor control and robotics, AI systems that are good for general conversation and being assistants for people, all these sorts of things, but not a single system that does it all for you.

Michaël: Why do you think that?

Blake: Well, just because of the practical realities that one finds when one trains these networks. So, what has happened with, for example, scaling laws? I said this to Ethan the other day on Twitter. What’s happened with scaling laws is that we’ve seen a really impressive ability to transfer to related tasks. So if you train a large language model, it can transfer to a whole bunch of language-related stuff, very impressively. And there’s been some funny work showing that it can even transfer to some out-of-domain stuff a bit, but there hasn’t been any convincing demonstration that it transfers to anything you want. And in fact, the recent Gato paper from DeepMind actually shows, if you look at their data, that you still get better transfer if you train in domain than if you train across all possible tasks.
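As a rough illustration of what a “scaling law” looks like in practice, here is a minimal sketch of recovering a power-law fit L(N) = a · N^(−b) from loss-versus-model-size data. The numbers below are invented purely for illustration, not taken from any real model:

```python
import numpy as np

# Hypothetical (model size, validation loss) pairs following an exact
# power law L(N) = a * N**(-b); constants are made up for illustration.
sizes = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
losses = 5.0 * sizes ** (-0.07)

# A power law is a straight line in log-log space, so a linear fit
# recovers the exponent b (negated slope) and prefactor a (exp of intercept).
slope, intercept = np.polyfit(np.log(sizes), np.log(losses), 1)
a, b = np.exp(intercept), -slope
print(f"a = {a:.2f}, b = {b:.3f}")  # recovers a = 5.00, b = 0.070
```

The same log-log regression is how published scaling-law exponents are typically estimated, though real loss curves are noisy and often need an additive irreducible-loss term.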

Blake: Whether that would be solved by scale is something that I can’t answer concretely; time will tell. My intuition is that you will never escape that basic phenomenon, though. Because I think we see it in human beings as well, right? Human beings, despite being these incredibly general agents, ultimately all have certain things we’re good at. Some people are really good at sports and they can play almost any sport. Michael Jordan went from being a pro basketball player to being a pro baseball player. Now, he was better at basketball than he was at baseball, but I couldn’t go pro in either sport, let alone switch from one to the other. But I don’t think Michael Jordan could necessarily do AI programming so well.

Blake: And I’m sure he could learn to some extent just as I can learn how to play basketball, but he’s probably a bit more specialized for sport. And I’m a little bit more specialized for programming. People just have these different skills, right? And I think that the fact that you see that in people that there’s really no one who’s truly good at everything is an indication that this is a general principle that will apply to AI as well.

Michaël: Well, if you look at people, I think some people are considered polymaths and are somehow good at a bunch of things. But if you had more time or a bigger brain… Computers are not limited by longevity and memory. So you could just have different programs for Michael Jordan and Blake Richards in different parts of your brain. And I think that’s one of the criticisms of Gato. Maybe the neurons are just used per task, and you’re just choosing which ones to activate for a specific task.

Blake: Yep. And so I think that’s obviously a possibility. To some extent, you could argue that our society at large is like that, right? If you think of society itself as a larger collective intelligence: we have some people who are specialized at sport, some people who are specialized at art, some people who are specialized at science, and we call upon those people differently for different tasks. And the result is that society as a whole is good at all those things, to some extent. So in principle, that’s obviously a potential strategy. I think an interesting question that way is basically, “To what extent would you actually call that an AGI?” Right.

Blake: It means that you’re basically calling on different AI systems as you need them. And I suppose you could try to make it opaque to someone so that they don’t know that different systems are getting called upon, and it seems like there’s just this one intelligence. That’s fine; I have no problem with that, theoretically, but it’s slightly different from what I think people imagine when they’re thinking of an AGI, which is some kind of superintelligence that can just figure anything out. Right?

Michaël: Right. So basically you’re saying that having one single architecture that does a bunch of different tasks is super hard or maybe impossible-

Blake: That’s right.

Michaël: … even given the no free lunch theorem. And what will happen is we will have something like possibly Gato that can be switched to different modes depending on the task. Or humans doing specialized things, or narrow AI doing specialized things, but never one agent doing a bunch of different things.

Blake: That’s right. I think that’s the case. I think there will be more generality to the systems than there is now. And I think scale will help with that. But something that is truly just like a single architecture to rule them all as it were, I don’t think will exist. The other issue I have with the term AGI, and this I don’t know how Yann feels about, but I actually also hesitate with the idea that we could ever know for sure that we’ve arrived at AGI. And I say that because I think that the word intelligence itself is insufficiently defined right now. And I’m not convinced that we’ll be able to define it in a way that will allow us to truly say, “Yes, this is intelligent. This isn’t intelligent,” concretely. Basically, my intuition this way is that intelligence is one of these words, like life, like consciousness that you cannot provide a formalized metric of.

No Intelligence Threshold for AI

Blake: And really what you have to do is use the approach that an American judge once articulated in an obscenity trial, when he said something to the effect of, “I can’t define pornography, but I know it when I see it.” And I think it’s similar with intelligence, right? I’m not sure we can define intelligence in advance, but we all know it when we see it. However, different people might have different opinions that way. And so there’s going to be this forever game in AI that I don’t see stopping anytime soon, where some people are going to be like, “This system’s intelligent. It’s there, we arrived.” And there are going to be other people who are like, “No, no, no, we haven’t arrived. That’s not intelligent.”

Blake: And there’s not going to be any way to resolve the debate, because there isn’t going to be an objective definition of intelligence that we can call upon. And so really what’s going to happen is that, as the systems improve, the percentage of people who are inclined to call these systems intelligent will increase. And then we’re going to get to the point where we have to ask ourselves, “What is the threshold at which we say we’ve arrived? When 90% of people say these systems are intelligent? When 99% do?” It’s kind of a funny thing.

Blake: But the result is that I also just think we have to accept that we probably can never say for sure, “AGI has arrived.” That idea we get from sci-fi, that there’s going to be this moment where the thing becomes conscious and all of a sudden there’s a mind, just won’t exist, in my opinion. Instead, we’re going to have this weird gradual progression of more and more people accepting these systems as intelligent. And then at some point, we’ll turn around and be like, “Oh, I guess we’re at what we might have called something like AGI 10 years ago.” But not everyone will accept that.

Michaël: Right. I think there are clear definitions of intelligence, such as the ability to achieve one’s goals. I think that’s what Nick Bostrom uses in his book. If an agent has the goal of trading stocks and it is very good at that goal, then you can call it intelligent. And in the more general case, if you surpass humans at achieving a bunch of different goals, then in some sense you’re smarter: you’re more intelligent than humans, more able to achieve your goals. And especially for goals that are economically valuable, that will be what we care about. Because if you have AIs that automate 80% of our economy, that’s more important than 90% of people on Twitter thinking that we have achieved AGI, because then you’ll have maybe 90% of people out of a job. So maybe what will happen first is people losing their jobs and trying to understand why.

Blake: Possibly. I half agree with what you said. This is my reaction to that. I agree that we will be able to say to what extent an AI is useful for certain applications, and whether it’s useful for potentially economic or other beneficial applications that we might have. And we’ll be able to say very concretely, “This AI system is great at managing a factory; we were able to fire all of the factory managers, and they’re asking themselves why they got fired,” et cetera. But I’m not sure that I accept that definition of intelligence from Bostrom. And I don’t think it solves the problem of defining intelligence for you. Because if you’re trying to define it objectively, it’s like… Well, okay, something that can achieve its goals. But then you’re basically just left with, “Well, what is a goal and how is it determined?”

Blake: Right? So, for example, we can play this game, and this is exactly the game you can play with words like life and consciousness as well, where you can point to certain funny gray regions. For example, an insect will be very good at its goal of successfully eating some shit and reproducing for the next generation. Is an insect superintelligent? It’s good at achieving its goals. Or would we say that when a rock falls down, it has a goal of moving down the gravitational potential energy gradient? Why not? What defines a goal, right? At some point, you have to come back to contending with definitions of intentionality and agency, and then you’re in a whole other philosophical mess. I think basically Bostrom’s definition is not actually sufficient for formalizing anything. It’s fine for a pop-science book that gives people a nice intuition, but for AI researchers, Bostrom’s definition serves almost no utility, in my opinion. And I don’t think we’ll ever be able to rely on something like that to say, “Ah, this is intelligent. This is not intelligent.”

Michaël: Right. So you’re saying it’s not a practical definition, because you could have a bunch of different goals, and you could have emergent properties in cells that show intelligence without any intent?

Blake: Yeah, that’s right. Or let’s say I take a model-free reinforcement learning agent. Its sole goal, arguably, is to maximize reward. But it’s going to do all sorts of other things for us. Are those goals for it? Nowhere in the system is there a program with an intention or a goal for it. But I’m comfortable with saying it’s intelligent if it can figure out that, “Oh, to open the door, I have to get a key first.” There’s a certain intelligence to that, but it might not have arrived there because that’s a goal for it per se. Its only goal is to maximize reward. And so I think it’s problematic to rely on the conception of goals as your definition of intelligence. And in general, maybe I’ll be proven wrong, but I have yet to see a definition of intelligence that approaches anything like a practical metric for AI research.
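Blake’s key-and-door point can be sketched in code. Below is a minimal tabular Q-learning agent in a toy environment invented here for illustration: a one-dimensional corridor with a key at one end and a reward-giving door at the other. The agent’s only objective is the scalar reward, yet the learned greedy policy first heads away from the door to fetch the key, an instrumental sub-task that is nowhere stated as a goal:

```python
import random

# Toy 1-D corridor: key at position 0, door at position 4, agent starts at 2.
# The agent's only "goal" is the reward signal (1.0 for opening the door),
# but opening the door requires holding the key.
KEY_POS, DOOR_POS, START = 0, 4, 2
ACTIONS = (-1, +1)  # move left, move right

def step(pos, has_key, action):
    pos = max(0, min(DOOR_POS, pos + action))
    if pos == KEY_POS:
        has_key = True
    done = (pos == DOOR_POS and has_key)
    return pos, has_key, (1.0 if done else 0.0), done

def train(episodes=3000, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    q = {}  # (pos, has_key, action) -> estimated return
    for _ in range(episodes):
        pos, has_key = START, False
        for _ in range(50):  # cap episode length
            a = rng.choice(ACTIONS)  # purely random exploration (off-policy)
            npos, nkey, r, done = step(pos, has_key, a)
            best_next = 0.0 if done else max(
                q.get((npos, nkey, b), 0.0) for b in ACTIONS)
            old = q.get((pos, has_key, a), 0.0)
            q[(pos, has_key, a)] = old + alpha * (r + gamma * best_next - old)
            pos, has_key = npos, nkey
            if done:
                break
    return q

q = train()
# Greedy action at the start, without the key: move *away* from the door.
best = max(ACTIONS, key=lambda a: q.get((START, False, a), 0.0))
print(best)  # -1, i.e. go left and fetch the key first
```

Nothing in the code represents “get the key” as a goal; the sub-task emerges because the Q-values propagate the reward backwards through the states where the key has been collected.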

Michaël: Right. So I think what you’re saying is that maximizing reward over time is an interesting metric, because that’s kind of its end goal.

Blake: Yes.

Michaël: Yeah, its terminal goal, we could say: at the end of its life, if it maximizes reward, it will be happy. But in the midst of doing that, it might learn how to open doors and maybe build things. And those will be the instrumental goals, the instrumental tasks it learns along the way toward its end goal. And those things will be much closer to what humans do. Some of them will be necessary, like, I don’t know… Let’s say manufacturing new servers or implementing AI software. And there are things it will eventually converge on, like gathering more resources or building more computers.

Michaël: Those are convergent instrumental goals. And I guess the main argument is that even if you can judge the end goals the human programmer put in the system to be stupid, in a moral sense or aesthetically, it might in the meantime do pretty scary things pretty well and build a bunch of gigantic computers everywhere. So if it wants to trade on the stock market and invest in Bitcoin and Ethereum, and it ends up making the market crash, like what happened recently, you might not consider it intelligent, but it might have a big impact on the economy.

Blake: Right. Well, and furthermore, I think importantly, you would say that it’s not intelligent, but arguably it was achieving its goals, right?

Michaël: Right.

Blake: So this is where the definitions get difficult. I think that we, as AI researchers, therefore have to accept that the practical thing for us to do is to develop specific tests for specific capabilities, as well as use free-form human interaction and human judgments. And we basically just have to ask ourselves to what extent these AIs pass specific tests, and to what extent these AIs lead humans to say, “This thing I’m interacting with is intelligent.” And that will give us a combined measurement of the extent to which that thing is intelligent, insofar as we understand the word in a non-formal way.

Michaël: Do you maybe have a better definition of intelligence, or other concepts that you find more useful? I know that Yann LeCun prefers to use “human-level AI” instead.

Blake: Right.

Benchmarking Intelligence

Michaël: Are there other terms you like to use when talking with your students or in your research?

Blake: Yeah. So I think this comes back to the domain-specificity question, and this is again where I agree with Yann. One perfectly coherent question that you can ask is: to what extent does an AI match human capabilities in whatever it is that we’re asking? And here it’s sort of an extended version of the Turing test, right? And I think that it’s perfectly feasible to basically just try to achieve systems that match human capabilities, either as judged by humans in a Turing-test fashion, or measured objectively in terms of human capabilities and compared to that. But the other thing you can do is define specific capabilities that you’re interested in resolving. So if you’re interested in developing systems that are good at solving IQ tests or relational puzzles, or at figuring out how to manage power, like electricity distribution on a power grid, then you can have direct, objective measurements for these things.

Blake: And you can see to what extent your system is successfully performing those behaviors that your metrics are concerned with. So I agree with Yann: we can talk about human general intelligence, and we can talk about the extent to which an AI might match that. And then, here I don’t know that I would disagree with Yann, because maybe he would agree to this: I think it’s also possible to define other types of intelligence. So we could say, “Well, to what extent is this system really good at relational reasoning, or to what extent is this system really good at causal inference?” And it might be well beyond human capability in these domains, in which case it doesn’t make sense to talk about it as being human-level general intelligence, but you can at least say this is an incredibly powerful system for, say, causal reasoning, and talk about that type of intelligence.

Michaël: So you would try to distinguish different kinds of intelligence and benchmark those?

Blake: That’s exactly it. That’s, I guess, what I would argue for: that we have to distinguish different types of intelligence. Some of them can be with reference to human beings; some of them don’t need to be. And we have to accept that we’re going to benchmark these different types of intelligence. And in my opinion, no system will ever be good at every type of intelligence.

Recursive Self-Improvement

Michaël: One type of intelligence that people care about is writing code. A bunch of developers are happy with GitHub Copilot. And the nice thing with code is that you can also code AI. Do you think we’ll ever have AI writing AI code, and in the end self-improving AI, or is that too far-fetched for you?

Blake: I think it’s something that is possible, but per this specificity argument, my intuition is that an AI that is good at writing AI code might not also be good- might not have other types of intelligence. And so this is where I’m less concerned about the singularity because if I have an AI system that’s really good at coding, I’m not convinced that it’s going to be good at other things. And so it’s not the case that if it produces a new AI system, that’s even better at coding, that that new system is now going to be better at other things.

Blake: And that you get this runaway train of the singularity. Right. Instead, what I can imagine is that you have an AI that’s really good at writing code, it generates other AI that might be good at other things. And if it generates another AI that’s really good at code, that new one is just going to be that: an AI that’s good at writing code. And maybe we can… So to some extent, we can keep getting better and better and better at producing AI systems with the help of AI systems. But a runaway train of a singularity is not something that concerns me.

Michaël: I guess the argument goes something like: if you have an AI that is very good at writing code and you give it enough time, it might build a smarter version of itself?

Blake: Oh, I know that’s the argument. But the problem with that argument is that the claim is that the smarter version of itself is going to be just smarter across the board. Right? And so that’s where I get off the train. I’m like, “No, no, no, no. It’s going to be better at say programming or better at protein folding or better at causal reasoning. That doesn’t mean it’s going to be better at everything.”

Michaël: And if we stay on your example, and we keep your assumption that you cannot be good at everything: the AI gets incredibly good at writing code, and then it’s good at writing AIs that do specialized things. It makes one that does trading, one that does politics. Could we have some main AI that controls a bunch of other ones, and as a group they’re running the world or something?

Blake: I think that’s obviously a little bit far-fetched, but not outside the realm of possibility. But I think one of the other key things in the singularity argument that I don’t buy is that you would have an AI that then also knows how to avoid people’s potential control over it. Right? Because again, I think you’d have to create an AI that specializes in that. Or alternatively, if you’ve got the master AI that programs other AIs, it would somehow also have to have some knowledge of how to manipulate people and avoid their power over it. Again, if it’s really good at programming, I don’t think it’s going to be particularly good at manipulating people. So I think that we could definitely get to the point where you have systems that we have generated that can call upon many other specialist systems to solve many different problems. And some utopia where we actually have a society run by AIs for us is not totally infeasible. But, to refer back to your compass from the other day…

Blake: That’s why I’m ultimately an “AGI Good” person rather than an “AGI Bad” person. Because I don’t really fear what AI will do that way. I think, if anything, what I fear far more than AI doing its own bad thing is human beings using AI for bad purposes, and also potentially AI systems that reflect other bad things about human society back to us and potentially amplify them. But ultimately, I don’t fear so much the AI becoming a source of evil itself, as it were. I think it will instead always reflect our own evils back to us.

Michaël: Right. So the humans will be the ones controlling the AI and possibly doing bad things. The AI might just do specialized things and not be agenty or trying to do negative things to the world. And the problem will be the human biases that we build into our AIs, not the AIs themselves?

Blake: That’s right. Exactly. That’s my intuition about it.

Scale Is Something You Need

Michaël: So that’s the “AGI Good” versus “AGI Bad” axis, though I think people in the comments preferred “AI Alignment Hard” versus “AI Alignment Easy”; that’s more politically correct. And on the other axis, “AGI Soon” versus “AGI Not Now”, I guess the main proponents of the thesis that AGI will arrive soon say something along the lines of “scale is all you need”?

Blake: Right.

Michaël: Which is a joke on “Attention Is All You Need”. One of its proponents was our previous guest, Ethan Caballero, whom you might supervise in the future. Do you agree with Ethan on “scale is all you need”, or do you think it’s just a funny meme?

Blake: I think it’s a funny meme. I don’t think scale is all you need. I would instead say scale is something you need. As I said, I’ve been very impressed with what people have been able to achieve with greater scale, and given that history, it would be foolish of me to bet against scale helping immensely. Will scale be literally all you need? No, I don’t think so. Right off the bat, in addition to scale, you’re going to need careful consideration of the data that you train on, and you’re never going to be able to escape that. So human decisions about the data are something you cannot put aside totally. But the other thing is, I suspect that architecture is going to matter in the long run.

Blake: I think we’re going to find that systems that have appropriate architectures for solving particular types of problems will again outperform those that don’t have the appropriate architectures for those problems. It’s true that transformers have turned out to be a much more general-purpose device than I would ever have anticipated, most people would’ve anticipated. I think that has been because the self-attention mechanism of transformers turns out to be an incredibly powerful way to do contextually dependent processing of inputs. And so it just turns out to be a useful architecture altogether. But my personal bet is that we will find new ways of doing transformers or self-attention plus other stuff that again makes a big step change in our capabilities.

Blake: So that’s why I don’t think scale is all you need. I think that you’re going to need still some careful thought put into the way you train the system. Some careful thought put into the way that you evaluate the system and some careful thought put into the architectures and potential innovations that you could use to make it work better. But, is scale something you want? Absolutely, scale helps shockingly well. And yeah, in that sense, I agree with Ethan.

The Bitter Lesson Is Only Half-True

Michaël: And do you also agree with the bitter lesson from Richard Sutton, that general learning methods that leverage computation are better than the handcrafted methods that people come up with on their own?

Blake: What I’d say about that is, I think the bitter lesson as articulated by Rich is only half true in terms of what we’ve seen evidence for. Critically, if you took the bitter lesson to its full logical conclusion, you would’ve said, “Well, we didn’t need anything other than multilayer perceptrons. Hey man, they’re universal approximators, so as long as we have big enough multilayer perceptrons with big enough datasets, we’re good to go.” But that’s not where we’re at. At the end of the day, transformer architectures did help more. For RL, meta-learning systems have yet to outperform systems trained specifically using model-free components. So my feeling is about the history of the field, which is a big part of Rich’s argument, because Rich basically says, “Look, of all the things that have happened over the last 20 years, the biggest, most important advance has just been bigger networks, more data, et cetera.”

Blake: That’s the bitter lesson. He’s right to some degree, but I’m sure that if you pushed him on it, he would easily admit that additional advances humans came up with nonetheless mattered, innovations like self-attention or diffusion, right? Because a lot of the current models are based on diffusion stuff, not just bigger transformers. If you didn’t have diffusion models and you didn’t have transformers, both of which were invented in the last five years, you wouldn’t have GPT-3 or DALL-E. And so I think it’s silly to say that scale was the only thing that was necessary, because that’s just clearly not true.

Michaël: I think what you said, though, was that you should focus on methods and algorithms that scale. So in that sense, transformers and diffusion models are methods that scale.

Blake: And that I agree with 100%. So I’m a big, big proponent of the idea that if you don’t test your stuff at scale, you don’t really know. And furthermore, it’s foolish to spend all your time trying to solve a problem in toy domains. I think it’s a big mistake that has been made in AI before. And it’s a mistake that I see, with all due respect to many of my neuroscience colleagues, happen in a lot of computational neuroscience papers, where they end up spending a lot of time trying to figure out how to get models to solve problems that turn out to only be problems in the absence of scale. So yes, 100%, you don’t want to make systems that can’t scale, because if you do, you might actually be solving problems that don’t really exist.

Michaël: When you do modeling of the brain, or reinforcement learning in general, you tend to start with two-dimensional grid worlds or mountain car, sorry, the one-dimensional mountain car, and then the true problems arise when you try to do robotics and you have maybe 13 different variables in your environment. And I believe… So, you are a neuroscientist trying to model things with AI, is that correct?

Blake: That’s part of what the lab does. So my lab both uses AI models to model the brain, but also uses insights from neuroscience to try to improve AI.

Human-Like Sensors For General Agents

Michaël: I remember seeing a question you asked on Twitter about which environments would be better for training agents so that they end up general. The environments might need to be embodied, interactive, or open-ended. Why do you think you need those three things, embodied, interactive, and open-ended, to have general agents?

Blake: Yeah. That’s a great question. So I think you need those features in order to have agents that are more human-like in their intelligence. Because I think that humans have been optimized for exactly those scenarios, right? Our brain is the result of two optimization processes: the optimization process of evolution, but also the optimization process that occurs during our lifetimes when we learn. And in both cases, what’s being optimized for is the ability to operate in an open-ended fashion, in an embodied situation, with multifaceted problems and with interaction with other beings as well. And so I think that as we strive towards, to use Yann’s phrase, human general intelligence, the best route is to optimize for those same things.

Michaël: Do we need to build AIs that are human-like, or could we have intelligence without those human senses?

Blake: That’s a good question. I think that you could certainly have intelligence without human-like senses, but it would be a very different intelligence, right? If you want human-like intelligence, yes, I think you’d want some of those same senses. You’d want an agent that can see stuff, that can feel stuff and have a sense of shape, courtesy of that. These sorts of capabilities are what you’d want. Now, we’re never going to get all the way to human-like senses. We’re never going to reproduce fully the array of sensors that human beings have in their bodies. But nonetheless, I think we can go part of the way and the more we go in that direction, the more human-like the intelligence would be.

The Credit Assignment Problem

Michaël: In reinforcement learning or robotics, the main goal is to interact with your environment, to try to grab objects and act in the real world. One of the problems that sometimes arises is figuring out what gets you reward and what doesn’t. In deep learning, the problem was how to credit which neurons were responsible for giving you the output you wanted, and in RL it’s which action gives you the reward you wanted at the end. The general problem is called the credit assignment problem. And that’s one of the focuses, I think, of your lab, one of the things you’re interested in right now-

Blake: Very much.

Michaël: Do you want to elaborate a little bit on what the problem is, why you care about it, and what’s difficult about it?

Blake: Sure. Yeah. So the credit assignment problem, which you gave a fairly good definition of just now, is indeed the question of how you can figure out which actions you took in the past led to the outcomes you got. It was originally formulated explicitly with respect to actions and reward: how do you figure out what actions led to reward for you, or led you to achieve a goal? And I think it was Minsky who originally coined the phrase. But that same conception can also be applied to just neural activity, because you can think of neural activity as being a form of action, right? So whether it is an explicit action taken by an agent or just an activation of particular neurons in the agent’s brain, you can always ask the question: which of these actions ultimately led to the outcome we got, whether it be negative or positive?

Blake: And the reason that’s such a critical question is because if you’re going to learn and you’re going to update yourself so as to be better at doing the things you’re trying to do, you have to answer that question, right? If you can’t answer that question, if you don’t know why certain actions led to certain outcomes, you can never select better actions basically. So the credit assignment problem is absolutely central to artificial intelligence, absolutely central. And this is very interesting for me.

Testing for Backpropagation in the Brain

Blake: I first got interested in it from a neuroscience perspective, because in AI we actually have a pretty decent solution to credit assignment over short timeframes. And that is basically a combination of temporal difference learning and backpropagation. Done. Backpropagation helps you answer the question of which neurons were responsible for the outcomes, and something like temporal difference learning can help you answer the question of, “Well, which of the final output actions led to the rewards that you ultimately care about?” There are still some interesting questions in AI about longer-term credit assignment, which neither backpropagation nor TD can solve very easily. But let’s put that aside for a moment. What got me really interested in the question of credit assignment was that even though we have these decent solutions in AI, in neuroscience we have yet to figure out how the brain solves the credit assignment problem.
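To make the two-part recipe concrete, here is a minimal, illustrative TD(0) sketch in Python (the chain environment and all numbers are invented for the example); the temporal-difference error is the quantity that passes credit for a delayed reward back to earlier states:

```python
# Minimal TD(0) value learning on a toy 3-state chain s0 -> s1 -> s2,
# where reaching the terminal state s2 yields reward 1. The TD error
# (target - V[s]) is what assigns credit for the delayed reward to
# earlier states, one step at a time.

def td0_chain(episodes=200, alpha=0.5, gamma=0.9):
    V = [0.0, 0.0, 0.0]  # value estimates for s0, s1, s2
    for _ in range(episodes):
        # One episode: transition s0 -> s1 (reward 0), then s1 -> s2 (reward 1).
        for s, s_next, r in [(0, 1, 0.0), (1, 2, 1.0)]:
            target = r + gamma * V[s_next]   # bootstrapped one-step target
            V[s] += alpha * (target - V[s])  # TD error drives the update
    return V

values = td0_chain()
# V[s1] approaches 1.0 (one step from reward); V[s0] approaches gamma * 1.0 = 0.9.
```

Backpropagation then plays the complementary role Blake describes: given such an error signal at the output, it tells you how much each internal weight or unit contributed to it.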

Blake: So we’re still in the dark on that. There’s, of course, some indication that the brain does something like temporal difference learning. It’s one of the most fascinating bits of interaction between neuroscience and AI that way. Literally, there are neurons in our brains that seem to be computing temporal-difference-like prediction errors. And that’s fascinating. So that’s probably one of the ways in which our brain solves credit assignment. But something that I’ve been interested in in my lab is: how does the brain solve that other problem of figuring out which neurons were responsible for the outcomes that occurred?

Blake: That’s the problem that’s solved with backpropagation in AI. And so then basically I got really interested at some point in trying to answer the question, “Well, how does the brain solve the same problem that backpropagation solves?” And that led to a whole series of papers from my lab. And we’re going to continue to work on this for a while, basically trying to come up with potential theories for how the brain solves the credit assignment problem. And we’re not alone on this. There are other groups who are working on this. I think what’s neat is we’re at the point in the field where there are many different potential solutions on offer, and hopefully in the next couple of decades we’ll see some resolution as to whether or not any of them are in fact correct.

Michaël: Yeah. I love your answer. And yeah, temporal difference learning is truly essential in RL, at least for the most basic models. And I think even when you consider more complex algorithms using neural networks, they’re still using temporal difference learning. I’m curious what the answer is for backpropagation in the brain. How do you study which neurons were responsible for outputting the correct actions in the brain? What’s the experimental method here?

Blake: So, the approach so far, both from my lab and other labs, has been to start by theorizing, right? What potential mechanisms, like physiological mechanisms, are available in the brain that could allow you to solve this problem? So for example, one of the things that right off the bat struck me when I was looking at this question is that there are a lot of what we call feedback projections in the brain. So if you look, for example, at primary visual cortex, it doesn’t just project forward to higher-order visual areas and then to motor areas; those same regions project back to visual cortex. So there’s obviously the potential for doing something akin to backpropagation, insofar as you could send information back through the networks of the brain to basically help the brain figure out: how did this neuron contribute to, say, the action that I took?

Blake: So, myself and other people have been engaged in this theorizing of basically looking at different physiological mechanisms, whether it be the feedback projections that exist in the brain, or various neuromodulatory circuits, or neuropeptides, a variety of things that could theoretically be used by the brain to solve this problem. The challenge that we face now is that testing some of these theories experimentally is not always super easy. Some of the models don’t make very clear physiological predictions. Others do. So, we have some predictions we could test physiologically, and I hope they will get tested. It’s not always an easy thing in neuroscience, because it’s not quite like physics, where there’s an intimate discourse between the experimentalists and the theorists. Sometimes the theorists and the experimentalists don’t speak to each other as well in neuroscience.

Blake: And so convincing experimentalists to test certain ideas from certain models is not always easy, because they might not quite buy the idea that the models are really on to something. And in fairness to the experimentalists, it will typically take a lot of work to test any one of these predictions. So they only want to do it when they’re really convinced of the potential utility. My hope is that over the next 10 years, those of us who study credit assignment problems will have theories sufficiently compelling that the experimentalists will test them. But time will tell on that.

Michaël: So for now you mostly build models, write equations, and then maybe run Python scripts or PyTorch code? And you try to see if it maps onto the data that you have recorded previously, or that some other team recorded?

Blake: Well, precisely, that would be one thing that you could theoretically do. And some people do that kind of stuff, and I think that’s not an unreasonable approach. But the best thing is if you can make specific predictions that people haven’t collected data to test yet, such that you could truly see whether a novel prediction is or is not falsified by a new experiment. Right? So, I can give you an example that way. In a recent paper, myself and Richard Naud from the University of Ottawa put forward a theory for how the brain does credit assignment that relies on high-frequency bursts of action potentials in neurons. And that model makes some specific predictions about how bursts of action potentials should relate to the performance of an animal on a task. And Richard has actually been working with Matthew Larkum and others to test some of these ideas.

Blake: And I won’t say too much because they’re still working on the paper and it’ll come out, but I’ll just say some of the preliminary evidence suggests that maybe some of the predictions do hold up. And that’s exactly the kind of thing I want to see more of. We have these theories, me and other people. As you said, it’s exactly that: we write some equations, we then create simulations in Python to show how the system should behave based on those equations, and then we would ideally compare to data from real brains and see whether the brain behaves the same way. And that’s basically the project.

Michaël: So you mentioned bursts of action potentials. I don’t think everyone is well versed in neuroscience-

Blake: Often not. Correct.

Michaël: Just… Okay. Assuming they are for now. I know some researchers at DeepMind have sometimes modeled phasic dopamine as what would transmit reward prediction errors in the brain. So, if I want candy and I open the bag and there’s no candy, I will be disappointed. That would be a negative reward prediction error. And so that would be negative dopamine… I don’t know. I don’t know if that exists.

Blake: It does. No, it does. So the first paper that showed this connection between dopamine and reward prediction errors was a paper from Schultz, Dayan and… Oh goodness, I’m forgetting the third author on it. But anyway, back in 1997. And what they showed is… They showed a couple of different things, but one of the things was exactly that. If you get a monkey to associate a reward with a particular sound, and then you play the sound but the reward doesn’t show up, the dopamine neurons’ activity goes down, as a negative signal, basically.
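The omitted-reward dip Blake describes falls straight out of the standard temporal-difference error, delta = r + gamma * V(s') - V(s). A toy sketch (the function name and values are illustrative, not from the 1997 paper):

```python
# Reward prediction error in a Schultz-style setup: a cue fully predicts
# a reward of 1. If the reward arrives, the error is zero; if it's omitted,
# the error (and, on this theory, dopamine neuron firing) dips negative.

def prediction_error(reward, value_of_cue, gamma=1.0, value_next=0.0):
    # delta = r + gamma * V(s') - V(s), the core TD error
    return reward + gamma * value_next - value_of_cue

expected = prediction_error(reward=1.0, value_of_cue=1.0)  # delivered as predicted
omitted = prediction_error(reward=0.0, value_of_cue=1.0)   # cue played, no candy
```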

Burstprop, Reward Prediction Errors

Michaël: So we all know what dopamine is, or at least if we’re interested in addiction, we know that dopamine is the thing that is present in our brain when we scroll Twitter. So what would a burst of action potentials be, and how does that relate to AI or reward learning?

Blake: So the idea that Richard and I put forward is that these bursts of action potentials are basically a signal that carries information about backpropagated errors. Now, to explain what these bursts I’m talking about are a little bit more, for those who don’t study neuroscience: your brain is made up of billions of cells called neurons. You probably all know that. And these neurons signal to each other using a mixture of electrical and chemical signaling, and the way that they signal to each other is using something called an action potential, which is a very rapid change in voltage that happens in the neuron and which can be propagated down the wire of the neuron, known as the axon, to communicate it to other cells.

Blake: So, action potentials typically occur at a fairly slow rate in the neocortex, which is the region of the brain that Richard and I were studying, and many people who are interested in higher-order cognitive phenomena study. So the thing about the neocortex, as I mentioned, is that the rate of action potentials tends to be relatively slow. If you look at an individual neuron, typically it’s firing somewhere between one to 10 Hertz in terms of its action potentials. So, one action potential per second up to 10 action potentials per second kind of thing. But sometimes they show very rapid bursts of action potentials that are on the order of more like 100 Hertz kind of thing. And these little packets, these bursts of action potentials, interestingly, seem to be driven in large part in the neocortex, by some of those feedback connections I was mentioning just a few minutes ago.
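As a concrete illustration of the two firing regimes, here is a small, hypothetical burst detector: it groups consecutive spikes whose inter-spike intervals fall under 10 ms (i.e. instantaneous rates above 100 Hz). The threshold and spike times are made up for the example; real analyses pick thresholds from the data:

```python
# Group spikes (times in seconds) into bursts: consecutive spikes closer
# than 10 ms (>100 Hz instantaneous rate) count as one burst; isolated
# spikes reflect the slower 1-10 Hz background regime.

def find_bursts(spike_times, max_isi=0.010):
    bursts, current = [], [spike_times[0]]
    for prev, t in zip(spike_times, spike_times[1:]):
        if t - prev <= max_isi:
            current.append(t)           # still inside the same burst
        else:
            if len(current) > 1:
                bursts.append(current)  # close off a multi-spike burst
            current = [t]
    if len(current) > 1:
        bursts.append(current)
    return bursts

# Slow background spikes at 0.1 s and 1.2 s, plus one ~200 Hz burst of three:
bursts = find_bursts([0.100, 0.500, 0.505, 0.510, 1.200])
```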

Blake: So Richard and I developed this theory that we called burstprop, which basically says that feedback signals in the brain will generate these high-frequency bursts of action potentials, and the bursts will carry information for the neurons about their contribution to any errors in the system. In this way, these bursts of action potentials will actually be akin to the credit assignment signals that you see in backpropagation of error. Hence the name burstprop. So, that’s the theory, and like I said, there are some predictions that come out of it about the relationship between these bursts and performance on a task, which we’re currently trying to test.
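To give a flavor of the idea in code, here is a deliberately simplified caricature, not the published rule from the paper with Richard Naud: the deviation of a neuron’s burst probability from its running average stands in for the backpropagated error term, and presynaptic activity gates which synapses change. Every name and number here is invented for illustration:

```python
# Caricature of burstprop's core intuition: feedback raises or lowers a
# neuron's burst probability, and the *deviation* from its average burst
# rate acts like the error signal backprop would deliver to that neuron.

def burstprop_update(w, presyn, burst_prob, avg_burst_prob, lr=0.1):
    # (burst_prob - avg_burst_prob) stands in for the backpropagated error;
    # presynaptic activity gates which synapses actually change.
    return w + lr * (burst_prob - avg_burst_prob) * presyn

w = 0.5
# More bursting than average (positive feedback) strengthens active synapses...
w_up = burstprop_update(w, presyn=1.0, burst_prob=0.4, avg_burst_prob=0.2)
# ...less bursting than average weakens them.
w_down = burstprop_update(w, presyn=1.0, burst_prob=0.1, avg_burst_prob=0.2)
```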

Michaël: Right. So those bursts signal something like backprop, and they go through the axons. And do you also have chemical backprop as well?

Blake: Yeah. So, the thing is, there are so many different mechanisms in the brain that could potentially help to communicate these sorts of gradient signals or error signals throughout the circuit. Because indeed, there are also various chemical signals that could potentially do that. We know that when neurons are activated, they can sometimes send what are called retrograde messengers, little chemical signals, to the neurons that had just recently signaled to them. This was shown by people like [inaudible] back almost two decades ago now, maybe even more than two decades. Goodness. And so, those mechanisms exist. There’s also really fascinating stuff: there was work by a guy named Jason Shepherd a couple of years ago showing that neurons will even create tiny little virus-like capsules that can transmit RNA between each other, presumably in order to send information.

Blake: And so you could imagine that, theoretically speaking, neurons could potentially communicate credit assignment signals with such mechanisms. Whether that happens is another question. There’s also beautiful work from the people at the Allen Institute for Brain Science showing that there’s a very rich array of chemicals known as neuropeptides that can be communicated between neurons, which, again, they theorize might be used for credit assignment signals. So I think that we basically have many, many different potential mechanisms in the brain for solving the credit assignment problem, whether it be bursts or chemical messengers or even RNA messengers. And we’re hopefully going to see over the next couple of decades which of these truly are involved in solving credit assignment in the brain.

Long-Term Credit Assignment in Reinforcement Learning

Michaël: Is your research on credit assignment in the brain trying to inform credit assignment in AI? Or is it mostly to understand the mechanism in the brain?

Blake: For the stuff we’ve been talking about right now, it’s mostly just for understanding the brain. Because when it comes to AI, we don’t need to figure out how to do backpropagation. We already know how to do that very well, thank you very much. But that being said, we have a couple of research projects that are seeking to do better credit assignment in AI. One of the questions that I already alluded to earlier is the issue of really long-term credit assignment. So, if you take an action and then the outcome of that action is felt a month later, how do you connect those? How do you make the connection between those things? Current AI systems can’t do that. Period. And for me, it’s a really fascinating question how to solve really long-term credit assignment.

Blake: And I think the answer depends, of course, on our long-term episodic memory for things that have happened to us and stuff we’ve done. And so we’ve got research in the lab that’s explicitly trying to build episodic memory systems to help solve long-term credit assignment problems in AI. Another area of credit assignment that I’m interested in, and where I hope we might actually be able to do better than backpropagation in some spheres, is the question of credit assignment with really sparse inputs, or inputs that have very sparse relevancy. So if you do backpropagation in a situation where only a very small subset of your inputs are relevant to solving a problem, eventually backpropagation can do the credit assignment and figure it out, but it’ll take longer than ideal. And we’ve been exploring in the lab some potential solutions to make the credit assignment more efficient in these sparse cases, again using inspiration from the brain.
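The sparse-relevancy point can be seen in a toy experiment (entirely illustrative): train a single linear unit by plain gradient descent when only one of twenty inputs actually predicts the target. Gradient descent does assign the credit correctly in the end; it just spends every update touching all the irrelevant weights along the way:

```python
# Sparse relevancy: 20 input features, but only feature 0 predicts the
# target (y = 3 * x[0]). Plain SGD on a linear unit eventually credits
# feature 0, while every step also perturbs the 19 irrelevant weights.

import random

random.seed(0)
dim, lr = 20, 0.02
w = [0.0] * dim

for _ in range(3000):
    x = [random.gauss(0, 1) for _ in range(dim)]
    y = 3.0 * x[0]                                      # only x[0] matters
    pred = sum(wi * xi for wi, xi in zip(w, x))
    err = pred - y
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]    # update every weight

# w[0] converges to ~3.0; the irrelevant weights shrink back toward 0.
```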

Blake: So, using some ideas derived from some of our work on how neural circuits process things, and I don’t want to overstate any of our results yet, I will just say I think there’s a possibility that we could discover mechanisms for credit assignment that work better in circumstances where the inputs are very sparse or their relevancy is very sparse. So there might be improvements to be made to credit assignment in AI. But I will say that overall, if you have inputs that are broadly relevant and you’re not trying to solve problems over long periods of time, backpropagation works really, really well. And so we don’t really need to improve that.

Michaël: How do you benchmark or test those algorithms? Do you use, I don’t know, MuJoCo environments over long periods of time and try to make Hopper walk, or do you go the Montezuma’s Revenge or Dota 2 route and try to do very, very long credit assignment? What’s the method here?

Blake: Yeah. So the answer is a little bit of all those things. We’ve got one project where we’re going to explore whether we can do a bit better on Montezuma’s Revenge than current solutions, because we think we’ve got a better solution to the long-term credit assignment problem. And as you mentioned, we haven’t done a lot of work with MuJoCo yet, but we do use Unity engine environments, where we’ve got agents living in these 3D worlds, as it were, and they have to solve long-term credit assignment problems. For example, we have one task which we call Orange Tree, which is basically a task where the agent, as its ultimate goal, has to pick oranges from a tree. To do so, it has to successfully get the equipment necessary to climb up the tree and collect oranges. But between the stage at which it’s getting its equipment and collecting the oranges, all sorts of other stuff happens. So it turns into a long-term credit assignment problem, because the agent has to realize that in order to pick the oranges, it had to select the right equipment in the distant past, for example.

Michaël: Can you grab the oranges without the tools, or do you need to…

Blake: No, you need the tools. That’s the thing.

Michaël: Okay.

Blake: That’s right. So it can only get the oranges if it selects the right tools, but it only got to select the right tools in the very distant past. So it can’t do standard RL to solve the problem.

Michaël: Were you happy with how OpenAI “solved”, in quotes, Dota 2, or how Uber solved Montezuma’s Revenge? Or do you think the problems are still open and it’s only brute force for now?

Blake: Yeah. So let’s start with Uber’s solution for Montezuma’s Revenge. That one was interesting because effectively the solution largely consisted of a slightly better exploration algorithm. They didn’t really change the credit assignment at all; it was just a better exploration algorithm. If you look at the learning curves for it, it’s still very slow. So it can solve Montezuma’s Revenge, it’s not an impossible thing, courtesy of their improved exploration. But I bet you could do better, with faster learning, with better credit assignment. So I don’t consider it a totally solved problem. And indeed, in that same paper, they also examined… I think the game’s called [inaudible] or something like that. And there they don’t actually solve the problem at all unless they give it domain-specific knowledge. So I think in both cases it was an important advance in terms of the exploration, but it’s not totally solved. With regards to Dota 2, I don’t know what their exact solution was, so I’m not sure I’m the right person to comment on it, because I haven’t looked at that paper closely.

Michaël: I think they were just doing PPO.

Blake: Yeah. Right. Yeah. So, my guess is that it’s going to be a similar thing: there would be ways to solve it better with better credit assignment. But I think what the Montezuma’s Revenge case shows is that sometimes, and this is where people like me have to be careful, things that you can convince yourself are a problem of credit assignment turn out to be a different problem. In the case of Montezuma’s Revenge, like I said, the biggest issue seemed not to be a credit assignment problem, but instead an exploration problem. And so they provided that improved exploration, which I think they called… Oh, something about go… No, now I forget the name of their exploration algorithm. But it was an interesting exploration algorithm. Basically, it said: rather than just exploring randomly, what I want you to do is go some distance.

Blake: And then I’m just going to put you back at a certain location multiple times and have you explore out from that location, rather than constantly randomly exploring. And that turned out to be a really big step for Montezuma’s Revenge. Because I think the reason Montezuma’s Revenge was so difficult for standard RL algorithms is that if you just do random exploration in Montezuma’s Revenge, it’s garbage, you die constantly, because there are all sorts of ways to die. And so you can’t take that approach. You need to basically take the approach of, “Okay, up to here is good. Let’s explore from this point on.” Which is basically what Uber developed.
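The procedure Blake is recalling matches Uber’s Go-Explore. Here is a toy sketch of its “return, then explore” loop on a hypothetical one-dimensional corridor (all environment details are invented): keep an archive of states you’ve reached, deterministically return to a promising one, and only do random exploration from there:

```python
# "Return, then explore" on a toy corridor: cell 0 kills the agent, so pure
# random walks from the start rarely get far. Archiving reached cells and
# restarting exploration from the frontier steadily extends progress.

import random

random.seed(1)
GOAL = 30  # corridor cell we want to reach

def explore_from(state, steps=5):
    """Short random walk from `state`; stepping onto cell 0 is death."""
    visited = []
    for _ in range(steps):
        state += random.choice([-1, 1])
        if state <= 0:
            break                # died; this rollout ends early
        visited.append(state)
    return visited

archive = {1}                    # start cell
for _ in range(500):
    frontier = max(archive)      # "go": return to a promising archived state
    for s in explore_from(frontier):
        archive.add(s)           # "explore": bank everything newly reached

reached_goal = GOAL in archive
```

The contrast with purely random exploration is the point: random walks from the start keep dying near cell 0, while restarting from the archived frontier makes steady progress.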

Michaël: So there were some tricks that made it work in Montezuma’s Revenge?

Blake: Exactly.

Michaël: I think for Dota 2, what people were saying is that the game is so complex that you would need very smart agents to play at a competitive level and coordinate within a group of five, because there’s so much communication between humans; they know how to communicate between teammates. People were like, “No, you could never do it with brute force.” And yet the cooperation between teammates emerged from just massive amounts of training.

Blake: Right.

Michaël: And I think that’s possibly what people were saying about chess a long time ago. They were like, “To solve chess, you need the kind of intelligence that humans have, decision making and so on.” And at the time it was also mostly brute force. So yeah, I’m curious whether you think the things we think of as intelligence in general cannot just be solved with scale, going back to our discussion before. More generally, would there be some event or discovery that could make you think “scale is all you need”, that we’re going to have smart agents that do a bunch of things, and make you think more about existential risk? Is there something that could change your mind on this?

What Would Change His Mind on Scaling and Existential Risk

Blake: Yeah. Okay. So I suppose what would change my mind on this is if we saw that, with increasing scale but without radically changing the way that we train the systems, the data we train them on, or the architectures we use… and I even want to take out the word “radically”: without changing the architectures or the way we feed data. If what we saw were systems where you couldn’t find weird behaviors no matter how hard you tried, where they always seemed to be doing intelligent things, then I would really buy it. I think what’s interesting about the existing systems is that they’re very impressive, and it’s pretty crazy what they can do, but it doesn’t take that much probing to also find weird, silly behaviors still. Now, maybe those silly behaviors will disappear in another couple orders of magnitude, in which case I will probably take a step back and go, “Well, maybe scale is all you need.”

Blake: But my guess is that you’re going to continue to see some silly behaviors, because I think we’re also going to be bumping up against the limits of the type of data that we have available to us. It’s a funny thing: the data that we have available to us is both seemingly unlimited but also weirdly impoverished, right? When it comes to text or images, we’ve got basically unlimited data. But how much interactive data do we have? It’s really hard to collect interactive data. It takes a lot of compute, a lot of resources. I’m not sure that we’ll ever have enough interactive data for it to be the case that scale is all you need.

Michaël: I think that’s an awesome conclusion for the podcast, leaving us with a bet on the future. It was a pleasure to talk with someone so well informed in neuroscience, AI, and reinforcement learning. And yeah, I hope we talk soon, and see you on Twitter.

Blake: Thanks Michaël. It was a real pleasure speaking with you, and indeed, I hope we see each other on Twitter again soon.

Michaël: Do you just want to plug your Twitter account or something so people can-

Blake: Oh sure.

Michaël: … see.

Blake: Anyone who wants to follow my Twitter account, where you can see me constantly giving my opinions on things that I probably shouldn’t be giving my opinions on, and also some things that I maybe have a right to give my opinions on, because I know something about them, you can find me @tyrell_turing. In my defense, because some people ask, “Why’d you give yourself that name?”, it’s a name I picked back when I was 18. It’s of course a reference to Alan Turing and Tyrell from Blade Runner. And I’m not claiming that I am either Tyrell from Blade Runner or Alan Turing, not even close, but I just thought it was a fun little nom de plume that I could use in general, and it’s become my online handle.

Michaël: Yeah, I love it.


Michaël: This is the end of the episode. My current goal for the channel is to become the number one channel called “The Inside View”. So if you’ve enjoyed this podcast, I would suggest leaving a rating on Spotify, Google Podcasts, or Apple Podcasts, or subscribing on YouTube, so that when you google “The Inside View Podcast”, the first result is not “An Inside View” about sports, but “The Inside View” about the future of humanity and artificial intelligence. I wish you a great day, and see you in the next episode.