Breandan Considine on Neuro Symbolic AI

Breandan Considine is a PhD student at the School of Computer Science at McGill University, under the supervision of Jin Guo and Xujie Si. There, he is building tools to help developers locate and reason about software artifacts, by learning to read and write code.

I met Breandan while doing my “scale is all you need” interviews at Mila, where he surprised me by sitting down for two hours to discuss AGI timelines, coding AI and neuro symbolic AI. A fun fact that many noticed while watching the compilation video is that he kept his biking hat on for most of the interview, since he was about to leave when we started talking.

All of the conversation below is real, but note that since I was not prepared to talk for so long, my camera ran out of battery and some of the video footage on YouTube is actually AI generated.

Disclaimer: when talking to people on this podcast I sometimes try to invite guests who hold different inside views about existential risk from AI, so that everyone in the AI community can talk to each other and coordinate more effectively. Breandan is overall much more optimistic about the potential risks from AI than many people working in AI Alignment research, but I think he is quite articulate in his position, even though I disagree with many of his assumptions. I believe his point of view is important for understanding what software engineers and symbolic reasoning researchers think of deep learning progress.

(Note: you can click on any sub-topic of your liking in the outline below and then come back to the outline by clicking on the green arrow)


Symbolic Reasoning

Michaël: Do you think we need symbolic reasoning to get to AGI?

Breandan: If you have symbols, it makes things a lot more efficient in some way. It’s possible that you don’t need symbols. I think symbols are important if you want to make machines that are compact and that can interface with humans who also use symbols. So I like symbols.

Applying Machine Learning to SAT Solvers

Michaël: How do you actually do symbolic reasoning? What’s your research, and how do you actually implement it?

Breandan: We use these things called SAT solvers. I think some of this scaling laws research was originally done in the statistical physics community, where they were looking at phase transitions in SAT solvers. And so it’s kind of interesting. They have some similar scaling laws there, and it turns out that these have some interesting connections to constraint satisfaction in the discrete domain. Machine learning uses a lot of continuous reasoning, but it turns out that a lot of that can be built on top of completely discrete things, as evidenced by the hardware that we use today. It’s mostly discrete logic.

Michaël: So what’s a concrete example of something where we can use a SAT solver or the tools you’re describing? What are the concrete problems you’re solving now?

Breandan: Well, so let’s see. The state of the art in SAT solving is currently using algorithms that you can teach to a 10-year-old. So you don’t need any fancy machine learning. There have been efforts to try to build hybrid models, machine learning models that solve SAT-based problems, but this is still an active area of research. So we use things like unit propagation, CDCL solvers, and these sorts of things are very hard to beat in general. So if you’ve learned from a data set of SAT problems, you might be able to edge out some competitive advantage. But it turns out that the large AI labs, DeepMind and others, have poured millions and millions of dollars into these problems. And so far, if you look at the leaderboard and all the benchmarks, SAT is still unbroken.
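The unit propagation he mentions can be sketched in a few lines of Python. This is an illustrative fragment of what a solver does, not a production CDCL implementation; clauses use the common convention of signed integers for literals:

```python
# Minimal unit propagation over CNF clauses (illustrative sketch, not a
# production CDCL solver). Clauses are lists of nonzero ints: a positive
# literal n means variable n is true, -n means variable n is false.

def unit_propagate(clauses, assignment):
    """Repeatedly assign variables forced by unit clauses.

    Returns the extended assignment, or None on a conflict.
    """
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:        # every literal is false: conflict
                return None
            if len(unassigned) == 1:  # unit clause: forced assignment
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

# (x1) ∧ (¬x1 ∨ x2) ∧ (¬x2 ∨ x3): unit propagation alone solves it.
print(unit_propagate([[1], [-1, 2], [-2, 3]], {}))
# → {1: True, 2: True, 3: True}
```

A full CDCL solver adds decision heuristics, conflict analysis, and clause learning around this inner loop.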

Michaël: Yeah, so just to explain to people who are maybe not experts on YouTube or something, SAT solving is basically when you try to satisfy a Boolean expression?

Breandan: Yeah, you have a Boolean formula, and you want to find an assignment of truth values to the variables in these clauses that satisfies the expression.
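As a concrete example, a tiny formula can be checked by brute force over all assignments, which is hopeless at scale and exactly why dedicated solvers exist:

```python
from itertools import product

# A tiny SAT instance: (x ∨ ¬y) ∧ (y ∨ z) ∧ (¬x ∨ ¬z).
def satisfies(x, y, z):
    return (x or not y) and (y or z) and (not x or not z)

# Brute force over all 2^3 assignments.
solutions = [a for a in product([False, True], repeat=3) if satisfies(*a)]
print(solutions)   # → [(False, False, True), (True, True, False)]
```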

Michaël: How do we encounter this problem in real life?

Breandan: Yeah, so this comes up a lot in many different domains, from path planning to logical reasoning to different sorts of optimization problems in operations research. There are lots of different solvers that can model problems from their domain in this uniform language. So there are a lot of these lowerings from problems in, say, computer science onto SAT.

Michaël: So any like resource optimization or like constraint optimization will use a SAT solver then?

Breandan: Yeah. So if you have an optimization problem, it’s very likely that if it incorporates discrete variables, it uses something like a SAT algorithm. There are other things like SMT. SMT is a richer logic and can handle different kinds of domains, like integers and other theories. But a lot of that gets lowered onto SAT.
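This lowering is often done by “bit-blasting”. As a hedged sketch (the encoding below is hand-rolled for illustration, not how a real SMT solver is implemented), here is the integer constraint a + b == 3 over 2-bit unsigned operands, reduced to a purely Boolean formula and checked by brute force:

```python
from itertools import product

# Sketch of bit-blasting: lowering an integer constraint onto Booleans.
# Each 2-bit unsigned integer a = 2*a1 + a0 is represented by the Boolean
# variables (a1, a0); the SMT-level constraint a + b == 3 becomes a purely
# Boolean formula over those bits via a ripple-carry adder.
def constraint(a1, a0, b1, b0):
    s0 = a0 ^ b0                             # low sum bit
    c0 = a0 and b0                           # carry into high bit
    s1 = a1 ^ b1 ^ c0                        # high sum bit
    c1 = (a1 and b1) or (c0 and (a1 ^ b1))   # carry out
    # a + b == 3 means both sum bits are 1 and there is no carry out.
    return s0 and s1 and not c1

models = [(2 * a1 + a0, 2 * b1 + b0)
          for a1, a0, b1, b0 in product([False, True], repeat=4)
          if constraint(a1, a0, b1, b0)]
print(models)   # integer pairs (a, b) with a + b == 3
# → [(0, 3), (1, 2), (2, 1), (3, 0)]
```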

Using Symbolic Reasoning To Get Powerful AI Systems Is An Open Problem

Michaël: And so, do you think, when we build robots that are as smart as humans, will we need them to have some kind of optimal pathfinding? Or will they be good with just heuristics? Can we build AGI without any perfect logic or perfect optimization?

Breandan: Yeah, it’s a good question. I think it’s a big open question. One of your other signs says, “Is scale all you need?” I think these are intimately related. So the question is: all you need for what? If you want to be as good as a human being, then maybe it’s good enough. But if you want superhuman performance, or performance that generalizes to lots of different domains, there’s this hypothesis that you need some form of reasoning.

Deep Learning Is Limited Because Of Computational Complexity

Michaël: Are you in the Gary Marcus camp?

Breandan: I’m not really sure what Professor Marcus is advocating for. I’ve heard some of his views espoused here and elsewhere. But there are some hard problems in computer science and in logical reasoning that have so far proven resistant to machine learning techniques. And it’s not the be-all and end-all, but the idea is that over the millennia humans have evolved, we’ve developed some tools. And it would be a shame, or maybe even foolish, to ignore those entirely, because they give us efficiency gains in these known areas. And there are problems that scale much faster in terms of computational complexity than search over graphs. Something we encounter a lot in computer science are these undecidable problems. And the space of, say, Turing machines, or even the space of graphs of a certain size, grows much faster than we can build physical machines to realize. So the realizability of many algorithms is a big concern. It may be that this outstrips the scaling laws of machine learning in some sense. With the machines we can build, the hardware we can build, we need to be conservative about how we allocate resources in these algorithms.

Michaël: So are you saying that if you take the entire class of computer science problems, we see stuff that is much, much worse than pathfinding, worse than NP? And that without symbolic reasoning, without any tools from human math, it’s going to be pretty hard for any AI to solve those problems? And if the exponent in the scaling laws is worse, if the hardware doesn’t progress as fast as those problems scale, then we might not be able to solve them because the problems are growing too fast in complexity?

Representational Capacity Outstrips Machine Learning Model Ability

Breandan: It’s a possibility. I mean, it’s not the only reason why you might want to use other techniques. But the representational capacity required just outstrips the ability of a machine learning model to fully represent many problem domains. And so that’s one reason why I think you need something a little bit more like the human reasoning that we’ve developed.

Combining Symbolic Reasoning and Machine Learning for Safety and Constraints

Breandan: But another reason I think that might be more appealing to folks in the machine learning crowd is that there are these safety properties, these constraints that we’d like to impose over all the solutions that a model might give you. And these come up in practical ways, like sometimes in strange scenarios, your self-driving car abruptly turns right or something like this. And if you want to rule out these kinds of errors then you need some sort of verification procedure over the neural network. And so there are techniques for doing that, that allow you to essentially specify some criteria over the inputs or outputs, a precondition or a postcondition, if you will, and rule out these edge cases or scenarios over all possible states that the neural network might take on.

Symbolic Reasoning And Machine Learning Can Be Blended Elegantly

Breandan: And so to do that, you need some form of abstract interpretation or formal verification in order to propagate these constraints through the neural network. And so that’s one way that you can use symbolic reasoning in machine learning. They’re not incompatible things, right? It’s not like one or the other.

Breandan: You can blend them together in a way that is very elegant. And it’s not just filtering or sorting: the fabric of the computation itself supports other kinds of propagation procedures. So in machine learning we do back propagation, error propagation. But there are other propagation algorithms, message passing on graphs essentially, that allow you to propagate other constraints on these domains. And that would allow us to do some form of reasoning. In fact, some people argue that machine learning as it is today is already doing some form of symbolic reasoning, because the algorithm we use for everything is automatic differentiation, and that is a symbolic thing. The symbols are just at a much lower level.

Automatic Differentiation: A Bridge Between Symbolic and ML Domains

Michaël: So it’s already doing computing differentiability or those kind of things in the background? Is that what you’re saying?

Breandan: Yeah, that’s right. I mean, the simple analogy is that at the very lowest level you’re doing floating-point 32- or 16- or 8-bit computations on essentially digital hardware. So that’s symbolic. But it’s also symbolic in the sense that the people who originally designed automatic differentiation designed it as a symbolic procedure. It’s a symbolic transformation that takes your forward propagation algorithm and generates a dual: it takes a function and generates another function, and that function tells you the error sensitivity at given points in the input.
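The forward-mode view he describes can be sketched with dual numbers, where each value carries a primal part and a tangent part and every arithmetic rule is applied as a local symbolic transformation. Illustrative only; real AD systems support far more operations:

```python
# Forward-mode automatic differentiation with dual numbers.
class Dual:
    def __init__(self, primal, tangent=0.0):
        self.primal, self.tangent = primal, tangent

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.primal + other.primal, self.tangent + other.tangent)

    def __mul__(self, other):
        # Product rule, applied locally as a symbolic transformation.
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.primal * other.primal,
                    self.primal * other.tangent + self.tangent * other.primal)

def derivative(f, x):
    """Evaluate f and df/dx at x in one forward pass."""
    out = f(Dual(x, 1.0))
    return out.primal, out.tangent

# f(x) = x² + 3x: f(2) = 10, f′(2) = 2·2 + 3 = 7
f = lambda x: x * x + x * 3
print(derivative(f, 2.0))   # → (10.0, 7.0)
```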

Symbolic Abstraction and Neural Networks: A New Layer of Safety

Michaël: And then you can use this new function to adjust your first function on the forward pass. So basically what you’re saying about self-driving cars and minimizing error is that symbolic reasoning could help you build some new abstraction. It’s like a weird abstract graph. And using this new abstraction, you could use formal solving methods to check that the system is secure. And with current methods, it’s impossible to be sure that the system is secure.

Breandan: Right. Yeah. I think the current techniques are scalable in many ways. So they have different kinds of adversarial testing and different ways to probe the generalization ability of these models. But if you want to assert some universal quantification over all possible inputs, that is, there does not exist an input that causes the missiles to launch or the car to turn right or something like this, then you should think about using some of these tools.

Breandan: But in general, I think the two are very compatible. Machine learning and symbolic reasoning, this kind of GOFAI-style AI, were kind of divorced in people’s minds, right? But I think there’s this emerging idea that the neural and the symbolic are somehow deeply related. And by finding some of those connections, whether it’s doing symbolic reasoning with machine learning or using symbolic reasoning to verify safety properties of neural networks, you can tap into a large body of literature in computer science and statistical learning that is kind of symbiotic.

Using Domain-Specific Languages for Program Abstractions in Machine Learning

Michaël: How do you interface the two? In practice, let’s say you have a huge neural network that predicts the next action of a self-driving car, and you want to build this other layer of symbolic abstraction. How do you make those two communicate? Does one give floats to the symbolic reasoning thing? Does one predict the structure of the other? What happens in practice?

Breandan: Yeah, so this is an interesting question. I think this relates to the program abstractions that we use in machine learning. If you’ve used something like TensorFlow or PyTorch or maybe JAX, then you’re using these things called DSLs, domain-specific languages, that take the operations you give them and perform them abstractly. So when you do a plus or a times, you’re either generating a graph that then gets transformed, or you’re doing the computation in the primal and the dual space, keeping track of two numbers simultaneously through the computation.

Breandan: So there are other kinds of algebras that you might want to use that allow you to take interval domains and propagate those, for example, or take, say, simple probability distributions and combine them in different ways. And so there are other frameworks that do things like probabilistic programming or constraint programming, where essentially you just write the procedure that you want a machine to execute, say, the neural network and the forward pass.

Breandan: And then this sort of gets compiled into an intermediate language that will be able to perform these computations. And at the same time, it’s kind of verifying that the bounds are within this sort of safety region. And so if this grows very quickly, then you can sort of quickly rule out a certain class of inputs or say whether or not the neural network, as it’s being trained, is safe in some way. So I think this is one concrete application to the area of AI safety.
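A toy version of this idea, propagating an interval domain through a single affine-plus-ReLU layer to bound all possible outputs, can be sketched as follows. The weights and the “safety region” here are invented for illustration; real verifiers handle full networks and tighter abstract domains:

```python
# Interval bound propagation through a tiny one-neuron "network":
# y = ReLU(w*x + b), with x constrained to an input interval.
def affine_interval(lo, hi, w, b):
    """Bounds of w*x + b for x in [lo, hi]."""
    a, c = w * lo + b, w * hi + b
    return (min(a, c), max(a, c))

def relu_interval(lo, hi):
    """Bounds of ReLU(x) for x in [lo, hi]."""
    return (max(0.0, lo), max(0.0, hi))

# Propagate the input region [-1, 1] through the layer.
lo, hi = affine_interval(-1.0, 1.0, w=2.0, b=0.5)   # [-1.5, 2.5]
lo, hi = relu_interval(lo, hi)                       # [0.0, 2.5]
print((lo, hi))   # → (0.0, 2.5)

# If the "unsafe" region is output > 3, this bound proves the network
# can never reach it for any input in [-1, 1].
assert hi <= 3.0
```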

Probabilistic Programming Languages for Inference and Verification in Neural Networks

Michaël: So a probabilistic programming language could help you both do inference, whatever you want to do with your neural network, and verify things as they come. So you would still get the usefulness of neural networks, plus the safety features of the more symbolic reasoning approach.

Breandan: Yeah, possibly. I mean, I think in practice, what happens a lot is people train these things wherever they train them, and then they hand them off, they open-source them, for example. And then there’s this research cycle where someone finds an adversarial attack, submits a paper, and then they have to retrain the model to provide some sort of defense against this attack. And it’s kind of a never-ending cycle. But what you can do is essentially cut out a lot of the development time spent finding these vulnerabilities and patching them. In some way, it’s like vulnerabilities in software.

Breandan: You find a security vulnerability, someone exploits it, you patch it, right? But if you were to write a neural network in one of these type-safe ways, in a programming language that supports these different propagation procedures, whether that’s a domain-specific language embedded in Python, or a programming language designed specifically to ensure that certain correctness properties are met, something like Coq or Lean or Agda, these languages have type safety built into the language. And so as you’re writing the code, you can reason about certain properties, which makes it very ergonomic to use. You get this assistance while you’re writing the code.

Scalability of Type-Safe Programming Languages in Neural Networks

Michaël: You can do, like, any neural network efficiently with Coq or Lean, right?

Breandan: So scalability in these languages is a big question. Can you scale these techniques up to the millions or trillions of parameters that you need? I think this is an area of open research, but it’s not out of the question that it could be scaled up with the right hardware to complement it. So there’s this co-design process of programming languages and hardware that happens.

Breandan: So machine learning has started the flywheel in some ways. They were able to leverage a lot of the hardware that was developed for computer graphics. And so the thought is that if you can develop these efficient compilers for other languages, in some sense, like, if you had a probabilistic programming language and were able to execute this very efficiently on physical hardware, then that would allow these techniques to scale up to much larger problems.

AlphaTensor’s Applicability May Be Overstated

Michaël: Recently we’ve had, I don’t know if you’ve seen it, but DeepMind released something called AlphaTensor, where they tried to improve how matrix multiplication is implemented. And they claim to have improved by 20 or 30 percent how efficiently matrix multiplication runs on current hardware. Some people have told me that it was kind of a scam, or that the claim was too bold, because of mixed precisions or how the thing was implemented. I don’t know if you’ve seen those claims.

Breandan: I’m familiar with some of the work that was done. I think they’re trying to do something very interesting, but the applicability might have been a little overstated in the press. So there’s the press release, and then there’s the paper, what they actually did. What they claim is that they can do fewer multiplications for matrices over finite fields, which are a specific kind of algebraic structure. In practice, you might want to use real numbers, floating-point numbers, and in that case you run into certain sensitivity issues if you do these operations in an arbitrary order.

Breandan: So the algebra supports properties such as associativity, and matrix multiplication is associative, right? But in practice, you run into sensitivity issues. There are certain transformations that are mathematically valid but difficult to perform accurately on real hardware. You don’t run into this problem with finite fields, because there’s a finite number of elements, so the floating-point arithmetic issue does not arise in this claimed result.
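The sensitivity issue he describes is easy to demonstrate: floating-point addition is not associative, so mathematically valid reorderings can change the result on real hardware, a problem that disappears over finite fields.

```python
# Floating-point addition is not associative: reassociating a sum
# changes the result when magnitudes differ enough.
a, b, c = 1e16, -1e16, 0.5

left = (a + b) + c    # a and b cancel exactly, then add 0.5
right = a + (b + c)   # 0.5 is below the rounding precision near 1e16,
                      # so b + c rounds back to -1e16

print(left)    # → 0.5
print(right)   # → 0.0
```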

Breandan: Shortly thereafter, there was another paper posted to arXiv claiming the same results with one [sic: two] fewer operations. And so it’s interesting that this ability to design more efficient algorithms is somehow married to the intuition of the researcher. So I’m a much bigger fan of their work on [combining human reasoning with neural networks](https://www.deepmind.com/publications/advancing-mathematics-by-guiding-human-intuition-with-ai).

Michaël: The one who’s like a mathematician and the AI that tries to prove some stuff in topology?

Breandan: Yes, that’s right. I foresee the two working together, the human researcher, say a mathematician developing new algorithms, and the machine learning model. There should be a human in the loop.

AI Alignment

AI Safety and Alignment: Balancing Human Values and AI Efficiency

Michaël: So you mentioned AI safety multiple times, and I was kind of wondering how familiar you are with the entire AI safety field. Because one of the signs is “AI alignment is what you need”, which kind of responds to “scale is all you need”. Have you heard of AI alignment?

Breandan: I have, yeah. So I think this alignment problem is…

Michaël: How would you define it?

Breandan: Oh, well, we used to have a researcher here by the name of David Krueger. He was working on a thing called Bayesian hypernetworks. I recently spoke with him after his foray into AI alignment; it seems like these are somehow related. But the alignment premise, as I understand it, is that you want machines that understand our values and don’t need a lot of guidance for us to specify them.

Breandan: And so if you can get that, then you don’t have to worry so much about these dystopian futurist predictions. And then there might be this bright new future of human and artificial intelligence working together collaboratively rather than adversarially. Because right now, I think there’s this maybe reasonable skepticism that it could develop into something that’s malignant, that’s not in our interest, right? Over time, it appears like it has some nice properties. It can make us more efficient. But in the long term, maybe it’s optimizing for an objective that is a net negative for humanity.

What AI Alignment Failure Might Actually Look Like

Michaël: So that’s like a subtle problem where we would not specify the objective very precisely. So everything seems fine, but in 10 years we understand something wrong is happening, and the entire society goes in the wrong direction.

Breandan: Yeah, I think that’s a fair description. There’s the whole question of whether AI will turn rogue, and I think that’s a caricature in some ways. The more subtle thing that could happen is that things slowly start to work less efficiently, and there are a lot of small inconveniences that turn into a net drain on society. And this could be in the form of, say, advertising and different forms of human influence systems that are acting in support of an objective that we can’t all align ourselves with.

Automating Values And Laws With Code

The Importance of Encoding Human Values in Code

Breandan: Maybe that’s just to harness more energy for the computing industry, or it could be to make profit for a small set of individuals. But if we could harness these technologies in support of human reasoning, I think that’s sort of the anchor: using them in a controllable way. And the control mechanism is programming, in my mind. Programming is the set of tools we’ve developed for encoding our values into computers, right? The act of programming is taking some mental model that represents our values and specifying it in a way that a computer can execute.

Various Programming Models Serve As Ways To Transfer Values To Computers

Breandan: There are different styles of programming. There’s imperative and declarative programming: imperative is, here’s how to do it; declarative is, here’s what I want, you figure out how to do it. And then there are many different programming models inside of that. There’s probabilistic programming, or differentiable programming, which a lot of machine learning uses now. There’s type-safe programming, or type-level programming, where you can encode these values inside well-typed constraints and have a computer check them. So there are many different styles of programming that have emerged, but the unifying thing is that they are all ways of taking values and giving them to a computer.

Code Research Encompasses Moral, Information, And Software Aspects

Breandan: There are a lot of people doing research on code, and code comes up in different settings. There’s the moral code people use in society to regulate each other’s behavior, right? There’s code that represents information: to process information, you have to encode it somehow. In information theory they call this an error-correcting code, or just a code, a representation of some information. And then there’s code in software, which is what we write and give to machines. And these are all intrinsically related. So error-correcting codes help you prevent degradation from noise in the channel: you want a communication channel that preserves the information you put into it, regardless of whether it gets hit by a cosmic ray or something.

Michaël: So there’s, like, code in physics and the hardware infrastructure, the DevOps side. And there’s the code that the programmer writes to specify these values. And then there’s the code for humans, like law and justice and everything.

Breandan: Yes, yes, right. And in fact, for the legal codes that people use to define our justice system, there is some speculation that these can be translated into computer code, that they share some of the very same abstractions, like if-else conditions about when a certain law applies. And so I think as we’re moving towards a more digital society, we need to be very specific and careful about how we encode our values, because misspecification can often lead to unintended consequences. You end up with digital contracts that are ill-posed, that have many valid solutions, and some of them were not what we actually intended. So this is a hard problem in general. Even if you get the smartest programmers, software engineers, coders into one room, they often can’t agree on what the right specification is. So specification engineering has become kind of the Achilles’ heel of machine learning in some ways.
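The if-else analogy can be made concrete with a toy example. The rule below is invented purely for illustration, and it also shows how an exception clause becomes an explicit branch:

```python
# Toy translation of a legal-style rule into executable conditions.
# Invented rule for illustration: "a permit is required for any structure
# taller than 10 m, unless it is a temporary structure standing for
# fewer than 30 days."
def permit_required(height_m, temporary, days_standing):
    if height_m <= 10:
        return False          # the rule does not apply at all
    if temporary and days_standing < 30:
        return False          # the exception clause applies
    return True               # the general rule applies

print(permit_required(12, temporary=True, days_standing=14))   # → False
print(permit_required(12, temporary=False, days_standing=0))   # → True
```

Even in this tiny case, the ambiguities he mentions surface immediately: is a structure standing exactly 30 days covered by the exception? The code forces one answer where the prose leaves it open.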

Michaël: So are you thinking of building DAOs? Like, I don’t know what the acronym means,

Breandan: It’s decentralized autonomous organizations.

Michaël: Yeah, so in an ideal world, would you see like AIs and humans living together in some kind of DAO or everything related by blockchain and code?

Breandan: I don’t know. I think not everybody is well-equipped to write code in all cases. Maybe if the programming language were a little more natural, then we could enable a larger population of people to participate in this sort of digital economy. They could express their constraints, express their values, in a way that is more like natural language. So that’s one direction: you get larger participation. And the other way is by making it easier for machine learning models to understand the code that we’ve already written. So this area of code comprehension, code completion, code summarization is a very active area of research. You might have seen some papers from places like Microsoft and OpenAI.

Can Large Neural Networks Self-Improve And Create Monsters In Code

Automating Programming, Self-Improving AI

Michaël: So are you talking about like codex, Copilot, those kind of things?

Breandan: Yeah, that’s right. So in fact, I think you’re seeing rapid progress in this area of code comprehension and in machine learning. I think this is a positive sign because this gets us to a place where you can have a collaboration with a digital assistant, and they kind of check your ideas for consistency or make it a little easier to translate between different programming languages or just automate repetitive tasks that are a barrier for people to learn programming in some ways.

Michaël: But the problem is if it accelerates AI timelines in a way where we reach AGI, or something even crazier like self-improving AI, faster than we can implement safety measures or symbolic reasoning or the things that could help us safeguard human values. In my view, code generation is the fastest route towards self-improving AI. If you get an AI that is capable of predicting its own code base, then you basically have the problem of self-improving AI in front of you.

Michaël: Even if you just freeze the weights and say, okay, predict your own code base. And then, okay, now you have a bigger piece of code: predict the new code base. You’ll reach something that can self-improve. And I think the main disagreement I have with your model of a subtle AI problem is that, for me, the takeoff will be much faster, because you will have these self-improving AI tools that can code better than humans. And if you have something that’s better than humans at coding, at some point it will be beyond human comprehension, beyond human performance in coding. So I don’t think we’re going to reach a society where everything’s a bit weird and everything is slow. I think it will go much faster once we get Copilot 4 or Codex 3, those kinds of things.

“Monsters In Code” Might Not Be Possible

Breandan: Interesting. Yeah, I mean, I think there are good reasons to proceed with caution. But I remain skeptical that there are so-called monsters in the code, right? That you can find something inside of this space, the space of things you can write down in a few hundred lines, that will prove an existential threat to humanity. First of all, this space is gargantuan.

Breandan: If you think about the amount of computation that it required to even come up with a very simple organism, a single-celled organism, this took galactic amounts of computation, literally. Think about all the stars in the galaxy, and how many billions of years it took just to find a single permutation of atoms within a very, very small region, right?

Breandan: And that’s still a finite space, right? The space grows extremely quickly in computational problems. So when you start to search for Turing machines, when you start to search through this space, you realize just how inconceivably tiny our machines are, physically speaking. So even if they were much more intelligent than we are, there’s good reason to believe that the things they will find will not be a threat.

Taking Over The World Might Be Too Complex For A Single Agent

Michaël: So to summarize your argument: being a threat to humanity requires very complex human drives, like survival or taking over the world. Single-celled organisms are already very complex, but not as complex as a human trying to take over the world. And these machines will end up with some very basic programs, like simple Turing machines, that will probably not be evil, because they will be in this very small space. So I guess your first claim is that taking over the world requires some very long program. And the second claim is that current AI systems will not find very large programs.

Organizations Are More Efficient Than Superhuman Systems

Breandan: Well, I think of it this way. You can think of an organization, a large group of people, as a sort of superhuman organism, right? And we’ve had statistical data showing that yes, with concerted effort, an organized group of people is slightly more efficient than a single person if you want to accomplish some task. In fact, groups of people, when so organized, can move mountains, you know; think of all the feats of technology that we’ve developed.

Intelligence May Not Ensure Survival

Breandan: But in some way, I think the jury is still out on whether nervous systems and complex multicellular organisms are an evolutionary advantage. We may find that plants and bacteria outlive us yet. So this branch of the evolutionary tree, while extremely productive, has not really been fully tested. There’s an enormous energy expenditure required to develop this capacity, a brain, right? And in and of itself, it does not ensure your survival. In fact, there’s this hypothesis that the reason the galaxy isn’t teeming with intelligent life is that there’s some sort of disadvantage to having intelligence, in that it’s self-defeating.

Michaël: But plants don't go to the moon, and bacteria don't survive without an environment to survive in. So yeah, I think it's an interesting point that humans might be disadvantaged by the fact that they're more complex than others. So your argument is basically that maybe the solution the AI finds will be much simpler than humans?

Breandan: I think that if you look at the long arc of history, there is some small advantage to having a group of things that coordinate, whether that's human societies or some form of social animals, for sure. In fact, I think the best thing we've developed is our shared kind of moral objectives, in some way, right? We can collaborate if incentivized, and I think we naturally do. It's in our nature. But I think that people glorify intelligence because it seems like we've been able to do some admittedly impressive things: build computers, go to the moon. And as you were saying, maybe it can lead to some small advantage. But even very dumb creatures, essentially very simple machines, when organized in an orderly fashion, can also do these same things.

Large Neural Networks May Not Necessarily Lead To Self-Destructive Behavior

Breandan: So having a large brain does not necessarily lead to something that's self-destructive. Now, you might argue that things like whales and elephants didn't have the evolutionary pressure that we're imposing on these artificial agents; they don't have the same objective functions. But essentially, a brain is a large graph that passes messages across it, and as you increase the size of this graph, you can do more things in parallel, right? It doesn't necessarily mean that your computational abilities scale proportionally, though. What I mean by that is: if you have a single processor that runs very fast, you can do one thing at a time, and then you scale that up to multiple processors that can do many things at one time. But in some ways, this is just a kind of linear scaling. You might argue that this allows us to do things like visual processing, problems that can be parallelized very nicely on these large graphs. But having a larger graph in and of itself doesn't mean that you're going to be able to get around computational undecidability or NP-hardness or things like this, right?

On Scale Being All You Need

Larger Graphs Won’t Solve All The Harder Problems Of Computer Science

Michaël: So basically, having big matrix multiplications enables you to do some things humans do, like vision, because you have square images and those kinds of things, but it will not solve all the harder problems of computer science. And you won't get perfect decision-making just from bigger graphs or bigger neural networks.

Breandan: It certainly helps. If you can parallelize things, you can get some speedups on these problems. But don't underestimate how quickly even tree search blows up. Think about bounded-width or bounded-height trees. If you look at balanced binary trees shorter than, say, depth 30, there are more labeled binary trees of depth 30 or less than there are atoms in the universe. So there's good reason to believe that it's not the be-all and end-all. It will help you do things like 3D vision. Maybe if you have more eyes, like an insect or something like this, you can parallelize certain tasks.
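Breandan's combinatorial claim is easy to sanity-check. A minimal sketch: count distinct binary tree *shapes* of height at most h with the recurrence T(h) = 1 + T(h-1)², where a nonempty tree pairs any left subtree with any right subtree one level shorter. Since labeling the nodes only enlarges the space, shapes alone already make the point:

```python
# Count distinct binary tree shapes of height <= h.
# T(0) = 1 (the empty tree); a nonempty tree of height <= h
# pairs any left subtree of height <= h-1 with any right subtree
# of height <= h-1, so T(h) = 1 + T(h-1)**2.
def tree_shapes(h: int) -> int:
    count = 1
    for _ in range(h):
        count = 1 + count * count
    return count

ATOMS_IN_UNIVERSE = 10 ** 80  # a common rough estimate

print(tree_shapes(5))  # 458330

# Even at height 9, unlabeled shapes alone exceed the atom estimate,
# so depth 30 is astronomically beyond it.
assert tree_shapes(9) > ATOMS_IN_UNIVERSE
```

The growth is doubly exponential, which is why brute-force tree search saturates any amount of parallel hardware almost immediately.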

Breandan: Certain tasks parallelize very well: object recognition, pathfinding, these kinds of tasks. But when you start to think about the space of algorithms, this is a completely different domain. We've developed some very efficient algorithms for these things, and even some of the brightest minds in machine learning, with the most incentive you could possibly give them, have made no progress on, say, inverting a cryptographic hash function, or solving a problem from the class of NP-hard problems in provably polynomial time. That would be an astounding breakthrough. It's essentially the biggest incentive you could give somebody monetarily, and people are just naturally curious about these sorts of problems. So far, no sign.

Parallel Computation Might Help A Lot For Human-Brain Tasks

Michaël: I guess the counterargument to this, if we go back to an AI that could be misaligned with humans and pose an existential threat to humanity, is that as long as you have something that is human-level, you don't need a breakthrough beating traditional methods on pathfinding or NP problems. I don't think humans find the best path in a binary tree. If you just have something that is like a human, but running on hardware, then you can speed it up 2x or 100x by having 100x more speed. And at this point, you have a superhuman.

Michaël: And even if it doesn't solve all the things, if it's just human-level but sped up, because it doesn't have memory constraints or speed constraints, then my claim is that a 10x human or 100x human will be able to take over the world, because it will be much faster at coming up with strategies. And the second argument: you mentioned things happening in parallel and that not everything can benefit from parallel computation, but our brain is massively parallel, right? Everything the human brain can do is because of how much stuff happens in parallel. So my claim is that parallelism helps a lot for human-brain things.

Breandan: Yeah, yeah, for sure. So we have some evolutionary constraints, right? Our brain has to fit inside a certain space. I've heard people make this argument that maybe if we had a larger one, this would yield yet-undiscovered progress. And I mean, it's a good hypothesis to test. In fact, I think in the short term, scale will win. For probably the next few decades, scale is the name of the game in terms of hardware and computational resources.

Breandan: But I think there are going to be bottlenecks of other kinds. If you get a group of human beings, you can think about groups of different sizes: groups of size one, of size 10, of size 100. And it seems like some problems have this property where if you put more people on them, they don't get done faster. For software engineering, there's a kind of communication bottleneck between human beings. If you stick 100 people on a project, it doesn't mean that you're going to get it done 10 times as fast as if 10 people were on the same project. Because there are 100 people writing software, but they all have to write software that is cohesive and runs and communicates efficiently.
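The communication bottleneck Breandan describes has a simple quantitative core, the standard observation behind Brooks's law: with n collaborators there are n(n-1)/2 pairwise communication channels, so head-count grows linearly while coordination overhead grows quadratically. A quick illustration:

```python
def channels(n: int) -> int:
    """Pairwise communication channels among n collaborators."""
    return n * (n - 1) // 2

for n in (1, 10, 100):
    print(n, channels(n))
# 1 -> 0, 10 -> 45, 100 -> 4950:
# 10x the people means 110x the channels to keep coherent.
```

This is one reason adding engineers to a project yields sharply diminishing, and sometimes negative, returns.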

Diminishing Returns For Parallelism: Amdahl’s Law

Breandan: So there are bottlenecks of different kinds depending on the problem domain, right? Bottlenecks that are very difficult to escape. Think about chess, for example. Computers have clearly exceeded human ability at this game, right? But consider multiple humans against one human: if you get 100 humans trying to play one human at chess, it's not necessarily going to go one way or the other, because they have to talk to each other and converge on a strategy. Same in computer chess: if you parallelize an engine like Fritz or something like this, and ask it to compete against a single-threaded version, well, first of all, it's very difficult to build parallel solvers for different domains. But second of all, there are diminishing returns. There's this thing called Amdahl's law. People talk a lot about scaling laws; in parallel programming, Amdahl's law tells you that there are diminishing returns for parallelism, right? Just based on the topology of these computational domains, there are certain places in the graph where you have to join your answers together, do some massive parallel join, right? And if you add more threads to that, you're not going to get the serial part done more quickly.
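Amdahl's law can be stated in one line: if a fraction p of a workload parallelizes and the remaining 1-p is serial, the speedup on n processors is S(n) = 1 / ((1-p) + p/n). A minimal sketch of the diminishing returns Breandan describes:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup on n processors of a workload whose
    parallelizable fraction is p; the (1-p) serial part never shrinks."""
    return 1.0 / ((1.0 - p) + p / n)

# With 95% of the work parallelizable, even infinitely many
# processors cap the speedup at 1 / 0.05 = 20x.
for n in (1, 10, 100, 1000):
    print(n, round(amdahl_speedup(0.95, n), 2))
# 1 -> 1.0, 10 -> 6.9, 100 -> 16.81, 1000 -> 19.63
```

Going from 100 to 1000 processors buys less than a 17% improvement here, which is exactly the "serial join" bottleneck: the sequential part dominates as n grows.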

Algorithm Design Faces Bottlenecks Due To Sequential Tasks

Breandan: There's this principle of computational irreducibility going on. You can reduce the problem, through matrix factorization and other techniques, to get it really compact and get everything that can be done in parallel done in parallel. But at certain key points, you have to join things together. And in the space of algorithm design, this actually becomes a big bottleneck, because there's a series of things that need to be done sequentially: you cannot do the next thing until you have the result from the previous thing in that chain. In neural networks, the non-linearity between each layer looks something like that, because otherwise the whole thing would just collapse into one linear transformation. There are other kinds of transformations like that, where just by throwing more parallel resources at the problem, you're not necessarily going to get your result faster.

Computational Irreducibility and Scaling Laws Hypothesis

Breandan: So how does this all get back to the scaling law hypothesis? The basic premise is that there are diminishing returns for parallelism in certain problems. And there's this thing called Amdahl's law, which tells you the maximum speedup you can get on a problem that has certain serial steps that need to be done in sequence. The parts of those problems where you're gathering information, maybe like seeing the world, each point can be done in parallel. For this camera, for example, they have these sensors, and in order to get the entire picture at once, they can process each point separately.

Breandan: But then you go to merge them together. Let's say you want to do some object recognition or some sort of decision procedure on this image. At certain points you have to emit a decision, say at the very end of your neural network, and then you use that to do something else; you chain some other cascade of actions onto that decision. But you need to get that decision first. So you can parallelize all the little pixels and such, but other things are conditioned on that decision, so you need to get it. In some ways, you need to do a certain number of layers in the neural network, a dozen or a hundred, each of which can be parallelized, but at the end of the day, you need to do that many steps. And then there's this idea that if you make the layers infinitely wide, this sort of approximates an arbitrary function…

Limits of Neural Networks and Universal Function Approximators

Michaël: What does the theorem say exactly? How far is the limit from the theorem? How big can we make our neural networks before we reach the actual limit?

Breandan: Yeah, so I mean, it would be impossible to talk about actual numbers: a thousand, a hundred thousand, a million, right? It turns out that as you make it wider and wider, you approximate, in the limit, a kind of universal function approximator. But there are different convergence rates and different things you need to study in order to get that actual number out. In practice, we see that depth has a nice effect, because if you make it deeper, you don't need as many parameters. Just make it deeper and you can do things in sequence. But that imposes a sequentiality on the problem domain. And so, as I was arguing, I think scaling laws will win out in the short term. In the next few decades, you'll see different problems of this form sort of topple. But what I was arguing is that there are other problems, in program synthesis, that will resist this kind of parallel scaling, and you need other algorithms to solve them.

On The Potential Impact Of AI Tools On Software Engineers

Surprising Success of AlphaCode and Codex

Michaël: Well, what we've seen so far with AlphaCode or Codex is that it seems we can solve basically competitive programming tasks just by using Transformers; we don't need something else. Were you surprised or not by the AlphaCode results?

Breandan: I think it was surprising initially. I think people have referred to this as moving the goalposts in some way: as soon as it does something, you move the goalposts and say AGI requires something else. But in some ways, competitive programming is a narrow form of programming. As we were saying, in software engineering there is this analogy (Brooks's law): if you have 10 engineers and you multiply the number of engineers tenfold, you're not going to get the program written 10 times faster. If you have 10 women, you're not going to give birth to a child in one month, or something like this, right?

Breandan: So there are problems that don't scale with the compute you throw at them. Compute gives you the ability to saturate problems much larger than we were able to deal with before. But there are bottlenecks of other kinds in the algorithm design space that constrain how much bang you're going to get for your buck, in some sense. If you throw a GPU at a problem that's inherently serial, you're not going to get any faster on that serial chain of things you need to do.

Michaël: I guess the examples in AlphaCode were programs that were maybe 30 lines long. And if you're good at programming, maybe you factorize your code so each function is about 30 lines, so you don't need a much bigger context for a single function. But maybe you could argue that some things need 10 different functions, and the model wouldn't be able to see all 10 functions to solve the problem. So maybe you need a bigger context window or something.

Differences Between Competitive Programming and Real-world Software Development

Breandan: The context window is a big thing that people are still figuring out how to work with, because these models have a fixed attention width, right? If you can fit your problem into that width somehow, then it works really well. Competitive-programming-style problems fit into a very small space of characters, in some way. The space of software that actually gets written in companies and things like this is a different kind of space. And so the claim is that you need humans communicating and collaborating on a large software project, and it's not clear if there's a better way to do this right now than just having a human in the loop.
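The fixed attention width Breandan mentions is a real architectural constraint: standard self-attention compares every token with every other token, so compute and memory grow quadratically with sequence length. A rough sketch (a simplified cost model, not any particular model's numbers):

```python
def attention_comparisons(seq_len: int) -> int:
    """Pairwise token comparisons in one standard self-attention layer:
    every token attends to every token, giving seq_len**2 comparisons."""
    return seq_len * seq_len

for n in (1_000, 2_000, 8_000):
    print(n, attention_comparisons(n))
# Doubling the window quadruples the attention cost, one reason
# context lengths are capped and long codebases don't fit.
```

So fitting a whole company codebase into one attention window is not just a matter of turning a knob; the cost curve works against it.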

Developers At The Cutting Edge Are Already Becoming More Productive

Michaël: Then the question is how much of the human's job is being automated every year. Imagine right now some Google engineers gain 6% on the time between getting a ticket and pushing the fix to production. And we're in 2022, right? So maybe in two years it's going to be 10%, then 15%. Do you think there's a possibility of human programmers doing maybe 1% of the job? And if you don't use Copilot, you're kind of out of the loop, you don't get hired?

Breandan: Yeah, this is another interesting point. So there's this myth of the 10x engineer, I don't know if you've heard of this. But essentially, some people are more productive than others. They write more code, right?

Programmers Are Not Islands

Breandan: The metric that turns out to really matter, in terms of what software runs on hardware and what software is productive, is whether the software can scale in the community as well. There are community aspects of scaling that Google was able to tap into: they were able to build platforms for advertisers and things like this, but all of this requires an enormous amount of human collaboration. One human, a Linus Torvalds-style character, can make an impact in a certain way, but programmers are not islands.

Breandan: In today's connected world, there's an ecosystem with an enormous amount of software complexity that you have to deal with: running on operating systems and things like this. All of it requires an enormous amount of protocol engineering, and interpretability is the name of the game there. If you have something that's a black box that nobody can interface with, say a 10x engineer who's very focused on one area but can't communicate their ideas with other people, it turns out organizations have less need for those kinds of engineers.

Challenges in Measuring Programmer Productivity and in Evaluating AI-generated Code

Michaël: So basically you're saying: suppose you're very good at coding, but not able to think about the large-scale picture of how your system is going to be integrated into the company's software or into open source, and you're not able to talk about it or even write comments about how the thing works. But I think Copilot writes comments for you sometimes, because it's trained on all the GitHub data. So is your claim that even if it's better at coding than humans, there will be some human side of explaining it that will not be solved so soon?

Breandan: Yeah, I mean, I think it's very hard to put a single metric on programmer productivity. It used to be lines of code: how many lines of code do you write? But it turns out people who churn out a lot of lines of code tend to churn out a lot of bugs as well. And this is also true for these machine learning models, at least right now, in the sense that the generated code is subtly broken in some hard-to-understand way unless you actually test it. So they're trying to make progress on executing this code as well.

Breandan: There are things like neural execution-style algorithms, one of them developed here in Montreal, the IPA-GNN, or instruction pointer attention graph neural network: a lot of fancy algorithms that let you design these in a way that is not just text. Right now it's just generating text, but what they want to do is generate something that can be executed, where you know how it's going to behave when it executes. So they're working in this direction, but I think you'll see it play out over maybe decades.

AI Progress Accelerating Workflow in Various Domains, Cooking Robots

Breandan: I think they'll show progress, and you'll see people accelerating their workflow in lots of different programming domains and creative visual arts domains. Recently there's an AI kitchen here where they have robots making the food for us. So I subscribed to the service; it's called Jasper here.

Michaël: So the AI makes food for you?

Breandan: Yeah, yeah, essentially they have different recipes that it creates, and the robot arms, you can actually go here in Montreal and see them chopping. Anyway, I digress. The main thing is that there are certain kinds of algorithms these neural networks are very well suited for, things like sensory processing and decision-making procedures in certain domains. And I think you'll start to see a Cambrian explosion of these applications come out. I'm not so worried about the hypothetical scenario where machines somehow become corrupted and turn against us. If anything, I think it will be our own fault in some ways if humans turn against each other.

Unintended Consequences and Reward Misspecification

Michaël: But we've mentioned things like reward misspecification, or how to correctly specify things. The problem is not that they will turn against us by default, but mostly that humans will not be able to write the reward function well enough.

Breandan: Yeah, I think that's a valid concern for sure: we have a poor track record. In fact, this is a very good point. When human beings are writing software or designing different algorithms, we're in some way short-sighted, in that the software we write tends to have a lot of errors. Or when we try to design laws or something like this, we forget about certain edge cases. If you go on Wikipedia, there's a list of unintended consequences, where, say, people want to kill the snakes, so they give people $10 if they hand in a dead snake, and then people start breeding snakes to get the $10.

Using Symbolic Reasoning To Help With Specification Design

Breandan: This kind of misspecification has cropped up in many different areas: unintended consequences. But this is where I think we can use a lot of different tools from computer science to help us. Some people have this hypothesis that it's simply a matter of getting the machines to be aligned with our values, in the sense that all we need to do is communicate our intent, and then get the machine to interpret that and execute it. But it turns out that people in many ways, like you said, are myopic. They don't see the consequences of many things far in the future.

Breandan: This is where we can use tools like symbolic reasoning to augment our ability to design problems, so that instead of programming exactly how to do it, we can tell these tools what we want. And through this process of co-design, you tell it what you want, it refines your specification, and you go back and change that. Then we can arrive at a more well-behaved system. It may not be correct, in the sense that it's hard to define what it means to be correct in all cases. And I think there's a lot of opportunity there for leveraging other kinds of algorithms. So you say AGI is all we need, right?

Breandan’s position on AI Alignment: Symbiosis Instead of AI Servants

Michaël: So there's one sign which is "scale is all you need". This one is "AGI before 2030". And another one is "alignment is what you need", because the sentiment behind alignment is that if you only do scale without doing alignment, you will end up with something that is very competent but maybe not aligned. So alignment is actually what you need if you don't want to die. That's that sign.

Not Treating Computers As Servants

Breandan: Yeah, that's very interesting. I see alignment in two ways. One is that we align the computers to our will. In that sense, the alignment premise is the belief that machines should be bent to our will, a kind of servant relationship with a human being: we tell it what to do and it does that thing, a very command-and-control style interface.

Michaël: So yeah, I guess one of the earliest definitions of AI alignment, by a guy called Paul Christiano, who used to be at OpenAI, is that an AI is aligned with an operator H, a human, if the AI is trying to do what the human wants it to do. There's the thing about trying: it doesn't have to do it perfectly, because otherwise it would be too hard, but it tries to do what the human wants it to do. And if this is the case, then we say the thing is aligned.

Breandan: So maybe here’s a question for you. Do you think that that attitude that the human is right and the machine just needs to conform, is that the only or right direction? Because, I mean, it seems to me that there’s a whole spectrum where the human could get some feedback about how they might change their constraints in order to become more in line with what the machine can do.

Using AI Tools Will Involve Some Level Of Interaction

Breandan: So there are two things. One is what's realizable by a machine, what it can do. And then there's the feedback cycle where it says, well, maybe you're wrong. In fact, we have this kind of argument all the time with compilers when you're writing a program. It's like: I want to do this. You can't do this, because I don't know how to do this other thing. So it comes back to you and says you need to change this. Sometimes you win: you say, ah, here's how to do this, and the compiler says, okay, I can do this. And sometimes you lose, in the sense that there's no way to do the thing you say, because you find out it's ill-posed. Your constraints, as you've specified them, cannot be realized by a machine.

Breandan: And so there's a bidirectionality in that back and forth. The machine says, I can't do this. And you say, here's how. And then it says, no, this is not what you really want, because it violates this constraint that you set. We have a very hard time fitting context into our brain; sometimes we forget about some things. There's this idea in software engineering that kind of relates to this, the N+1 problem: once you have a feature, you want to add a new feature, and the combinatorial interaction between that new feature and all the features you developed previously is just very difficult to think about. So we think about a small subset of those interactions. But it turns out that once you start adding lots and lots of features, sometimes they're self-contradictory. And it's the same thing for constraints.

Symbiosis And Interactions Between AI and Human Systems

Michaël: So are you arguing for some kind of symbiosis between AI and human systems where basically the human will always have some output from an AI saying like, oh, the line of code you’re trying to write is not really correct. Or like, you’re telling me to do this, but I think you forgot about this constraint about your kid. And so maybe you should ask me this instead.

Breandan: Yeah, well, I think there are lots of opportunities for different interaction models. There's this idea that artificial intelligence and human reasoning is like an argument in some way, kind of adversarial. But there's a lot of room for a collaborative back-and-forth in that process. If you can use this in a hybrid fashion, so that humans and machines collaborate, where neural and symbolic things are kind of married together, you're going to see a lot of interesting applications at the intersection of those two fields. And so it's not all about the human being right and the machine needing to adapt or conform. I think you're going to have an interesting hybrid of the two.

Michaël: And so the AI will kind of emerge and have its own goals and tell the human about what it actually wants. And it will be like two people collaborating.

AI-Human Collaboration and Convergent Goals

Breandan: Yeah, I think so. That's right. I think this interaction between human and machine is going to have lots of nice instantiations: in the form of augmented reasoning tools for people when they're designing algorithms, in terms of machines that interact with us collaboratively, and in terms of the whole ecosystem of human-machine interaction. And it will be woven into the interfaces we use in a nice, natural way.

Breandan: But whether this is something we should be very concerned about, because it will radically change how humans interact with technology, or how we interact with each other in many ways, is a good question that people need to think very carefully about. I think a lot of the dialogue that happens around this is framed as: should we fear or worship these machines, right?

Breandan: We should kind of glorify this intelligence, or we should kind of say it's the enemy, right? But in some ways, I think the goal here is to understand it, to come to a common understanding of each other. The human should try to adapt in some way to the machine's understanding of the world, but the machine should also adapt to our goals. And so in alignment, the two things are not orthogonal; there's some combination, some linear combination, of the human's intent and the machine's capabilities. And they will converge in the limit.

Breandan: And you'll start to see that humans have a lot more unrealized ability than we give them credit for. People who have disabilities or things that hold them back can be helped by machines in some way. In some way, we all have limitations, whether you're deaf or blind or something like this, in practical ways, but also in ways that are more difficult to verbalize. We're not all very good at chess. So there are different ways that humans can improve in domain-specific areas, and I feel this is the perfect fit for machines to augment our abilities.

AI’s Vastly Superior Intelligence and The Limitations of Human Intelligence

Michaël: I guess the main difference between AI and humans is that AI can be vastly smarter than us. They can out-compute us and have infinite memory, while we're bounded by our brain, right? So I think this will look more like humans and animals: it's very hard to align all humans with all animals, because it's a very different kind of intelligence, right? And you've mentioned interpretability before. If we don't make those AI models that are much smarter than us interpretable, it's going to be very hard to communicate; it's going to be beyond human comprehension. So yeah, I agree that in the short term, maybe this decade or the next one, this kind of collaboration is going to be possible. But as humans, we're just bounded by our hardware. Maybe you can have brain-to-brain interfaces or those kinds of things, but I think there's a limit to human intelligence, and it's just going to be harder and harder to communicate with AIs.

People’s Integration of AI Technology With Moral and Ethical Understanding Drives Its Usage

Breandan: Yeah, yeah, there are a lot of open questions for sure, and I'm glad somebody's thinking carefully about them. And I think there's reason for optimism as well. People tend to focus a lot on only the positive or only the negative aspects of a new technology. But very often, I think it falls to the people who are using this technology: how they want to use it, and how they integrate it with their moral and ethical understanding of the world, drives how the technology is used. Even things that are very, very dangerous can be put to use. There's often this double-edged sword to a lot of technology, where people use technology for different purposes than the creators originally intended, whether that's positive or negative.

AI Technology Can Become Unbounded and More Powerful Than Humans

Michaël: I guess the main difference with this AI technology is that there's the possibility of self-improving AI, right? And because it's unbounded in terms of intelligence, it can reach a point where it's more powerful than humans, unlike, say, a weapon or a nuke.

Breandan: A nuke is kind of dangerous, right?

Michaël: But a weapon doesn't enable me to outpower all other humans. If you have a technology that's unbounded in terms of intelligence, then it's not just something that can be misused. It's something that can self-improve and become more powerful than humans, without any option for you to control it. I guess the main crux here, the main disagreement, is maybe you don't believe in self-improving AI that can become vastly smarter than humans in a few days or a few weeks. Maybe you think it's going to be slower, softer, more subtle. Do you believe in self-improving AI in a few days?

AI Timelines And The Singularity

Self-Improvement in AI is Possible but Its Impact is Uncertain

Breandan: Well, I think it's certainly possible to have self-improvement, and it's not impossible that it could prove harmful to humanity as a whole. But whether it's something we need to be afraid of, or whether we "welcome the AI overlords" in some way, or whether it's anathema to human civilization... I think there's lots of room for unimaginable things that could be unimaginably good or bad or anywhere in between. And it's very hard to say with certainty, as some people seem to with this conviction, that AI is destined to reach a singularity that's going to either eclipse or absorb human consciousness. It's very unclear to me whether any of that is going to happen. I think there will be some interesting hybrid, but it's hard to say what it will lead to.

Michaël: But this sign says AGI before 2030. Is this basically your timelines or do you have longer timelines? How many years before AGI?

Passing the Turing Test by 2025 Seems Unlikely

Breandan: Before AGI, yeah. So I actually made a bet with a friend of mine in 2016 about whether the Turing test would be passed; I think we gave it a decade, so 2025 or 2026. I think in the end I'll win that bet, because I predicted it would not be passed by 2025-'26, and I look to be on track there. AI output is getting harder to detect, but it still has this subtle fever-dream quality. If you look at the images or the text, there's still a small gap, right? I think eventually the Turing test will be passed in different senses: audio, visual, we were talking about the taste test. But there are different kinds of AGI, and everybody has a different definition of AGI.

Breandan: So 2030… it’s very hard to put some sort of bet on this because it’s like, well, people could interpret this however they want, right?

Michaël: Maybe you can give whatever definition you think is most operationalizable. You can just pick a specific criterion for AGI, or something even weaker than that.

Human-Level AGI Could Be Reached by 2030

Breandan: Yeah, well, in lots of domains I think 2030 is reasonable. But what you might mean by AGI is superhuman performance on everything.

Michaël: Okay, so I mean human level. But in my model, when you reach human level, you're basically able to double your hardware, be 2x human, and then reach superhuman quite fast. So for me those are kind of the same, in the sense that I see a short gap between human level and superhuman level. Though I guess other people have different views on this.

All Exponential Curves Taper, and AI Scaling Will Reach an Inflection Point

Breandan: Right, yeah, I don't know. It depends on who you ask, but at that point you're splitting hairs. I think it's like this curve where human level is maybe 75% of the way there, but it's not going to just keep growing exponentially. All exponential curves taper. Whether that's at twice human ability or three times human ability, somewhere along the way you reach an inflection point where these algorithms don't have the same scaling, the same rate of return in the limit. So you end up with tasks where there's no free lunch, in some way, right? You're not going to be able to just scale to your heart's content and have everything get better in every domain all at once.

Michaël: Why not?

No Free Lunch Principle Implies Trade-offs In Optimization

Breandan: Because there's this information-theoretic principle, no free lunch: you can improve in certain domains, but if you look at the Pareto-optimal front across all the objective functions you care about, at some point you can't increase one without decreasing another.
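Breandan's point about the Pareto-optimal front can be sketched concretely. The snippet below is a toy illustration (the scores and the `pareto_front` helper are invented for this example, not from any real benchmark): among the non-dominated points, improving one objective necessarily costs you on the other.

```python
# Toy illustration of a Pareto front over two objectives (higher is better).
# The scores and helper below are made up for this example.

def pareto_front(points):
    """Return the points not dominated by any other point."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# (task_A_score, task_B_score) for five hypothetical models:
models = [(0.9, 0.2), (0.7, 0.7), (0.2, 0.9), (0.5, 0.5), (0.6, 0.4)]
front = pareto_front(models)
# front == [(0.9, 0.2), (0.7, 0.7), (0.2, 0.9)]: on the front, raising
# one score necessarily means giving up some of the other.
```

Points like (0.5, 0.5) are strictly beaten on both tasks and drop out; what remains is exactly the trade-off curve Breandan is describing.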

Michaël: But if you only care about economically viable tasks, whatever humans can do that counts in GDP, that's a specific domain, not all the different tasks in the universe.

On the Limits Of Measuring AI Progress Via Economically Viable Tasks

Economically Viable Tasks Are A Limited View Of AI Potential

Breandan: Yeah, yeah. If you look only at the light cone of all future economic value, you end up with a narrow perspective on what we could use these things for: we're going to train them to essentially print money, or print time on the cloud, or something like this. What we really should be doing is somehow using them to measure how much knowledge has expanded. This might be a controversial thing to say depending on your socioeconomic perspective, but money isn't everything. Money is one way to measure the economic productivity of human beings, but it's a poor way to measure what we can do with these algorithms. And if we can use them to discover new truths, then-

Michaël: But those truths, if they're "valuable" in quotes, can be used to build new technology, right? Or are you saying that there's some truth that is intrinsically good, but not useful?

Breandan: Yeah, I think that's... So there are different ways you could measure it, and there's no universal baseline. But a good way to start might be whether quality of life improves across a large subset of the population. So you could look at it from a utilitarian point of view. I don't really agree with that as much; it's a personal thing, everybody has their own philosophy in life. But I think utilitarianism doesn't reflect everything that you, or some people, might want from their personal life...

Metrics Can Be Gamed And May Not Capture Rich Aspects Of Life

Breandan: I guess what I was trying to say is that if you have a metric, then that metric can be gamed in all sorts of ways. Once you optimize for it, it no longer remains a useful metric. It's only useful if you keep it secret and don't use it to perform any actions in the world, because once you start to do that, you propagate that value in some way. It's like having an edge in the stock market: the only way to keep that alpha, that predictive value, is to not use it.

Michaël: Wait, how does that translate to humanity as a whole, and to not being utilitarian?

Breandan: So I think this has... Say you decide this is the benchmark, this is what the machine needs to do. Let's say you use... What's the metric that economists use to measure quality of life in different countries?

Michaël: Quality-adjusted life years, QALYs.

Focusing Too Much On Numerical Metrics Can Lead To Misspecifications

Breandan: Okay, so let's say it's that. Then you can optimize for that, but I'm sure it would not capture all the rich aspects of life as we know it. It would capture certain key economic indicators, but it's just... it's an impossible thing. You can't make all the people happy all the time; you can make some of the people happy some of the time. And there's a balancing act there: if you focus too much on these numerical metrics, you can optimize for, as you were saying, these misspecifications. And then you...

Michaël: So are you basically saying that we will not program AIs to do only economically valuable tasks because it's not in our self-interest? That we won't get this kind of singularity because we will not program our AIs to do those things?

AI Will Be Used For A Rich Diversity Of Tasks, Not Just Economically Valuable Ones

Breandan: I think in some ways it's kind of a self-defeating thing, right? If you program it to do an economically useful thing for you, then everybody else can do that thing too in some way, which is a good thing, right? Open science, open source. If you have the energy, you can train something from scratch. There are certain physical bottlenecks that I could imagine would also play a role. But in the interest of just predicting the future, I think there's a huge, rich diversity of things we could use AI for.

Federated Learning Scenario With Multiple AI Agents

Breandan: And the idea that there'll be one AI that's kind of the overlord, that assigns all these different tasks, a universal agent acting to fulfill some objective that one person can write down and everyone can agree on... I think that's just sort of a fantasy. There'll be so many different agents, all working towards their own individual goals. You have computers in your pockets right now, and it's not unreasonable to imagine AI assistants that we could train for our own personal use.

Breandan: And there we all have our own objective functions. So it's sort of like federated learning, a scenario where everybody can share some parts of the data.
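The federated scenario Breandan gestures at can be sketched in a few lines. This is a minimal FedAvg-style toy, not a real framework: each agent keeps its data private, takes one local gradient step on a one-parameter model, and only the resulting weights are averaged by a coordinator. All names and the one-parameter "model" are illustrative.

```python
# Minimal FedAvg-style sketch: agents train on private data and share only
# model parameters. The one-parameter "model" and names are illustrative.

def local_update(w, private_data, lr=0.1):
    """One gradient step minimizing the mean of (w - x)^2 over the data."""
    grad = sum(2 * (w - x) for x in private_data) / len(private_data)
    return w - lr * grad

def federated_round(global_w, all_agents_data):
    """Each agent updates locally; the server averages the shared weights."""
    local_ws = [local_update(global_w, data) for data in all_agents_data]
    return sum(local_ws) / len(local_ws)

agents = [[1.0, 2.0], [3.0], [2.0, 2.0, 2.0]]  # three private datasets
w = 0.0
for _ in range(50):
    w = federated_round(w, agents)
# w converges toward the average of the per-agent optima
# (local means 1.5, 3.0, 2.0, so w -> about 2.17), with no raw data shared.
```

The point of the sketch is the information flow: each agent's objective stays local, and only parameters cross the boundary, which is the "share some parts, keep the rest" arrangement he describes.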

On Regulating AI, Slowing Down AI Development And The Singularity

How Fast Will The First AI Be Compared To Others

Michaël: I guess the main disagreement here is how far ahead the first AI will be. So imagine OpenAI is working privately on GPT-4; maybe they're already working on GPT-5, and they'll be using GPT-4 soon. If they have an edge of, say, one year, maybe that edge is too big for the other AIs to catch up and help regulate the first one. So I guess the main disagreement is how far ahead you think the first AI will be compared to the other ones, and when the first one becomes capable of self-improving.

Michaël: Sorry, I'm talking about self-improvement again, but I think it's going to be hard to self-regulate. So yeah, I think the main disagreement is: do you think 7 billion humans plus trillions of AIs are able to regulate the front-runner or not? I'm not entirely sure it's possible. I would give it, let's say, a 25% chance that federated regulation, a multipolar scenario with a bunch of AIs, works, and maybe a 75% chance that the front-runner is able to just escape and take control.

Extending AI Timelines By Slowing Down Science

Breandan: Yeah, it's certainly something worth thinking about. But it's very hard to argue for a specific action that people should take in order to prevent this. There's this idea that you could slow science down, that everybody should just slow down so that AI timelines get extended and we can get a better grasp on certain problems, and maybe prevent these singularities from happening, or prevent one agent from taking a lead that everybody else has difficulty catching up to, things like this.

Regulating Compute Usage In AI Development

Michaël: You could just regulate it; we're talking about the rule of law, right? You can regulate the amount of compute companies use. If you use, I don't know, more than 10 to the power of 15 flops for a training run, then maybe you need to make your model interpretable, build in alignment features, safety features, have some guarantees on the metrics you report. There are mechanisms humanity has implemented before to make sure a technology is safe, and to make sure all countries implement this kind of thing. It's not something very vague: there are ways to limit capabilities progress relative to safety measures.
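As a back-of-the-envelope check of where such a compute threshold would bite, one can use the common scaling-laws approximation that training a dense transformer costs roughly 6·N·D floating-point operations (N parameters, D training tokens). The numbers below are illustrative: the 10^15 threshold is just the figure from the conversation, not a real regulatory number, and the run is roughly GPT-3-scale.

```python
# Back-of-the-envelope compute audit, using the common approximation that
# training a dense transformer costs about 6 * N * D FLOPs (N parameters,
# D training tokens). The threshold is the figure from the conversation,
# not a real regulatory number; the run size is roughly GPT-3-scale.

THRESHOLD_FLOPS = 1e15

def training_flops(n_params, n_tokens):
    return 6 * n_params * n_tokens

run = training_flops(n_params=175e9, n_tokens=300e9)  # about 3.15e23 FLOPs
needs_review = run > THRESHOLD_FLOPS
# A run of this size exceeds a 1e15 threshold by about eight orders of
# magnitude, so such a cutoff would flag essentially every large run.
```

The arithmetic also shows why the exact threshold matters so much for a regulation like this: a modern laptop exceeds 10^15 total FLOPs in seconds, so any practical cutoff would sit many orders of magnitude higher.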

Heavy-Handed Regulation Might Lead To Unintended Consequences

Breandan: Yeah, I think it's worth trying. But regulation and these kinds of measures, just by being applied, can have unintended consequences of their own. There's the argument that if we don't do it, someone else will. And there are other reasons to proceed with caution around heavy-handed regulation of this form, because it's just very, very difficult. I think we should really drive home the question of whether it's possible at all: people say "if it's possible, it will happen", and that "if" is the big question. I think maybe people have jumped to a very quick "if" and then given a whole bunch of reasons why yes, yes, it's possible.

On The Physical Constraints Of An AI Singularity

Breandan: And then they just forget about that whole question and proceed to what to do, assuming the premise that the singularity is happening and human beings will be wiped off the face of the planet and so on. I would proceed through that reasoning with a little more caution: how could it be possible? And explore the different branches of that initial "if", the premise. So yes, there are physical constraints. Yes, there are moral and ethical constraints. And whether we need to impose other measures like hard computational limits, well, just the logistics of who would implement those regulations and how they would enforce them is a whole other bag of issues.

Can Nuclear Regulations Inform AI Regulations

Breandan: If you just look at other technologies, how well do we fare with nuclear and other things like this?

Michaël: I think with nuclear we've done pretty well. We've basically managed to reduce the number of nuclear weapons year after year; I think it's well regulated. As for "if the US doesn't do it, someone else will": we've managed to ban the export of some GPUs to China, like the H100 or A100. So if the US has an edge over other countries and is able to ban the export of some hardware, it might have six months or a year of lead time to build safety features in the meantime.

Breandan: Yeah, that's an interesting direction to go in. Let's say the US is successful in imposing these economic constraints, that it can impose an embargo on certain kinds of products and enforce it. Then five or ten years down the line, we've successfully done this, but the climate has changed in a completely different way, and you end up with regulatory capture or other kinds of negative consequences. Meanwhile, other countries have divorced their supply chains in some way and built up this capability themselves.

AI Development: A New Cold War Or Arms Race?

Breandan: Then you end up with a new Cold War, or a new arms race in some sense. That's one possibility. Another possibility is that we're successful in applying these regulations and keep access to this technology, but instead of the singularity happening somewhere else, it just happens in the US. And if the geographic location where it happens isn't the real concern, then it shouldn't make a huge difference to the kind of measures you take to restrict who gets access and who is able to use this thing. So in some ways it's similar to nuclear, but in some ways it's very different from a policy perspective, because if you get a bunch of people in a room, it's very unlikely they'll be able to build a nuclear weapon, because you need a whole bunch of resources. And-

Michaël: Same for a training run that costs $10 million.

AI Development Requires Clear Indicators of Resource Usage

Breandan: Yeah. There are different kinds of constraints there, but it'll be pretty clear if somebody is using that much energy, whether it's used to enrich uranium in a centrifuge or to train some large language model that's very good at some task. There will be clear indicators when it's happening.

Michaël: I have no idea how many GPUs Google is using for a training run. Maybe they can use older servers. Maybe you can see with a thermal camera from above that they're using a lot of GPUs, but compared to their normal GPU usage, I don't know.

Would A Laissez-Faire AI Economy Lead To A Takeover

Breandan: It is similar in some ways, but it is different from other technologies. So I'd go back to this question: let's say there's no regulation, a laissez-faire kind of AI economy, and everybody gets to do whatever they have the resources to do. Is that going to enable someone to take over the world in some way?

Breandan: That's unclear to me, because even with very organized, capable, efficient systems, there are checks and balances in place. Even if you obtain a competitive advantage in the field of AI, then just by the nature of humans, we're social creatures, we tend to share things. It's very hard to keep to yourself technology that's been developed in the open, in academia, for a long time, because everybody is so in tune with what's happening.

Self-Regulating Systems Might Be More Robust

Breandan: If some person or company or organization is able to edge out a competitive advantage just with the tools we have today, then I think it will spread, and you'll end up with a sort of self-regulating system. Systems that have regulation imposed on them through some protocol, where you need a centralized organization to impose the regulation, tend to be less robust than systems with a self-regulating capability.

Breandan: Imagine everybody agrees that the safest way to divide a cake is: I cut it and you choose. Everybody goes, oh, yep, there's no way to game that protocol. So you can implement these kinds of things in an ad hoc and very brittle manner, with some regulatory agency overseeing the progress of AI, or you can rely on the ingenuity of people to build self-regulating systems.
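The cut-and-choose protocol he mentions is easy to make precise. Here is a toy sketch, assuming a perfectly divisible cake that both players value the same way: since the chooser always takes the larger piece, the cutter maximizes their own share by cutting evenly, and no referee is needed.

```python
# Sketch of cut-and-choose: the chooser takes the larger piece, so the
# cutter's best strategy is an even split. Assumes a divisible cake that
# both players value identically.

def cut(cake, fraction):
    """Cutter splits the cake into two pieces."""
    return cake * fraction, cake * (1 - fraction)

def choose(pieces):
    """Chooser takes the larger piece; the cutter keeps the smaller one."""
    return max(pieces), min(pieces)

# An uneven cut only hurts the cutter:
chooser_share, cutter_share = choose(cut(1.0, 0.7))  # 0.7 vs about 0.3
# An even cut is the cutter's optimum, with no referee needed:
fair_chooser, fair_cutter = choose(cut(1.0, 0.5))    # 0.5 each
```

The incentive structure, not an external enforcer, is what keeps the outcome fair; that is the "self-regulating" property Breandan is pointing at.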

People Will Develop Reasonable AI Solutions Once They’re Comfortable With It

Breandan: And I think once people get used to the idea of having AI, and they're not afraid of it and not worshiping it, they'll come up with very reasonable solutions for how to deploy this technology and how it can be used. If one party has a competitive advantage, then in the near term there may be one- or two-year gaps in technology cycles, like there are in other domains, hardware, for example. Right now, Europe has the edge there: they have a very efficient process for manufacturing lithography machines that can print circuits on silicon. But the idea is that in a well-functioning economy, and in the economy of ideas we have in the scientific community, sharing emerges naturally. It doesn't have to be imposed by some government organization or overseeing body.