COMPLEXITY: Physics of Life

Melanie Mitchell on Artificial Intelligence: What We Still Don't Know

Episode Notes

Since the term was coined in 1956, artificial intelligence has been a kind of mirror that tells us more about our theories of intelligence, and our hopes and fears about technology, than about whether we can make computers think. AI requires us to formulate and specify: what do we mean by computation and cognition, intelligence and thought? It is a topic rife with hype and strong opinions, driven more by funding and commercial goals than almost any other field of science...with the curious effect of making massive, world-changing technological advancements even as we lack a unifying theoretical framework to explain and guide the change. So-called machine intelligences are more and more a part of everyday human life, but we still don’t know if it is possible to make computers think, because we have no universal, satisfying definition of what thinking is. Meanwhile, we deploy technologies that we don’t fully understand to make decisions for us, sometimes with tragic consequences. To build machines with common sense, we have to answer fundamental questions such as, “How do humans learn?” “What is innate and what is taught?” “How much do sociality and evolution play a part in our intelligence, and are they necessary for AI?”

This week’s guest is computer scientist Melanie Mitchell, Davis Professor of Complexity at SFI, Professor of Computer Science at Portland State University, founder of ComplexityExplorer.org, and author or editor of six books, including the acclaimed Complexity: A Guided Tour and her latest, Artificial Intelligence: A Guide for Thinking Humans. In this episode, we discuss how much there is left to learn about artificial intelligence, and how research in evolution, neuroscience, childhood development, and other disciplines might help shed light on what AI still lacks: the ability to truly think.

Visit Melanie Mitchell’s Website for research papers and to buy her book, Artificial Intelligence: A Guide for Thinking Humans

Follow Melanie on Twitter.

Watch Melanie's SFI Community Lecture on AI.

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast Theme Music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

More discussions with Melanie:

Lex Fridman

EconTalk

Jim Rutt

WBUR On Point

Melanie's AMA on The Next Web

Episode Transcription

Michael:  Melanie Mitchell, it is a totally intimidating delight to have you on Complexity Podcast.

Melanie:  I hope I'm not too intimidating!

Michael:  Quite friendly actually! I like to start these conversations by inviting you to talk about how you got into science in the first place, to give us a little context about the curiosity and the passion that drive your research and how you got your start.

Melanie:  Well, so when I was very young, I always really liked logic puzzles. You know, those kinds of puzzles that you get in books that you can solve, and I just spent a lot of time doing those. And that was a lot of fun. My dad was a computer engineer, back in the days when people were building mainframe computers. The computers had very little memory and you programmed them in Fortran, and so I learned a little bit of programming from him. He built a computer in our house, which was kind of a strange thing to do in those days. Now it's more normal. But it was a big thing.

But I really got excited about science, I think, when I took physics in high school and we covered Einstein's theory of relativity, and that was just completely mind-blowing. Just the fact that you could gain an understanding of the world just by thinking about stuff, just by thought experiments, was amazing to me. And that was really exciting. I also read a book by an astronomer named Harlow Shapley. I can't remember the book's title, but it was about how much knowledge you could gain just by looking at a star through a telescope, what that light could tell you about the star and about the universe. And I decided that I wanted to become an astronomer and a cosmologist. So I started out in college majoring in physics. But it turned out I really didn't like it very much. I felt very intimidated by it, and I really did not feel like what I was studying was calling out to me.

So I ended up switching my major to math, and I did research in astronomy, actually, which was a lot of fun. But I was a little bit lost as to what I really wanted to do after I gave up the physics idea. And then a little bit after college, I read Doug Hofstadter's book Gödel, Escher, Bach, which got a lot of people into computer science, including myself, and I decided I wanted to go to graduate school in computer science and work for Hofstadter. So I did. I ended up convincing him to take me on as a graduate student. He was moving to the University of Michigan at the time, and so I went there. And that kind of started a whole cascade of lucky events that got me here to the Santa Fe Institute.

Michael:  So, when you gave a presentation on the history of artificial intelligence research here at SFI for the symposium last year, you talked about how the way that we have researched AI has changed along with the way that we have understood intelligence itself, and that these two things are really intimately related: that we can't just deploy engineered intelligence without understanding it first. And so I'm really curious, because over your career you've been involved directly in a lot of these projects. I think maybe the place to start is with Copycat, this program that you developed with Hofstadter, and the implicit model of intelligence that was built into that, and then how that is different from other strains of AI research.

Melanie:  Absolutely. So people sometimes ask me how I got into complex systems. And the way I got into complex systems was through my work with Hofstadter, because his view of intelligence was that we had to model it as an emergent system that came out of many of what he called sub-cognitive activities. He didn't mean activities in the brain, although eventually it would have to map onto that, but he meant very much a kind of complex system where you have all these different agents that are interacting and sharing knowledge and competing with each other. And out of this, you get some kind of concepts and representations of situations.

So his idea was to try and model this in a computer program. And as for the domain that he chose: first he was interested in these visual abstraction problems called Bongard problems, which I can talk about later if you want. But then he realized that would be really hard to do. So he made a version of them that was much simpler, using letter strings to make analogies. An example is: if the string A,B,C,D changes to the string A,B,C,D,E, what should the string P,Q,R,S change to? And a lot of people would say, okay, P,Q,R,S,T. This seems like an incredibly trivial problem. But it turns out that you can make analogy problems with letter strings that require quite a bit of creativity and insight into recognizing patterns. So Hofstadter's idea was not to build a program for the purpose of making letter string analogies, but rather to use this domain as a way to explore his architecture for intelligence, his ideas about how perception and cognition are related.

So, my assignment when I showed up in his research group was, "Implement this!" He had a paper that he had written, kind of a manifesto, about his ideas about how this should happen in a program, but it was less thorough than I think he thought. So it really took quite a bit of thinking and working out ideas about how to develop a program that adhered to these ideas about intelligence in general, and that would then display how it worked on the letter string domain without getting so specific that it couldn't generalize. So that was Copycat. It solved some problems in the letter string domain, but hopefully did it in a very general way that could then be applied to other domains. And in fact, it has been.

Michael:  So in your book you talk about where Copycat started to fail. One of the examples was A,Z,B,Z,C,Z,D is to A,B,C,D, as…and then another letter string. You talked about how it wasn't able to abstract. It wasn't able to come up with new concepts on the fly.

Melanie:  That's right. Yeah. So Copycat had a set of concepts that we gave it that were relevant to the letter string world: successor in the alphabet, predecessor in the alphabet, grouping of letters, all kinds of different concepts that it would try to apply to any problem it was given. But one thing that people do is that they come up with these temporary concepts that they use all the time. So an example in the letter string world is: if A,B,C changes to A,B,D, you might say C changes to its successor. What does A,C,E change to? And people will count and say, in A,C,E each letter is separated by two letters, and so you should say A,C,G. Okay, so that idea of double successorship is a concept that we can easily create. We can take the concepts we have and extend them in this kind of way, but Copycat didn't have that ability. But I have to say that, you know, Copycat did work on certain problems, but the most interesting aspects of it were its failures, because that's what really taught us how subtle this whole domain is, how subtle the problem of perception is, and what the program was lacking.
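
To make the letter-string domain concrete, here is a minimal sketch in Python (emphatically not Copycat itself) that hard-codes the rules from the examples above; the function names and structure are invented purely for illustration.

```python
# A minimal sketch of the letter-string analogy domain, NOT Copycat itself.
# It hard-codes the rule induced from "abc -> abd" ("replace the last letter
# with its successor") and applies it to new strings; discovering which rule
# applies is the hard part that Copycat actually addressed.

import string

ALPHABET = string.ascii_lowercase

def successor(ch, step=1):
    """The letter `step` places later in the alphabet (no wraparound handled)."""
    return ALPHABET[ALPHABET.index(ch) + step]

def last_letter_successor(target, step=1):
    """Rule from 'abc -> abd': replace the last letter with its successor."""
    return target[:-1] + successor(target[-1], step)

# "If abcd changes to abcde, what does pqrs change to?"  (append the next letter)
print("pqrs" + successor("s"))                 # -> pqrst

# "If abc changes to abd, what does pqrs change to?"
print(last_letter_successor("pqrs"))           # -> pqrt

# "If abc changes to abd, what does ace change to?"  People notice the letters
# in "ace" step by two, so they extend the concept to a *double* successor:
print(last_letter_successor("ace", step=2))    # -> acg
```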

Michael:  I guess we could take this in one of two directions. Maybe we should take it in both, Yogi Berra style. One of them is an issue that you've spoken about: the problem of autonomous vehicles and edge cases, and how it's very difficult to train a machine intelligence to respond in an adequate way to scenarios that we ourselves cannot foresee, or the scenarios in the long tail that are exceedingly rare. The other one, I think, is maybe a little bit of a shorter bridge. I forget who it is that you quoted on this, but you talk about thought being made of concepts, and concepts being made of analogies.

Melanie:  Yeah, that was from the book by Hofstadter and Sander about analogies.

Michael:  Okay. Yeah. So I was delighted to find out that in your book you have a section on Lakoff and Johnson and Metaphors We Live By, which was the book that blew my mind in college and completely transformed the way that I understand thought. And so, yeah, it seems like Copycat, as well as a lot of AI work, is stumbling on trying to build thought in a way where the ladder doesn't actually reach the ground. Where it's like we're programming in concepts in order to get analogies, but it's like we have to start with analogies and build up?

Melanie:  Yeah. I don't know if this is what you're talking about, but people talk about the symbol grounding problem. The idea is, if we have some kind of noun in our language, let's take a good example, like "tree." You could get a computer to reason about trees: you could tell it that trees are plants, that plants need water, therefore trees need water. You can give it all these rules about trees, all these facts about trees. But the question is, if it hasn't ever actually interacted with a tree, either with sensory perception or by literally, physically hugging the tree, how can we say that it actually has a concept like that? The concept isn't grounded in the real world. So there's a lot of debate in AI about whether symbols like that, which we teach machines, have to be grounded in some sense in real-world experience, or whether machines can learn about concepts successfully without that kind of basic grounding that we humans experience.

Michael:  So this seems pretty directly related to the embodiment issue, right? How much does the body of an artificial intelligence matter? If our desire to engineer intelligence is about trying to reproduce the organic forms of intelligence we're familiar with, it seems fairly inescapable that we actually need to anchor these things in the physical world. For example, Diana Slattery, who wrote Xenolinguistics, talks about the kinds of languages that might emerge in zero gravity. Lakoff and Johnson talk about this: we have this sense that somebody who's sprightly and, you know, up, these are happy people, and somebody slumped over is sad. And so we extend these metaphors to metaphors like "the market is up," which is good. And yet in space, if you don't have an anchor of gravity pulling you down, then the way that we connect these things, the way we build analogies, is going to move from the center out. So I don't know, I guess I'm just asking, what are your thoughts on embodiment? And do you consider the work that's being done, for example, with autonomous vehicles as embodied? Because it seems like the more sensors we put on something, the more we have it navigating the real world, the closer we're getting, right?

Melanie:  That's a good question. I think the term embodiment is something people throw around a lot, and it means different things to different people. The idea is that a lot of AI systems, not robots, but AI systems that maybe run on your desktop computer, don't have bodies. They're like brains in vats. They can't pick up something with their hands or put it in their mouth or do other manipulations of it. And a lot of our concepts, humans' concepts, are formed when we're infants by this kind of physical manipulation, these physical experiences that we have. So the question is, can a machine ever have the intelligence of a human, or the kind of concepts that humans have, without this kind of embodiment?

I think it's an open question. And it sort of depends on what you want the machines to do. Because clearly, they can do symbolic mathematics without bodies, that's perfectly fine. They do great. They can compute spreadsheets, they can do all kinds of different things. But the question is, for the kinds of AI that we humans want them to do, can they do that? Like understand language, have a conversation with a person and actually be able to respond in a way that is useful to the person? Can they do that without having the same kinds of experiences that we have? I think it's a big open question. I tend to think we need embodiment, that we're never going to get human-level AI without embodiment. But you know, it's not something you can prove. Intuitively it seems right. But I've seen other arguments that may also be right. Who knows?

Michael:  When you were talking to Lex Fridman for his podcast, you talked about how difficult it is to separate intelligence from the desire for self-preservation and from our emotions. And everything I know about developmental psychology suggests that human rationality emerges out of emotional sub-units. And I'm curious about the sociality, the social dimension of this. To connect this to work that lots of people here at SFI are doing: it's very hard to define a human being in a way that is not a social definition. And if we're talking about training an infant, almost all of the training of an infant is done in society. So I'm curious how you see that reflected in the way that AI research is being done, or how you see that it is not being reflected.

Melanie:  It's a good question. It's just complicated. Humans, as you say, are social organisms. That's just how we exist. Babies, as you say, learn, usually from their parents, or other adults, because they are in an emotional relationship with those people. And therefore, a lot of cognitive effort goes into modeling other people, trying to understand other people and their goals and their theory of mind. And that's extremely important for us in our lives. The question of whether AI needs that kind of thing is another question. And again, it's very similar to the embodiment question. It kind of depends what you want it to do, and it's not clear. Do we have to bring up an AI baby like we do a human baby? And give it the same kinds of experiences, kind of program in the same kinds of need for social interaction? Is that going to help it be smarter? I don't know.

Michael:  I mean, I remember years ago Kevin Kelly said that the market just doesn't tilt that way. Like, if it's going to take us as long to raise a human-like AI as it does a human, then what's the point? And that actually the market forces are moving toward more of an ecology of non-human intelligences that augment and supplement us. It seems like, over the course of the history of this thinking... I mean, even 20 years ago it was odd and kind of heretical to talk about the intelligence of a forest, for example. And now the way that we think of intelligence, the way that we think of computation, at least around here, these things are kind of bleeding into each other.

Actually, here's a kind of more focused way to ask this question. I talked with Nihat Ay yesterday about the role of feedback, because a lot of deep neural network systems are just feeding one layer forward into the next layer and building associations out of that. But in the eye, we have all of these reverse wirings. This is why we get all these weird optical illusions: because your brain is expecting to see something, expecting to get information from the eye. So this seems related to the sociality issue: how recursion figures into intelligence, whether it's human-like or not.

Melanie:  I think the way most cognitive scientists think about human intelligence is that humans construct these mental models of the world. So you might have a mental model of some particular situation, like walking into a room. And there's a lot of feedback involved, because there are these feedback links in the brain between perception and higher-level cognition and the motor cortex, and basically your behavior is happening over time. So at different periods during that time, you get lots of feedback, and you use that to figure out what to do next. Deep neural networks, on the other hand, as you said, are primarily feedforward. They don't have feedback connections. They use feedback to learn in their learning phase, but once they've been trained, they have this operating phase where everything is feedforward. And they don't build these kinds of models. So psychologists talk about "perceptual categories" versus "concepts." They're different.

So you might have a neural network that learns how to recognize cats, dogs, cows, sheep, and it can tell the difference because it's been trained on lots of photographs, or something like that. So it has perceptual categories: it can look at the differences between these categories and sort images into the different classes. But it doesn't have the same kind of model-based concepts that we humans have that allow us to combine modalities, to reason about these entities. Those models do seem to require this kind of temporal feedback and this interplay between what we expect to perceive and what we do perceive, bottom-up versus top-down. And this is something that people are now really thinking about very seriously in AI: trying to get systems that actually build these sorts of models, or ways to internally simulate situations.
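
As a minimal illustration of the "purely feedforward in the operating phase" point above, here is a tiny sketch with made-up random weights standing in for a trained network: the input flows straight through to a class score, with nothing fed back down and no state carried over time.

```python
# A tiny feedforward pass with made-up random weights standing in for a trained
# network. In the operating phase everything flows one way: pixels -> hidden
# layer -> class probabilities. There is no feedback connection, no persistent
# state, and no internal model of the situation -- just a perceptual categorization.

import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(64, 32))   # stand-in weights: 64 "pixels" -> 32 hidden units
W2 = rng.normal(size=(32, 4))    # 32 hidden units -> 4 classes (cat, dog, cow, sheep)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(pixels):
    """One strictly bottom-up pass through fixed, already-trained weights."""
    hidden = relu(pixels @ W1)
    return softmax(hidden @ W2)

image = rng.normal(size=64)      # stand-in for an input image
print(forward(image))            # four class probabilities; nothing flows back down
```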

Michael:  Yeah, this issue of abstraction and the role of feedback and recursion. Again, this seems related to a really interesting strain of research going on here at SFI: work Albert Kao is doing on collective intelligence, work that Jessica Flack is doing on the way that our models of our role in society, the way that we are evaluated by others, create a new layer of individuality that emerges above the members of a human society.

Melanie:  Yeah.

Michael:  There's a point at which (again, you talked about this with Lex) the agency of an institution takes over from the agency of its members, you know, the people working for that company or whatever. So, yeah, it seems like there has to be a multi-scale structure built into this, and, at least internally if not externally, something that looks like a social world in which the agents are modeling themselves in light of one another.

Melanie:  Yeah, I mean, I think social intelligence is very understudied in AI. Collective intelligence, the kind of thing that people are looking at here at SFI, is not mainstream in AI at all. Most AI systems are trying to get a computer to do a particular task: recognize faces or transcribe spoken language or translate between two languages. But they're not trying to model how we might build models of each other and use those to interact, to maybe improve our status in the society. So I think that's something that AI is going to need to grapple with, especially when, for example, we have self-driving cars that are interacting in our society. They're interacting with pedestrians, they're interacting with other vehicles, they're interacting with animals and other kinds of things that are out there in the world. And it's going to involve something quite different from these narrow tasks that AI has been so good at to date.

Michael:  I'm curious if you're comfortable extending the metaphor. You know, I bend toward these perhaps absurdly large and encompassing analogies. But I've heard other people describe capitalism itself as a form of artificial intelligence, and the institutions that we create—economic institutions, corporate agencies—we drop this self-organizing thing into a landscape of incentives. And to us it appears like it has no mind of its own, and yet it's still subject to evolutionary pressures, and it's still adapting intelligently. So there's this other inquiry, which I'm curious to hear your thoughts on, which is how the study of AI might bleed out into considerations of the ways that we are training what we think of as dumb systems, systems that we do not regard as AI except, you know, in a kind of renegade way.

Melanie:  Yeah, I think the word intelligence is used in a lot of different contexts. We talk about how we humans are intelligent, other animals are intelligent; you might talk about the intelligence of a market, like in an economy. And I don't think we have yet defined these terms very well, so that we can analyze what we're actually talking about. There are different kinds of intelligences. It may be one of those words, like complexity, that's a little too broad for what we need in science.

I think a lot of people have said that these terms that we use, like "understanding" or "consciousness" or "cognition," are placeholders for the things that we don't understand scientifically, and the terminology will change as we understand them a little better. And so maybe we will be able to see, when we say "the market is intelligent," whether that's really the same kind of intelligence that we're talking about when we talk about how people are intelligent. I don't think we're quite there yet.

Michael:  If we're going to coast over this swampy terrain of poorly defined terms, it would seem like the market as an intelligent agent would be subject to the same sort of perceptual restrictions as a lot of the AI that we're building now, in terms of how we're not giving it rich feedback in a sort of social ecosystem in which it's capable of developing a model of itself. At this point it seems like it's worth bringing in…you bring up metacognition in the book, and how Copycat was extrapolated into this other piece of software called Metacat. And I'm curious how you see the role of—saying this very carefully—the role of self-awareness in all of this.

Melanie:  Right. So I just want to name the book we're talking about, which is my new book called Artificial Intelligence: A Guide for Thinking Humans. In that book, I do talk about the state of the art of AI. I talk about some ideas about what human cognition and human understanding are. And one of those things, as you say, is this notion of metacognition, where we're able to think about our own thinking. Or I can think about your thinking; I can develop a model of my thinking or your thinking.

So Metacat was a successor to Copycat. It also solved letter string analogy problems, but it did it in a way where it was able to, in some sense, observe its own "thinking processes," you know, with scare quotes around the thinking. It was able to describe what it thought it was doing at a higher level, whether it was stuck, whether it was doing a good job or a bad job. And that's something that we humans use: we use our metacognitive abilities constantly. Typically we're completely unaware of them, but we use them on ourselves and on other people. And metacognition is something that people in AI have thought about for a very long time, but it hasn't really made its way into the forefront of deep learning. Because, as you say, the market doesn't tilt that way. I mean, AI has a kind of dual identity: one is as a field that creates commercial products that make money for companies, like face recognition and translation and so on, but the other is as a scientific endeavor that's trying to elucidate what intelligence is. And those two goals don't always jibe very well together.

You know, it may be that thinking about the foundations of intelligence and trying to understand how human intelligence achieves what it does, doesn't necessarily translate into big money for companies. So the incentives are different. And right now, in the history of AI, we're seeing some of the first AI systems that are really commercially successful. And I think that's tilted the balance away from the more scientific side of AI. And people are putting more effort into more of the engineering side. I think it's going to tilt back, because I think people are going to run into big roadblocks, and already have on the engineering side that will only be solved by thinking more deeply and broadly about intelligence.

Michael:  Let's talk about some of those roadblocks. In your book (which again, yes, is one of six Melanie Mitchell books) and in the community lecture you gave on this topic, which we'll post in the show notes, you run through this just hilarious list of AI failures. Like adversarial fashion, where you can deceive a facial recognition algorithm with a pair of glasses or a sticker.

Melanie: Or a T-shirt.

Michael:  Yeah, there was a recent item in the news about somebody who got one of the Tesla Autopilot vehicles to drive 85 in a 35. So where are the stumbling blocks right now? And how do you think that reflects on theoretical failures or shortcomings?

Melanie:  So I mentioned this notion of perceptual categories versus concepts. It turns out that if a neural network has perceptual categories without having more robust concepts, it's very easy to fool it. You can take an image that it classifies very confidently as your face. There's Michael! And then you can put on a pair of glasses whose frames have a certain pattern, and now it's completely sure that you are, say, Brad Pitt. Because it doesn't have a human-like concept of you; it just has these perceptual categories, which are much more manipulable. So that's one of the roadblocks, I think: the fact that these systems do not have full-fledged concepts. I think that's true in vision systems and language systems, and even game-playing systems. You know, systems that have beaten humans at video games or chess and Go don't have the kinds of full-fledged concepts that we humans have that would allow them to, say, play a game that's similar but not identical to the one that they've learned. That's been shown: being able to transfer their knowledge from one domain to a similar domain is very difficult and often fails. And I think it really is this lack of real concepts.
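
A minimal sketch of why such perceptual categories are easy to manipulate: for a purely statistical classifier, a small input perturbation aligned against the gradient of its score (the fast-gradient-sign idea) can flip a confident decision. The classifier below is just a toy logistic regression with random weights standing in for a face-recognition network; it is only meant to make the mechanism concrete, not to reproduce anything from the book.

```python
# A toy illustration of the fast-gradient-sign idea behind adversarial examples.
# The "face classifier" is a logistic regression with random weights; for such a
# purely statistical model, the gradient of the score with respect to the input
# tells you exactly how to nudge each pixel to flip the decision, even though no
# pixel changes by more than a tiny epsilon.

import numpy as np

rng = np.random.default_rng(1)

w = rng.normal(size=256)                    # stand-in for learned weights
x = 0.01 * w + rng.normal(size=256) * 0.05  # an input the model scores as "Michael"

def prob_michael(img):
    return 1.0 / (1.0 + np.exp(-(w @ img)))

print("P(Michael) before:", round(float(prob_michael(x)), 3))   # fairly confident

# For logistic regression the input-gradient of the score is just w, so push
# every "pixel" a tiny step against it (the glasses-frame pattern, in effect).
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

print("P(Michael) after: ", round(float(prob_michael(x_adv)), 3))      # near zero
print("largest pixel change:", round(float(np.abs(x_adv - x).max()), 3))  # <= epsilon
```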

Michael:  Another related question ties back into this lengthy history you have with SFI and the fact that, as an institution, we've become associated with work by people like Chris Langton on artificial life, and this argument over whether life is unique to the substrate of organic chemistry. When I asked social media about questions for you, one of the questions that popped up was about work that's being done in hybrid digital-organic systems or structured gels, this kind of thing. I'm curious, do you consider that a promising strain? Or is it like in Robin Hanson's Age of Em, where he suggests that we might be able to reproduce the human brain in silico without understanding it at all? It seems like maybe an organic approach to artificial intelligence doesn't really answer the kind of theoretical questions that we're trying to answer.

Melanie:  So I don't agree that we could reproduce the brain without understanding it. I think that's true for probably most very complex systems. I mean, it's a good question. People are very excited again about evolutionary computation methods, where you evolve, say, computer programs instead of programming them yourself. So one approach to neural networks, for example, is to evolve the structure of the network and to evolve the weights, and to do this kind of combination of digital evolution and digital learning. Will we be able to just go to bed at night, let our computers run, and in the morning wake up and there's an evolved artificially intelligent system that we don't understand? Possibly. That possibly could happen. I'd be very surprised, because I think we're lacking... how to say this? Current-day approaches to bottom-up evolution in computers are lacking a lot of the mechanisms that biological evolution has. And one of those mechanisms is being able to increase levels of complexity: to go from single-celled to multicellular, to systems that have modular organs, all of these levels of complexity that you get. We don't have any way to do that with current-day evolutionary computation. Maybe someday in the future we'll be able to do simulated evolution and actually create systems organically in that way, but I don't see it happening anytime soon.

Michael:  I can kind of imagine, if you draw on Tom Ray's work in artificial life, a system in which you incentivize that. And you can try and model the same bias that it seems (again, heretically) like a lot of the research at SFI suggests: that there is a sort of trend toward the increasing complexity of the biosphere, because each organism in some way contributes to a more complex ecosystem that raises the bar on intelligence. (I mean, does this sound crazy?)

Melanie:  I think of it as, you know, there are lots of different niches, and evolution fills niches. So there's the niche that all the microbes, the microorganisms, fill. That's a big giant niche; there are orders of magnitude more of them than there are of us. So that niche is done. Now, the only way to differentiate, to get a new niche, is perhaps to become more complex. And, of course, I'm not defining what I mean by more complex.

Michael:  Again, we're back in that tricky mess.

Melanie:  Yeah, I’ll leave it to be intuitive. And so I think that's kind of what has driven the evolution of complexity is that need to find new niches. And there's a lot of side effects, like all the associated sociality. I mean, instead of just reproducing asexually, we now have sexual reproduction, which is much more complex, and it creates all kinds of side effects like sociality and that kind of thing.

Michael:  So, in that example: something that I think about all the time and probably bring up on every episode of the show is David Krakauer's work with Martin Nowak on the emergence of syntax in human language. They wrote about it as a way of avoiding a so-called "error catastrophe": as the relevant features of your environment become more complex, it puts a pressure on the memory of the organisms trying to navigate it. That then leads to a shift from just remembering one new word for every situation to coming up with a way to combine words and create parts of speech and create sentences. And so reading that, it seemed very analogous to the evolution of multicellularity and complex life and eusocial organisms: that there are these informational thresholds where adopting a multiplicative approach is just cheaper than trying to continue adding and adding and adding. So I guess the question is, if we were to assume that this is baked into the way that evolution works, at what point are we going to start seeing a kind of evolution of sexuality in AI? Is that kind of thing even going on right now?

Melanie: I think people mostly build that kind of thing into their algorithms. To actually evolve it from scratch? No one's shown anything like that. We don't have the ability that biological evolution has, which is this very open-ended evolution where it can evolve all kinds of increasingly complex structures, whereas our current digital evolution programs don't have that kind of open-endedness at all. Now, I thought it was really interesting that Rod Brooks, who's a big name in robotics and AI, recently had this thread on Twitter about how he thought evolutionary computation was going to become sort of the next big thing in AI, that it was going to revolutionize the field, sort of in the way deep learning has, maybe even more. But only if we're able to capture some of the things that biological evolution has been able to capture. So, to make it more biological, to make it have this open-endedness: that's what is going to create this revolution. So that was interesting. I think that's plausible, but I think it's going to take a long time.

Michael:  In the AMA you gave this week on The Next Web, which we will also link in the show notes, somebody asked you about genetic algorithms. And there's been a kind of revival of interest in that work, you know, building epigenetics into this kind of thing. I'd love to hear you talk more about that, to distinguish that approach from these other approaches.

Melanie:  Well, I think evolutionary computation ideas have been around since the beginning of the computer age, just as neural networks have. And what happened with neural networks is that all of a sudden we have huge amounts of data and huge amounts of compute power, and so systems that used to not work very well at all suddenly work incredibly well on certain tasks. The same is happening for evolutionary methods. Evolutionary methods also can benefit from lots of compute power and lots of data. And I think the field has seen a renaissance because of that, but like neural networks, I think there are going to be some roadblocks, in the fact that the models of evolution that we use in evolving neural networks are extremely limited. They're just very loosely biological, just like neural networks are very loosely like the brain.
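
As a concrete sketch of the kind of neuroevolution being described (evolving a small network's weights by mutation and selection rather than by gradient descent), here is a toy example on the XOR task; the population size, mutation scale, and task are arbitrary illustrative choices, not anything from the conversation.

```python
# A minimal neuroevolution sketch: instead of training a tiny network's weights
# by gradient descent, keep a population of weight vectors, mutate them, and
# keep the fittest. The task (XOR), population size, and mutation scale are
# arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(42)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_HIDDEN = 4
N_PARAMS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1   # W1, b1, W2, b2 flattened

def forward(params, x):
    """A 2-4-1 network whose weights are unpacked from one flat parameter vector."""
    W1 = params[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    b1 = params[2 * N_HIDDEN:3 * N_HIDDEN]
    W2 = params[3 * N_HIDDEN:4 * N_HIDDEN]
    b2 = params[-1]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(params):
    return -np.mean((forward(params, X) - y) ** 2)   # higher is better

# (mu + lambda)-style evolution: mutate every parent, then keep the best half.
pop = [rng.normal(size=N_PARAMS) for _ in range(50)]
for generation in range(300):
    children = [p + rng.normal(scale=0.3, size=N_PARAMS) for p in pop]
    pop = sorted(pop + children, key=fitness, reverse=True)[:50]

best = pop[0]
print(np.round(forward(best, X), 2))   # should approach [0, 1, 1, 0]
```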

Michael:  Like optimizing for a fixed landscape.

Melanie:  Yeah, exactly. Exactly. So I think some major new discovery has to happen to get over this hump that we're seeing both in neural networks (kind of the plateauing of progress) and also in evolutionary methods.

Michael:  It almost seems like the next big push in AI is contingent on research into the origins of life. They’re kind of asking the same question, which is, how do you jumpstart this open-ended process in the first place?

Melanie:  Yeah, it's all related. So I think the biggest question is: is progress going to come just from bigger compute power, more data? Or is it going to require something really new? And that's the big question.

Michael:  I'd like to shift a little bit into an exploration of the way that these ideas are actually landing in and being worked on in society, and the way that they're affecting us now. Yuval Harari gave a talk at Google a few years ago on the new religion of Silicon Valley. And the "new religion" he identified as an anthropologist was one in which the liberal self of the modern world has been replaced by this sort of meat robot: that we are all algorithmic entities. And, as true as that might be, it's opened up some concerning developments in terms of social engineering, and the way that we have allowed ourselves over the last few decades to become, as Jaron Lanier puts it, the operational extension of these algorithms that we've set loose into society. That each of us is just sort of an actuator being used by the social media robot.

Melanie:  Yeah.

Michael:  And I'm just curious how you think about this stuff. I guess this sort of ties back to that earlier question of whether a company counts as an AI, but it seems like the digital turn has created some troubling blind spots that are leading to large problems in society. Maybe the most obvious example, one that you talked about in the book and one that's addressed by the algorithmic justice group here at SFI, is the way that our implicit bias works itself into the algorithms that we design for offloading our decision making in justice or real estate or that kind of thing.

Melanie:  Sure. Right. So I mean, if computers are learning from data, they learn what's in the data. And that's not always the thing that we hoped they would learn or planned for them to learn. So let's say you use a bunch of data about criminal sentencing. You have a person who's described by a bunch of features, and then you have some data about sentencing and recidivism and all that stuff.

Well, maybe the system is going to learn some statistical patterns in the data that might have to do with the fact that people who live in certain zip codes are likely to have more recidivism than people who live in other zip codes. Sure, it's true, but it's not the cause. Living in that zip code is not necessarily the cause of that. The system has no causal model of the world and how the world works; the system is just learning patterns from data. And so it's inevitable that you're going to get biases. We humans are wired to have biases.

We saw that the other night when Rajiv Sethi gave a talk about stereotypes: how and why we have stereotypes, and how they can be beneficial and also very unbeneficial. But we don't just learn patterns from statistics; we have these causal models of the world. So we're likely to know that living in a certain zip code, those particular five numbers, is not the reason why you did well on the SAT or why you went to prison. You have this world model of causality. And we also have, as we talked about, metacognition. We're able to look at our own cognition and say, "Wait a minute, I have a stereotype, but I can recognize it as a stereotype." So our neural networks that are recommending sentences or recognizing faces or whatever else they do, they don't have those kinds of models or that kind of ability to recognize their own biases. So if they're just looking at statistical patterns, they're going to absorb these biases, and they're not going to be aware in any useful sense of what the biases are.
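
Here is a minimal sketch of the pattern Melanie describes, on invented synthetic data: the outcome is driven entirely by a causal factor the model never sees, a "zip code" indicator merely correlates with that factor, and a plain statistical learner happily loads weight onto the proxy.

```python
# Invented synthetic data: the outcome depends ONLY on an unobserved causal
# factor; a "zip code" indicator merely correlates with that factor. A plain
# pattern-matching learner that never sees the true cause still puts real
# weight on the zip code, because correlation is all it has.

import numpy as np

rng = np.random.default_rng(7)
n = 5000

cause = rng.normal(size=n)                                  # true causal factor (never shown to the model)
zipcode = (cause + rng.normal(size=n) > 0).astype(float)    # correlated, non-causal proxy
noise_feature = rng.normal(size=n)                          # an irrelevant feature
y = (rng.random(n) < 1 / (1 + np.exp(-2.0 * cause))).astype(float)   # outcome driven only by `cause`

X = np.column_stack([np.ones(n), zipcode, noise_feature])   # intercept + the two observed features

# Plain logistic regression trained by gradient descent -- pure statistics, no causal model.
w = np.zeros(3)
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.5 * X.T @ (p - y) / n

print("weight on zip-code proxy:  ", round(float(w[1]), 2))   # clearly nonzero
print("weight on irrelevant noise:", round(float(w[2]), 2))   # near zero
```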

Michael:  I don't know if I agree with this or not, but it seems like a lot of people are concerned now that the way we have adapted to life online, the way that we have adapted to being immersed in a world of these relatively rudimentary algorithms, has sort of impoverished our own experience of our humanity. John Danaher talks about the unbundling of the self: the way that we've become this n-dimensional data feed for advertising. The social consequences of this show up in areas like intersectionality and identity politics, where you're like, "I am this and this and this." And so, you know, we break ourselves down into categories.

Melanie:  The same categories that advertisers break us down in.

Michael:  Right. Even more broadly or generally, machines become more lively in the information age, and humans become more mechanical. And I'm curious. This is sort of the digital-analog question; these aren't necessarily concrete objective categories, right, but they do seem to yield different philosophies. There is a reflux going on right now: a lot of people returning to, or almost fetishizing, the analog, and part of it is out of this desire to be irreducible. And there's this strain in your work, and in a lot of work in complexity science, about the irreducibility of complex systems. What do you see in these trends? Ultimately it seems like it kind of dead-ends in the question of human mind uploading. Like, is it even possible to digitize a person? I'm really taking you out on the limb today.

Melanie:  Is it possible to digitize a person?

Michael:  I guess the reciprocal of that question might be, "Can we even imagine something like an analog artificial intelligence?" Or, "Where are we going to find a balance between trying to reduce everything to its component algorithmic structure and the n-dimensional data that we can gather about it, and at what point does that fall into the same theoretical trap that SFI has been arguing we have to escape in the way that we think about systems generally?"

Melanie:  Okay, I kind of see what you're getting at. So, advertisers think of us as this n-dimensional list of features. And they use that to target ads. And it works surprisingly well. I mean, if you have a lot of data, and we saw that, they can predict quite a bit about what you're going to do, what you're going to buy, who you're going to vote for, what kind of car you drive, and so on, based on just a small number of features about you. So it does feel very reductionist. And I don't know if it's a self-magnifying feedback effect: as we're broken more into these parts ("You are a male between the ages of 24 and 34," I'm just guessing, "you live in Santa Fe and you drive this kind of car and you have this many friends on Facebook," or whatever), you're targeted with this stuff to buy, and you buy it, and you become even more like this group of people that you're being sort of pushed in with.

I think there's some kind of feedback loop there that's very disturbing. Is our humanity reducible in that way? Well, I guess it depends what you want to predict, and the way society works. It's all very complex. To some extent, I guess we are reducible. But I think it's not a perfect algorithm. So in some cases, or with some aspects of ourselves, we're not very reducible. I don't know. It's a hard question. I don't know quite how to think about it. I mean, I've been disturbed at how reducible I am.

Michael:  Is this like Doug Hofstadter's question of, at what point can a machine do something that makes you sad?

Melanie:  Yeah: "I thought I was more complex than that. And I thought I was more unpredictable." But maybe I am really predictable.

Michael:  To come at it kind of sideways from that already sideways question: when Michelle Girvan gave a community lecture here, she talked about reservoir computing and how adding noise to machine learning algorithms improved their ability to predict the behavior of chaotic systems. You know, just training a camera on a bucket of water and kicking it periodically to generate waves, and then feeding that in, gave you weather predictions that were outperforming what we believed was even possible.

I see something like, as we adapt to a world that is capable of examining us ever more finely and granularly, people starting to zag when they think the computer is expecting them to zig. Even back in the 90s, Brian Eno was imagining that in 25 years direct mail marketing would get so good that people would just start buying items randomly to throw off the profiles. So this seems like an arms race, right? At what point, and in what ways, do you imagine chaos enters in a meaningful way here? Like the noise must be included in order to make sense of it?

Melanie:  Wow. Yeah. I don't know. What you were just talking about reminded me of these new approaches people have to privacy in data sets. How do you anonymize data sets? There was this competition to improve Netflix's predictions of people's preferences for movies, and the people in it were given an anonymized data set. But it was actually pretty easy to figure out who the people were, even though it was anonymized. So now the approach is to actually introduce a bunch of random noise into the data set in specified ways that don't affect the machine's ability to use it to predict, but do affect its ability to use it to identify people. So this is an approach to privacy that's getting a lot of attention now. We're going to use randomness to protect ourselves from the ever-increasing scrutiny of our lives. And, as you said, it's going to be an arms race.
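
A minimal sketch of the "add noise in specified ways" idea, in the spirit of the Laplace mechanism from differential privacy, applied to a made-up count query; the dataset and the epsilon value are invented for illustration.

```python
# The Laplace mechanism in miniature: a count query over a dataset gets
# calibrated random noise, so aggregate answers stay roughly accurate while any
# single person's presence or absence is obscured. Data and epsilon are made up.

import numpy as np

rng = np.random.default_rng(3)

# Invented example: which of 10,000 users watched a particular movie.
watched = rng.random(10_000) < 0.12

def private_count(values, epsilon=0.5):
    """True count plus Laplace noise scaled to the query's sensitivity (1)."""
    sensitivity = 1.0    # adding or removing one person changes the count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.sum() + noise

print("true count:   ", int(watched.sum()))
print("private count:", round(private_count(watched), 1))
# The aggregate is still useful for prediction and analysis, but the released
# number no longer pins down whether any one individual is in the "watched" set.
```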

Michael:  Back to jamming facial recognition.

Melanie:  Right, you know, with the glasses.

Michael:  Or the trippy stickers. Yeah. I want to honor the people on social media who wanted me to ask you about some other stuff here. One of the questions came from Mateo Quentoqui on Facebook, who wanted to know about your work with Complexity Explorer and agent-based modeling more broadly. You've done a lot of work in getting agent-based modeling out into society and helping people think about this alternative methodology. He wants to know why you think agent-based modeling has been so slow to catch on in certain fields, fields like political science, where it seems as though it ought to flourish. What is it about that particular technique that so many different disciplines resist?

Melanie:  Hmm, that's a good question. I think there have been very influential agent-based models in, say, political science. The work by Thomas Schelling very early on, and work by Bob Axelrod, have had a lot of influence on people. There are good agent-based models and there are bad ones, and with the good ones, the ones that really help, you can't necessarily expect them to predict the world very precisely, but you can get a lot of intuition from running them. Those are the best ones, in my opinion. So, you know, Axelrod's agent-based models that played the prisoner's dilemma: people's intuitions were really broadened by those, and by Schelling's models. And some of the models that have come out of SFI have really broadened people's intuitions. And I see that as really the function of these models. They're much more intuitive than a lot of mathematical models, and you can poke them more easily. You can make changes and see what happens and kind of get an intuition from doing little experiments on them. Another really influential one was the Boids model of flocking, and Brian Arthur's El Farol model. All of these, I think, have been pretty influential.
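
For readers who want to poke at one of these models themselves, here is a minimal Schelling-style segregation sketch; the grid size, tolerance threshold, and move rule are simplified choices, not a faithful reproduction of Schelling's original.

```python
# A minimal Schelling-style segregation model. Two types of agents live on a
# grid; an agent is "unhappy" if fewer than TOLERANCE of its occupied neighbors
# share its type, and unhappy agents move to randomly chosen empty cells.

import numpy as np

rng = np.random.default_rng(0)
SIZE, TOLERANCE = 40, 0.4          # 0 = empty cell, 1 and 2 = the two agent types

grid = rng.choice([0, 1, 2], size=(SIZE, SIZE), p=[0.10, 0.45, 0.45])

def same_type_fraction(g, r, c):
    """Fraction of occupied neighbors (8-neighborhood, wrapping) sharing this agent's type."""
    neigh = [g[(r + dr) % SIZE, (c + dc) % SIZE]
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [v for v in neigh if v != 0]
    if not occupied:
        return 1.0
    return sum(v == g[r, c] for v in occupied) / len(occupied)

def step(g):
    """Move every unhappy agent to a random empty cell."""
    unhappy = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if g[r, c] != 0 and same_type_fraction(g, r, c) < TOLERANCE]
    empties = list(zip(*np.where(g == 0)))
    for (r, c) in unhappy:
        i = int(rng.integers(len(empties)))
        er, ec = empties[i]
        g[er, ec], g[r, c] = g[r, c], 0     # move the agent; its old cell is now empty
        empties[i] = (r, c)

def average_same_type(g):
    return np.mean([same_type_fraction(g, r, c)
                    for r in range(SIZE) for c in range(SIZE) if g[r, c] != 0])

print("avg same-type neighbor fraction before:", round(float(average_same_type(grid)), 2))
for _ in range(30):
    step(grid)
print("avg same-type neighbor fraction after: ", round(float(average_same_type(grid)), 2))
# Even with a mild preference (40% same-type neighbors), neighborhoods end up far
# more segregated than any individual agent "wants" -- Schelling's classic point.
```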

Michael:  Yeah, I forget who it was, but there was a fun ABM that came out last year on hipster fashion, sort of based on the El Farol Bar problem: at what point do people flip and decide to act counter to the popular decision, the popular behavior? Anyway, another question from social media, this one from Aritra Sarkar. This one would be based on your own intuitive grasp on things that you've developed over the years: what problems in AI, in your opinion, are good candidates for acceleration in the upcoming paradigm of quantum computing?

Melanie:  Yeah, I get that question a lot. And I'm no expert on quantum computing. So my understanding is that there's certain algorithms that give you speed ups on certain problems. And it's not clear that a lot of problems in machine learning, say, map on to the algorithms that are known right now for quantum computing. So I'm not sure that very many problems in machine learning will immediately be impacted by quantum computing.

I do have a colleague who is working on machine learning using the D-Wave quantum computer. There's some argument about whether it's really doing quantum computing or not. But what it does is: you have to map your problem to a very specific kind of network, and then the quantum computer can find a minimal-energy configuration of that network, which is equivalent to a solution of your problem, and it can do it very, very fast. This colleague of mine does machine vision inspired by neuroscience. And there are some problems that he's looking at which do map well onto this particular kind of network, which in physics is called an Ising model. And so he's able to solve them very quickly. But it does take a lot of preparation. So I'm not sure we're going to have a general-purpose deep learning platform that runs on quantum computers, but I think there may be some very specific problems that can benefit.
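
A minimal sketch of the Ising formulation being described: a problem is encoded as couplings J and local fields h over spins of +1 or -1, and a solution corresponds to a minimum-energy spin configuration. The tiny made-up instance below is solved by brute force, where an annealer would search the same kind of energy landscape in hardware.

```python
# A tiny, made-up Ising instance solved by brute force. Encoding a real problem
# means choosing J and h so that low-energy spin configurations correspond to
# good solutions; here they are random, just to show the energy landscape idea.

import itertools
import numpy as np

rng = np.random.default_rng(5)
n = 10                                      # small enough for exhaustive search

J = np.triu(rng.normal(size=(n, n)), k=1)   # couplings J[i, j] for i < j
h = rng.normal(size=n)                      # local fields

def energy(s):
    """Ising energy E(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i, spins in {-1, +1}."""
    return -s @ J @ s - h @ s

best = min((np.array(s) for s in itertools.product([-1, 1], repeat=n)), key=energy)
print("minimum-energy configuration:", best)
print("energy:", round(float(energy(best)), 3))
```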

Michael: This question actually comes from Stewart Brand, who was a long-time SFI trustee.

Melanie:  The Whole Earth Catalog!

Michael:  Indeed, yeah! Stewart wanted to know, "How many levels of recursion is healthy?" Again, if we want to get into these questions about social intelligence and machines, my mind automatically goes to the awkward teenager: at what point does self-reflection begin to paralyze a system, begin to interfere with its behavior in a harmful way? I think that's kind of what he was getting at.

Melanie:  Yeah, that's a great question. You can have too much metacognition. I don't know how to answer how much is too much, or how much is healthy. But, God, it's a great question. Maybe somebody has an answer. I don't.

Michael:  Like one AI can't work up the nerve to ask another AI on a date. Too much metacognition.

Melanie:  Right!

Michael:  We've kind of danced around this next question already in the show. But Marco Valenti wanted to know: what do we know, if anything at all, about how the laws of causality are at work in complex systems? I know you're leading this reading group here on Judea Pearl's Book of Why. This is a question very intimately related to the work that David Kinney is doing here. And again, if we're going to be training AI to start to model cause and effect, then we have to have some understanding of these things. So where are we with this?

Melanie:  Wow. Well, we had our first meeting of our Book of Why reading group yesterday, and it was very contentious about what causality is and how to think about causality. You know, Judea Pearl has pointed out that deep neural networks don't have any notion of causality. They just fit data to a function. His view is that you're never going to get to human-level intelligence without the ability to think about causes, to think about counterfactuals. Like, what if I did this thing differently? What would have happened? That's a causal question. To me, it seems very intuitive that causality is central to our thinking. Babies learn about causality very early; maybe it's innate, I don't know. They learn that if you drop something, it's going to fall to the ground because you dropped it. It's not a coincidence. I was kind of struggling with how to give machines causal models. There are a lot of different proposals out there. But I didn't realize until I came here, and we started talking about causality, that there are so many different views about it. It's such a philosophical question. So that's really good. That's interesting. And I'm now completely confused about that subject.

Michael:  What are you working on now that you're here? I mean, you're in residence here for a while now as the Davis Professor of Complexity. So what's on the table currently?

Melanie:  Well, I'm really interested in this question we've talked about already: what is the difference between a perceptual category and a concept? And how could we get machines to have something like concepts? So this gets into how we think abstractly, how we make analogies. There have been quite a few recent efforts, especially in the deep learning community, to look at conceptual abstraction using these kinds of toy domains, these idealized domains, sort of like the Copycat domain; there are several different domains kicking around that people have looked at, and people are looking at how well these neural networks can do this kind of abstraction.

I've been reading a lot of these papers, and I'm a little skeptical about what some of them have done. So I'm trying to dig into that whole area of conceptual abstraction, especially as it's being done now by various AI systems, and to try and understand what people have done and what's missing from it, and what my own ideas can contribute… and at some point write a paper about all of this. Hopefully it won't take the entire year that I'm here to gather my thoughts about this! But it's what I've been thinking about. And I've been reading quite a bit about developmental psychology and how babies acquire concepts as opposed to perceptual categories.

Michael:  Talk more about that. What is the difference in infant learning?

Melanie:  Well, this is a good question. I mean, there's no real agreement there. But it certainly seems like babies very early on are able to reason about the way the world works in a way that a deep neural network can't. They have something like a concept, the beginnings of conceptual thought that underlies all these Lakoff and Johnson metaphors we live by. We have these physical concepts, and those map onto more abstract concepts, and babies are doing that.

So how are they doing that? Well, that's a good question. Nobody really knows. There are a lot of theories, but there's not agreement. And it's really interesting that one of the grand challenges in AI right now that's being funded by DARPA is called "foundations of common sense." The goal of the program is to build a machine that has the common sense of an 18-month-old baby. By "common sense," I interpret that as a conceptual system that goes through the same developmental stages that a baby does: it knows about objects, learns that objects have permanence (if you hide an object, it's still there), these classic developmental stages that babies go through. Learns theory of mind. You know, there's this phenomenon where if you ask a two-year-old, "Show me that picture you're drawing," they'll hold it up to their own face, because they think that if they can see it, you can see it. But there's a certain age, I can't remember exactly when, at which, if you ask that, they show you; they kind of have a theory of your perception. So it's a fascinating area, and I'm trying to learn about it and figure out whether any of these ideas that people have are going to be relevant for getting AI systems to do these kinds of things.

Michael:  I don't know enough about the field to know for certain whether this is true, but it seems as though a lot of the work that's being done in AI is assuming a kind of blank-slate model, and that maybe we're not paying enough attention to innateness. There is the learning going on in an individual lifespan, but then children are born with all these instincts, and identical twins are different. How much do you think innateness figures into this? And is anyone working on building innateness into machine intelligence?

Melanie:  Yes. So this is actually a big debate, as it has been for AI's entire existence, and probably for the last 200 years of psychology too. How much are we blank slates, and how much are we born with some kind of prior knowledge? And it does seem that babies have some innate knowledge, in the sense that they have the concept of "object" as opposed to "background," and that an object is kind of a coherent thing. They have a concept of causality, that certain things cause other things. They have concepts like: certain things are animate, they can move on their own, whereas other things are not animate and can't move without some animate thing pushing them. There are all kinds of things like that. There's a developmental psychologist named Liz Spelke who has catalogued what she calls "core knowledge." This is sort of what babies either are born with or learn very, very early on, that everything else is built out of. So there's a whole group of researchers who are taking that idea of core knowledge and trying to put it into AI systems. And that's something that's fairly new in the area, at least in this incarnation of AI. And it's really interesting. It's something that I definitely think is quite promising.

Michael:  It seems like we're back to the question about evolutionary computation and how maybe if we want to design systems that work the way we're trying to make them work, we need to make an important differentiation between ontogeny and phylogeny. That individual AI systems probably have to evolve in some sort of ecology, where there is their own individual lifespan, and then there's an opportunity for evolution over generations. I don't know, but it just seems like AI research is sort of recapitulating evolutionary history.

Melanie:  I don't think it is. And in fact, I think there's been kind of almost an allergy in the field to looking to biology very precisely. Now I think more people are interested in ideas from developmental psychology and neuroscience. And I'm hoping that here at SFI we’ll be able to kind of facilitate some of those conversations, to get people from these different fields together to interact with each other and to really think about intelligence more broadly.

Michael:  When Adi Livnat came out to speak at SFI in 2018, for the Developmental Bias and Evolution workshop, he was making an argument that genetic mutation actually works by something like a Hebbian learning model, that gene regulatory complexes were fusing as they were expressed, in the way that you see neurons that fire together wire together.

Melanie:  Wow.

Michael: That seems like an area where work in neuroscience and AI is informing the way that we're actually thinking about evolutionary dynamics. I don't know. I'm after the grand syncretic thing here, but what terrain remains fruitfully uncovered?

Melanie:  Yeah. There's endless terrain, but we can't cover everything. Have to save something for the next podcast.

Michael: Yeah, we'll give it to them. I guess maybe the last question for you would be, you founded the Complexity Explorer program here. Which, as of 2013, was very precocious, you know, the idea of bringing all of these educational resources online. And now, seven years later, we have EdX. We have far more capable teleconferencing technologies. How do you imagine the evolution of online education? Where would you like to see this headed? What would you like to see SFI do with the future of Complexity Explorer?

Melanie:  Yeah, that's a really good question. You know, one of the problems with online education is it's so one-way: we have the lecturer being videotaped, and then the videotape being watched by a bunch of people, and there isn't the backwards interaction. But I think that new technology has made it possible to make these things much more interactive, and to have something much more like the actual live classroom experience. I don't know exactly how that should go, but I'm hoping that's something that people in online education can make progress on.

The other thing is, one of the things that happened to Complexity Explorer after I stopped being so closely involved with it was that they started organizing these more project-based courses, where students would actually do projects, and the projects would get feedback from other students, and if they created some kind of simulation, they got to post a demo of it online and other people could try it out. That's heading more in the direction of hands-on learning that you can do in this online way. So I think that's a really important way to go.

The last thing I want to say is that I've always been fascinated by this concept of citizen science, where non-scientists, just regular people, can contribute to the kinds of scientific questions they are interested in. There have been a lot of examples of this, where people are collecting observations of things, or they're playing games online that actually lead to more understanding of protein folding optimization and things like that. And I'm hoping that SFI, through its Complexity Explorer program, can create some more opportunities for the vast number of people who are really interested in complex systems, but aren't working here at the Institute, to actually get involved in SFI research in that way. So that would be really exciting.

Michael:  I mean, we've got 7200 people in the Facebook group, just waiting to collective intelligence their way into something like that.

Melanie: Yeah. Say I'm doing research on some topic, I would have to figure out some way that I could tap into that desire of lots of people to get involved in the research and make it happen in this more distributed model. And I don't know how to do that. But I think that's where I'd like to see some of this going.

Michael:  Maybe that's a question to the audience. Email us! Well, Melanie, it's been just awesome to get to sit down and have your time for this conversation. I really appreciate it.

Melanie:  Well, thank you. I've enjoyed it. It kind of went all over the place, but it's been fun.

Michael: So it is!