COMPLEXITY

Nature of Intelligence, Ep. 6: AI’s changing seasons

Episode Summary

In the final episode of the season, Abha sits down with Melanie to hear her perspective. They chat about Melanie’s career and research with Douglas Hofstadter, the author of Gödel, Escher, Bach. They also discuss her opinions on LLMs’ current capabilities, what she thinks of existential questions like the alignment problem, how sustainable the industry is, the difficulty of making claims about concepts like “intelligence” and “understanding,” and what she thinks future technological development should focus on.

Episode Notes

Guest: 

Hosts: Abha Eli Phoboo

Producer: Katherine Moncure

Podcast theme music by: Mitch Mignano

Follow us on:
Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky

More info:

Books: 

Talks: 

Papers & Articles:

Episode Transcription

[THEME MUSIC]

 

Abha Eli Phoboo: From the Santa Fe Institute, this is Complexity. 

Melanie Mitchell: I’m Melanie Mitchell.

Abha: And I’m Abha Eli Phoboo.

[THEME MUSIC FADES OUT]

Abha: Melanie, it's so wonderful to be able to sit down and ask you questions this time. Could we maybe get started with, you know, how you got into the business of AI? Could you tell us a little bit about that?

Melanie: Yeah, so I majored in math in college. And after college, I worked as a math teacher in a high school in New York City. But while I was there, I didn't really know what I wanted to do. I knew I didn't want to teach forever. So I was reading a lot. And I happened to read a book called Gödel, Escher, Bach by Douglas Hofstadter. And it was a book about, well, Gödel, the mathematician, Escher, the artist, and Bach, the composer, obviously. But it was really much more. It was about how intelligence can emerge from a non-intelligent substrate, either in biological systems or perhaps in machines. And it was about sort of the nature of thinking and consciousness. And it just grabbed me like nothing else ever had in my whole life. And I was just so excited about these ideas. So I decided I wanted to go into AI, which is what Hofstadter himself was working on. So I contacted him. He was at Indiana University and I never heard back. In the meantime, I moved to Boston for a job there and was hanging around on the MIT campus and saw a poster advertising a talk by Douglas Hofstadter. I was so excited. So I went to the talk and I tried to talk to him afterwards, but there was a huge crowd of people around him. His book was extremely famous and had a big cult following. So then I tried to call him at his office. He was on sabbatical at MIT, it turned out, and I left messages and never heard back. So finally I figured out, like, he's never at his office during the day, so he must be there at night. So I tried to call him at 10 in the evening and he answered the phone and was in a very good mood and very friendly and invited me to come talk to him. So I did, and I ended up being an intern in his group and then going to graduate school to work with him. So that was the story of how I got to my PhD program. It was actually at the University of Michigan, where he was moving, and I worked with him for my PhD on how people make analogies and how a machine might be able to make analogies in a similar way.

Abha: That's so interesting. I mean, you were very tenacious. You kept, you know, not giving up.

Melanie: Yeah, exactly. That was the key.

Abha: So when you graduated, I've heard you mention before that you were discouraged from mentioning AI in your job search at that point in time, right? Could you maybe tell us a little bit about what the world of AI was like at that point?

Melanie: Yeah, so the world of AI has gone through several cycles of huge optimism, with people thinking that true AI is just around the corner, just a few years away, and then disappointment, because the methods that AI is using at the time don't actually turn out to be as promising as people thought. And so these are called, sort of, the AI springs and AI winters. And in 1990, when I got my PhD, AI was in a winter phase. I was advised not to use the term artificial intelligence on my job applications. I was advised to use something more like intelligent systems or machine learning or something like that, because the term AI itself was not well looked upon.

Abha: So what do you think now of the fact that the Nobel Prize just recently went to people working in AI? The one for physics went to John Hopfield and Geoffrey Hinton for their work in machine learning. And then Demis Hassabis for chemistry. What do you think of that?

Melanie: Well, obviously we're in an AI spring or summer right now and the field is very hot, and people are again predicting that we're going to have, you know, general human-level machine intelligence any day now. I think it's really interesting that the Nobel prizes this year were sort of, you know, the AI sweep. There were a lot of people joking that ChatGPT would get the literature prize. But, you know, I was a little surprised at the physics prize, not so much at the chemistry prize. You know, the chemistry prize was for AlphaFold, which is a program from Google DeepMind, which is better than anything that ever came before at predicting protein structure. That was obviously a huge, huge success and an incredible achievement. So it was not surprising to me at all that the DeepMind people got that award. The physics award, you know, Hopfield is a physicist and the work that he did on what are now called Hopfield networks was very inspired by physics. Hinton I was a little more confused about, just because, you know, I didn't really see the physics connection so much. I think it is just more the impact that machine learning is having on physics. And machine learning today is all about neural networks, and Hinton was obviously a big pioneer in that field. So I think that's the thinking behind that. But I know a lot of physicists who have grumbled that that's not physics.

Abha: Yes, it's been very interesting to see that debate in the physics community. You and I, you know, we've talked to so many researchers over the course of the season, and I wanted to ask if there was something you were hoping to learn when we first started building this podcast together? 

Melanie: Well, I think one reason I was excited to do this podcast was because I wanted to talk to people not just in AI, but also in cognitive science. The voices of cognitive science and AI research haven't been given as much, sort of, airtime as people who are, say, at big AI companies or big AI labs. I think that they've been missing a key element, which is: what is this thing we're calling intelligence? What is the goal of something like general AI or AGI? What's the thing we're trying to get to when we talk about human-level intelligence? Cognitive scientists have been trying to understand what human-level intelligence is for a century now. The ideas that these people have about intelligence seem to be very different from those of the people sort of leading the pack in the AGI world. So I think that's an interesting contrast.

Abha: I agree. I think I learned a lot too. And you know, John Krakauer, one of the first guests we had in the first episode of the season, you and he are currently going through a three-year discussion project to understand the nature of intelligence. And I'm curious about, you know, what you've learned. I know you had your first meeting. So what did you learn in that first meeting, and why do you think it is so important that you want to put this exercise together for, you know, a number of years, not just a couple of sessions that end in, you know, a month or two?

Melanie: Well, I think there are several aspects to this. So John Krakauer and I have been talking for years about intelligence and AI and learning, and we finally decided that we should really have a set of very focused workshops that include people from all these different fields, similar to this podcast, about the nature of intelligence. You know, AI and machine learning, it's a very fast-moving field. You hear about new progress every day. There are many, many new papers that are published or submitted to preprint servers. And it's just overwhelming. It's very fast. But there's not a lot of slower, more long-term, more in-depth thinking about what it is that we're actually trying to do here. What is this thing called intelligence? And what are its implications, especially if we imbue machines with it? So that's what we decided we would do: kind of slow thinking, rather than the very fast kind of research that is taking over the machine learning and AI fields. And that's what, in some sense, SFI, the Santa Fe Institute, is really all about: trying to foster this kind of very in-depth thinking about difficult topics. And that's one of the reasons we wanted to have it here at the Santa Fe Institute.

Abha: Yeah, I mean, it almost seems counterintuitive to think of AI now in slower terms because the world of AI is moving at such speed and people are trying to figure out what it is. But going back to, you know, our original question in this podcast, what do we know about intelligence right now? 

Melanie: Well, intelligence, as we've seen throughout the podcast, is not a well-defined, rigorously mathematically defined notion. It's what Marvin Minsky, the AI pioneer, called a suitcase word. And by that he meant that it's like a suitcase that's packed full of a jumble of different things, some of which are related and some of which aren't. And there's no single thing that intelligence is. It's a whole bunch of different capabilities and ways of being that perhaps are not just one single thing that you could either have more of or less of, or get to the level of something. It's just not that kind of simple thing. It's much more of a complex notion. There are a lot of different hallmarks that people think of. For me, it's generalization, the ability to generalize, to not just understand something specific, but to be able to take what you know and apply it in new situations without having to be retrained with vast numbers of examples. So just as an example, you know, AlphaGo, the program that is so good at playing Go. If you wanted to teach it to play a different game, it would have to be completely retrained. It really wouldn't be able to use its knowledge of Go or its knowledge of, sort of, game playing to apply to a new kind of game. But we humans take our knowledge and we apply it to new situations. And that's generalization, and that's to me one of the hallmarks of intelligence.

Abha: Right. I'd like to go into your research now: the work you've done on conceptual abstraction, analogy making, and visual recognition in AI systems, and, you know, the problems you're working on right now. Could you tell us a little bit about that?

Melanie: Sure. So I started my career working on analogy making. And when I got to Doug Hofstadter's group, he was working on building a computer system that could make analogies in a very idealized domain, what he called letter string analogies. So I'll give you one. If the string ABC changes to the string ABD, what did the string IJK change to?

Abha: IJL.

Melanie: Okay, very good. So you could have said, ABC changes to ABD, that means change the last letter to a D, and you would say IJD. Or you could have said, ABC changes to ABD, but there's no Cs or Ds in IJK, so just leave it alone. But instead, you looked at a more abstract description. You said, okay, the last letter changed to its alphabetic successor. That's more abstract. That's sort of ignoring the details of what the letters are and so on and applying that rule to a new situation, a new string. And so people are really good at this. You can make up thousands of these little letter string problems that do all kinds of transformations and people get the rules instantly. But how do you get a machine to do that? How do you get a machine to perceive things more abstractly and apply what it perceives to some new situation? That's sort of the key of analogy. And it turned out it's quite difficult, because machines don't have the kind of abstraction abilities that we humans have. So that was back when I was first starting my PhD, back in the 1980s. So that was a long time ago in AI years. But even now, we see that even the most advanced AI systems like ChatGPT still have trouble with these kinds of analogies if they haven't seen them before in their training data. And there's a new kind of idealized analogy benchmark that was recently developed called the Abstraction and Reasoning Corpus, which features more visual analogies, but similar to the ones that I just mentioned. You have to try and figure out what the rule is and apply it to a new situation. And there's no machine that's able to do these anywhere near as well as people. The organizers of this benchmark have offered a prize, right now it's at $600,000, for anybody who can write a program or build some kind of machine learning system that can get to the level of humans on these tasks. And that prize is still unclaimed.
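(To make the letter-string example above concrete, here is a minimal Python sketch, not from the episode, of the abstract rule Melanie describes: "change the rightmost letter to its alphabetic successor." The function names are invented for illustration, and the rule is hard-coded; a real analogy-maker, like Hofstadter and Mitchell's Copycat program, has to discover the rule rather than being handed it.)

```python
# Minimal sketch of one letter-string analogy rule from the episode.
# The rule "change the rightmost letter to its alphabetic successor" is
# hard-coded here for illustration only; discovering such rules from a
# single example pair (ABC -> ABD) is the hard part for machines.

def successor(letter: str) -> str:
    """Return the next letter in the alphabet (wrapping Z back to A)."""
    return chr((ord(letter.upper()) - ord("A") + 1) % 26 + ord("A"))

def change_rightmost_to_successor(s: str) -> str:
    """Apply the abstract rule: replace the last letter with its successor."""
    return s[:-1] + successor(s[-1])

if __name__ == "__main__":
    # ABC -> ABD is the demonstration; IJK is the new situation.
    print(change_rightmost_to_successor("ABC"))  # ABD
    print(change_rightmost_to_successor("IJK"))  # IJL
```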

Abha: I hope one of our listeners will work on it. It would be very cool to have that solved.

Melanie: We'll put the information in the show notes.

Abha: So can you tell me, like, you know, how do you go about testing these abilities?

Melanie: So the key for the letter string analogies, and also for the Abstraction and Reasoning Corpus problems, which is abbreviated to ARC, is to show a few demonstrations of a concept. So like when I said ABC changes to ABD, the concept is change the rightmost letter to its successor. Okay, and so I showed you an example and now say, here's a new situation. Do the same thing. Do something analogous. And the issue is I haven't shown you millions of examples. I've just shown you one example, or sometimes with these problems you can give two or three examples. That's not something that machine learning is built to do. Machine learning is built to pick up patterns after seeing hundreds to, you know, millions to billions of examples, not just one to three examples. So this is what's called few-shot learning or few-shot generalization, the few shot being that you just get a few examples. And this is really the key to a lot of human intelligence: being able to look at a few examples, figure out what's going on, and apply that to new kinds of situations. And this is something that machines still haven't been able to do in any general way.
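(One way to picture what few-shot generalization asks of a machine, as described above, is as a search for a rule that is consistent with the handful of demonstrations, which is then applied to a new input. The toy Python sketch below is purely illustrative: the tiny hand-written hypothesis space and the names are invented here, and real ARC tasks involve vastly richer rule spaces than this.)

```python
# Toy sketch of few-shot rule induction over letter strings: given one or two
# demonstration pairs, pick the first candidate rule consistent with all of
# them, then apply it to a new input. Real ARC tasks need far richer rules.

from typing import Callable, List, Optional, Tuple

def successor(letter: str) -> str:
    return chr((ord(letter) - ord("A") + 1) % 26 + ord("A"))

# A tiny, hand-written hypothesis space of candidate transformations.
CANDIDATE_RULES: List[Tuple[str, Callable[[str], str]]] = [
    ("change last letter to its successor", lambda s: s[:-1] + successor(s[-1])),
    ("change last letter to D",             lambda s: s[:-1] + "D"),
    ("reverse the string",                  lambda s: s[::-1]),
    ("leave the string unchanged",          lambda s: s),
]

def induce_rule(demos: List[Tuple[str, str]]) -> Optional[Tuple[str, Callable[[str], str]]]:
    """Return the first candidate rule consistent with every demonstration pair."""
    for name, rule in CANDIDATE_RULES:
        if all(rule(src) == tgt for src, tgt in demos):
            return name, rule
    return None

if __name__ == "__main__":
    demos = [("ABC", "ABD")]  # a single "few-shot" demonstration
    # Note: with only this one demo, both the "successor" rule and the
    # "change last letter to D" rule fit; ordering breaks the tie here,
    # which is exactly the ambiguity a single example leaves open.
    name, rule = induce_rule(demos)
    print(name)        # change last letter to its successor
    print(rule("IJK")) # IJL
```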

Abha: Right. So say, if a child sees a dog, right, of a certain kind, but then it sees a Dalmatian, which has, you know, different kinds of spots, they can still tell it's a dog and not a cow, even though they've seen a cow with, you know, those kinds of patterns on their bodies before, right? So when you do that in machines, what do you actually find out? Like what have you found out in your testing of the ARC?

Melanie: Yeah, we found out that machines are very bad at this kind of abstraction. We've tested both humans and machines on these problems. And humans tend to be quite good and are able to explain what the rule is they've learned and how they apply it to do a new task. And machines are not good at figuring out what the rule is or how to apply a rule to a new task. That's what we found so far. Why machines can't do this well, that's a big question. And what do they need to do it well? That's another big question that we're trying to figure out. And there's a lot of research on this. Obviously, people always love it when there's a competition and a prize. So there's a lot of people working on this. But I don't think the problem has been solved in any general way yet.

Abha: I want to ask about this other workshop you've done quite frequently, the understanding workshop, which actually came out of the barrier of meaning. If you could tell us a little bit about the idea of understanding that came out of it, after many days of discussing and listening to people from different fields. I thought that was fascinating. Could you maybe recount a little bit?

Melanie: Yeah, so, back many, many decades ago, the mathematician Gian-Carlo Rota wrote an essay about AI. This was long before I was even in AI. And he asked: When will AI crash the barrier of meaning? And by that he meant, like, you know, for us humans, language and visual data and auditory data mean something to us. We seem to be able to abstract meaning from these inputs. But his point was that machines don't have this kind of meaning. They don't live in the world, they don't experience the world, and therefore they don't get the kind of meaning that we get, and he thought of this as a barrier, their barrier to kind of general intelligence. So we had a couple of workshops called AI and the Barrier of Meaning, because I kind of like that phrase, about what it would take for machines to understand and what understand even means. And we heard from many different people in many different kinds of fields. And it turns out the word understand itself is another one of those suitcase words that I mentioned, words that can mean many different things to different people in different contexts. And so we're still trying to nail down exactly what it is we want to mean when we say, do machines understand? And I don't think we've come to any consensus yet, but it certainly seems that there are some features of understanding that are still missing in machines that people want machines to have: this idea of abstraction, this idea of being able to predict what's gonna happen in the world, this idea of being able to explain oneself, explain one's own thinking processes and so on. So understanding is still kind of this ill-defined word that we use to mean many different things, and we have to really understand, in some sense, what we mean by understanding.

Abha: Right. Another question that you asked one of our guests, you posed it to Tomer and Murray: some AI researchers are worried about what's known as the alignment problem. As in, you know, if we have an AI system that is told to, for example, fix global warming, what's to stop it from deciding that humans are the problem and the best solution is to kill us all? What's your take on this, and are you worried?

Melanie: Well, I find it... mysterious when people pose this kind of question, because often the way it's posed is: imagine you had a super intelligent AI system, one that's smarter than humans across the board, including in theory of mind and understanding other people and so on. Because it's super intelligent, you give it some intractable problem like fix climate change. And then it says, okay, humans are the source of the problem, therefore, let's kill all the humans. Well, this is a popular science fiction trope, right? We've seen this in different science fiction movies. But does it even make sense to say that something could be super intelligent across the board and yet try to solve a problem for humans in a way that it knows humans would not support? So, you know, there's so much packed into that. There are so many assumptions packed into that, that I really want to question a lot of the assumptions about whether intelligence could work that way. I mean, it's possible. We've certainly seen machines do unintended things. You know, we remember a while ago there was the stock market flash crash, which was due to allowing machines to do trading and those machines doing very unintended things, which created a stock market crash. But the assumption that you could do that with a super intelligent machine, that you would be willing to sort of hand over control of the world and say, go fix climate change, do whatever you want, here are all the resources of the world to do it, and then have it not have that kind of understanding, have it lack, in some sense, common sense, really seems strange to me. So every time I talk about this with people who worry about this, you know, they say things like, well, the machine doesn't care what we want. It's just going to try and maximize its reward. And its reward is, does it achieve its goal? And so it will try and create subgoals to achieve its reward. The subgoal might be kill all the humans, and it doesn't care, because it's going to try and achieve its reward in any way possible. Yeah, I mean, I just don't think that's how intelligence works or could work. And I guess it's all speculation right now. And the question is sort of, how likely is that to happen? And should we really put a whole lot of resources into preventing that kind of scenario? Or is that incredibly far-fetched, and should we put our resources into much more concrete and known risks of AI? And this was a debate going on, for instance, just in California recently, with a California Senate bill to regulate AI. And it was very much influenced by this notion of existential threat to humanity. And it was vetoed by the California governor, and one of the reasons was that the assumptions that it was based on, he felt, were too speculative.

Abha: What do you think are the real risks of the way we would function with AI, if AI keeps flourishing in the world at the pace it is?

Melanie: Well, we're already seeing all kinds of risks of AI happening right now. We have deepfakes in both visual and auditory modalities. We have voice cloning, AI voices that can convince you that they are actually a real person, or even a real person that you personally know. And this has led to scams and the spread of disinformation and all kinds of terrible consequences. And I think it's just gonna get worse. We've also seen that AI can sort of flood the internet with what people are calling slop, which is just AI-generated content that then things like Google's search engine pick up on and return as the answer to somebody's search, even though it was generated by AI and it's totally untrue. We see things like AI being used, for instance, to undress women in photographs. You can take a photograph of a woman, run it through a particular AI system, and she comes out looking naked, and people are using this online. So there are just lots and lots of current risks. You know, Daniel Dennett, the late philosopher, wrote an article very shortly before he died about the risks of artificial people: the idea of AI impersonating humans and convincing other humans that it is human, and then people kind of believing it and trusting it and giving it the kind of agency it doesn't have and shouldn't have. These are the real risks of AI.

Abha: Is there any way to sort of keep the quality of information at a certain standard, even with AI in the loop?

Melanie: I fear not. I really worry about this. The quality of information, for instance, online never has been great. It's always been hard to know who to trust. One of the whole purposes of Google in the first place was to have a search algorithm that used methods that allowed us to trust the results. This was the whole idea of what they called PageRank, trying to rank web pages in terms of how much we should trust their results, how good they were and how trustworthy they were. But that's really fallen apart through the commercialization of the internet, I think, and also the motivation for spreading disinformation. But I think that it's getting even worse with AI and I'm not sure how we can fix that, to be honest.
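(For readers unfamiliar with the PageRank idea Melanie mentions, here is a compact sketch of the textbook power-iteration version on a toy link graph. This is only the simplified classroom formulation, not Google's production ranking system; the graph and names are made up for illustration.)

```python
# Textbook power-iteration PageRank on a toy link graph, illustrating the core
# idea: a page's rank is built from the ranks of the pages linking to it.

def pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}                # start with uniform ranks
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                          # dangling page: spread rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))   # C ranks highest: most pages link to it
```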

Abha: Let's go back to the idea of intelligence. You know, a lot of people talk about the importance of embodiment. Our guests mentioned this too: being able to function as intelligent beings in the world because of the input we receive and the experiences we have. Why is it important to think of this as a factor?

Melanie: Well, the history of AI has been a history of disembodied intelligence. Even at the very beginning, the idea was that we could somehow sift off intelligence or rationality or any of these things and implement it in a computer. You could sort of upload your intelligence into a computer without having any body or any direct interaction with the world. So that has gone very far with today's large language models, which don't have direct interaction with the world except through conversing with people and are clearly disembodied. But some people, I guess including myself, think that there's only so far that that can go, that there is something unique about being able to actually do things in the world and interact with the real world in a way that we humans do and machines don't, that forms our intelligence in a very deep way. Now it's possible, with, you know, vast, almost infinite amounts of training data and compute power, that machines could come close to getting the knowledge that would approximate what humans do. And we're seeing that kind of happening now with these systems that are trained on everything online, everything digitized, and companies like Microsoft and Google are now building nuclear power plants to power their systems because there's not enough energy currently to power them. But that's a crazy, inefficient, and non-sustainable way to get to intelligence, in my opinion. And so I think that if you have to train your system on everything that's ever been written and get all the power in the world, and even, like Sam Altman says, have to get to nuclear fusion energy in order to get to sort of human-level intelligence, then you're just doing it wrong. You're not achieving intelligence in any way that's sustainable, and we humans are able to do so much with so little energy compared to these machines that we really should be thinking about a different way to approach intelligence and AI. And I think that's what some of our guests have said, that there are other ways to do it. For instance, Alison Gopnik is looking at how to train machines in the way that children learn. And this is sort of what Linda Smith and Mike Frank and others are looking at too: aren't there better ways to get systems to be able to exhibit intelligent behavior?

Abha: Right. So let's move on to AGI. There are a lot of mixed opinions out there about what it is and how it could come into being. What in your view is artificial general intelligence?

Melanie: I think the term has always been a bit vague. It was first coined to mean something like, you know, human-like intelligence. The idea is that in the very early days of AI, the pioneers of AI like Minsky and McCarthy, their goal was to have something like the AI we see in the movies, robots that can do everything that people do. But then AI became much more focused on particular specific tasks, like driving a car or translating between languages or diagnosing diseases. And you know, these systems could do a particular task, but they weren't the sort of general-purpose robots that we saw in the movies that we really wanted. And that vision is what AGI was meant to capture. So AGI was a movement in AI back in the early 2000s. It had conferences, there were papers and discussions and so on, but it was kind of a fringe movement. But it's now come back in a big way, because now AGI is at the center of the goals of all of the big AI companies. But they define it in different ways. For instance, I think DeepMind defines it as a system that could do all of what they call cognitive tasks as well as or better than humans. So that notion of a robot that can do everything has now been sort of narrowed into, oh well, we don't mean all that physical stuff, but only the cognitive stuff, as if those things could be separated. Again, the notion of disembodiment of intelligence. OpenAI defined it as a system that can do all economically valuable tasks. That's how they have it on their website, which is kind of a strange notion, because, you know, it's sort of unclear what is and what isn't an economically valuable task. You know, you might not be getting paid to raise your child, but raising a child seems to be something of economic value eventually. So I don't know, I think that it's ill-defined, that people kind of have an idea of what they want, but it's not clear what exactly the target is or how we'll know when we get there.

Abha: So do you think we will ever get to the point of AGI in that definition of the ability to do general things?

Melanie: In some sense, we already have machines that can do some degree of general things. You know, ChatGPT can write poetry, it can write essays, it can solve math problems, it can do lots of different things. It can't do them all perfectly, for sure, and it's not necessarily trustworthy or robust, but it certainly is in some sense more general than anything we've seen before. But I wouldn't call it AGI. I think the problem is, you know, AGI is one of those things that might get defined into existence, if you will. That is, the definition of it will keep changing until it's like, okay, we have AGI. Sort of like, you know, now we have self-driving cars. Of course, they can't drive everywhere and in every condition. And if they do run into problems, we have people who can sort of operate them remotely to get them out of trouble. Do we want to call that autonomous driving? To some extent, yeah. To some extent, no. But I think the same thing is happening with AI, that we're going to keep redefining what we mean by this. And finally, it'll be there just because we defined it into existence.

Abha: You know, going back to the Nobel Prize in physics, physics has a theoretical component that proposes different theories and, you know, hypotheses that groups of experimentalists then go and try to see if it's true or, you know, if they can try it out and see what happens. In AI so far, the tech industry seems to be hurtling ahead without any theoretical component to it necessarily. How do you think academia and industry could work together? 

Melanie: There are a lot of people trying to do what you say, trying to come up with a more theoretical understanding of AI and of intelligence more generally. You know, it's kind of difficult, because the term intelligence, as I said, isn't rigorously defined. I think academia and industry are working together, especially in the field of applying AI systems to scientific problems. But one problem is that it's going much more in the sort of big-data direction than in the theoretical direction. So we talked about AlphaFold, which basically won the chemistry prize. AlphaFold is a big-data system. It learns from huge amounts of data about proteins and the evolutionary histories of different proteins and the similarity between proteins. And nobody can look at AlphaFold's results and explain exactly how it got there, or, say, reduce it to some kind of theory about protein folding and why certain proteins fold the way they do. So it's kind of a black-box, big-data method of doing science. And I fear in a way that that's the way a lot of science is going to go, that some of the problems that we have in science are going to be solved, not because we have a deep theoretical understanding, but more because we throw lots and lots of data at these systems and they are able to do prediction, but aren't able to do explanation in any way that would be theoretically useful for human understanding. So maybe we'll lose that quality of science that is human understanding, in favor of just big-data prediction.

Abha: That sounds incredibly tragic.

Melanie: Well, maybe the next generation won't care so much. Like, if you could cure cancer, let's say, as we've been promised by people like Sam Altman that AI is going to do, do we need to understand why these things work? You know, some kind of magic medicine for curing cancer, do we need to understand why it works? Well, I don't know. Lots of medications, we don't totally understand how they work. So that may be something lost to AI: the human understanding of nature.

Abha: Right. Ted Chiang wrote an article, which I think you must have read, in the New Yorker, about the pursuit of art and what art is and how AI approaches it versus how we approach it. And even though art does not have the same kind of impact as curing cancer would, it does have a purpose in our human existence. And to have AI take that away... I mean, you must have seen the memes coming out about these things, you know, that one had expected artificial intelligence to sort of take care of our housework, but it's gone and taken away our creative work instead. How do you look at that? Does that mean that, as humans, we continue trying to pursue these artistic endeavors, trying to understand more deeply the things that we feel have meaning for our lives, or do we just give that over to AI?

Melanie: That sounds even more tragic to me than giving science over to AI. You know, Ted Chiang wrote that he didn't think AI generated art was really art because to make art, he said you need to be able to make choices and AI systems don't really make choices in the human-like sense. Well, that's gotten a lot of pushback, as you would imagine. You know, people don't buy it. I don't think that art will be taken over by AI, at least not any time soon, because a big part of art is the artist being able to judge what it is that they created and decide whether it's good or not, decide whether it sort of conveys the meaning that they want it to convey. And I don't think AI can do that. And I don't think it will be able to do that anytime soon, maybe in the very far future. It may be that AI will be something that artists use as a tool. I think that's very likely already true. Now, one big issue about AI art is that it works by having been trained on huge amounts of human-generated art. And unfortunately, the training data mostly came without permission from the artists. And the artists didn't get paid for having their artwork being used as training data. They're still not getting paid. And I think that's a moral issue that we really have to consider when thinking about using AI as a tool. To what extent are we willing to have it be trained on human generated content without the permission of the humans who generated the content and without them getting any benefit.

Abha: Right. And with your own book, I think, something was done by AI, right?

Melanie: Yeah, my book, which is called Artificial Intelligence: A Guide for Thinking Humans. Well, like many books, someone used an AI system to generate a book with the same title, which really was pretty terrible, but was for sale on Amazon.

Abha: So if you're looking to buy that book, make sure you get the correct one.

Melanie: Right. You know, I put in a message to Amazon saying, please take this off, it's, you know, plagiarized, whatever. And nothing happened until I got interviewed by a reporter from Wired magazine about it. And then Amazon deleted that other book. But, you know, this is a broad problem. We're getting more and more AI-generated books for sale that either have content related to an actual human-generated book, or whatever other content. When you buy a book, you don't know it's generated by AI. And often these books are quite bad. And so this is part of the so-called slop from AI that's just sort of littering all of our digital spaces.

Abha: Littering is a good word for this phenomenon, I think. I want to go into the idea of complexity science and AI research. You've also written a book on complexity science, and you've had a long history with the Santa Fe Institute. You've been with us for many years now in different capacities. Why do you think AI is a complex system? And what keeps you in the complexity realm with this research?

Melanie: Well, I think AI at many different levels and dimensions involves complex systems. One is just the systems themselves. Something like ChatGPT is a big neural network that is very complex, and we don't understand how it works. People claim that it has so-called emergent behavior, which is a buzzword in complex systems, something that I think complex systems people, who think about large networks and large systems with emergent behavior, might be able to offer some insight into. You know, the first notion of emergence came from physics. And now we know, you know, AI is part of physics, it's won a Nobel Prize. So I think, you know, these things are all tied up together. But also another dimension is sort of the interaction of AI and society. And clearly that's a socio-technological complex system of the kind that many people here at the SFI are interested in studying. So I think there are many ways in which AI relates to complex systems research. I think SFI in particular is a great place for people to take this slower approach to thinking about these complex problems, rather than the quick incremental improvements that we see in the machine learning literature without very much deep thinking about how it all works and what it all means. So that's what I'm hoping SFI will be able to contribute to this whole discussion. And my colleague David Krakauer here at the SFI and I wrote a paper about the notion of understanding in AI that I think has been kind of influential, because it really laid out the complexities of the topic. I do think that we people in complex systems have a lot to contribute to this field.

Abha: So Melanie, I mean, we've talked about, you know, AI as a complex adaptive system. We've talked about AGI, the possibility and where we stand. Where do you think the research will lead us, eventually say in another 10 years, having seen the progress we've made in the last 10 years?

Melanie: Yeah, I think that one of the big things I mentioned is that the current approach to AI is just not sustainable in terms of the amount of data it requires, the amount of energy it requires. And what we'll see in the next 10 years is ways to try and reduce the amount of data needed and reduce the amount of energy needed. And that I think will take some ideas from the way people learn or the way animals learn. And it may even require AI systems to get more embodied. So that might be an important direction that AI takes, I think, in the next decade so that we can reduce this ridiculous dependence on so much data, so much energy, and make it a lot more sustainable and ecologically friendly.

Abha: Great. Thank you so much, Melanie. This has been a wonderful season, and having you as a co-host was such a privilege. I've really enjoyed working with you, and I hope, you know, we continue to discuss this over time. Maybe we'll have another season once you and John have finished your workshops, which are going to happen over the next three years.

Melanie: Yeah, that would be great. It's been an incredible experience doing a podcast. I never thought I would do this, but it's been fantastic and I've loved working with you. So thanks, Abha.

Abha: Likewise. Thank you, Melanie.

Complexity is the official podcast of the Santa Fe Institute. This episode was produced by Katherine Moncure. Our theme song is by Mitch Mignano, and additional music from Blue Dot Sessions. I’m Abha, thanks for listening.