In this episode, we examine how the course of human history has shaped our scientific knowledge, why the physics community prioritizes some questions over others, and why progress in complex systems research is especially difficult. Academia continues to operate within set boundaries: students are taught certain concepts as fundamental, while others are skirted completely. However, the history of science demonstrates that such concepts aren’t always set in stone. It’s possible that blowing open the “shackles of reality,” such as by redefining the concept of life itself, and reprioritizing the problems that scientists want to tackle, might help scientists make more progress in the very difficult world of complexity research.
Guests: David Krakauer, Sean Carroll
Hosts: Abha Eli Phoboo & Chris Kempes
Producer: Katherine Moncure
Podcast theme music by: Mitch Mignano
Additional sound credits: Digifishmusic, Trundlefly, Greenvwbeetle, Miksmusic, Brewlabboffin
Follow us on:
Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky
More info:
SFI programs: Education
Complexity Explorer:
Books:
Talks:
Papers & Articles:
Ep 5: How human history shapes scientific inquiry
Sean Carroll: Now, nobody hears that for the first time and says, “Oh yeah, that makes perfect sense. I kind of like that theory.” This is crazy nonsense talk.
[THEME MUSIC]
Abha Eli Phoboo: From the Santa Fe Institute, this is Complexity.
Chris Kempes: I’m Chris Kempes.
Abha: And I’m Abha Eli Phoboo.
[THEME MUSIC FADES OUT]
Abha: Every time we interview a guest for this show, we like to get them warmed up with a couple of light, easy questions. A few weeks ago, I asked David Krakauer, the president of the Santa Fe Institute, a question that we thought was fun. He didn’t agree.
Abha: So what's the most interesting fact you've come across in the course of your research, like all kinds of research you've worked on?
David Krakauer: I refuse to answer that question. Facts are so uninteresting, right? I mean, facts are like dead animals.
Abha: To be clear, David’s not saying that facts are unimportant. They’re just not what gets him out of bed in the morning — to him, they’re not the be-all, end-all of scientific research.
Chris: And it’s not because people at SFI don’t like knowing stuff. We do. But I think what David finds more interesting isn’t what we know, but how we know it. How certain ways of looking at the world become dominant over others, especially because anyone who knows the history of science knows that there’s a lot that we think is cold, hard truth… until we find out it’s not. An obvious example is that up until the 16th century, many people believed that the earth was the center of the universe and that everything revolved around us.
Abha: Which is a totally human-centered way of seeing things, but it makes sense that anyone could come to that conclusion if they didn’t have all the information. We watch the sun and the moon and the stars move around the sky, we don’t feel the earth rotating beneath our feet — this version of reality seems intuitive.
Chris: But it’s completely wrong.
Abha: It is. And in today’s episode, we’ll look at how our basic understanding of scientific reality, and the trajectory of future inquiry is shaped by both human history and human decision-making.
Chris: And to start, I’ll let David explain.
David: One of our faculty, Doug Erwin, suggested a book to me called Disputed Inheritance and it's about the history of genetics. In the early 20th century, there was a raging debate about the nature of genetic causality. And two figures were at war, William Bateson at Cambridge and Raphael Weldon at Oxford. And Bateson championed Mendel's laws, discrete atoms of inheritance.
Chris: Bateson used Mendel’s ideas about genetics to argue that —
David: Somehow a small number of genes can explain personality, disposition, susceptibility to disease, race, and so forth. This is just patently wrong.
Abha: Weldon, on the other hand, argued that genetics does play a role, but it’s more complicated. He said that the environment also has a strong influence. It’s the classic nature versus nurture debate, with Bateson and Mendel on the nature side, and Weldon kind of on the nurture side, or somewhere in the middle. At the turn of the 20th century, Weldon was writing a book that outlined his side of the debate.
David: And Weldon favored a much more complex attitude towards genetics. But just before Weldon published his book, he died. And so Bateson and Mendel won.
Chris: So this is the version of reality we’re left with: Mendel and his pea plants, those Punnett squares that many of us learned in school science class, where genes shape outcomes in very direct, simple ways. And this way of thinking about genetics has shaped much of the research done in the last century.
David: So I'm sort of interested in that, that facts are just sort of accidents of history in some sense.
Abha: So then, how do we find the gaps in our knowledge? For David, this means rethinking some of the most basic questions, like: What are organisms? What’s an ecosystem? What does it mean to be alive?
David: We tend to project onto all of reality, the very small number of mechanisms that we're familiar with, and pretend that they're universal. And part of what you get to do when you're a theorist is play the counterfactual game, which is, what if that were not true? What then? And it's extraordinarily liberating, and it's a kind of mathematical empathy, because it allows you to see worlds that could exist that we've never encountered. And they generally, I think, genuinely extend our sympathies because you don't only have to be this way. There are lots of different ways of being.
Chris: In Part One, we’ll examine how researchers decide which problems to solve and what types of knowledge to pursue. And we’ll look at how some of these decisions can come down to the culture of specific communities and the history that’s shaped them.
Chris: Part One: Which questions are worth answering?
Chris: On this show, we’ve avoided looking at physics in the lifeless vacuum that it’s traditionally viewed in. Instead, we’ve been interested in how the fundamental building blocks of physics can influence more complex things like the biosphere, and this intersection is a huge area of uncharted territory. But Sean Carroll, theoretical physicist and fractal faculty at SFI, makes the case that, even in the most basic, traditional parts of the discipline, there are still some deep mysteries.
Sean: So aside from complexity, the other thing that I do research on these days is the foundations of quantum mechanics. And you explain the problems with quantum mechanics to any person. And they're like, this is really important. This should be a very high status sub-specialty within physics, right? And I have to explain that, you know, no, you get kicked out of physics, if you think about this.
Abha: Quantum mechanics, for those of us who aren’t physicists, is the study of extremely small objects that behave like waves or particles, depending on the situation.
Sean: Quantum mechanics is in an interesting position as a physical theory because it is the way nature works as far as we know. It’s our best idea of what the fundamental laws of physics are: how they run, the engine that gets them going.
Chris: But as a collective, the physics community has decided it’s not important. Or, perhaps, not worth the effort.
Sean: But it has a weird history, it came about sort of a patchwork, finally seemed to more or less coalesce in the late 1920s. But there were still some lingering questions, you know, in quantum mechanics, you say the electron, for example, is not a point-like particle with a position and a velocity. We describe it using something called a wave function that has a value all throughout space. But then when you observe the wave function, you never see it. You see a little point-like particle. And we decided to agree to teach our students that that's because the wave function is what's there when we're not looking at it. But when we measure it, we see a particle. Now, nobody hears that for the first time and says, “Oh yeah, that makes perfect sense. I kind of like that theory.” This is crazy nonsense talk. But we have not yet been able to do better. And we really do teach our students exactly this paradigm. And a certain subset of physicists, going back to Einstein, raise their hand and say, you know, that's not good enough. We want to dig a little bit more deeply, figure out what is really going on. And for whatever set of reasons, which you could talk about for a long time, the physics community decided to say no. Those questions about what we call the foundations of quantum mechanics, what's really going on beneath the hood, what is the actual stuff of reality and so forth. Those are not what we physicists are interested in.
Abha: We asked Sean what some of those reasons were.
Sean: And the most charitable reason why is because physicists just don't see how to make progress on this problem. Like even if they said, sure, it's interesting, what are you going to do? What is the experiment you can do? There's no guarantee in nature that how much we are interested in solving a problem tracks with how solvable the problem is.
Abha: This poses a big question for scientific inquiry: is it best to grab the low-hanging fruit first? Or is grabbing the low-hanging fruit, with no regard to its purpose, the wrong way to prioritize? Does it mean some important questions get ignored because they’re too difficult?
Sean: Physicists absolutely have favorite problems to think about and attach respect to and so forth. And others, they don't. And it's not because the problems are just rated by their interest level. Physicists really care about how much progress you can make in answering these questions.
Chris: And it turns out, trying to figure out what’s really going on with an electron is very difficult to do. And another thing that’s difficult? Complex systems.
Sean: And I think many of them feel the same way about complexity in some sense that, you know, okay, yes, there are complex systems. Those are hard to deal with. I'm going to go back to the systems I know how to deal with. And there's something to be said for that attitude. But I think as you and I know, if you… spend enough time thinking about it, you actually can make progress on these. I would even say you can make progress on the foundations of quantum mechanics. So sometimes it just requires a little persistence.
Chris: At SFI, we don’t go back to the systems we know how to deal with — if anything, we keep coming up with new ones. And as we discover new information, we shift the goalposts and move across traditional disciplines. We think it’s worth it, but it’s not easy or necessarily popular.
Sean: We don't have space in academia right now for young people, for people who are just starting out, just getting a PhD and so forth, to step outside of the disciplinary boundaries in interesting ways. I'm old enough that I can do it, right? You know, I'm settled and I can try to do it. And there are young people who try and some of them miraculously succeed, but man, we do not make it easy on them. Anyone out there who's a young person who wants to become a professor someday, if your advisors are being honest with you, they will say, try to play within the boundaries of some known academic discipline, because that's how we hire people. Physics departments hire physicists, biology departments hire biologists, and so forth. I don't think it has to be that way.
Chris: David, unsurprisingly, agrees.
David: The environment that supports that kind of inquiry, which is very, very fluid and very freewheeling, is super rare.
Abha: David’s talking again about SFI here. And there are downsides to this type of environment too.
Chris: Yeah so in that environment, what's the biggest weakness?
David: There are several. One is something we all experience, which is a lack of critical mass. So there are often times when you want to ask that question of an expert, because you’ve become interested in the problem, and they’re not there. And you just have to move. You just have to travel. You go to a university, go to another institute and pursue it, or, even better, bring them in. So lure them in with green chile or something. And that is what we do. I mean, we’re constantly luring people to the Institute with the promise of the beauty of Northern New Mexico. That’s one obvious one. Another one is a kind of corollary of that, which is rediscovering things that other people know and have known for a long time. And so we’ll often be at lunch and someone will say, I’ve just made this startling discovery. And someone will say, well, that’s actually 200 years old. And so if you don’t have critical mass, you also don’t have that constant constructive aspect of academic policing, which says, you know, you might want to read this paper. And so we suffer from a kind of naivety, which is enormously powerful, because it allows us to move into territories that others might either ignore or be fearful of exploring, but it comes at a cost, right? And the way that SFI, I think, has solved that problem to some extent is by bringing so many people through to sort of keep us honest, I guess, is what we’re saying.
Abha: Something else that makes it challenging to do complexity research is that it’s… complex. Really complex. To illustrate, let’s take a look at traditional physics first. There’s a trajectory where things start off simple, then in bigger numbers they get more complicated, but then as you get even bigger, they become simple again.
Sean: If you have one hydrogen atom, that's a pretty simple system, and you can solve it. If you have a molecule made of 1,000 atoms, that can be really hard to understand. But once you have Avogadro's number of atoms, it becomes simple again. Now it's a fluid.
Chris: Fluids, like individual atoms, behave in predictable ways. This simple-to-hard-to-simple pattern might apply to complex systems, but it’s not nearly as clear. Most of complexity science is in that middle category that’s hard to understand, and we’re trying to tease out some rules that might make things simple again. But we don’t know what we’ll come out with all the way at the top, if we find anything at all. It’s a lot like searching in the dark. Humans and other organisms don’t behave in fixed, predictable ways.
Sean: And you’re absolutely right that human beings are especially complicated. There could very well be some simplifications along the way, but there’s a huge difference between human beings and atoms: atoms are themselves simple and their interactions are themselves simple, they’re linear. Whereas human beings are themselves complex and their interactions are highly nonlinear and difficult to predict. So I don’t think there’s any guarantee or even a very strong reason to believe that once we get a billion or even Avogadro’s number of people together, that we will see simplifications like we do in fluid dynamics. There might be, I hope that there are, I’m all in favor of looking for it, but let’s realize why it worked in the first place and not just extrapolate it mindlessly to the more complex situations.
Chris: Sometimes, these higher level laws do emerge from complex systems, like the scaling laws, which we’ve talked about in previous episodes. And it’s an exciting day when these things are discovered. But we should point out that when we’re talking about something like the scaling laws or assembly theory, it’s easy to just say, “oh look, here’s a new law of physics,” because we’ve found a rule that distills something complicated into something simple. But the labels we use are still up for debate.
David: If you discover new emergent laws, it doesn’t mean they’re physics. Complex reality learns physics, and it learns to exploit physics. So when we build rocket ships, right, those weren’t present at the origin of life. We’ve learned how to use gravity to slingshot. And so there’s this really complicated relationship between the laws of physics and the laws of complex systems. And sometimes, because complex systems use physics so well, we think physics is more important than it really is.
Abha: A lot of what we’ve talked about in this season, like the scaling laws in organisms, trait driver theory in biodiversity, or the innovation pathway in cities, might exist, as we said, in a middle ground. This is where complex systems, like animals or human societies, are interacting with things like gravity. Gravity has existed for most of the history of the universe. Plants and animals, though, have evolved over time. So when we find laws in these systems, systems that exist because of evolution and time, do we call those laws physics?
David: And then this middle ground, which I find particularly fascinating, where physics is recruited and morphed and distorted and built upon such that you don't quite know whether you're looking at a physical law or a new evolved law. And that's partly what makes complexity so interesting.
Abha: I asked Sean where he thinks we might be headed in the next century with physics and complexity science. He, perhaps wisely, avoided answering directly.
Sean: You know earlier I mentioned that one atom is simple, a thousand atoms is hard, Avogadro's number of atoms is simple again. Same thing for predicting the future. You asked me about a hundred years. One year I can do. A hundred years is hard, but a quadrillion years I can do again, right? So a hundred years is hard because that's the interesting, unpredictable kind of time scale that we have to deal with in human history.
Chris: A quadrillion years is so much easier for Sean because physicists predict that, eventually, increasing entropy will lead to the heat-death of the universe. Which means everything that exists now will burn out, go cold, and stop. And that cold will last forever. Uplifting, right?
Abha: But we’re not going to focus on heat death for the moment. Instead, we’re going to stay here, in this interesting, unpredictable space of life that complexity researchers are working in. And in Part Two, we’ll rethink how to approach it. What happens if we completely upend our understanding of what it means to be alive? Can shifting our frame of mind help us discover more with these really difficult questions?
Abha: Part Two: Turning life upside down
Abha: Life exists in this messy, unpredictable part of the universe’s timeline. Like Sean said, humans — and other organisms — don’t behave like atoms. But as long as humans have been around, we’ve been trying to make sense of what life is anyway.
Chris: In the scientific community, no one seems to agree on how to define life. Even though people have built entire careers — or podcast seasons — examining it. In this season, we’ve attempted to distill some higher-level, simple characteristics of life, like the scaling laws, assembly theory, and trait driver theory. But even with these theories, there’s no consensus yet. We’re still wading through the messy, middle part of that simple-to-hard-to-simple trajectory that Sean outlined, trying to understand the relationship between physics and the origin of life. And there’s a big problem here.
David: And the problem with the origin of life is it looks like a singular event.
Chris: A couple of years ago, David and I published a paper titled “Multiple Paths to Multiple Life” in the Journal of Molecular Evolution, in which we tried to address just this problem.
David: So planetary science is a comparative science.
Chris: As in, there are multiple planets, more than one example of what researchers are observing.
David: And the origin of life has not been, because it’s an N of one. And so the way that we think about life is the way we’d define a planet if there were only the Earth in the solar system. So it’s cell-based, it has DNA and RNA. It has these following metabolic pathways and so on. And so life is defined based on a sparsity of evidence. And I think what we were trying to do, Chris, in that paper was say, how do we make the origin of life a comparative science?
Chris: In order to find more data — more origins of life — we might need to entirely rethink the definition of what a living system is.
David: And what we came up with is the following very obvious statement that a living system is a system, a mechanism that is able to integrate and store a past so as to be able to predict a future. But if you think about it in those terms, then you start to generalize your notion of individuality because maybe a culture has that property, right? And certainly you do during your lifespan and your lineage does too, species do. So we were just trying to break out of the shackles of reality as we know it, generalize it, and then we built a kind of mathematical formula that would sort of help us find it somewhere else.
Chris: And I think it’s really interesting that maybe the most radical thing we do in that paper is that you and I are then willing to say, maybe we already have other origins of life. You know, so you and I would be much more willing to say, for example, that certain computer programs or languages might count as an origin of life. They live on really weird substances, but they seem like maybe they’re obeying many of the things that we want life to have. There’s a long debate in this field about whether viruses are alive. And you and I had a really fun conversation where we sat down and talked about this, and we said, well, both of us are really happy with a photosynthetic bacterium being alive. It just uses sunlight, it makes all its own energy, it seems really impressive. And we said from that perspective, yeah, viruses are less alive, but maybe so are we. As humans, we have to eat all these other things, we rely on an entire microbial ecosystem inside of us. We don’t make energy directly from the sun. And maybe we’re more alive in some other dimension like intelligence, but for certain, for the...for the dimension that’s just dependence on the environment, us and viruses don’t look so different. We both require a lot of other organisms to make our living.
David: Yeah, and part of the problem here is the terms. I think that I am one of those completely crazy people and you might be too, who thinks that culture is alive or ideas are alive or certainly computer viruses are alive. They're very simple, but they're alive.
Chris: In this paper, David and I landed on three categories for the way we might try to define life. The first level of definitions just looks at literal material — is it made from DNA, RNA, and proteins, or something totally different? The second looks at constraints on life, like the scaling laws, or the convergence of certain characteristics across different species, such as the physics of eyes that see. We refer to these as L1, the organic material itself, and L2, the constraints that shape evolution. But the third level, L3, is even more abstract and has to do with how organisms optimize functions in any world, real or virtual. David and I also call L3 the Tron theories, based on the sci-fi movie from the ’80s.
David: And so one set of theories we call Tron theories, which is a fully virtualized living experience in silico, in simulation. And these are from the film Tron from the ’80s. And there, if you think about it, you have people, they’ve given up their organic chemistry and they live on inorganic, you know, condensed matter, you know, silicon-based transistors, but they’re still themselves. And that’s the goal in our world of artificial life: the effort to move L3, you know, the general principles, across to a different L1, and with different L2s, different constraints, right? So again, move the principle of life, the identities, but with a completely different underlying substrate.
Chris: Let’s think about what this might look like if we just focused on memory — the ability to store information and pass it on.
David: You know, I think everyone who listens to this show will be familiar with the idea of saying there’s a memory of culture in a library, right? Or there’s a memory in Wikipedia. There’s also a memory in a worm and a memory in a dog, and in each of those cases something’s staying the same, but a lot of things are also changing. So the principle is invariant, but the matter is variable.
Chris: The concept of intelligence adds another layer to what we think of as life.
David: Life is what we would call in science a lower bound. As long as you have a certain number of things in place, you’re alive. And you’re not more alive by having more of them. It’s a lower bound, right? So as long as you can propagate information from the past into the future, you’re alive. You’re done. But if you can propagate more information from the past into the future, you could be more intelligent. So it could be that life is the lower bound on any intelligent system. So there are ways of trying to connect these concepts in principled ways. It’s just that it’s very difficult to do. And there’s also the trickiest example, which is a system that we would all agree is intelligent that might not be alive. And that’s where things like large language models come in. I think it would be foolish to say that they’re not intelligent. They clearly are. They’re not human intelligent, but they’re intelligent. But they just certainly don’t replicate. They certainly don’t repair errors when they occur. They don’t seem to have much autonomy. They’re fed with huge amounts of data constantly. They’re tended by humans. So there are categories of phenomena in the natural world where I don’t know how to connect intelligence to life. But in some cases, I think I do. And so it’s a very open question.
Abha: To some people, especially those outside the SFI world, this might seem pretty outlandish. Many of us feel we know intuitively what it means to be alive, whether or not we’re researchers. But expanding the way we think about life isn’t just a fun thought exercise; it can actually give us more data to work with. And whether or not everyone agrees on an exact, precise definition may be less important than the ability to find new insights about life, or life-like systems.
David: I mean, once you realize that you can recognize life, you know, L1 matter, L2 constraint, L3 function, you can look for all three or any one of them. And these are defined at the level of principles. And in fact, if we’re brutally honest, and that’s what we try to say, right, you always start with principles. So as you go to another planet, the question you will ask is, what matter supports replication, right? So what L1 supports what L3? And so we always in some sense mobilize all three of these when we try to understand whether a system is living. It’s just that we’ve been extraordinarily dominated by an obsession with the evolutionary history of matter. And SFI’s approach has perhaps been a bit more principle-based. And that’s, by the way, one of the areas where we get into big arguments, because, you know, a lot of people don’t like that. It’s too mathematical.
Abha: One thing that everyone does seem to agree on, though, is that whatever you think life is, it won’t last forever. And that might be why it feels so meaningful to us.
Sean: Whatever you think the meaning of life is, it had better be compatible with the laws of physics and with science more generally. And those laws are telling us very strongly that our lives are finite, right? The average human lifespan is about three billion heartbeats. And I emphasize that's just an average. You're not gonna live longer by, you know, never getting your heartbeat up, okay? But three billion is a very evocative number because it's a large number. It's a lot of heartbeats, but it's not wildly large, right? It's not like federal deficit kind of large. And a heartbeat is a tangible unit of time. You can feel the heartbeats going by.
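A quick back-of-the-envelope check on that figure, using our own assumed numbers rather than Sean’s (a typical resting heart rate near 70 beats per minute over a lifespan of roughly 75 years):

$$70 \,\tfrac{\text{beats}}{\text{min}} \times 60 \,\tfrac{\text{min}}{\text{hr}} \times 24 \,\tfrac{\text{hr}}{\text{day}} \times 365 \,\tfrac{\text{days}}{\text{yr}} \times 75 \,\text{yr} \approx 2.8 \times 10^{9} \,\text{beats}$$

which indeed lands close to three billion.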
Chris: As life continues to evolve and change, the entropy in the universe increases too.
Sean: And we use increasing entropy. I hate it when people think about increasing entropy as the enemy, right? We're trying to fight increasing entropy. No, we need increasing entropy. If entropy is not increasing, that means you're at thermodynamic equilibrium and there's no life, there's no interesting stuff, there's no interaction, there's nothing complex or important. So entropy increasing is the fuel that makes us go and it won't last forever. It won't last forever on the individual level, it won't last forever even on the universal level. So our smallness, our fleetingness, our temporary nature here on this earth is something that I think is absolutely an important thing for human beings to stand up to if they want to make their lives meaningful.
Chris: The trajectory of the universe starts off simple, gets complex, and then eventually, far, far in the future, it will get simple again.
Sean: All the stars will die out, right? Stars rely on free energy. They rely on the universe starting with low entropy, so they have fuel and they can burn, and all that will eventually be burnt out about 10 to the 15 years from now. And most galaxies have large black holes in the middle of them, and all those stars will very gradually fall into those black holes. And Stephen Hawking taught us in the 1970s that even black holes don’t last forever. These black holes will give off radiation and themselves disappear. So 10 to the 100 years from now, a googol years from now, our universe, we have every reason to believe, will look completely desolate, cold, and empty, and that will last forever. So I think that helps us, you know, gain a bit of perspective when we’re worried about rebalancing our stock portfolios and so forth, that, you know, the universe is going to last forever. We happen to be in the fun part of it, right? The first 14 billion years of an infinite history where stars are still shining, life is still plopping around on planets and so forth. So that’s something to give thanks for.
Abha: As for David, he’s decided that pursuing the big, complicated questions, digging into the messy principles of life and large systems, is how he wants to use this fleeting window we have.
David: I think SFI is one of those places, and perhaps it shouldn’t be the dominant model, but it certainly should be one model, that supports this endless adaptive pivoting towards the new question as new observations are made, as opposed to a commitment to sort of understanding one thing only. One of the reasons I defend SFI is because I genuinely believe, despite the hardships of the work and how unsuccessful most of us are in our science, and I’m speaking largely about myself, that we’ve kind of created in microcosm the kind of community that I would like to see in more places. And it’s not that it’s super expensive, right? We cost much less to run than a large department with labs. So it’s not that, but it is dependent on a certain degree of economic privilege. So I think giving people access to ideas, making them comfortable, asking challenging questions that sort of rock their world, right, that undermine their beliefs and perhaps substitute in better ones or alternatives. I mean, that to me is a good life.
Chris: Running headfirst at the most difficult questions might not be for everyone, but it’s where we want to be. Nothing we do here can be tied up in a neat little bow, which is what makes it both exciting, and an ongoing struggle. We’re swimming in open questions.
Abha: And coming up on our final episode of this season, we’ll continue exploring the idea of multiple origins of life. And we’ll reflect on what we’ve learned so far.
Heather Graham: So this idea that there could be many of these sorts of interior oceans just inside our own planet, is there the possibility for life there?
Chris: That’s next time, on Complexity.
Complexity is the official podcast of the Santa Fe Institute. This episode was produced by Katherine Moncure, and our theme song is by Mitch Mignano. Additional music from Blue Dot Sessions, and the rest of our sound credits are in the show notes for this episode. I’m Chris, thanks for listening.