COMPLEXITY

Michael Garfield & David Krakauer on Evolution, Information, and Jurassic Park

Episode Notes

Episode Title and Show Notes:

106 - Michael Garfield & David Krakauer on Evolution, Information, and Jurassic Park

Welcome to Complexity, the official podcast of the Santa Fe Institute. I'm Michael Garfield, producer of this show and host for the last 105 episodes. Since October 2019, we have brought you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe. Today I step down and depart from SFI with one final appearance as the guest of this episode. Our guest host is SFI President David Krakauer; he and I will braid together nine other conversations from the archives in a retrospective masterclass on how this podcast traced the contours of complexity. We'll look back on episodes with David, Brian Arthur, Geoffrey West, Doyne Farmer, Deborah Gordon, Tyler Marghetis, Simon DeDeo, Caleb Scharf, and Alison Gopnik to thread some of the show's key themes through the windmills and white whales SFI pursues, and my own life's greatest persistent questions.

We'll ask about the implications of a world transformed by science and technology, by deeper understanding and prediction, and by the ever-present knock-on consequences. If you value our research and communication efforts, please subscribe, rate, and review us on Apple Podcasts or Spotify, and consider making a donation or finding other ways to engage with SFI at santafe.edu/engage. Thank you each and all for listening. It's been a pleasure and an honor to take you off-road with us over these last years.

Follow SFI on social media: Twitter • YouTube • Facebook • Instagram • LinkedIn

📚Reading & Videos:

The Lost World
by Michael Crichton

Jurassic Park
by Michael Crichton

The Evolution of Syntactic Communication
by Martin Nowak, Joshua Plotkin, and Vincent Jansen

InterPlanetary Festival 2018 + SFI Science Explainer Animations
by SFI

Complexity Economics
by SFI Press

Supertheories and Consilience from Alchemy to Electromagnetism
by Simon DeDeo (2019 SFI Seminar)

How To Live in The Future, Part 4: The Future is Exapted/Remixed
by Michael Garfield

Artists Misusing Technology
by NXT Museum

The Collapse of Artificial Intelligence
by Melanie Mitchell (2019 SFI Symposium Talk)

The Debate Over Understanding in AI's Large Language Models
by Melanie Mitchell & David Krakauer

Welcome To Jurassic Park
by Tink Zorg
(re: COVID-19 and the collapse of supply chains)

Smarter Parts Make Collective Systems Too Stubborn
by Jordana Cepelewicz at Quanta Magazine
(re: Albert Kao)

Coarse-graining as a downward causation mechanism
by Jessica Flack

Argument Making In The Wild
by Simon DeDeo
(SFI Seminar re: egregores)

The Collective Computation of Reality in Nature and Society
by Jessica Flack (SFI Community Lecture re: “hourglass emergence”)

Interaction-based evolution: how natural selection and nonrandom mutation work together
by Adi Livnat

In The Country of The Blind (Afterword: An Introduction to Cliology)
by Michael Flynn

An exchange of letters on the role of noise in collective intelligence
by Daniel Kahneman, David Krakauer, Olivier Sibony, Cass Sunstein, David Wolpert

Murray Gell-Mann - Information overload. A crude look at the whole (180/200)
(re: the challenges of funding truly innovative research)

The work of art in the age of biocybernetic reproduction
by W.J.T. Mitchell

Ken Wilber

Intelligence as a planetary scale process
by Adam Frank, David Grinspoon, and Sara Walker

Light & Magic (documentary series)
on Disney+

Palantir Analytics

The Lord of The Rings
by J.R.R. Tolkien

Present Shock: When Everything Happens Now
by Douglas Rushkoff

Michael Levin

Robustness of variance and autocorrelation as indicators of critical slowing down
by Vasilis Dakos, Egbert H van Nes, Paolo D’Odorico, Marten Scheffer

The Singularity in Our Past Light-Cone
by Cosma Shalizi

🎧Podcasts:


Complexity Podcast

001 - David Krakauer on The Landscape of 21st Century Science

009 - Mirta Galesic on Social Learning & Decision-making

012 - Matthew Jackson on Social and Economic Networks

013 - W. Brian Arthur (Part 1) on The History of Complexity Economics

016 - Andy Dobson on Disease Ecology & Conservation Strategy

036 - Geoffrey West on Scaling, Open-Ended Growth, and Accelerating Crisis/Innovation Cycles: Transcendence or Collapse?

056 - J. Doyne Farmer on The Complexity Economics Revolution

060 - Andrea Wulf on The Invention of Nature, Part 1: Humboldt’s Naturegemälde

065 - Deborah Gordon on Ant Colonies as Distributed Computers

067 - Tyler Marghetis on Breakdowns & Breakthroughs: Critical Transitions in Jazz & Mathematics

072 - Simon DeDeo on Good Explanations & Diseases of Epistemology

087 - Sara Walker on The Physics of Life and Planet-Scale Intelligence

090 - Caleb Scharf on The Ascent of Information: Life in The Human Dataome

092 - Miguel Fuentes & Marco Buongiorno Nardelli on Music, Emergence, and Society

099 - Alison Gopnik on Child Development, Elderhood, Caregiving, and A.I.


Future Fossils Podcast

194 - Simon Conway Morris on Convergent Evolution & Creative Mass Extinctions

190 - Lauren Seyler on Dark Microbiology & Right Relations in Science

165 - Kevin Kelly on Time, Memory, Change, and Vanishing Asia

125 - Stuart Kauffman on Physics, Life, and The Adjacent Possible


Podcast theme music by Mitch Mignano

Other music by Michael Garfield

Episode Transcription

Welcome to Complexity, the official podcast of the Santa Fe Institute. I'm Michael Garfield, producer of this show and host for the last 105 episodes. Since October 2019, we have brought you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe. Today I step down and depart from SFI with one final appearance as the guest of this episode. Our guest host is SFI President David Krakauer; he and I will braid together nine other conversations from the archives in a retrospective masterclass on how this podcast traced the contours of complexity. We'll look back on episodes with David, Brian Arthur, Geoffrey West, Doyne Farmer, Deborah Gordon, Tyler Marghetis, Simon DeDeo, Caleb Scharf, and Alison Gopnik to thread some of the show's key themes through the windmills and white whales SFI pursues, and my own life's greatest persistent questions.

We'll ask about the implications of a world transformed by science and technology, by deeper understanding and prediction, and by the ever-present knock-on consequences. If you value our research and communication efforts, please subscribe, rate, and review us on Apple Podcasts or Spotify, and consider making a donation or finding other ways to engage with SFI at santafe.edu/engage. Thank you each and all for listening. It's been a pleasure and an honor to take you off-road with us over these last years.

David Krakauer:

Okay, so this is a very special 106th episode of the Complexity Podcast. It is only episode 106, and it marks an important bifurcation in Michael Garfield's career: having helped build this amazing series, he is moving on to other projects that we are going to discuss today. But let me just introduce Michael a little bit and then turn it over to him. I have to say, when I first met Michael, I was reminded of the sixties, seventies, and eighties, more than the nineties actually: the Whole Earth Catalog from the sixties, Omni Magazine from the seventies, Cosmos from the eighties, Future Shock, you know, that whole interesting moment when computers had in some sense been domesticated and we were rethinking our relationship to the planet and to technology. And I think Michael channels that interesting nexus, in addition to his connections to the arts. So just a bit of background: Michael did his degree in ecology and evolution, and he has a longstanding interest in paleontology. He was actually a scientific illustrator. He did grad work in integral theory at JFKU in California. And I'm gonna just ask you, Michael: how did you first discover complexity and come into the SFI orbit?

Michael Garfield:

We're gonna pin back here and loop the whole thing full circle by the end of this, because really it was The Lost World, 1995, Michael Crichton's sequel to Jurassic Park, where Ian Malcolm is sitting at the old convent down on Canyon Road giving a talk on Red Queen arms races in evolution and catastrophe. That was my first encounter with complexity theory. That was my first encounter with SFI. I was 11. You know, you've got Jurassic Park in '91 before that, and my dad worked for Universal Studios, so I was there at the world premiere of the film in '93. But that was chaos theory, right? So that's the embryo of that other thing. But it was '95. And I don't know if you remember this, but there was a point, I think after the developmental bias workshop that was held here in 2018, where someone had made some point about recombinant novelty production and the emergence of new syntax and all of this stuff.

And I was like, David, do you know Martin Nowak's work? And you're like, I co-authored that paper with him on the evolution of syntactic communication and the error catastrophe. And I was like, oh, egg on face. Because what I had read was actually a follow-up paper that he had written with other co-authors, but that was the paper I read in my final semester of my baccalaureate program at the University of Kansas, in an animal communications seminar. And at that point I realized that the math in that paper applied to the origins of multicellularity, and it applied to the origins of complex societies, and that this was a paper about much more than language. And I wanted to pursue a graduate program in this general theory of evolution that included agents as observers and as decision makers and as communicators. And everywhere I went in 2005, every single advisor I spoke to said, you're outta luck, son, because nobody's gonna let a grad student take on a project this huge.

And I was totally demoralized. So I actually reached out to SFI in 2005 and said, what do I do? And they said, well, you're too old for the undergraduate program, we don't have a graduate program, and you don't have a Ph.D. Sorry, kid. So I spent the next 13 years doing other stuff and ended up with the Future Fossils podcast. Then somebody, my friend Violet Luxton, suggested I have Geoffrey West on Future Fossils. So I reached out to Jenna Marshall, the manager of communications at SFI, and said, can I have Geoff West on the show? And she said, actually, you know, Geoff is kind of sick of talking about scaling laws, which of course is total nonsense, right?

David Krakauer:

He never, ever tires of it.

Michael Garfield:

It’s undying, which is interesting. It's maybe the exception that proves the rule as far as his work is concerned. But she said, why don't you have David Krakauer on the show instead, because we're just launching this InterPlanetary project and the festival. And so I had you on, that was Future Fossils episode 76, and we talked about InterPlanetary. And then I came out and played music at that thing and did live scribing for the panels, and I had such a beautiful time in Santa Fe. And I was like, there's an open position here. I was living in Austin, Texas, and it felt like the whole ship was about to sink, like the whole thing was running off the rails. And I said, you know, I need to get outta here. I have found my people and I'm gonna apply for this job. And then I found out I was gonna have a kid at the same time, and the rest is history. After three grueling months of interviews I ended up here, and here we are.

David Krakauer:

Yeah, that's great. So the way we're going to structure this episode is to go back through some of the interviews that you conducted, and that's where we'll begin. And then we're gonna talk about some of your new projects and endeavors and how they grow out of a lot of those interviews and the kinds of ideas you had covered. I should say, just for you to know, that those early papers on the evolution of combinatorial languages grew out of an interest in the late work of Ludwig Wittgenstein. They were actually a very deliberate effort to mathematize the ideas he presented in the Philosophical Investigations on language games, which are fundamentally about the origin of coordinated meaning, and which have huge relevance now in terms of ChatGPT. So anyway, it's interesting, personally, to go back to that work. But let's start with episode one, which was an interview with me talking about what complexity actually is. I think we did cover issues of understanding and prediction and the differences between the two. So let's listen to a bit of that and then return to discuss those topics.

Michael Garfield:

We look back even into antiquity and it seems as though the real action going on evolutionarily is in some combination between all of these different approaches to reality: the temple religion versus the wilderness mystics, et cetera. And it feels as though there's a modern instantiation of this in the relationship between complex systems science and machine learning. I mean, I've heard you describe these as being like sibling disciplines.

David Krakauer:

Yeah, so there are two issues. To understand this properly, we have to understand what complexity is, and complexity is this domain of reality that straddles the very regular and the random. Science has been really good at those two limits, right? One limit is classical mechanics and the other limit is statistical mechanics, and both are very powerful theories: one, if you like, dealing with crystals and the other dealing with gases. The perfectly ordered and the very disordered. And in the middle is where it all gets very complicated and complex, and that's where we all live at SFI. Now, because science is historically not very good there, that has generated two possible approaches. One of them is complexity science and one of them is machine learning and AI, and they do different things. Machine learning and AI takes all that complexity, encodes it in big models like deep neural networks, and makes predictions. But those predictions are completely opaque and don't give anyone an understanding of how they were reached. On the other hand, you have complexity science, which tries to, in Murray's language, take a crude look at the whole. It tries to find the right scale by which you can do theory of these adaptive systems, if you like, in the center, with a view not to producing predictions, but to generating insight: explanation for why they exist.

And we're now entering, in the 21st century, a new kind of scientific schism where we're gonna live with two very different ways of engaging with reality: a machine-based, high-dimensional, very precise, predictive framework that is a black box; and ours, which is a more familiar framework from the history of science, if you like, one that is faithful to the complexity of the systems we study, which doesn't predict so well, but does allow us to understand the basic mechanisms generating the phenomena of interest. And that's where I think complexity lives. And it's gonna have to come to terms with living with machine learning and AI. It's almost as if we've returned, to use your biblical metaphors, to Cain and Abel, and those two brothers are gonna have to get on, as opposed to one killing the other.

Michael Garfield:

Yeah. So one of the things I love about this episode is how, in that clip, you talk about how machine learning versus complexity science is like a Cain and Abel story. It really is interesting how complexity is hard, in that I feel like the reason so many people rebuffed me when I said I wanted to study this in my graduate program was because it defied the way people wanted to do science in the 80s and 90s and the early 2000s, and it required a kind of making peace with uncertainty. You know, Brian Arthur, whom we'll talk about later, discusses this quite a bit in Complexity Economics, the SFI Press volume, and in my episodes with him: that there is something irreducibly uncertain, because all of the models are themselves evolutionary products made by agents. And one of the things I've loved about engaging with you and this community is that it became so clear to me over the last few years how important it was to tell people, as we move into an age of extraordinary turbulence and transformation, that there isn't gonna be one ring to rule them all.



There isn't gonna be one system that makes sense here. There was that Simon DeDeo talk, I think in January of 2019, on probability and consilience, and his topic modeling of the Royal Society papers dating back 350 years, and how science goes through these kinds of convulsions of arriving at a unified understanding, then undermining itself through learning too much, and then having to take a step back and adopt a more pluralistic approach. And you know, I've always appreciated the talk you gave to the postdocs last year, when you brought on the new cohort, where you said we have to act as if this unified theory exists, but we don't actually believe it; we're not at a point now where we're seeking out an equation to put on a t-shirt that explains the cosmos. And I think that's most people's misunderstanding about SFI: it's a holdover from a previous age, this myth of physicists as seeking a grand unified theory.

David Krakauer:

Yeah, it's an interesting point to address, actually, because even within physics there's always been this understanding that there's a fundamental theory (let's call that quantum field theory, which we might not have yet) and then there are pragmatic theories like mechanics. So the universe is fundamentally quantum, but we still use these continuum theories, which are classical, to do work. And I think one of the things we can learn from that, in relation to the current moment, is that there are going to be theories that help us understand how the world really is, and then there'll be models that we use in our everyday lives, like, for example, machine learning models. And I think that kind of pluralism between understanding and prediction has always been there. It's been there in physics, and it's just amplified in our domain. In fact, let me keep going, because one of the areas where SFI has had a maybe surprising impact is in economics. And this seems like the most pragmatic field you can possibly imagine, right? One of your early episodes was an interview with Brian Arthur, and Brian was very influenced by this paper by Lindgren, where he said, let's rethink economics beyond the neoclassical territory, in terms of biological processes. Let's just listen to a little bit of Brian and then reflect on what he had to say.

Michael Garfield:

Definitely. This is where it gets very interesting for me, because of this dimension of having to anticipate or learn from the behaviors of other agents in the system. And you know, you talk about how this means that the economy is a collective computation. There's a really interesting example that you bring up about Kristian Lindgren and evolutionary game theory, running iterated simulations of the Prisoner's Dilemma. I'd love to hear you go into that, because I think it's a really interesting place to leap from into technological systems as evolutionary, ecological systems.

Brian Arthur:

Lindgren was looking at what was thought of in those days as a prisoner’s dilemma tournament. You don’t need to know much about the prisoner’s dilemma; think of it as 100 different strategies for playing this game called Prisoner’s Dilemma, competing against each other for the first 100 moves, and then somebody wins more than another and so they’ve won that round. What Lindgren set up on his computer was the idea that the strategies that won consistently could reproduce themselves. So if one strategy was particularly good playing randomly against other strategies, including itself, it would reproduce and do pretty well, and other strategies that didn’t do well would drop out; there was a sort of trap door, and they’d be thrown out. The interesting thing about what Kristian Lindgren did, the thing that fascinated me, was that he didn’t just set this up as an automatic game of fixed automaton or algorithmic strategies in the computer. That had been done before. The kicker in his world was that the strategies every so often could mutate and get deeper. They could remember more moves back, so you might be playing with strategies that just remembered one move back, and suddenly a strategy could discover how to play two moves back, which would obviously be an advantage, because it would have more knowledge of what its opponent was doing, and so on. So suddenly these strategies began to mutate and deepen, and have deeper memories of how to play that ongoing game and remember more moves back. And some of those strategies, obviously, started to take over. What interested me in Lindgren’s model: he ran this thing for 60,000 tournaments, 60,000 goes. In 1991 it must have taken three days or two weeks on the computer.
But Lindgren ran this thing, and there were periods where there was clearly a best strategy and the other strategies started to disappear. But you’d see some of those other strategies staying in the game, because they were also necessary: if the superior strategy just played itself, it didn’t do that well. It needed some strategies almost as fodder. The wolves need some sheep to eat; otherwise there won’t be wolves. So there’s a balance there. And at other times deeper strategies would be discovered. One or two of those might take off for a while and beat everything in sight. And there were other stretches, a gigantic free-for-all, where for maybe a lengthy time of a thousand or 50,000 tournaments a lot of random strategies were being created but nothing dominated; it was quite chaotic. And then an even deeper strategy would take over. I remember looking at Lindgren’s results, and this is what I’d call paleontological economics, meaning that there were long periods, perhaps like before the K–T boundary, where you’d get all the dinosaurs, and then suddenly something would happen: a smarter strategy would be discovered, the whole game would change, and you’d be in a new eon that would last for a while. But there was no equilibrium in this sort of game. Things just kept getting discovered and discovered. In fact, I began to realize that standard economics essentially views the economy as a machine with all parts in balance. Complexity economics views the economy as an ecology, with strategies or forecasts or actions competing to see which will dominate, and maybe new things being discovered all the time.
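The mechanics Brian describes, a round-robin of iterated Prisoner's Dilemma strategies whose winners reproduce and whose memories can deepen by mutation, can be sketched in a few lines. This is not Lindgren's actual code; the strategy names and payoff values are the textbook ones, and the mutation step is only indicated in comments:

```python
import itertools

# Standard Prisoner's Dilemma payoffs: (my move, their move) -> my score,
# with temptation 5 > reward 3 > punishment 1 > sucker's payoff 0.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play_match(strat_a, strat_b, rounds=100):
    """Iterated PD; each strategy maps the opponent's history to a move."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Three classic strategies. Lindgren's agents were lookup tables whose
# memory length (how many past moves they condition on) could grow by
# mutation; that "deepening" step is what the sketch omits.
def all_c(opp):       return 'C'                       # memory-0: always cooperate
def all_d(opp):       return 'D'                       # memory-0: always defect
def tit_for_tat(opp): return opp[-1] if opp else 'C'   # memory-1: copy last move

def tournament(population, rounds=100):
    """Round-robin totals; in Lindgren's world, high scorers would then
    reproduce and low scorers would fall through the trap door."""
    scores = [0] * len(population)
    for i, j in itertools.combinations(range(len(population)), 2):
        si, sj = play_match(population[i], population[j], rounds)
        scores[i] += si
        scores[j] += sj
    return scores
```

Running `tournament([all_c, all_d, tit_for_tat])` returns `[300, 604, 399]`: the defector wins this one round-robin by exploiting the unconditional cooperator, which is exactly why, over many generations, the "fodder" strategies thin out and retaliatory, deeper-memory strategies gain ground.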

Michael Garfield:

Yeah. So we're back to that Red Queen arms race kind of situation, right? I love this because it gets back to a direct address of the question I had as an undergraduate, which is: where is all of this complexity coming from? The complexity-out-of-simplicity thing, which is such a holy grail. And you look at the increasing speciosity of the biosphere, right, and the evolution of intelligence. It's a Copernican revolution, where we're no longer at the top of a divinely ordained taxonomy of being, but at some point along an ongoing, open-ended process by which intelligence constantly bootstraps itself because of these competitive dynamics between agents that are trying to infer one another's behaviors. You know? So that's why I really loved that piece.

David Krakauer:

I mean, yeah, one of the things that I know is an interest of yours, and certainly of mine and of many people at SFI, and it's not always easy to express this, is these two very different perspectives on the evolution of a trait. The way we typically talk about this is in terms of the invention, the origin of a trait, and then its subsequent success, its fixation or innovation. And I think one of the things that Brian and many of us have had a long-term interest in is the fact that the models, the mathematics, that solve the second problem, the fixation of a gene, are not the same as the models required to explain its origin, right? That was, in your field of paleontology, the whole Goldschmidtian hopeful monster versus the more continuous Darwinian worldview. And it's sort of interesting. I know this is an interest of yours and something you want to continue pursuing, but where does it come from, your particular interest in invention as opposed to fixation?

Michael Garfield:

I don't know, except to say that, again, it does kind of all run back downhill to your work with Martin Nowak. Or actually, let me talk about Gould and Vrba, right? Exaptation, for me, is one of the most core concepts. And I still to this day don't really understand why any evolutionary biologist (somebody can check me on this) would see those two things as non-synonymous. So exaptation is something that emerged in one context finding new function in a different context, right? The classic example is the fish limb: something that first confers a fitness advantage on organisms in a kind of turbulent intertidal environment, and only later does it turn out that these things get washed up on land and can move themselves around, and you get the first tetrapods. Or feathers, which start out as an insulating layer and only later get repurposed for flight.

So for me: I was just on a panel with the NXT Museum in Amsterdam, on Twitter, and we were talking about the creative misuse of technology. And my point in that panel, and this is again boiling back to Brian Arthur's work, was that the inventor of any instrument, any technology, is incapable of imagining all of the scenarios. Melanie Mitchell talks about this with AI and edge cases, and why it's such a challenge for autonomous driving: there's no way you're gonna be able to pre-specify all possible outcomes. And this speaks to when we had Miguel Fuentes on the show, and we were talking about how he and Murray Gell-Mann wrote about emergence as an epistemic thing, that there's something about the mistake, right? The way that randomness sits on the horizon of our understanding: it's very difficult to say whether these things actually have a kind of being of their own, or whether it's just our failure to grasp them, to cognize them.

David Krakauer:

No, I mean, a good example: much of this has its roots in the limitations of mathematics, because we all understand there can be truly novel things. Someone, or a group of people, invented the symphonic form, or what have you; chess presumably didn't exist in the Cambrian, right? So things happen that seem incredibly new. But when you write down a mathematical system of equations where you are studying novelty, you have to specify the dimensions in advance, and none of us quite knows how to do this, right? So there's this methodological bottleneck that makes invention a hard problem for theory, even though it feels kind of self-evident for us in everyday life. And actually, I want to take this opportunity to jump to the next interview, which deals with this accelerating need for invention in the modern world: that's the one with Geoffrey about scaling. So let's just listen to Geoffrey on the treadmill, the Red Queen dynamics of successive inventions.

Michael Garfield:

We're finally at the place in your book where you raise the issue of whether we can come up with a principled way of understanding a complexity science for sustainability. You talk about how a typical human being now lives significantly longer than the time between major innovations. So there's one thing, which is energy capture: do we actually have the resources on this planet to sustain this growth? And then the other question links to balkanization and polarization in these social networks as they scale beyond a sustainable threshold, and this issue of the crisis of growing so fast that we collapse seems to be partly informational and partly metabolic. I'd love for you to unpack the finite-time singularity in the growth of cities and explain why you think this in particular is up against the assumption of infinite growth and the paradigm that we can just innovate our way out of everything.

Geoffrey West:

Okay, good. Very good. Yeah, so you're right. In the last chapter of my book I got into this, and I took it to, quote, its logical conclusion, and it led to some very disturbing questions that I sort of left up in the air. In biology we have this sublinear scaling, this economy of scale: the bigger you are, the less you need per capita, per cell. And that leads to finite growth. That is, organisms typically stop growing after rapid growth in childhood, and they remain roughly stable until they die. That is in contrast to cities in particular, and also to economies, which have this superlinear scaling: the bigger you are, the more per capita, more ideas, more innovation, more wealth, and so on. And that gives rise to open-ended growth when you put it into the same equations, which is great, because you have a lovely kind of consistent package.

You have these networks with positive feedback in them, social networks, and that positive feedback, building on each other, gives rise to superlinear behavior. So the more we get together, the more we interact, the more ideas, and the more we get out of that in terms of socioeconomic activity per capita. And that leads to open-ended growth, all of which we see both qualitatively and quantitatively in the data. So it's very nice. However, it has disturbing consequences. One is that life gets faster the bigger you are, and you feel it viscerally in terms of social interactions. So you already have that problem; I'll come back to that, because it can have dire consequences. As the system grows bigger and bigger, it reaches an infinite size in a finite time, which is ridiculous. You know, that would imply that in 10, 20, 50, a hundred, even 500 years, the economy will be infinite.

The number of AIDS cases would be infinite, wages would be infinite, which is obviously crazy. And indeed that is crazy, and the equations sort of tell you what happens: before you get there, the system stagnates and collapses. Well, we've seen arguments like that before, in the famous Malthusian argument, but this is different, because one of the things it says here is: yes, you can avoid that collapse by doing what the critics of Malthus said, namely, that he didn't take into account that we're gonna innovate. And we do innovate; we make major paradigm shifts that effectively start the clock all over again. We basically reinvent ourselves, the Industrial Revolution being perhaps the major one. We discover oil, we invent the automobile, we invent the telephone, we invent IT, we invent the computer. All these things are effectively paradigm shifts, and they sort of reset the clock.
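The "infinite size in finite time" claim follows from the growth equation Geoffrey alludes to. A minimal sketch, assuming the standard form dN/dt = a·N^β with superlinear exponent β > 1; the parameter values below are purely illustrative, not fitted to any real city:

```python
def singularity_time(n0, a, beta):
    """Finite time t_c at which N(t) diverges, for beta > 1.

    Separating dN/dt = a * N**beta and integrating gives
        N(t) = (n0**(1-beta) - a*(beta-1)*t)**(1/(1-beta)),
    which blows up when the bracketed term reaches zero.
    """
    return n0 ** (1 - beta) / (a * (beta - 1))

def n_of_t(t, n0, a, beta):
    """Closed-form solution N(t), valid before the singularity (t < t_c)."""
    return (n0 ** (1 - beta) - a * (beta - 1) * t) ** (1 / (1 - beta))

# With beta <= 1 the same equation gives merely exponential or saturating
# growth, finite for all time; only beta > 1 hits a wall at t_c.
# An "innovation" in West's sense resets the effective parameters so a
# fresh growth cycle begins, with a new singularity looming sooner,
# which is why the resets have to come faster and faster.
```

For example, with n0 = 1, a = 1, β = 2, the singularity sits at t_c = 1, and N(t) has already grown a hundredfold by t = 0.99; nothing in the dynamics slows the approach, which is the mathematical core of the treadmill argument.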

This reinvention is critical, and it happens again and again. But according to this theoretical framework, that's the way you avoid collapse. You can almost state it as a theorem: if you want open-ended growth indefinitely, you have to reinvent yourself systematically, in some periodic fashion, so that you effectively set the clock back to zero and start over again. However, built into that mathematics is another terrible consequence. Yes, you can do that, but you have to do it faster and faster. It's like being on a treadmill that's accelerating, and at some stage you've got to jump off the treadmill onto another treadmill that's accelerating even faster, and you have to keep jumping faster and faster, and so on. And of course that leads to a socioeconomic heart attack, is the idea. The image that I presented in the book was a Sisyphean image.
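West's treadmill argument can be sketched numerically. Assuming a toy growth law dN/dt = a·N^β with β > 1 (the exponent, prefactor, and sizes below are illustrative choices, not figures from his work), the time to the finite-time singularity is N₀^(1−β)/(a(β−1)), and it shrinks as each innovation restarts growth from a larger base:

```python
# Hedged sketch of the finite-time singularity under superlinear scaling.
# Growth law: dN/dt = a * N**beta with beta > 1; blow-up time from size n0
# is t_c = n0**(1 - beta) / (a * (beta - 1)). All parameters are illustrative.

def blowup_time(n0: float, a: float = 1e-3, beta: float = 1.15) -> float:
    """Time until N(t) diverges, starting from n0, for dN/dt = a*N**beta."""
    return n0 ** (1 - beta) / (a * (beta - 1))

# Each "paradigm shift" resets the growth law, but from a larger base size,
# so the successive intervals between required resets keep shrinking.
n, growth_factor = 1e6, 10.0   # hypothetical starting size and per-cycle growth
intervals = []
for _ in range(5):
    intervals.append(blowup_time(n))
    n *= growth_factor          # the next cycle starts from a bigger base
print(intervals)                # each interval is shorter than the last
```

The point of the sketch is only the qualitative behavior: because β > 1, the blow-up time is a decreasing function of the starting size, so resets must arrive faster and faster, which is the accelerating treadmill.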

Well, you remember who Sisyphus was. He was the king who thought he was infallible, king of the universe, and screw everybody else. And the gods punished him by condemning him to roll this big boulder up the mountain to the top, whereupon it would roll down again and he would have to go down and roll it back up again, for eternity. We are like that, but we're much worse off, because Sisyphus was fortunate: the rock and the mountain remained the same every time. In our case, unfortunately, every time you get to the top and it rolls down, the boulder gets bigger and the mountain higher. Yes, you can avoid collapse by innovation and shifting paradigms, but a paradigm shift or a major innovation is only a stopgap measure; it is not a permanent solution. You can make the reductio ad absurdum argument that eventually we would have to invent something analogous to the internet every eight months.

The question is, is that avoidable, or are we condemned to complete collapse eventually? Because we're getting close. So I got very despondent after this, because it was hard for me to see a way out, until I realized that I was confounding something, namely confounding the idea of innovation with technology. When people hear the word innovation, I think most think of a new technology, some new widget or gadget. But of course innovation is, fortunately, much broader than that. There have been innovations that aren't technological; they might be cultural. You could argue that Marxism and communism was a major innovation for part of the world, and it still plays a crucial role on the planet, actually. But it was cultural.

Michael Garfield:

So this is the consequence of all of this, right, which is that the thing becomes more and more complex. And you wrote that paper recently with Melanie Mitchell on understanding in AI, and I've always appreciated your figuring of modernity as an era defined by the way that cultural learning increasingly outstrips individual learning. We talked about this with Andrea Wulf: Alexander von Humboldt was kind of the last renaissance man, the last person capable of holding the whole state of science in one mind at a time. And even by the end of his life he was breaking open into a network of international collaborations with younger explorers. So now here we are, and we've crossed the event horizon, as far as I'm concerned. Many, many people have written about how the challenge we face now is that all of our efforts to control the externalities generated by the technologies that we used to control the externalities generated by earlier technologies are only amplifying the problem.

David Krakauer:

Yeah, let me just make that clear, in relation to Geoffrey's observation and his co-authors'. If you have superlinear scaling and you put that scaling into a standard model of growth, you get growth to infinity, a so-called finite-time singularity. And those singularities are avoided through some kind of technological invention, Geoffrey's point being that the rate at which you have to invent increases in time. That's a sort of alarming, future-shock-like observation, which plugs into your point about the positive feedback you see in the technological world. But this is a natural segue to your interview with Doyne Farmer, because one of the obsessions of ecology, though not so much of economics, has been collapse, right, stability and complexity. And economists, by virtue of the models they used, couldn't get collapse, right? This notion of a finite-time singularity is a kind of oxymoron within their mathematics. So let's listen to Doyne talk a little bit about why economics should be viewed as an explicitly ecological dynamic.

Michael Garfield:

That brings us directly to this paper that you co-authored with Scholl and Calinescu on how market ecology explains market malfunction. You're doing a really interesting thing in this paper, using ecological models, like, if people know the Lotka–Volterra equations, you know, predator-prey cycles, you're applying something like that to a population of noise traders, value investors, and trend followers. And I'd love to hear you talk a little bit about how you're rigorously extending this analogy into this space, what you found in the relationships between these three sort of species of market strategies, and what it means for macroeconomics.

Doyne Farmer:

Sure. The idea is that trading strategies are like species: they have ecologies, and they may interact with each other like lions and zebras and grass. In this case the food source, ultimately, is inefficiencies in the market, ways in which things are not perfect that allow traders to make money, and who's present in the market is going to influence what those inefficiencies are and what the available niches are. The goal is to understand why markets malfunction. Why is it that prices often seem to deviate from fundamental values? Why is it that markets often get very volatile for reasons that have nothing to do with underlying fundamentals and nothing to do with outside news? The market just gets volatile because the market is volatile, because of its own internal dynamics. And so we show how, in a world where you have three strategies, all of them boundedly rational, none of them with access to complete information, and none of them with a perfect model, you let the market evolve by having the strategies that accumulate profits accumulate wealth, which then means they have more influence on how prices get set.

Because more wealth means you're making bigger trades, which means you have more influence on how the price moves every day. So we just put some strategies in, let things go, see what happens, and use some ideas from ecology to try to understand it. We were able to do things like compute what's called the community matrix, which tells you how the species interact. Suppose you have species A and B, or in this case trading strategies A and B. If the wealth of trading strategy B goes up and the returns, the profits, to trading strategy A go up, and vice versa, that's what's called mutualism. If the wealth of trading strategy B goes up and the returns of strategy A go down, and vice versa, that's what's called competition. And if it's asymmetric, so if B's wealth goes up, A's profits go up, but if A's wealth goes up, B's profits go down, that's called predator-prey, where A is preying on B. If you go back to the analogy of, say, lions and zebras, that's the way it works for lions and zebras.
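Farmer's sign-based classification can be sketched directly. Assuming a hypothetical community matrix entry c_ab giving the effect of B's wealth on A's returns, and c_ba the reverse (the function name and the numbers below are illustrative, not from the paper):

```python
# Hedged sketch of the pairwise interaction classification Farmer describes:
# c_ab is the effect of B's wealth on A's returns, c_ba the effect of A's
# wealth on B's returns. Only the signs matter for the ecological label.

def classify(c_ab: float, c_ba: float) -> str:
    """Label a pairwise interaction from the signs of the two cross-effects."""
    if c_ab > 0 and c_ba > 0:
        return "mutualism"          # each strategy's wealth helps the other
    if c_ab < 0 and c_ba < 0:
        return "competition"        # each strategy's wealth hurts the other
    if c_ab > 0 and c_ba < 0:
        return "predation (A preys on B)"   # B's wealth feeds A; A hurts B
    if c_ab < 0 and c_ba > 0:
        return "predation (B preys on A)"
    return "neutral"                # one or both effects are zero

print(classify(+0.2, +0.1))   # mutualism
print(classify(-0.3, -0.1))   # competition
print(classify(+0.4, -0.2))   # predation (A preys on B)
```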

Now, we found several interesting things about the model ecology that we studied. One is that when we reached the equilibrium where the returns of all the strategies were the same, we actually saw mutualistic interactions between all the strategies: if a strategy's own wealth went up, its returns would go down, but if anybody else's wealth went up, its returns would go up. That kind of surprised us. But then we realized that's actually maybe what you should expect in an equilibrium, at a place where the market's efficient in some sense, where all the returns of the strategies are the same. If you deviate from that, then one of the strategies starts to have an advantage again, and the way you deviate is that you want the others to have more wealth while you have less wealth.

It's like, you know, the foxes do well when the rabbit population's high. So that was one of our insights. Another insight is that we were able to compute what's called trophic levels for the strategies, which kind of tells you who eats whom and where they lie, as with the lion, zebra, and grass ecology: if you assume that zebras eat only grass and lions eat only zebras, then the trophic levels are one for grass, two for zebras, and three for lions, because your trophic level is by definition one higher than that of the thing you eat. In the real world, where there are more complicated diets, trophic levels can be more complicated, but you can still compute them. We were able to compute these in a financial ecology by looking at what happened when we knocked one of the trading strategies out and seeing how that changed the profits. And we saw that in the typical case we had noise traders at a trophic level close to one, value investors at a trophic level close to two, and trend followers at a trophic level close to three. But that could change depending on the wealth of the strategies, and in some cases the trophic levels even cease to be defined. The key thing we found is that if we want to understand why the market's malfunctioning, why volatility is high, why prices are mispriced, why they're straying from fundamental values, then the wealth of each of the strategies in the ecology determines how mispriced the market is, and the system's own spontaneous dynamics can cause substantial excursions away from equilibrium and substantial market malfunctions.
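The grass-zebra-lion arithmetic Farmer gives can be written as the standard linear system for trophic levels: level_i = 1 + Σ_j diet[i][j]·level_j, where diet[i][j] is the fraction of i's diet coming from j. This is a textbook ecology formula, not code from the paper; the solver below is a minimal sketch for small, well-behaved diet matrices.

```python
# Hedged sketch: trophic levels satisfy L_i = 1 + sum_j diet[i][j] * L_j,
# i.e. the linear system (I - D) L = 1. Solved here by Gauss-Jordan
# elimination without pivoting (fine for small matrices with diagonal
# dominance, as in this toy example).

def trophic_levels(diet):
    """Solve (I - D) L = 1 for the trophic-level vector L."""
    n = len(diet)
    # Augmented matrix [I - D | 1].
    aug = [[(1.0 if i == j else 0.0) - diet[i][j] for j in range(n)] + [1.0]
           for i in range(n)]
    for col in range(n):
        pivot = aug[col][col]
        aug[col] = [x / pivot for x in aug[col]]
        for row in range(n):
            factor = aug[row][col]
            if row != col and factor:
                aug[row] = [x - factor * y for x, y in zip(aug[row], aug[col])]
    return [aug[i][n] for i in range(n)]

# Grass eats nothing; zebras eat only grass; lions eat only zebras.
diet = [[0.0, 0.0, 0.0],
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0]]
levels = trophic_levels(diet)
print(levels)   # -> [1.0, 2.0, 3.0]
```

With mixed diets the fractions in each row just sum over multiple prey, and the same solve recovers the fractional trophic levels Farmer mentions.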

Michael Garfield:

Yeah, it's funny, because I've spent a lot of time consorting with people in the cryptocurrency community, right? And one of the things people seem really keen on in that scene is limiting the expense of transactions. Or look at the way Robinhood opened up the ability for people to play in the stock market. When I talked with Doyne, he was very concerned about this, right? Because what happens then is something akin to what you see with the evolution of the printing press or of the internet, where the barrier to entry drops close enough to zero that suddenly everyone can feed into the system. And at that point all of the vertical structures that were keeping things in order are imperiled by herd-following behaviors. So you look at GameStop and these kinds of things, and you get these runaway outcomes where no one is actually in charge, and so you have a flash of lightning and everyone goes on a stampede. But that's one piece of it.

And the other piece that I liked in the conversation with Doyne is akin to a conversation I had with Kevin Kelly on Future Fossils, where he talks about protopia, right? That we're never going to get to utopia, but we're going to make these incremental improvements. And he talks about the demographic transition into urbanism, and he says everyone moving from the rural areas of China into these megacities is making decisions on the basis of their own increased financial opportunity. So this gets back to Geoffrey's work as well, right? The problem with everyone suddenly being capable of abandoning the rich tapestry of local agrarian culture and the encoded wisdom in those cultures, and allowing those things to decay as they gain the ability to participate in a global economy, is that suddenly everything becomes hyper-connected and very, very brittle.

And we saw this with COVID-19. Actually, I'll have to dig it up, but I read a really interesting blog post recently called Welcome to Jurassic Park, comparing John Hammond's decision to automate all of the processes of Jurassic Park with the way that just-in-time supply chains caused cascading failures. Or when I had Matthew Jackson on the show, he was talking about hyper-connected bank networks. We see these cases where the convenience afforded us by patching everything together creates a vulnerability. So there's something going on in the zeitgeist right now where, even outside the field of complexity science, because of COVID, because of bank failures, because of all of this stuff, people are starting to understand that there are reasons, and this is something we've brought up time and time again on the show, why you want isolated pockets. You want reservoirs; you want to keep stuff off the network; you want air gaps in things. And consequently you also need some kind of verticality. There's ultimately an argument for the ivory tower, and for there being standards in journalism and that kind of thing.

David Krakauer:

That's interesting. Okay, so this takes us a little bit to the next episode, because we've talked about ecology and instabilities. Ecology focuses largely on energy and exchanges of matter. But of course the other dimension in the two examples you gave, blockchain and GameStop, is computation, the sharing of information. And many people in our community have pointed out, most notably Melanie Moses and Deborah Gordon, that there are highly connected, borg-like systems in the natural world, namely the eusocial insects, bees, ants, and wasps, that somehow have squared the circle. They have that kind of characteristic and yet remain stable. So let's listen to this excerpt from Deborah Gordon on social insect computation.

Michael Garfield:

So it makes sense to anchor this, I think, in a bit of the history of research on ants, on the diversity of behaviors that we see ants engaged in, and on the problematic or inaccurate analogies, the metaphorical language, that we bring into this. You mention Adam Smith coining the term "division of labor," and that seems to set off a whole cascade of assumptions in entomology when people look at ants, assumptions that you critique very articulately here, I think, right? This idea, you argue, is not appropriate for ants, even though it certainly seems to be the way ants are commonly understood and have been understood by researchers for a while. Could you talk a little bit about the history of the research on this and how you attack this particular misunderstanding?

Deborah Gordon:

When I started doing research on ants in the eighties, when I was a graduate student, the prevailing idea was that each ant had a task or a function that was genetically determined. So if you wanted to understand how colonies differ or how behavior evolves, you would look at the distribution of ants across tasks. In this view, a colony that has more foragers would do more foraging, and so if it were a good thing to do more foraging, then evolution would favor colonies that had more foragers. That way of thinking locates all the causes of the ant's behavior inside the ant. And it's really the same way of thinking we could use to understand how a brain works: we could say that each neuron has a certain job, and if we could only list all the tasks of every neuron, we would understand how the brain works.

Or you could say the same thing about cells in an organism: that each cell is of a certain type, and then what the organism does is the aggregate of all those different individual components carrying out their tasks. But it seems pretty clear that nature doesn't work that way, because we see that individual parts change function when what's going on around them changes. With respect to ants, what I learned is that individual ants switch tasks; the same ant doesn't always do the same task. And there's another side to it: even if you consider an ant to be assigned a certain task today, today this ant is a forager, that still doesn't tell you how much foraging that ant is going to do or when that ant is going to go out and forage. So you can't really understand what the colony is doing by listing the numbers of ants of each type, because there are other processes, coming from interactions among ants and interactions with the world around them, that determine which ant does which task and whether it does it right now.

Michael Garfield:

Yeah. So this notion of fluid task allocation in ants has developed somewhat over time. You mentioned that there was a time in the eighties when there was some understanding that ants were not merely limited by their body type, by size or the size of specific parts. People think about workers versus soldiers in the lay understanding. But that was replaced, at least temporarily, by temporal polyethism, the notion that ants change function as they age. And that's not, I mean, you mentioned that that's going on, but it's not adequate to describe what's happening here, right? So what was it that you and your colleagues found that demonstrated that this was not a sufficient explanation?

Deborah Gordon:

Well, let's go back to that for a second. Ants and bees have in common that they live in colonies and work collectively. There are one or more reproductive females that we call queens, although they don't tell anybody what to do, and they lay the eggs; all the ants or bees that you see flying around or walking around are sterile female workers. Now, honeybees have been domesticated by people for 10,000 years, and they have been selected to change tasks. What we want bees to do is to go out and forage, collect pollen, carry the pollen around, and pollinate our crops. So we have selected bees to make the transition from working inside the nest when they're younger to going outside and foraging when they're older. It's very well known that bees change tasks. That got a little bit confused with an idea about ants: in a minority of species, not all ant species but some, the workers come in different sizes.

They are adults; they don't grow from one size to another. So an ant, when it emerges from the pupa, is either a small one or a larger one. And the idea was that those sizes were associated with tasks, so that in species with ants of different sizes, the ants of one size would do one thing and the ants of another size would do another. So there were really two different views of how it works that in fact don't fit together very well. One was the idea from honeybees that a bee moves from working inside to outside, so she changes task over her lifetime. The other was the idea that ants of different sizes are each assigned a task and just do that task. So people started to look at this idea of temporal polyethism in ants and see that ants do, like bees, move from one task to another.

The further step is to understand how that changes in response to changing conditions. It isn't just that an ant moves from one task to another along some predetermined trajectory; instead, the colony shifts around the numbers of ants allocated to different tasks as conditions change. So, for example, if there's extra food, a colony might allocate more ants to go out and get the food. In harvester ants, a species I've worked on a lot, if there's more food, then ants from other tasks will switch to become foragers. They're not triggered to become foragers by something that happens inside them; they're triggered by the availability of food and by the process the colony has, collectively, for using interactions to get more ants to forage. Does that answer your question?

Michael Garfield:

Yes. So yeah, the thing I liked about this was that Deborah Gordon pops the balloon of the myth that what you have is a kind of biologically determined order of roles and castes in an ant colony, which is almost a eugenic misinterpretation of evolutionary dynamics. And she says, no. Actually, there was another paper that we brought up on the show a lot, I think Albert Kao was involved in this research, on the way that by reducing the memory of individual nodes in a network, you can allow adaptive transformations to propagate more easily, whereas long-memory systems are kind of stubborn. And so the beauty of the liquid brain, in Ricard Solé's terms, of an ant colony is that the ants are dumb enough that they can function as a hive mind.

And so this is the thing about the borg that has always stood out in contrast to what complexity science is actually saying, which is that you have to diminish the individual agency. One of my favorite papers that I read in the last four and a half years here was Jessica Flack's work on coarse-graining as downward causation, right? As these new systems emerge, they exert a kind of, it's almost like what Simon DeDeo says about egregores, right? These new beings emerge at the intersections of things, like an interference pattern, and then you have a social contract, and all of us are interacting in some way with the social contract more than we are actually interacting with each other. Musicians talk about this in band dynamics. Or I think about it in terms of your responsibility as the president of this organization, where there is always that supervening layer that structures the way we can interact as individuals. And I think what most people are afraid of with technology, and with the developments AI has been making recently, is that they realize this is increasingly determining the plays that are available to us as people. It's challenging the myth of the modern self-authoring self.

David Krakauer:

It's a very interesting observation that you make, and it's something many of us have been worried about, and it actually takes us to the next episode in a second. It's exactly as you point out: the most sophisticated, if you like, highly connected system that we know is the central nervous system, maybe the central nervous system of mammals. And the individual neurons, I'm not saying they're as simple as we have portrayed them in the math, they're not just integrate-and-fire binary units, but nevertheless they're not that sophisticated, and they are enslaved, if you like, by their institutional commitments in the nervous system. We haven't really experienced a world where the units are as sophisticated, or quite frankly more sophisticated, than the aggregate. And that is a new kind of complex system that we're building. And it does take me a little bit to the next episode, and that's Tyler's episode.

And this is a question that I have for you, because in that episode Tyler talks about human creativity, for example in jazz ensembles, but using models from statistical physics, whose units aren't individual musicians; they're actually ferromagnets that have up and down spins. So we have this tension, right, in our work, which is that the models that are available, if you like, off the shelf were developed for systems that aren't really the ones we care about, which is how you started: systems that have agency, where the units aren't just up and down spins, they're Michael Garfield, who wants to go and dance or compose or make a podcast episode. So let's just listen a little bit to Tyler; I'd like to hear your reflections on that.

Because what we regard as improvisation might be kind of the same thing that we're seeing going on with composition, but at a different time scale or spatial scale. But that's all sort of meta on this paper, and we have a responsibility to actually talk about it before we leap into that. So it seems like the place to start here would be the precedent set by people like SFI External Professor Marten Scheffer, talking about what is actually going on at these thresholds, these transitional moments, and how we can identify the features to look for in order to anticipate them. So, laying out some of the core concepts there, and then how you and your co-authors sought to identify the features in the music you were examining that would allow you to quantify all of this.

Tyler Marghetis:

Totally. Yeah. So Marten Scheffer is an ecologist, and he, along with a number of other ecologists, has been trying to identify generic early warning signals that an ecosystem is about to undergo some kind of critical transition, right? So you can imagine a lake that goes from a really healthy, thriving, clear-watered state to one where all the fish die off and you get a sudden catastrophic algal bloom. Can we know that that's about to happen? What those really clever ecologists have done is take some technical tools that were originally introduced in statistical mechanics, physics broadly, for looking at when we can predict phase transitions. And the idea is that when a system is perched on the edge of one of these transitions, it has lost resilience in some way. One way that you can test for that is you can poke the system.

So imagine the lake example. You go in and maybe you kill off a bunch of the fish, or you add a whole bunch more, and you see how rapidly the system is able to bounce back, to return to its healthy, happy, functioning state. And if you measure that return time, the time it takes for the system to bounce back after you poke it, that gives you a good sense of how resilient the system is. You want it to really rapidly be like, okay, you poked me, but I'm back to usual now. Now, for a lot of systems that we want to study, we don't have the ability to go in and poke them; it would be irresponsible to poke a lot of big, healthy, functioning ecosystems. So what you can do instead is just let the system work on its own.

So, as it sort of lives out its life, you look at the noise structure of the system. As it bounces up and down just on the basis of natural noise, there are, it turns out, some recurring measures you can calculate that tell you how resilient the system is. You can look at autocorrelation, or variance, or flickering; these are technical terms for different calculations you can do that give you an idea of how quickly the system forgets these pokes, these prods, and returns to its natural resting state. Now, my idea was that these measures of resilience might work just as well in the, quote, ecosystem of jazz improvisation, right? Really drawing on this ecosystem metaphor, when really that's a way of speaking. What I want to say there is that in jazz improvisation you have multiple elements interacting with each other in a distributed way. I could have called that an ecosystem, or an economy, or just flat out called it a complex system. But the idea is that in these kinds of systems, where you have distributed elements interacting in nonlinear ways, you can sometimes foresee the loss of resilience that precedes a sudden catastrophic critical transition. And so we set out to use the tools that had been deployed so well by folks like Marten Scheffer and others for natural ecosystems, to see if they would work just as well for this human, social, cultural, technical jazz ecosystem.
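The autocorrelation signal Marghetis describes can be sketched on synthetic data. This is an illustration of the generic early-warning idea, not code from his paper: we fabricate a noisy time series whose memory parameter drifts toward criticality, and check that lag-1 autocorrelation, measured in sliding windows, rises as resilience is lost. All numbers and window sizes are arbitrary choices.

```python
# Hedged sketch of a generic early-warning signal: as a system loses
# resilience, its noise forgets perturbations more slowly, so lag-1
# autocorrelation rises. We simulate an AR(1)-style process whose memory
# parameter phi drifts toward 1 (the critical value) and compare windows.
import random

random.seed(0)
series, x, steps = [], 0.0, 4000
for t in range(steps):
    phi = 0.1 + 0.85 * t / steps        # memory slowly drifts toward criticality
    x = phi * x + random.gauss(0.0, 1.0)
    series.append(x)

def lag1_autocorr(window):
    """Lag-1 autocorrelation of a list of floats."""
    m = sum(window) / len(window)
    num = sum((a - m) * (b - m) for a, b in zip(window, window[1:]))
    den = sum((a - m) ** 2 for a in window)
    return num / den

early = lag1_autocorr(series[:1000])    # window far from the transition
late = lag1_autocorr(series[-1000:])    # window near the transition
print(early, late)                       # the late window has higher memory
```

In real analyses (lakes, or the jazz recordings in the episode) the same statistic is computed over sliding windows of the observed signal, alongside variance and flickering measures.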

Michael Garfield:

So yeah, this gets to stuff that I've heard Mirta Galesic talk about with psychophysics, and the visceral revulsion that some people experience at modeling crowds with fluid-dynamical approaches, right? People don't like it, for precisely the reasons that you mentioned. And yet, going back to the whole question of the epistemology of emergence, and to the conversation I hope to have with Jessica Flack when she gets her papers on the hourglass emergence model published: Jessica's approach is so provocative precisely because it undermines what we were talking about at the beginning, this idea that what's going on is that complexity is arising from simplicity, and it suggests that actually these are horizonal issues.

And I'm not saying that ants are sitting there thinking, ah, I wish I were watching Netflix right now, or whatever. But there is a dimension in which, when Adi Livnat of Haifa University came and presented at the Developmental Bias workshop in 2018, he said the fusion of genes in gene regulatory networks seems to be governed by the same principles as Hebbian fire-together-wire-together learning in the brain, and that therefore a lot of mutation is not random in the way we have understood it to be, but is actually following a thermodynamic gradient. Right? A lot of people have written about this: at the level of the individual you can say, you know, millions of people are driving into New York, each making the decision to go to work, but back out far enough and you can say, statistically, we know five million people are going to be driving over that bridge every day. And this doesn't really get to the other thing I love about Tyler's episode, which is the relationship between the kind of aha moment in the creative process and the way that, as we were talking about a little earlier, this kind of autocorrelation also sometimes looks like a mass extinction event. Those are two key pieces here.

David Krakauer:

So, actually, a good example, and again it's a segue, was COVID, because I think a lot of the social and political polarization that we observed during the epidemic was about individual agency versus institutional control. Mask wearing was perceived by some communities as an unacceptable imposition on individual freedom, and by others as necessary, so as to be more beholden to the collective good. It had exactly the qualities you're talking about. And Simon, in that episode, does reflect on issues of individual agency in relation to individual rationality or irrationality. It's worth listening to this section of the interview with Simon.

Michael Garfield:

In your discussion of astrophysics as a successful science, you then drift away from that into the study of failure scenarios.

Simon DeDeo:

I don't say that, maybe, I forget. I use a metaphor here: it's not broken bones, it's an autoimmune disease, right? It's a case where the virtue of the body turns into a trauma, into a vice. Our immune system, again, like one of these mega things: it turns out we have this autonomous molecular drone system constantly attacking on our behalf. And that's great until it's not great, and things go drastically, dramatically, and chronically wrong. So you might say I study the autoimmune diseases of epistemology: how the same things that make us such powerful scientists can also make us such powerful, let's say, conspiracy theorists. That's one of the things we're interested in right now. But Michael, I interrupted you, so please, go on.

Michael Garfield:

Well, this is great, because you just handed it to me on a platter. I wanted to talk with you about a paper that you co-authored with Zachary Wojtowicz, "From Probability to Consilience: How Explanatory Values Implement Bayesian Reasoning." Now, I imagine probably half of our audience knows what Bayesian probability even is, and the other half does not. So for our purposes, feel free to dip into that as much as you'd like. But what we're really after, you talk about being interested in the autoimmune disease, and I think this paper does a very good job of unpacking how it is that people try to make sense of things, and what constitutes a healthy immune system, meaning a balance of different characteristics for understanding. So lead us into this piece a little bit, and then we'll start tethering out from it into other stuff.

Simon DeDeo:

No, this, I mean, it's a wonderful place to begin, because that paper is a paper in theory: it's a paper on the theory of both knowledge building and how knowledge building actually happens. One way into this is Bayesian reasoning. So what is Bayesian reasoning? It's associated with the Reverend Bayes, who publishes one paper posthumously, you know, and he didn't even submit it. Somebody else was like, hey, I've got these notes, these are the Reverend Bayes's, and submitted it to the Philosophical Transactions of the Royal Society in the 1700s. And you read it today and you're like, holy shit, this guy saw 300 years into the future. It's insane how much he gets right. And you keep reading, I mean, so you keep going. Actually, the only other paper that's like this that I've ever read is Claude Shannon's creation of information theory, which again happens in one paper. Reverend Bayes's paper is the same thing, just in slightly old-style language, but you're reading it and you're like, I'm sure this is the end of the paper.

And it's like, no, there's even more that he understands. And then what's funny, of course, is that we didn't actually realize how much was in there until the 20th century. So at the time, I mean, people understood it, or who knows if they understood it. But you know, it was published in the best journal in the country, if not all of Europe. But unlike, let's say, the theory of evolution, unlike the discovery of electromagnetism, unlike, you know, Ben Franklin flying his kite, we don't have monuments to Bayes, in part because I think we didn't realize how important the question was until we started trying to form reliable knowledge, and by reliable knowledge I mean mathematically reliable knowledge. And that became a premium when our measurement devices got good enough that we were really getting quantitative evidence, not just qualitative evidence. And then of course we also had the ability to gather a great deal of information through machines.

And finally, of course, maybe we had the processing power to do it, because Bayes was like, oh look, here's how you would infer things optimally, good luck with that, right? Gotta go. And we really didn't have, you know, the tools to do this on a mega scale. It didn't really hit science until the 20th century. Not to, you know, tie everything together, but this was of course one of the things that happened in astronomy: Bayesian reasoning hit astronomy. And that's when we finally entered what we call today the precision era, where I can literally tell you the universe is 13.7 billion years old, plus or minus, you know, a billion or whatever it is. So that's a long way around to saying Bayesian reasoning is a story about how to form beliefs optimally. You can prove that you will achieve the correct unbiased belief faster than any other method.

So that's great. Like now we can just go play music, right? We're done. We've solved knowledge formation, we have a recipe, an algorithm literally for forming the best possible beliefs. And in fact, like it turns out, there are like cults of the Reverend Bayes at this point where people actually believe this problem has been solved. Of course it hasn't, but the reason it hasn't been solved is very interesting. And the reason comes back to this question of what are called priors. At least that's what we've come to call them. So it turns out actually what Bayesian reasoning enables you to do is to increase your knowledge from some previous state, right? It enables you to take your state of knowledge at point A and increase it by gathering information to take you to point B. And so it's actually the, you might think of it as the optimal way to go from A to B.

It doesn't, however, tell you how you got to A. It doesn't tell you where you begin. It doesn't tell you, when you walk into a laboratory, when you come into a new field, you know, you come into the Santa Fe Institute and you've never seen an animal before, at least in a scientific study, it doesn't tell you how you ought to, for example, distinguish between explanations before the evidence comes in. It doesn't tell you the ways in which you might attend more or less to some chunk of evidence over another. So there are a lot of these missing pieces in there that, you know, Zach and I are trying to tease apart. And so I would say the innovation there is in part obviously drawing our attention, as psychologists, as cognitive scientists, to the problem of explanation itself. So, you know, it's very meta: we're explaining explanation.

And the other thing is to show, you know, even though Bayesian reasoning is supposedly this unique and optimal thing to do, you can actually kind of tease apart all the pieces, isolate them and consider them, you know, as a sort of set of values that come together in the mind of a scientist, let's say, or you know, in the mind of somebody off duty trying to make sense of the world. So I think that's where Zachary and I begin. And then, you know, the question is, okay, what are these values having teased apart this algorithm you might say to like these little submodules, okay, what are these submodules of the reasoning process? What do they look like? Can we name them? Do they tie into things that we've seen in, let's say the scientific record itself? Meaning how scientists have explained things. Can we see these units tying into things in the philosophy of science?

Things that philosophers have talked about as ways to make decisions. Can we see them tying into cultural practices of explanation that go beyond science? So can we see them tie into, let's say, how we tell stories, how journalists explain things, how historians explain things, how we explain things to ourselves in our diaries, right? So on and on. That was kind of our goal there, to sort of say, you know, look, from the outside it looks like you have this unitary optimal algorithm for sensemaking. And yet, actually, you know what, when you open the hood and look inside, it's like this little jewel box of pieces that actually look far more philosophical, far more value laden, than we might expect.
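Simon's point about Bayesian updating, that the rule tells you how to move optimally from a prior state of knowledge A to a posterior B but says nothing about where the prior came from, can be sketched numerically. The following is a minimal illustration; the two coin hypotheses and their numbers are invented for this example, not taken from the episode:

```python
def bayes_update(priors, likelihoods):
    """One step of Bayes' rule: posterior is proportional to
    likelihood times prior, normalized to sum to one."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# State A: two hypotheses about a coin (fair vs. heads-biased) with
# equal priors. Bayes' rule is silent about where this 50/50 came from.
priors = [0.5, 0.5]

# Probability of the observed evidence (one flip landing heads)
# under each hypothesis.
likelihoods = [0.5, 0.8]

# State B: belief after the evidence, now favoring the biased coin.
posterior = bayes_update(priors, likelihoods)
```

Each new observation repeats the same move, with yesterday's posterior becoming today's prior, which is exactly why the algorithm never has to explain the very first prior.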

Michael Garfield:

So yeah, I mean, the thing I love about this, again, to cast back to earlier notes on pluralism in complexity thinking, is that Simon says you can use a Bayesian inference approach to think about the different heuristics that people are applying to what constitutes a satisfying explanation. Some people want the big all-encompassing consilient model. Some people want the parsimonious model, the one you can write down really quickly. But either one of those approaches pursued to the exclusion of the other leads to a pathology, you know? And he talks about conspiracy thinking as being an instance where the kind of brilliance that allows for great revolutions in scientific theory, without a parsimonious countercheck, ends up spinning people off into insanities like QAnon. And so yeah, it's this issue of, if you cut that vertically, then you look at, again, like we were saying, the inherent tensions between individual agency and institutional demand. We talked about this a lot on the show with Michael Lachmann and his work on why it's costly for cells to requisition nutrients from the body, because if it were cheap then cancer would just proliferate constantly, you know? And so there's a flaw, kind of, in human thinking of, well, why can't I have what I want, right? And this is not unique to children. This is not just, you know, why can't I eat sugar all day?

David Krakauer:

Well, actually, it's a good moment to reflect on the two last episodes that we're gonna discuss before talking a little bit about your current projects, which bear on both of those issues. One is on children and children's development, and the other is on this notion of where the locus of information really is, right, in Caleb's recent book and interest in information. So let's listen to Caleb first.

Michael Garfield:

There's a blog entry actually from SFI researcher Cosma Shalizi about this, back from 2010, where he makes the case the singularity has already happened, and it was over by the close of 1918: it was the Industrial Revolution. That we look at things like corporations and we see how these things function as what Simon DeDeo, in the seminar he gave at SFI last week, would call, borrowing from Western hermetic traditions, the egregore, these bodies that we participate in the same way that Lynn Margulis argued bacteria came together, endosymbiotically, to form complex cells. And then, just to give people another point of association for this, you've got Jessica Flack's work, and in particular, you know, her paper on coarse-graining as a downward causation mechanism, arguing that even in less sophisticated, if you will, organisms, like macaques, their efforts to model and understand one another in society end up leading to these collective computations that then shape behavior.

So again, back to this kind of Marshall McLuhan thing about how, you know, we shape our tools and thereafter our tools shape us. And this is where I'd like to dig in a little bit more on what you've said about the burden of these ideas, and about the tension between our own intelligence and, you know, the ability to actually track and participate in the ratcheting complexity of the dataome, and the observation that the braincase of human beings 50,000 years ago was greater than the braincase of humans now, that we've actually lost brain volume, in the same way that our jaws started to shrink after we started using forks. And so that's something I'd love to hear you riff on.

Caleb Scharf:

Yeah, I didn't know about the braincase observation, which is very interesting. I mean, you know, brain size is a peculiar measurement of things, right? For a long time, people assumed brain size correlated with how smart you could be or how sophisticated you could be, but it's not so clear that it's that simple. And so even something like a larger brain for our ancestors 50,000 years ago, well, why did they need a larger brain vault? It could have been a physiological thing in response to climate conditions, could have been something to do with how they had to operate to get food. They may have been much more physical than anyone on the planet today. I don't know, I'm just speculating. You know, it's interesting, and you look at brains of elephants, right? They have differently proportioned regions of their brains, and some of that is undoubtedly because they have a large body and they need neurons to deal with that.

And the act of movement has to engage perhaps a lot more computation, and active movement even for something like us, although we're pretty complex. So yeah, it's very, very interesting, and I think, yeah, it does connect through to, as you put it so nicely, this tension between sort of our success in the world as a species, or just the probability of us continuing to propagate, both as individuals and our particular gene lineages and our species' gene lineage and so on, and everything around us, and how the dataome helps with that, or seems to help with that, in so many ways, yet does present this extraordinary burden. And I think that burden has become much more evident, but it's always been there to some extent. And you mentioned people referring to how the singularity happened in the early 1900s, right? Which I think is lovely, and I think that may well be a better sort of point of reference.

But you know, there were other interesting things going on in the early 1900s to do with information as we think about it today, almost digital information. So punched cards, something I talk about in the book. Punched cards were for many decades the primary way of storing information for industry, for finance, all those things. We had punch card machines, punch card readers; the first digital computers, the first sort of commercial digital computers, utilized punch cards for programming, for data output and storage and so on. And what's so interesting is those were tangible physical things. They weren't invisible pieces of doped silicon that none of us ever get to look at unless you scrape away at your chip. They were very tangible in the world, and they represented a very significant burden on our resources. And I think people have forgotten that. But if you dig into the history of this, it's really fascinating.

You know, the production of punch cards at the peak, just in the US, in I think the mid-1960s, there were at least something like 200 billion punch cards being manufactured every year. And you know, each is a sizable piece of card or paper, and then you have the physical act of punching them, which takes energy. You have to cart those things around. I don't know what tonnage that amounted to, but I'm sure it was significant, and people were just producing more and more of these things. And what's so interesting about punch cards is they make it very easy to see the burden on humans. So there's the burden of making all that paper, producing all these things, printing them, punching them and so on. But then humans had to carry the things around. If you were a scientist and you wanted to run a piece of code on a computer back in the fifties, sixties, even into the seventies, very often you would have to put your program onto punch cards and then carry it physically and stand there and feed it into the machine, and then retrieve it and carry it physically and put it somewhere safe in your filing cabinet, and so on and so on.

You were expending your energy. You know, the hamburger you had eaten ended up fueling your act of information processing later on. And of course punch cards ended up in the side ditches of the road of technology, because they weren't terribly flexible and they weren't as efficient as purely electrical digital information storage, retrieval and utilization. But today we have this ridiculous growth in the amount of data that we produce. It's something like 2.5 quintillion bytes of new data generated by our species every single day, every 24 hours. And that's something like a trillion times all of Shakespeare's output, every 24 hours. And most of that, or a lot of it, is finding itself somewhat permanently stored. And it's everything, right? It's this conversation being recorded in digital bits, it's the video you made, it's the picture you took on your phone, it's all the financial transactions, it's scientific computation, it's everything supporting the internet, and so on.

And that of course all takes energy. It takes the construction of the technology in the first instance, which is very energy intensive. Making silicon chips is an extraordinarily energy-intensive thing, because you're making these exquisitely ordered structures out of very disordered material. And so there too we go back to thermodynamics, and fighting, in a sense, against entropy in a local fashion. And that takes a lot of energy. We're having to generate electricity to power our current digital informational world, that piece of the dataome. And the rather sobering thing is that, you know, already the amount of energy and resources that we're putting into this is about the same as the total metabolic output, or utilization, of around 700 million humans. And if you look at the trends in energy requirements for computation, for data storage, for data transmission, the trends are all upwards. It's an exponential curve.

And they suggest that, perhaps even if we have some improvements in efficiency, unless those improvements are extraordinary, then in a few decades' time we may be at a point where the amount of energy, just the electrical energy required to run our digital dataome, is roughly the same as the total amount of electrical energy we utilize as a global civilization at this time. That's for everything: that's for putting on your lights, running the pumps in your water plants, charging your electric vehicle these days, and so on and so on. That will be matched by just our informational world. So you look at that and you think, this might be a problem, right?
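Caleb's Shakespeare comparison from a moment ago can be checked on the back of an envelope. The figures below are rough outside assumptions, not numbers from the episode: about 2.5 quintillion bytes of new data per day is a commonly cited estimate, and the complete works of Shakespeare run to roughly 5 million characters as plain text.

```python
# Daily global data production, a commonly cited round figure (bytes).
daily_bytes = 2.5e18

# Complete works of Shakespeare as plain text, roughly 5 million characters.
shakespeare_bytes = 5e6

# Daily output measured in "Shakespeares": on the order of hundreds of
# billions, the same order of magnitude as the "roughly a trillion"
# figure quoted in the conversation.
ratio = daily_bytes / shakespeare_bytes
```

The point of the exercise is only that the comparison is order-of-magnitude plausible; the exact multiplier depends on which estimates you start from.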

Michael Garfield:

So yeah, Caleb's stuff is really kind of Lovecraftian, I think, right? You know, because he's really figured out, in his conversation about the dataome, precisely, in a quantifiable way, how the pressure of the institutions and systems that have emerged through our interactions is placing a burden on our behavior and an increasing demand on the metabolic processes of the biosphere, right? Like, this is Moloch, right? And people talk about this, you know, this Slate Star Codex articulation that there is this thing that emerges out of us. It's sort of the demonic figuration of the egregore, right?

David Krakauer:

It's the, it's the dark side of the collective consciousness.

Michael Garfield:

Yeah. And the thing that I loved about that conversation with Caleb was that in a way it's beautiful. You know, like our cultural inheritance grows richer and deeper by the day, by the moment.

David Krakauer:

Yeah. So let's just end now with, I think, something very interesting. I mean, it's personal for you because you recently had, well, not that recently, but you have children. And in fact it's personal for me in other ways, because of my interest in noise, and a recent debate that David Wolpert and I had with Danny Kahneman and Cass Sunstein and others, where we believe noise is absolutely essential to the complex world, and they, and many others, believe it's inessential. And Alison has some very nice work showing actually that perhaps the key characteristic of childhood is the incredible importance of noise. So let's just listen to that a little.

Michael Garfield:

Let's lens this through your piece, "Childhood as a Solution to Explore-Exploit Tensions." I love a good review paper. I love a paper that just brings it all together, and this is one of those. Can you help people understand how weird we are as human beings?

Alison Gopnik:

As I say, I started out asking this question about what we could learn from children about how learning is possible. But there's another kind of meta question, which is, why is it that children especially seem to have these incredible learning capacities? And that's connected to a broader question, which is, why do children exist at all? Why do we as humans have this long period of immaturity? And the more I started looking at the sort of evolutionary biology background for this, the more striking it is, because we actually have a childhood that's twice as long as that of our closest primate relatives. Chimpanzees, by the time they're seven, are producing as much food as they're consuming. And even in forager cultures, humans aren't doing that until at least age 15, if not later. So that's really puzzling. Why do we have this very long period of childhood?

And it turns out that in fact this isn't just true about humans. There's a very general relationship between how long a period of childhood an animal has and how many neurons it has, how big a brain it has, anthropomorphically how smart it is, certainly how much it relies on learning. And in evolutionary biology, people have talked about the idea that it is that long protected period that actually enables you to learn as much as you do. So there's something really special about childhood, and it makes humans in particular grow way out on the end of the distribution, in terms of how immature we are as children and how much investment as a group, as a species, we have to put into just keeping those children alive. So the sort of vague general idea to start out with was, well, just having more time to learn might be the advantage of childhood.

But when you look, especially at the neuroscience, it isn't just that children are around for longer; they really have foundationally different forms of brain and forms of learning compared to adults. And many of these are actually things that might look like bugs, like not being very good at focused attention, not being very good at long-term planning. Why would we do that? Why would we have this long period in our lives where we seem to be so incapacitated? And why would that be connected to our capacities for learning? So when I started doing the work in AI, one of the really very general ideas that comes across again and again in computer science is this idea of the explore-exploit tradeoff. And the idea is that you can't get a system that is simultaneously going to optimize for actually being able to do things effectively.

That's the exploit part. And at the same time being able to search through all the possibilities. So imagine that you have some problem you wanna solve, or some hypothesis that you wanna discover, and you can think about it as if there's a big box full of all the possible hypotheses, all the possible solutions to your problem, all the possible policies that you could have, for instance, if you're in a reinforcement learning context. And now you're at a particular place in that box. That's what you know now, that's the hypotheses you have now, that's the policies you have now. Now what you wanna do is get somewhere else. You wanna be able to find a new idea, a new solution. How do you do that? And there are actually two different kinds of strategies you could use. One of them is you could just search for solutions that are very similar to the ones you already have, and you could just make small changes in what you already think to accommodate new evidence or a new problem.

And that has the advantage that you're going to be able to find a pretty good solution pretty quickly. But it has a disadvantage, and the disadvantage is that there might be a much better solution that's much further away in that high-dimensional space. And any interesting space is going to be too large to search completely, systematically. You're always gonna have to choose which kinds of possibilities you wanna consider. So it could be that there's a really good solution, but it's much more different from where you currently are. And the trouble is that if you just do something like what's called hill climbing, where you just look locally, you're likely to get stuck in what's called a local optimum. So you're likely to get into a position where every small change you can make is gonna make things worse, so it's gonna look like you should just stay where you are.

But these big changes could have made things better. And the way that typically gets resolved, in various kinds of forms, is to start out with this big broad search through lots and lots of possibilities, jump around from one possibility to another, and then slowly cool off and narrow down. And the metaphor that's often used is about temperature. So you can think about the big box: if it had air molecules in it instead of hypotheses, a low-temperature search would be a search where you weren't moving very much, and a high-temperature search would be this much bigger, noisier, more random, bouncy kind of search. And I like to say, for anyone who has a four-year-old at home, which of those sounds more like your four-year-old? Four-year-olds are both literally and metaphorically noisy and bouncy. So the solution is: start with this big broad search.

The disadvantage, of course, is that you might be spending time trying out really weird, strange things that aren't gonna help you very much. And then when you see something that looks like it's in the right ballpark, narrow in to the cooler solution. So it's like what happens in metallurgy with annealing, where you heat up a metal first and then gradually cool it to end up with a more robust metal. But of course, if you're thinking about childhood from that perspective, from the perspective of that kind of explore-exploit contrast, or from the perspective of high-temperature, low-temperature annealing, then a lot of the things that look like bugs turn out to actually be features. So actually having a lot of random variability, being noisy, having a broad focus of attention instead of a narrow focus of attention: all those things that are really not good from the exploit perspective, when what you wanna do is just implement a policy as quickly and effectively as you can, all turn out to be real benefits from the explore perspective, when what you want is to learn as much as you can about the world and explore as many possibilities as you can.
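Alison's temperature metaphor is literal in optimization: simulated annealing starts with a hot, noisy, child-like search that accepts even bad moves, then cools into a focused, adult-like exploitation of the best region found. Here is a minimal sketch; the cost landscape, step size, and cooling schedule are invented for illustration:

```python
import math
import random

def anneal(f, x0, steps=5000, t0=2.0, seed=0):
    """Minimize f by simulated annealing: broad noisy jumps while the
    temperature is high, narrowing to local refinement as it cools."""
    rng = random.Random(seed)
    x = best = x0
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-6       # linear cooling schedule
        candidate = x + rng.gauss(0, 1.0)     # propose a random jump
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as temperature drops (explore phase).
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate
        if f(x) < f(best):
            best = x
    return best

# A bumpy landscape: pure downhill search from x = 3 stalls in a poor
# local minimum, while the high-temperature phase can bounce out of it.
def landscape(x):
    return 0.1 * x**2 + math.sin(3 * x)

best = anneal(landscape, x0=3.0)
```

The explore-exploit tension lives entirely in the acceptance rule: at high temperature almost any bounce is accepted, like the four-year-old's search, while at low temperature only improvements survive, like the adult's.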

Michael Garfield:

This is really beautiful, because I think this insight speaks to questions that seem especially pervasive in American society about the ultimate practical utility of arts funding, for instance. Or when Murray Gell-Mann talked about why it is so difficult to fund an institution like SFI, where fundamental theory, unlike the search for a vaccine, is something that doesn't have an immediate pre-specified use. Like, you don't know what you're gonna get out of it, you may not know for decades; it's a high-risk pursuit, but that's why play exists. You know, and Alison's work is linked very intimately, and I spoke about it with her in that episode, with Andreas Wagner and the work that he's done on the evolutionary utility of play. And so again, here we have an instance where there's a benefit to a child, or, you know, a juvenile organism of whatever kind, or a new organization, in being young and exploratory. Like, my son climbs up on chairs, he's obsessed with getting on top of the table, you know, and he just doesn't care. It's impossible for us to communicate to him what he stands to lose if he falls off and cracks his head, right? And so there is, thankfully, this thing that evolution keeps finding again and again and again, which is a balance between those who are older and battle-scarred, and they get it, but they've become risk averse and conservative in their thinking in a way that their children aren't. And the two of them balance each other, right?

David Krakauer:

So this is perfect, because let's now view us as having given birth to Michael Garfield. In terms of someone who's hardly risk averse, who's high noise in that sense, high exploration, let's pivot now, having reviewed these amazing episodes that you made, to a little bit of a consideration. Let me just make some remarks on this and then ask some questions. It's sort of interesting: I'm sure many people listening, and certainly your friends here who know you, are aware of this extremely interesting high-temperature mind that is constantly looking for connections. Some are real, some are not so real from my point of view, but it's necessary, because without that, on Alison's annealing model of childhood development, there are just areas that you never would've moved into. They would've been neglected and we would've become sclerotic. So I wanna talk a little bit about this next phase, your sort of high-temperature exploration, of both the book that you are planning, but also, I know, more podcasts that you want to do. So tell us a little bit about the next phase in your development.

Michael Garfield:

Yeah, so there's a University of Chicago media theorist, W. J. T. Mitchell, who wrote a fantastic follow-up to Walter Benjamin's essay "The Work of Art in the Age of Mechanical Reproduction." Mitchell wrote an essay called "The Work of Art in the Age of Biocybernetic Reproduction," in which he said we need a paleontology of the present, a rethinking of our condition in the perspective of deep time, in order to produce a synthesis of the arts and sciences adequate to the challenges we face. And, you know, for the same reasons we've discussed now at length in this episode, it strikes me that complexity science is one tool. Even in all the plurality of its own methodologies, at a distance it is still one group of techniques that needs to be held in a kind of ecological balance with all of these other techniques.

David Krakauer:

Oh, let me just qualify that a little. I mean, there's no doubt that it's circumscribed, but I bristle a little bit at confounding the domain of analysis with the techniques of analysis. I've always thought, you know, physics is not just about the calculus, right? I think physics is about the portion of the universe dominated by symmetry. Complexity science is about the portion of the universe dominated by broken symmetry, and the methods that we use will constantly change, right? Now, that's not to say that it's also the arts and it's music, because it's not. So I take your point, but I do think we shouldn't confound deep areas of inquiry with their methods.

Michael Garfield:

Totally fair. But I guess all I'm saying is that, you know, there are quantitative approaches and there are qualitative approaches, and we need them both. Like, we're at a point now where there's a very real concern that people have that systems like ChatGPT could either gain access directly, or inspire and enable people with access to genetics laboratories and bioprinters, to create synthetic organisms that reshape the biosphere. You know, you look at Sara Walker's work with David Grinspoon, et cetera, and it's arguable that the Anthropocene is really not the age of human beings; it's the age of these technological monsters that we've created. You know, with George Church, de-extinction is on the menu now in 2023. And in our last conversation we talked about how, whether you prefer aesthetically to subscribe to the notion that Covid-19 was a zoonotic transmission that crossed over from a pangolin or a bat or whatever, or that it was a gain-of-function experiment that got loose, what we're talking about, again, is the way that, in patching everything together, we have folded over these things that might have best been kept separate.

David Krakauer:

You know, it's interesting, because you make an interesting observation there, and the most ironic version of this is Peter Thiel calling his surveillance company Palantir. I mean, did he actually read The Lord of the Rings? If you remember the palantíri, and those who thought they had mastery and control of them, and how dangerous they were. And it's very intriguing to me that there seems to be this way in which the aesthetics of power and control beat the responsible ethics. Even though Crichton was a great moralist as well as a prognosticator on technology, and so many things he observed seem to have come true, somehow the ethical component is just the sort of salt and pepper on the nutrition that people derive from the power play. That's the thing that persists, right? In the end, unfortunately, what it leaves behind in its residue is not greater moral responsibility, but a greater appetite and desire for total control.

Well, a good example of that interesting aesthetic-to-practical-to-ethical trajectory is GPUs, right? I mean, GPUs were gamer fare in the transistor world. They then migrated to, you know, blockchain and crypto, and now they're supporting large language models, deep neural networks, which are kind of having an impact on everything. So there is this very interesting, you make a really interesting observation, that maybe this is, as you say, the larger complex systems read on Alison, which is Homo Ludens, right? Play first, and then everything else follows.

So, okay, that seems like a very good point to wrap up, because we've gone from complexity, starting with what seems, I guess from the outside, like this very sober, serious mathematical natural science, social science perhaps, and ended up in play, in aesthetics, and the deep existential question of where are we now. And I think, Michael, you have stewarded this series over these episodes, exploring this incredible, incredible territory, and so let me say thank you very much on behalf of SFI. We are really looking forward to the incredible things that you're gonna do next.

Michael Garfield:

Thank you, David. I mean, it really was a completely transformative experience to work here, and it is with a heavy heart that I move on. I really do hope that I am able to remain an active member of this community. I cherish the relationships that I developed here and all of the people I met and everything I've learned. And I look forward to seeing where it leads us, as I escape from the electric fences and, uh, find my way on a boat to the mainland to go wreak havoc in the rainforests of Costa Rica.

David Krakauer:

Fantastic. Thank you, Michael.

Michael Garfield:

Complexity Podcast has been, and will be, a production of the Santa Fe Institute, an independent nonprofit research center in the high desert of New Mexico. Follow up on everything we discussed by visiting the show notes, and stay tuned to see what lies in store.