COMPLEXITY: Physics of Life

John Krakauer Part 2: Learning, Curiosity, and Consciousness

Episode Notes

What makes us human? Over the last several decades, the once-vast island of human exceptionalism has lost significant ground to wave upon wave of research revealing cognition, emotion, problem-solving, and tool-use in other organisms. But there remains a clear sense that humans stand apart — evidenced by our unique capacity to overrun the planet and remake it in our image. What is unique about the human mind, and how might we engage this question rigorously through the lens of neuroscience? How are our gifts of simulation and imagination different from those of other animals? And what, if anything, can we know of the “curiosity” of even larger systems in which we’re embedded — the social superorganisms, ecosystems, technospheres within which we exist like neurons in the brain?

Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I’m your host, Michael Garfield, and every other week we’ll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

This week we conclude a two-part conversation with SFI External Professor John Krakauer, Professor of Neurology and Director of the Center for the Study of Motor Learning and Brain Repair at Johns Hopkins. In this episode, we talk about the nature of curiosity and learning, and whether the difference between the cognitive capacities and inner lifeworld of humans and other animals constitutes a matter of degree or one of kind…

Be sure to check out our extensive show notes with links to all our references at complexity.simplecast.com. If you value our research and communication efforts, please subscribe, rate and review us at Apple Podcasts or Spotify, and consider making a donation — or finding other ways to engage with us — at santafe.edu/engage. Please also note that we are now accepting applications for an open postdoc fellowship, next summer’s undergraduate research program, and the next cohort of Complexity Explorer’s course in the digital humanities. We welcome your submissions!

Lastly, for more from John Krakauer, check out our new six-minute time-lapse of notes from the 2022 InterPlanetary Festival panel discussions on intelligence and the limits to human performance in space…

Thank you for listening!

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast theme music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

Referenced in this episode:

Prospective Learning: Back to the Future
by The Future Learning Collective (Joshua Vogelstein, et al.)

The Learning Salon: Toward a new participatory science
by Ida Momennejad, John Krakauer, Claire Sun, Eva Yezerets, Kanaka Rajan, Joshua Vogelstein, Brad Wyble

Artificial Intelligence Hits the Barrier of Meaning
by Melanie Mitchell at The New York Times

Economic Possibilities for our Grandchildren
by John Maynard Keynes

The Intelligent Life of the City Raccoon
by Jude Isabella at Nautilus Magazine

The maintenance of vocal learning by gene-culture interaction: the cultural trap hypothesis
by R. F. Lachlan and P. J. B. Slater

Mindscape Podcast 87 - Karl Friston on Brains, Predictions, and Free Energy
by Sean Carroll

The Apportionment of Human Diversity
by Richard Lewontin

From Extraterrestrials to Animal Minds: Six Myths of Evolution
by Simon Conway Morris

I Am a Strange Loop
by Douglas Hofstadter

Coarse-graining as a downward causation mechanism
by Jessica Flack

Daniel Dennett

Susan Blackmore

Related Episodes:

Complexity 9 - Mirta Galesic on Social Learning & Decision-making

Complexity 12 - Matthew Jackson on Social & Economic Networks

Complexity 21 - Melanie Mitchell on Artificial Intelligence: What We Still Don't Know

Complexity 31 - Embracing Complexity for Systemic Interventions with David Krakauer (Transmission Series Ep. 5)

Complexity 52 - Mark Moffett on Canopy Biology & The Human Swarm

Complexity 55 - James Evans on Social Computing and Diversity by Design

Complexity 87 - Sara Walker on The Physics of Life and Planet-Scale Intelligence

Complexity 90 - Caleb Scharf on The Ascent of Information: Life in The Human Dataome

Complexity 95 - John Krakauer Part 1: Taking Multiple Perspectives on The Brain

Episode Transcription

John Krakauer (0s): As meaning machines, as semantic, understanding machines, with this superpower, we have run roughshod over all the other intelligences on the planet. So that's just empirical proof that we've got something. What is it? And you know, someone like Dick Lewontin in his famous article on this said we're simply not gonna be able to tell the evolutionary story of this. One reason is all the intermediate species between chimpanzees and us, the 24 hominids, are gone. But I'm much more on the side that there is something new that's discontinuous, emergent, and fascinating.

We don't know how to conceptualize it. Either we think we can extrapolate from the rodent neuroscience, or we think we'll stumble across it at DeepMind and OpenAI. That may be true, but at the moment I don't see it in a form where you can go, ah, yes.

Michael Garfield (1m 15s): What makes us human? Over the last several decades, the once vast island of human exceptionalism has lost significant ground to wave upon wave of research revealing cognition, emotion, problem solving, and tool use in other organisms. But there remains a clear sense that humans stand apart, evidenced by our unique capacity to overrun the planet and remake it in our image. What is unique about the human mind and how might we engage this question rigorously through the lens of neuroscience? 

How are our gifts of simulation and imagination different from those of other animals? And what, if anything, can we know of the curiosity of even larger systems in which we're embedded, the social superorganisms, ecosystems, technospheres within which we exist, like neurons in the brain? Welcome to Complexity, the official podcast of the Santa Fe Institute. I'm your host, Michael Garfield, and every other week we'll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

This week we conclude a two-part conversation with SFI External Professor John Krakauer, Professor of Neurology and Director of the Center for the Study of Motor Learning and Brain Repair at Johns Hopkins. In this episode, we talk about the nature of curiosity and learning, and whether the difference between the cognitive capacities and inner lifeworld of humans and other animals constitutes a matter of degree or one of kind. Be sure to check out our extensive show notes with links to all of our references at complexity.simplecast.com.

If you value our research and communication efforts, please subscribe, rate and review us at Apple Podcasts or Spotify and consider making a donation or finding other ways to engage with us at santafe.edu/engage. Please also note that we are now accepting applications for an open postdoc fellowship, next summer's undergraduate research program and the next cohort of Complexity Explorer's course in the digital humanities. We welcome your submissions. Thank you for listening.

So of the pieces that you'd written that I read for this conversation, I actually took the most careful study of this piece, Prospective Learning: Back to the Future. And, you know, a strand that may be apparent to people having listened to this conversation so far, and I'm curious if you agree with this, but it strikes me from reading your work that basically learning and science are effectively synonymous. And in this piece, you and the other members of the Future Learning Collective make the point that the way learning is mostly modeled and understood is retrodictive.

I mean, we've talked about this on the show a lot, like when I had Cris Moore on and we were talking about, you know, the problem with things like predictive policing: that they're not actually predictive. This is increasingly well understood in terms of, you know, machine learning overfitting to the training data. And you spend a good deal of this paper talking about, you know, out-of-distribution data. But yeah, there's this thing about how we're not trying to just make sense of the past: what learning really is, is an alignment to what could be.

And so I'd love to hear you unpack this a little bit and then I can get into the weeds of this paper. 

John Krakauer (4m 39s): You know, we can go into the weeds of this paper, but I really wanna make it very, very clear that I was part of this collective, but this is really Josh Vogelstein, who was a founding member of the Learning Salon, actually. And Josh is this remarkably original thinker at Johns Hopkins who is extremely interested in AI. He very much likes the idea of learning and very much comes out of the world of AI that thinks that you can be sort of a tabula rasa.

Now, of course, he believes in inductive biases and long-term learning, but they do think that they have within their reach algorithmic solutions to out-of-distribution learning and how to solve these problems. I should also tell you that I was the critic on board here. Take a step back: one of the things that Josh has talked a lot about is, can we create animal intelligence? In other words, let's stop talking about humans. Look at the vast repertoire that a mouse or a cat has.

They can switch between behaviors, they're flexible, they can learn across the lifespan, they're adaptable. These are all things that AI at the current time does not have. It interpolates, it doesn't extrapolate, it's brittle. You learn a second task and you override the first one. You have to interleave tasks. You know, there's meta-learning, which is very interesting; people like Jane Wang at DeepMind are doing that kind of work. So it's very much about: how do you get learning across a lifespan? How do you learn multiple things?

How do you generalize from those things and not have catastrophic interference between them? That's really the project: how do we get to even the flexible umwelt of a simpler animal? Now, my take on that, and this is something where Josh and I argued a lot, is that there is no equivalent in non-human animals of the very thing that they're trying to get at in this paper.

In other words, you are not going to solve for the kind of things that we must have in our brains to be able to have this discussion for our podcast by solving the problems of lifelong learning and multiple-task learning that are talked about in that paper. So that is a manifesto for at least getting AI to do what animals do and to break this problem of brittleness: not being able to extrapolate out of distribution, catastrophic interference, and all those things.

But I just want you to know that I welcome that project and I think it's a really important one, but I'm not sure that it's gonna get us to the kind of psychological states that we started this conversation with, which are my interest. Yann LeCun will say sometimes, and sometimes it'll be someone else, that look, let's at least get a cat first before we're worrying, like François Chollet does, about humans. This paper is really an animal intelligence project: what can we learn about that?

And then, finally, it's not just about animal intelligence but about learning. In other words, Josh's fundamental belief, with the rest of the people on that paper, is that learning has been very poorly understood and modeled by neuroscientists, and learning is what the computer scientists really know a lot about. Machine learning is a very sophisticated mathematical treatment of learning and all the different algorithms you can use and all the ways you can do in- and out-of-distribution learning. In other words, what Josh will say is, look at all the mathematics of learning that is present in the machine learning world, and look at how impoverished the treatments of learning are in neuroscience.

And surely we could begin to have an animal lab in the basement of DeepMind or OpenAI where there would be cross-pollination between a very sophisticated view of learning and lovely empirical neuroscience work that has a very poor understanding of learning. That's what this is really about, right? It's about a more sophisticated repertoire of mathematical treatments of learning, bringing them to bear on the mysteries of how animals can learn all these different tasks and be so flexible, and, you know, a pox on both your houses: neuroscience is a bit impoverished where it talks about learning, and current AIs are so inflexible.

So how do we help both those impasses, right? That's what this whole project's about. I really came in just because I'm interested in behavior, I'm interested in learning, and I'm also interested in the unique human perspective, even though I have done animal work. And so I was sort of the fly in the ointment, really, on that paper. But it's very much in the tradition that mathematical theories of learning from outside of animal biology and neuroscience could greatly improve those sciences.

That's their belief. I don't know exactly how to think about that. I'm really impressed by them. They're an amazing group, led by Josh, I think, and Konrad Kording. I'm interested to see where it will go. I should not be given driving-seat credit for this paper.

Michael Garfield (10m 2s): For sure. So, you know, I hope it doesn't make it awkward...

John Krakauer (10m 7s): No, I just wanna make sure that listeners know that I was an invited guest and critic. And you know, I'm very close to Josh and Konrad and Tim Verstynen, actually, but I just don't want to be given more credit than is at all due to me for this project, which is very much their baby. That's all I'm saying.

Michael Garfield (10m 26s): One thing that I do want unpack for people though is the team here gives a list of the four desiderata, right? Continual learning constraints, curiosity and causal estimation and then identifies these as features that distinguish prospective learning, future oriented learning from retrospective learning. And then gives all these lovely examples of how far back you can witness behavioral evidence of anticipation all throughout, you know, the animal kingdom. 

And you can start to beg the question of whether this was convergent or whether it's just a deep, ancient truth. Which I thought was kind of a strange question, really, because, I mean, the intuition at SFI would be that arguably even non-animals are doing this. There are ways that you can see, like, mycorrhizal affiliations doing something that looks like this. And just because we have now dug ourselves deep enough into talking about this paper, we probably owe it to people...

John Krakauer (11m 34s): Sure, sure. 

Michael Garfield (11m 35s): To map this out for them. 

John Krakauer (11m 37s): I think anticipation and prediction are not the same thing. There are many impressive anticipatory circuits that you and I use every day that are a mixture of having been hardwired and learned. Prediction is a stronger notion than anticipation and in a way implies simulation and this gets to the idea of learned models of the world that you can then simulate on to predict your next step or what you're gonna do. And then that leads to the notion of planning into the future. 

In neuroscience there's huge interest in model-based learning. You know, the two big areas are model-based navigation and model-based decision making. And for a model, in my view, to really be a model is to suggest that it is a substitute for the world, something that you can operate on instead of the world, and simulate and predict with. So that's very much a view on the computational machine learning side as well: simulation.

Just to be absolutely clear, I think simulation doesn't really happen. What you see much more often is anticipation, and then you get weird cognition like we do, and heuristics. But simulation is a will-o'-the-wisp. It's hard to find. Now, whether simulation is a worthwhile endeavor when you do things in AI? Sure. But I just don't think it's what animals are doing. So a lot of the view here is that you can develop models of the world which you can simulate on for prediction.

You can learn those models, you can switch between those models. As for mathematical frameworks for talking about building models of the world that you can simulate on and switch between: reinforcement learning has that framework. So the authors of the paper have differing views on how much you're gonna get out of reinforcement learning versus other approaches. But it's important to see what's at stake here, which is: can you explain how, through a mixture of inductive biases and learning algorithms, across a life you can acquire a number of separate task representations that you can switch between, and do so in a predictive manner based on what you think is coming in the world?

That's what people in animal neuroscience and animal behavior have been interested in. And the idea here is the computer scientists can bring a list of things that you need in order to be an intelligent animal. It really is this belief that there's gonna be a marriage between animal neuroscience and a better toolkit of mathematical approaches. I don't know how well the graft will take. 
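[A sketch from the show notes editors, to make John's distinction concrete; this is an editorial illustration, not anything from the paper or from John's own work. An agent with a small learned transition model of a toy world "simulates on the model," here via value iteration, to choose actions, instead of reacting to the world directly. All names and numbers are illustrative.]

```python
# Toy line-world: states 0..4, food at state 4. The agent has a *learned*
# transition model and simulates on it, rather than reacting to the world.

ACTIONS = ("left", "right")
MODEL = {(s, a): max(0, min(4, s + (1 if a == "right" else -1)))
         for s in range(5) for a in ACTIONS}   # (state, action) -> next state
REWARD = {s: 1.0 if s == 4 else 0.0 for s in range(5)}

def plan(model, reward, n_states=5, gamma=0.9, sweeps=50):
    """Value iteration: 'simulating on the model' to score every state."""
    values = [0.0] * n_states
    for _ in range(sweeps):
        values = [max(reward[model[(s, a)]] + gamma * values[model[(s, a)]]
                      for a in ACTIONS) for s in range(n_states)]
    return values

values = plan(MODEL, REWARD)
state = 2
while state != 4:
    # Choose by simulated outcome on the internal model, not by trial and error.
    action = max(ACTIONS, key=lambda a: REWARD[MODEL[(state, a)]]
                 + 0.9 * values[MODEL[(state, a)]])
    state = MODEL[(state, action)]
    print(action, "->", state)
```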

Michael Garfield (14m 26s): Yeah, this is curious to me, because, you know, this piece talks about the model as hypothesis: an internal model of the world, also called a hypothesis. That's a piece of the formalization.

John Krakauer (14m 38s): Yeah, that's what I was talking about. 

Michael Garfield (14m 39s): When I think about that, and I think about the way that you and the other members of this team unpack that in terms of estimations of, you know, risk and its relation to curiosity, I'm curious, cuz I feel like this came up somewhat when you were on the intelligence panel in the InterPlanetary Festival lineup this last weekend. And the conversation I heard after that panel was that it wasn't clear to folks...

And I guess it's still not clear to me now precisely what you see as exceptional, as a matter of kind, in humans. And like you said, you must have been the fly in the ointment, because this piece really seems to be trying to draw a continuity, or contiguity.

John Krakauer (15m 28s): That's exactly right. It is. And Josh and I have argued about this. That was always the fun bone of contention: I was arguing for discontinuity.

Michael Garfield (15m 40s): Is that in our sociality? 

John Krakauer (15m 41s): No, no. That's an origin story of where it comes from. But I'm just saying, we know what the word sociality means. We know what internal model means. No other animals have a clue what those things mean. They don't need to know what they mean. In other words, the point I'm making, at that panel the other day and here, is that this world of meaning and pragmatics and symbols is just a different universe we stumbled across. And it gave us a new weapon to exploit the world with.

Now most of the time, even we as humans, and all the animals and all the modeling and all that's being discussed there, don't need that. You just have your innate inductive biases. You have learning algorithms. Those learning algorithms allow you to alter what you were born with, and you can do that. But if you are a cat in Istanbul and you are a cat in Boston, you basically look exactly the same. You do very similar things. There may be some adaptations.

But your cat is a cat is a cat. Whereas humans can be trapeze artists, they can be machine learning programmers, they can be composers. There's this vast repertoire of things that you can do that are based on true models of the world that are overt, understood, semantic. And this project is kind of a counter-project to Melanie Mitchell, who wants to look at the barrier of meaning and semantics and symbols and analogies and language.

And make no mistake here, Michael: the world of that paper and the world that I'm more interested in, as Melanie is, are just, at the moment, unrelated. And that's because you can be intelligent in many different ways using really smart dumb algorithms that do 99% of the lifting required to exist on this planet, without ever having to do podcasts, write books, and send probes up into space. Whatever that weird mutation was, and we have no idea what it is, that isn't gonna get you there.

Michael Garfield (17m 59s): Well, let's talk about something that is held in common, to the point of cliché in bringing up cats. The same thing, arguably, that is animating, that is necessary but not sufficient for sending probes into space, and that kills the proverbial cat, is our curiosity. And curiosity is one of the four traits that this paper identifies as future-focused, and it is something that we do not observe in artificial intelligence as it exists today.

And so, you know, just because this podcast basically flies the banner of curiosity, this seems like a good place to land it. Especially because, as I've always reflected on it, curiosity and fear really are the sort of common language for these orientations: fear being a conditioned aversion and curiosity being an opening to possibility. And these are anchored in the brain. You can describe them in all those reductionist terms; you say, oh, dopamine.

John Krakauer (19m 7s): Curiosity, I mean, is sort of information seeking to reduce surprise, in the Fristonian sense. You know, curiosity is simply a way of cutting down the mystery of the place, the world around you, as quickly as possible so that you can predict it. We always used to notice: you introduce a new object into the house and the cats start to circle it, inspect it. What's this novel object in the space? And there's no doubt that there's some, I hate to say this, but it's a little bit like a random number generator.

There's a sort of exploratory mode that gets switched on. And by the way, it's very interesting that even if you look at primates, it seems as though animals have something almost probabilistic: they never just fix on the sure bet. They will see it as just a high probability versus a low probability, but they'll still explore the low-probability option on occasion. It's almost as though they have an implicit setting, which is: always check the low probability, because the world might change and you might not notice.

So it's almost as though there's a clever behavior there, and you could call that curiosity: I'm gonna check on this unknown. You know, maybe there's food in that box, even though there's food in the dish right to my left. I'm still curious; who knows? I mean, there's a lot of work, classic animal papers in the 1970s, showing that rats like to work for food. They'd rather pull a lever a few times than be given the food for free. There are all sorts of interesting phenomena.
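[An editors' sketch, not from the conversation: a minimal version of the implicit policy John describes, an agent that mostly exploits its best current estimate but keeps occasionally sampling the low-probability option, which is what lets it notice when the world changes. Parameter values are illustrative.]

```python
import random

def explore_exploit(payoff_at, n_steps=1000, epsilon=0.1, alpha=0.1):
    """Two-option bandit: mostly take the current best bet, but keep
    occasionally checking the other option in case the world has changed."""
    estimates = [0.5, 0.5]                     # running payoff estimates
    for t in range(n_steps):
        if random.random() < epsilon:
            choice = random.randrange(2)       # check the low-probability box
        else:
            choice = estimates.index(max(estimates))  # go to the dish you know
        reward = 1.0 if random.random() < payoff_at(t)[choice] else 0.0
        # A constant step size keeps the estimates tracking a changing world.
        estimates[choice] += alpha * (reward - estimates[choice])
    return estimates

# The environment flips halfway through: the old "sure bet" goes dry.
print(explore_exploit(lambda t: [0.8, 0.2] if t < 500 else [0.2, 0.8]))
```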

Michael Garfield (20m 42s): Is that sort of the critique of John Maynard Keynes and Economic Possibilities for our Grandchildren, where he talks about automation, and, yeah, no, right, right. We're like, oh, we're gonna get rid of labor, and it's like, no, we found more work...

John Krakauer (20m 52s): For ourselves. It's actually interesting. Yes, I never thought about that, but it was called "rats prefer to work for food" or something like that.

Michael Garfield (20m 57s): No fully automated luxury communism. Sorry.

John Krakauer (21m 1s): It's fascinating, but what I wanted to say is that when Maxwell wanted to know, when he famously pointed and said, what's the go of that? That curiosity of wanting to know how something works, or what it is, versus a cat circling a box that's just been introduced into the apartment: both can be called curiosity. What I fear, and David talks about this, is that when we call them similar, we're in the realm of metaphor and analogy, finding mathematical optimization theories for why it's good to exploit and explore and change the ratio of the two, and calling exploration curiosity.

That's all good. But it's not the same as the curiosity we mean when we're talking about it in the vernacular. So I actually disagree that there's a point of overlap: there's curiosity one and there's curiosity two, and they're not the same, in my view.

Michael Garfield (21m 57s): Sit with me, please, and help me get there, because, you know, I think of two things. I don't have the citation at hand, but I remember hearing about a study of raccoons, where raccoons living in rural areas, when presented with a novel potential food source, spend less time investigating it than raccoons living in dense metropolitan centers. And the thinking around that was that living in a more surprising, uncertain, ever-changing, and novel environment introduces a kind of injunction to become more curious.

And that seems related. Before we started this conversation, I was talking about some of the papers that had really founded my own curiosity in complex systems, and one of them was Lachlan and Slater's The Maintenance of Vocal Learning by Gene-Culture Interaction: The Cultural Trap Hypothesis, which talks about how the songbirds with large vocal repertoires seem to be those in very heterogeneous environments, where the likelihood of finding a mate nearby is relatively low and they have to travel great distances.

And so there's something about the intelligence of the individual as a function of the complexity of its environment. And I'm gonna get myself fired saying this, but this seems kind of related to the way that you see people voting with their values depending on rural or urban life. By virtue of urban scaling and just a daily encounter with diversity, you're automatically more disposed to curiosity.

John Krakauer (23m 46s): Sure. But I mean, the fact that there can be volatility in the environment, a probability of change, which then leads to changes in the parameters of switching or changing: even in simple motor learning tasks, you can show that you can change your learning rate as a function of how much your environment is changing. There are lots of clever, flexible algorithms that seem to be sensitive to the statistics of the world.
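[An editors' sketch of that last point, assuming a simple delta-rule learner rather than any model from John's lab: the learning rate rises when recent prediction errors are large, i.e. when the environment looks volatile, and falls back when the world is stable.]

```python
def adaptive_delta_rule(observations, base_rate=0.05, gain=0.2):
    """Delta-rule learner whose learning rate tracks recent surprise,
    a crude stand-in for volatility-sensitive learning."""
    estimate, surprise = 0.0, 0.0
    history = []
    for x in observations:
        error = x - estimate
        surprise += 0.1 * (abs(error) - surprise)     # running average of |error|
        rate = min(1.0, base_rate + gain * surprise)  # volatile world -> learn faster
        estimate += rate * error
        history.append(estimate)
    return history

# A stable stretch, then a sudden shift: the learner speeds up after the change.
data = [1.0] * 50 + [5.0] * 50
print(adaptive_delta_rule(data)[-1])   # ends close to 5.0
```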

I mean, Konrad and, like, they love this kind of stuff. And you know this is true of the whole free energy principle: the idea that you can be sensitive to the statistics of the world and learn to model it in order to anticipate it, and that there's a whole algorithmic repertoire that can be given to you as a species and then parameterized through learning. That's all amazing. In other words, there are loads of these intelligent behaviors. I'll just give you an example in humans: anticipatory postural adjustments. In other words, I can have you hold onto a handle and just crouch while holding onto it, and then, unbeknownst to you, suddenly the handle will pull on you, and you'll have a very quick, intelligent, reflexive response to keep from falling over.

Now I put a cup of coffee in your right hand and do the same thing. Your very short-latency reflexive response will take into account that you're holding a coffee cup in your right hand, and your quick reflexive anticipatory postural adjustment will be intelligent and will change in the context of holding that cup. You have no conscious idea about what you just did. You have no concept that you did it, you have no conscious awareness of it, but you made a flexible adjustment that was very intelligent.

Same thing, as I always ask people: the first muscle you contract when you press an elevator button is your gastrocnemius, in your leg, because if you didn't, you'd fall over, since your center of gravity has changed, cuz you've lifted your arm. Loads and loads and loads of these intelligent behaviors. They have nothing to do with overt decision making or understanding the world. They are built in; the reflexes are built in, and the algorithms to adjust those reflexes are built in.

Michael Garfield (26m 6s): So you don't see this as a curiosity embellished by higher levels of representation.

John Krakauer (26m 10s): So in other words, that sort of argument that, oh, it's all this stuff, John, all these innate, unconscious, intelligent dumb-smart algorithms that are all there, and then curiosity, as in: I'm curious about the origins of life, I'm curious about what's at the center of a black hole, is just some minor embellishment on that? That's such a nonsensical statement to me. And that's where you fall flat, because you call one curiosity and you call the other one curiosity, and therefore it's just a matter of some tweak of one to the other.

Michael Garfield (26m 42s): But I mean I have family members that aren't curious about what's in a black hole. 

John Krakauer (26m 46s): Sure. But they wanna know what happens at the end of Game of Thrones. They're curious to know about the ending of Game of Thrones. Do you think wanting to know what happens at the end of Game of Thrones is like deciding to forage somewhere slightly different in the desert because the weather or the climate changes? I just don't think so.

Michael Garfield (27m 5s): You have an emotional stake in it though, right? There's like a morsel. 

John Krakauer (27m 9s): But it's not gonna change your likelihood of reproducing, it's not gonna change your chances of finding food. So why on earth do you care what's gonna happen at the end of Game of Thrones?

Michael Garfield (27m 18s): I've heard the argument that, with jouissance, the psychoanalytic term for the pleasure of harmful activities like smoking, addictive behavior fits into this whole free energy principle thing, because if you know you're gonna be drunk tomorrow night, you know something about the future.

John Krakauer (27m 38s): I've heard this, but Sean Carroll asked Karl Friston this on his show: is it really the same, what my cat does and what my daughter does? Planning to go to college in a few years, versus going up the stairs to where your food bowl or the litter is? And Karl had no answer to that. What he said was, oh, it's like the paradox, you know: when does a grain of sand become a pile of sand? It's just a continuum.

Michael Garfield (28m 2s): Well I mean we've already established there must be differences of kind. 

John Krakauer (28m 6s): Why have we established that? I mean, you know, my arm is made out of skin, bone, and muscle, and so is a bird's wing. Is my arm a little bit of a wing?

Michael Garfield (28m 16s): That's what I'm saying, though: at some point there's a transition.

John Krakauer (28m 19s): You can have substrate continuity and functional discontinuity. And all I'm saying is that when you have words like curiosity and you come up with some mathematical formulation of exploration versus exploitation, you talk about volatility of the environment making it more likely that you might switch. There are many such frameworks and they are implicit algorithms optimized to change in the face of things. That's all great. But it's not the same as saying, I'm curious to know what she will do with her career over the next several years. 

And this is where I have so little sympathy with the Fristonian project and other sorts of minimizing, pared-down treatments, because they run from the homeostatic safety mechanisms of a bacterium to planning for college. And the reason why it's so important to make this point is that it's not that there may not be some similar machinery. You can talk about Bayesian inference in humans and Bayesian integration in the sensorimotor system. You can talk about Bayesian inference where you infer using concepts like gravity and Neptune and Mercury, and think about those kinds of things in a Bayesian way.

And you can think about, where's my arm in space? I'm optimally integrating vision and proprioception to estimate the position of my hand in space, and it seems to be close to Bayes-optimal. So the machinery may be similar, but the thing is that the representation of the position of your hand and the representation of the idea of an idea are not the same. An idea in your head and the position of your hand in space: they're just completely different things.
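[For readers who want the textbook formalism behind "optimally integrating vision and proprioception": the standard Gaussian cue-combination result, sketched here by the editors with made-up numbers. Each cue is weighted by its reliability, the inverse of its variance, and the fused estimate is more precise than either cue alone.]

```python
def fuse_cues(mu_vision, var_vision, mu_proprio, var_proprio):
    """Bayes-optimal fusion of two noisy Gaussian cues to hand position:
    each cue is weighted by its reliability (inverse variance)."""
    w_vision = (1 / var_vision) / (1 / var_vision + 1 / var_proprio)
    mu = w_vision * mu_vision + (1 - w_vision) * mu_proprio
    var = 1 / (1 / var_vision + 1 / var_proprio)  # fused estimate is sharper
    return mu, var

# Vision says 10 cm (sharp); proprioception says 14 cm (blurry).
print(fuse_cues(10.0, 1.0, 14.0, 4.0))  # -> (10.8, 0.8), pulled toward vision
```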

Michael Garfield (30m 2s): This is awful because what we haven't done in this conversation at all is get into your work about spatial memory and coding.

John Krakauer (30m 12s): It's fine. I mean, what we're talking about here are the most SFI-flavored things that I care about. I think sensorimotor physiology is more of an empirical science, and it's fascinating, but I think it's no coincidence that you've moved the conversation in this direction.

Michael Garfield (30m 28s): When you're talking about the kind of n-dimensional manifolds across which something you're tracing moves, and Hamiltonians, this is the SFI thing, you know: abstract spatialization and the symbolic constructs.

John Krakauer (30m 41s): Just like I'm saying the creativity of the kind you were mentioning is really just analogous to creativity in the human sense, it's also true, I think, for navigation. In other words, there's a huge amount of interesting work being done on the rodent hippocampus and place cells and grid cells and navigation. But you know, if you look at the way that's done, what's really fascinating in that work is that you can navigate and do really well without having to have a full-blown map in your head in the way that Tolman originally conceived of it.

Whereas we as humans can do it, right? I can have you close your eyes and ask you to tell me what you see as you go in the front door of your house or your apartment and turn left: what's on your right? You can do it. You can picture your house in your mind quite well, and then you can actually make decisions based on that simulation. Now that's fascinating. But to navigate, most of the time, even when you drive home or find your room in your hotel or you are a rodent, you don't have to do that work of simulating.

So in other words, we are in this very strange place where there are a lot of ways to do lots of intelligent behaviors without actually having to conjure up representations over which to simulate and to understand, cuz you can just get away without it. You know the example I gave: if an alien came from outer space and watched, as I said the other night, Pac-Man versus the ghosts in a game being played, the alien would go: intelligent person playing as the ghosts, and intelligent organism as the Pac-Man.

But the representations being used by the human and the representations being used by the machine are just totally unrelated. They look analogous because you're going: they're both trying to win, they're both trying to get a goal. So at one level of granularity you call them both goal-oriented, both intelligent. It's just that the algorithmic universes that they exist in are, in my view, completely unrelated.

Michael Garfield (32m 45s): I don't know how you feel about me saying this, but this is, I think, Simon Conway Morris' position on human intelligence and animal intelligence. He just wrote a book dispelling six myths of evolution in which he makes a very similar point.

John Krakauer (32m 59s): In a way, it's because I've done so much work on the sensorimotor system and all these unconscious smart algorithms. You could have argued, oh, John should be the person who wants the sensorimotor story to make it all the way to the cognitive story, cuz then he's got maximum generalizability for the work he does. But I'm actually saying no, the sensorimotor story is not gonna get us all the way, and so I'm actually reducing the generalizability of a lot of the work that I've done. And I think that we're missing something.

You know, Ida will say, from the Learning Salon, that there are algorithms we've yet to discover. She must be right, in a way; we can't possibly have found them all. But I think we also need to better understand the representations upon which you operate algorithmically. So I think the mysteries are in the representations. As I said, you can be Bayesian on knowledge and you can be Bayesian on sensorimotor information and use the same machinery, but that's just presupposing the terms you plug into the Bayes equation.

There's nothing in the Bayesian formulation that tells you where the terms come from.

Michael Garfield (34m 8s): So you've led me right to the last, possibly impossible, question I have for you. In hearing all of this: so many of the conversations I hear and participate in at SFI are, if not explicitly pointing to, at least implicitly invoking what's over the horizon of our own knowing. I was just speaking with David Wolpert, who had written at length about this, and I had him talk about it in the last episode.

And so when I think about all this, it kind of begs the question: well, there are these discontinuities. I see it as akin, like we were talking about before we even started, to these other kinds of major evolutionary transitions. And I'm of the opinion, from everything I've learned here, that what makes us human is actually the fact that from the beginning of what we call modern human cognition, we've been embedded in cognitive structures larger than we are that are doing something...

...that we don't understand and that are instrumentalizing us for their purposes. So if the curiosity of the rat differs in kind from the curiosity of the human, what is the curiosity of a society, or of a civilization?

John Krakauer (35m 33s): Let me just be very clear about that. I'm glad we're finishing on this. I mean, Celia Heyes has this book Cognitive Gadgets that a lot of people would say is very interesting. When you talk to some of the people who believe in embodiment and embeddedness and sort of free-energy-principle stuff, the story is: there's a sensorimotor story, and then there's culture, and then there's a loop between culture and sensorimotor, and what we call cognitive is just floating out there in the loop, and you don't have to go into the brain to come up with some extra capacity that you call cognitive.

It's kind of a consequence of being embedded in culture plus having these lower-level sensorimotor rodent systems. So, rodent plus culture and suddenly you get human cognition, kind of thing, to be really overly simplistic about it. To me it's just a complete non-starter. Do you see? It's just a non-starter. I'm gonna really go out on a limb here. I'm just saying, look, curiosity in a rodent is almost an algorithm inherited through evolution, cuz it's useful to have learning on top of innate behaviors, to have some degree of flexibility in a world that's changing.

So in other words, it made sense to have an adaptive system. So in a sense, a rodent has not chosen to be curious. Now, you could argue that young children haven't chosen to be curious either. All I'm saying is, it turns out that the world is amenable to science. It's amenable to being understood. You can actually work out the laws of motion, you can work out what's happening in a black hole. It turned out that the universe, bizarrely, is comprehensible, and comprehending it in an abstract, conceptual, theoretical way is useful for world domination.

And it just turned out, for reasons we don't understand, that evolution stumbled across the fact that you can use biological tissue to comprehend and theorize about the world. It was an accident, because most creatures, and you could argue even humans, can survive quite well without theorizing overtly about the world and writing fictional stories about it and making TV shows about it or doing science. So let's make no mistake: we have a capacity to tap into the comprehensibility of the universe, and it just turns out that the universe is comprehensible and there's an organ that can comprehend it, and you can get a lot of benefit out of that.

Michael Garfield (38m 3s): So can we comprehend the questions that the technosphere is asking? Can we comprehend the nature of its curiosity? 

John Krakauer (38m 13s): I just don't think that there's any need yet to ascribe some meta-agency to the culture, because in the end it is the cumulative consequence of this weird capacity that we have to comprehend, and when you put us all together with this capacity, it's extraordinary what emerges. Don't get me wrong, right? But yes, there was a time when people couldn't do calculus, but the brain was there, ready with its capacity to understand calculus as soon as Newton and Leibniz invented it.

But you know, you could spend hours and weeks and months with a chimpanzee; it'll never understand. Why not? It's embedded in the same culture. You can bring up chimpanzees with children. So you have to have this mix: a capacity, which is blatantly able to learn new things, and the fantastic consequence of that capacity tiled across many people, which is that you get a culture. But what we don't understand is what this capacity is, probably located somewhere in prefrontal cortex and parietal cortex, that allowed us to tap into the comprehensibility of the universe and exploit it.

And that ability to be curious, to theorize and fantasize and imagine the universe: it's just not there. Now the question is, can we get to it through some tweak, an extrapolation, so that when we look at the extrapolation we'll go, ah, it's just a little bit more of X and Y and Z, and you'll just be able to infer by looking at the diagram that, oh: comprehension of the universe, science. But it'd be much more modest and much more open-minded to say that we don't know, rather than to write papers like Jovo did or talk the way other people in computer science do.

It will just happen, do you see? And yet there's a desire to cut it down to size, whether it's information or deep learning or large language models: to just somehow tame it, cut it down, and eviscerate the mystery of it. And I have no axe to grind. I'm not a mysterian; I'm more than willing. But at the moment there's no there there. And the deep irony of it is that the very capacity to understand the world is what's leading us to want to say we've understood this mystery prematurely.

It's the very same capacity that I'm talking about that is tripping itself up. 

Michael Garfield (40m 43s): Is that what you find yourself most curious about? 

John Krakauer (40m 46s): I would really love for someone, whether it's the animal neuroscientists or the cognitive scientists or the computer scientists, to finally explain this peculiar version of understanding of the universe that humans have, which is quite distinct from all the other ways that you can be intelligent in the world and survive quite well. The ghosts in Pac-Man will be doing a great job, but they're not frightened, they're not anxious, they're not gonna be excited when they win or lose.

They're not invested; they're just running the algorithm. Whereas the poor human running that Pac-Man, it's, oh my God, oh my God, you know, how am I gonna get away from this? Am I gonna lose? Will I get the world-beating score? What is my friend thinking watching me? In other words, what is all that? We just don't know. And I think that's what I would most like: for somebody to come along from one of those three domains and say, John, we understand it now, in a way that makes you go aha, just the way you go aha when I tell you how eye movements work or how the stretch reflex works, so that I'll have that same moment of compressed satisfaction.

Michael Garfield (41m 58s): It's not Hofstadter's strange loop. It's not Jessica Flack's top-down causation.

John Krakauer (42m 2s): But those are descriptions. Yeah, they're not explanations. They don't feel like an explanation. They don't feel like, ah, that's how the stretch reflex works. Ah, that's how ATP synthase works. Ah, that's how a ribosome does translation of RNA into protein. I get it. It's a re-articulation of the problem that may ultimately lead to explanations, but I think you'd agree they're not explanations of that same kind.

Michael Garfield (42m 35s): This is exactly what I'm asking you, though, cuz it strikes me, and this is the nonsense that keeps me awake at night when my wife is begging me to get some sleep so that I can be present the next day, that there's something like a mirror test that we're failing as humans, a horizon to our own capacity to comprehend.

John Krakauer (42m 57s): That may well be. The reason why there are hundreds if not thousands of books on the philosophy of mind, but none that I know of on the philosophy of the liver, is because for some reason this organ gets us entangled in a world-knot where the other organs don't. And we don't really know why. I mean, it's still made out of biological tissue. It's not magic. So why is this organ giving us such trouble? And it's not like even nervous tissue is giving us trouble all the time. We do quite well with the retina, we do quite well with the spinal cord.

We do quite well with muscles. We can treat those pieces of the nervous system the way we treat livers and lungs. And yet suddenly, when we get to this language and reasoning and pragmatics and semantics and understanding, it doesn't yield in the same way. Now there are two ways out of that. One is that it's not yielding cuz you've made up a fiction, and you are trying to explain a fiction, and you are never gonna get away from it cuz there's no there there.

Hence some people just go: just solve goal-directedness, information-seeking bacteria in dishes. That's all you need to think about. Everything else you're just inventing, and it doesn't exist. It's the ether, it's phlogiston: consciousness and thinking are like the ether, just phlogiston. You're making up a problem that's just tying you up.

Michael Garfield (44m 19s): I'm not super sympathetic to that.

John Krakauer (44m 21s): But Dan Dennett is very interesting, because he believes in this cognitive capacity, yet he doesn't believe in any of the neuroscientific subpersonal stories that get told about that capacity, that seem to be imbued with that capacity. In other words, he wants to have his cake and eat it: yes, the phenomenon exists, but not the way that you're gonna try and do a neuroscience of that phenomenon.

Michael Garfield (44m 45s): Or maybe Susan Blackmore is more hard line. Maybe. 

John Krakauer (44m 48s): That's right. I would say that that's a view I just don't agree with. I think there's something, a new ability, that we have used, I think, to obvious empirical effect: we're destroying the planet. As meaning machines, as semantic, understanding machines, with this superpower we have run roughshod over all the other intelligences on the planet. So that's just empirical proof that we've got something. What is it? And you know, someone like Dick Lewontin in his famous article on this said we're simply not gonna be able to tell the evolutionary story of this.

One reason is that all the intermediate species between chimpanzees and us, the 24 hominids, are all gone. But I'm much more on the side that there is something new that's discontinuous, emergent, and fascinating. We don't know how to conceptualize it. Either we think we can extrapolate from the rodent neuroscience, or we think we'll stumble across it at DeepMind and OpenAI. That may be true, but at the moment I don't see it in a form where you can go, ah, yes.

Michael Garfield (45m 50s): See, and this is where, just to tie a bow on this, it strikes me that had our salon been a little bit more diverse, perhaps someone would've stepped in and said, now it's time to stop talking about it and enact this practically in some kind of way. The reason we can't reach this through abstraction is because it can only be reached through a second-person knowledge of presence, the presence of this thing that we're grasping at.

John Krakauer (46m 21s): I mean, David, you know, talking to David about this, I think he thinks it's a legitimate question how we are going to develop a science of this particular discontinuity, if it is a discontinuity. But David will say that there are discontinuities in biology. Is this one of them? I think it is. And I think the fact that we don't have AGI in computer science and the fact that we don't understand the difference between the chimpanzee and the human are evidence of the same conceptual gap in both fields.

Michael Garfield (46m 50s): So we're alchemists at this point, right? Trying to turn lead into gold.

John Krakauer (46m 53s): Again, it could turn out that it's a non-problem, that the reason why it feels like such an impasse, such a gap, is because we're just imagining the ether. Or it's a real problem waiting for a revolutionary new step. I think it's the latter, but you know, there's no room for dogmatism about this, and I'm not gonna be prematurely anesthetized out of worrying about it.

Michael Garfield (47m 20s): Fair. Maybe someone's had to anesthetize themselves to listen to us for 120 minutes. Any parting thoughts? John, it has been a treat to sit here with you.

John Krakauer (47m 38s): I mean, one thing I will say is, make a plug for the multidisciplinarity of your podcast and of SFI: encourage discussions amongst enough different people, like, for example, David Wolpert and then me, and then see what happens. I'm sure he would vociferously disagree; he already told me yesterday that he doesn't believe in free will. I think that the best chance we have is a salon-like atmosphere where these conversations happen, so that you can begin to make progress.

I think it's gonna be a kind of SFI-flavored, multidisciplinary world where we'll be able to say, was this just an illusion, or was there something that simply was waiting for a new conceptual framework? And I think it's discussions like this. So kudos to you.

Michael Garfield (48m 27s): Thanks. And hey, the takeaway: fund more salons.

John Krakauer (48m 30s): Absolutely. And not to make it a hobby; make it part of people's scientific upbringing that you can be in salons and enjoy them.

Michael Garfield (48m 38s): Awesome. John, thank you so much for being on the show. 

John Krakauer (48m 41s): Wonderful. 

Michael Garfield (48m 43s): Thank you for listening. Complexity is produced by the Santa Fe Institute, a nonprofit hub for complex systems science located in the high desert of New Mexico. For more information, including transcripts, research links, and educational resources, or to support our science and communication efforts, visit santafe.edu/podcast.