COMPLEXITY: Physics of Life

Alison Gopnik on Child Development, Elderhood, Caregiving, and A.I.

Episode Notes

Humans have an unusually long childhood — and an unusually long elderhood past the age of reproductive activity. Why do we spend so much time playing and exploring, caregiving and reflecting, learning and transmitting? What were the evolutionary circumstances that led to our unique life history among the primates? What use is the undisciplined child brain with its tendencies to drift, scatter, and explore in a world that adults understand in such very different terms? And what can we transpose from the study of human cognition as a developmental, stage-wise process to the refinement and application of machine learning technologies?

Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I’m your host, Michael Garfield, and every other week we’ll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

This week we talk to SFI External Professor Alison Gopnik, Professor of Psychology and Affiliate Professor of Philosophy at the University of California, Berkeley, and author of numerous books on psychology, cognitive science, and childhood development. She writes a column at The Wall Street Journal, alternating with Robert Sapolsky. Slate said that Gopnik is “where to go if you want to get into the head of a baby.” In our conversation we discuss the tension between exploration and exploitation, the curious evolutionary origins of human cognition, the value of old age, and she provides a sober counterpoint about life in the age of large language machine learning models.

Be sure to check out our extensive show notes with links to all our references at If you value our research and communication efforts, please subscribe, rate and review us at Apple Podcasts or Spotify, and consider making a donation — or finding other ways to engage with us at

Lastly, we have a bevy of summer programs coming up! Join us June 19-23 for Collective Intelligence: Foundations + Radical Ideas, a first-ever event open to both academics and professionals, with sessions on adaptive matter, animal groups, brains, AI, teams, and more. Space is limited! Applications close February 1st.

OR Apply to participate in the Complex Systems Summer School.

OR the Graduate Workshop on Complexity in Social Science.

OR the Complexity GAINS UK program for PhD students.

Thank you for listening!

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast theme music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

Mentioned & Related Links:

Alison Gopnik at Wikipedia

Alison Gopnik’s Google Scholar page

Explanation as Orgasm
by Alison Gopnik

Twitter thread for Gopnik’s latest SFI Seminar on machine learning and child development

Changes in cognitive flexibility and hypothesis search across human life history from childhood to adolescence to adulthood
by Gopnik et al.

Pretense, Counterfactuals, and Bayesian Causal Models: Why What Is Not Real Really Matters
by Deena Weisberg & Alison Gopnik

Childhood as a solution to explore–exploit tensions
by Alison Gopnik

The Origins of Common Sense in Humans and Machines
by Kevin A Smith, Eliza Kosoy, Alison Gopnik, Deepak Pathak, Alan Fern, Joshua B Tenenbaum, & Tomer Ullman

What Does “Mind-Wandering” Mean to the Folk? An Empirical Investigation
by Zachary C. Irving, Aaron Glasser, Alison Gopnik, Verity Pinter, Chandra Sripada

Models of Human Scientific Discovery
by Robert Goldstone, Alison Gopnik, Paul Thagard, Tomer Ullman

Love Lets Us Learn: Psychological Science Makes the Case for Policies That Help Children
by Alison Gopnik at APS

Our Favorite New Things Are the Old Ones
by Alison Gopnik at The Wall Street Journal

An exchange of letters on the role of noise in collective intelligence
by Daniel Kahneman, David Krakauer, Olivier Sibony, Cass Sunstein, & David Wolpert

#DEVOBIAS2018 on SFI Twitter

Coarse-graining as a downward causation mechanism
by Jessica Flack

Complexity 90: Caleb Scharf on The Ascent of Information: Life in The Human Dataome

Complexity 15: R. Maria del Rio-Chanona on Modeling Labor Markets & Tech Unemployment

Learning through the grapevine and the impact of the breadth and depth of social networks
by Matthew Jackson, Suraj Malladi, & David McAdams

The coming battle for the COVID-19 narrative
by Wendy Carlin & Sam Bowles

Complexity 83: Eric Beinhocker & Diane Coyle on Rethinking Economics for A Sustainable & Prosperous World

Complexity 97: Glen Weyl & Cris Moore on Plurality, Governance, and Decentralized Society

Derek Thompson at The Atlantic on the forces slowing innovation at scale (citing Chu & Evans)

Episode Transcription

Alison Gopnik:

There's something really special about childhood and it makes humans in particular go way out on the end of the distribution in terms of how immature we are as children and how much investment as a group, as a species we have to put into just keeping those children alive. So the sort of big general idea to start out with was, well, just having more time to learn might be the advantage of childhood. But when you look at, especially at neuroscience, we see that it isn't just that children are sort of around for longer. They really have foundationally different kinds of forms of brain and forms of learning compared to adults. 

SFI/Michael Garfield:

Humans have an unusually long childhood and an unusually long elderhood past the age of reproductive activity. Why do we spend so much time playing and exploring, caregiving and reflecting, learning and transmitting? What were the evolutionary circumstances that led to our unique life history among the primates? What use is this undisciplined child brain with its tendencies to drift, scatter, and explore in a world that adults understand in such very different terms? 

And what can we transpose from the study of human cognition as a developmental stage-wise process to the refinement and application of machine learning technologies?

Welcome to Complexity, the official podcast of the Santa Fe Institute. I'm your host, Michael Garfield, and every other week we'll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe. 

This week we talked to SFI external professor Alison Gopnik, professor of psychology and affiliate professor of philosophy at the University of California at Berkeley, author of numerous books on psychology, cognitive science, and childhood development. She writes a column at the Wall Street Journal, alternating with Robert Sapolsky. Slate said Gopnik is “where to go if you want to get into the head of a baby.” In our conversation we discussed the tension between exploration and exploitation, the curious evolutionary origins of human cognition, the value of old age, and she provides a sober counterpoint about life in the age of large language machine learning models. 

Be sure to check out our extensive show notes with links to all of our resources at If you value our research and communication efforts, please subscribe, rate, and review us at Apple Podcasts or Spotify, and consider making a donation or finding other ways to engage with us at Lastly, we have a bevy of summer programs coming up. Join us June 19th through the 23rd for Collective Intelligence: Foundations + Radical Ideas, a first-ever event open to both academics and professionals with sessions on adaptive matter, animal groups, brains, AI, teams, and more. Space is limited. Applications close February 1. Or apply to participate in the Complex Systems Summer School, the Graduate Workshop on Complexity in Social Science, or the Complexity-GAINs (UK) International Summer School program for PhD students. Links to more information for all of these programs are available in our show notes. Thank you for listening.


SFI/Michael Garfield:
Alison Gopnik, I have long waited for this and I am so excited to talk to you today. Thank you for being on Complexity podcast. 

Alison Gopnik:

Well, very happy to be here, Michael.

SFI/Michael Garfield:

Let's start by walking people back into your own history and into the story of how you became the researcher that you are now. What animates the questions that you're asking? What's the origin of all of that? 

Alison Gopnik:

I began in philosophy as an undergraduate honors student, and I thought that I was just going to stay in philosophy. That was completely my focus and to some extent that's been true for my entire career. That hasn't changed. Indeed, I'm still an affiliate in the philosophy department at Berkeley. The philosophical question that I really cared about was what you might call the problem of knowledge. That problem is -- how is it that we know as much as we do about the world around us? After all, all that reaches us from that world is just a stream of photons at our eyes and streams of molecules of air at our ears. 

And yet we seem to understand that there's an abstract world full of people and places and things and leptons and quarks. And the question is how do we ever manage to do that? That's one of the great classic philosophical questions going back to Plato and Aristotle. I thought that a good way to actually answer that question was to look at children, because children are the people who do that. In point of fact, they're the ones who actually manage to learn as much about the world as they do. So seeing what they did, I thought then and I continue to think to this day, is a really important method for trying to answer that big question about how we know about the world. 

That was the sort of inspiration, and it turned out that even though I was a philosophy honors student, I had still taken a whole bunch of psychology classes and ended up with a psychology major sort of accidentally as well. And then I went to Oxford for graduate school and I was spending half of my time down in, believe it or not, Logic Lane, which is where the Oxford Philosophy Department is, and the other half up in Summertown. Those are the villas where women and children sort of were parked back in the old days of Oxford. 

Oxford is very geographically divided. The closer you are to the river, the older and more prestigious the enterprise that you're involved in. I think classical epigraphy is the top of [the list?], but developmental psychology and children are pretty low. And what I discovered was that I was spending all of my time with two different communities of people, and that I could spend the rest of my life with either of them: one community of completely disinterested seekers after truth who cared more about finding out about the world than pretty much anything else, and another community of spoiled, egocentric people who needed women to take care of them all the time. Since the first community was the babies and the second community was the philosophers, I figured I'd actually rather spend the rest of my life with the babies than the philosophers. What happened was that at that point I kind of switched from being primarily in philosophy to becoming a developmental psychologist and actually looking at what children do that enables them to learn as much as they do.


Then in the past 20 years or so, I've extended that question to thinking about AI and computation, because of course it's the central question, especially for the modern versions of AI that rely so much on machine learning -- how can we have a computational system of any sort that could go out and learn about the world? Again, I think children are the kind of demonstration case that such a system actually exists. If we understood what they did, we would be able to answer some of these philosophical and computational questions. 

SFI/Michael Garfield:
Wonderful. Well to launch this off, let's lens this through your 2020 piece in Philosophical Transactions of the Royal Society of London, B, “Childhood as a Solution to Explore-Exploit Tensions.” I love a good review paper. I love a paper that just brings it all together and this is one of those. Can you help people understand how weird we are as human beings? 

Alison Gopnik:

As I say, I started out asking this question about what we could learn from children about how learning is possible. But there's another kind of meta question, which is, why is it that children especially seem to have these incredible learning capacities? And that's connected to a broader question, which is why do children exist at all? Why do we as humans have this long period of immaturity? That question got more puzzling the more I looked at the evolutionary background, because we actually have a childhood that's twice as long as that of our closest primate relatives -- chimpanzees by the time they're seven are producing as much food as they're consuming. 

Even in forager cultures, humans aren't doing that until at least age 15 if not later. So that's really puzzling. Why do we have this very long period of childhood? And it turns out that in fact this isn't just true about humans. There's a very general relationship between how long a period of childhood an animal has and how many neurons it has, how big a brain it has, anthropomorphically, how smart it is, certainly how much it relies on learning about [??]. 

In evolutionary biology, people have talked about the idea that it is that long protective period that actually enables you to learn as much as you do. So there's something really special about childhood and it puts humans in particular way out on the end of the distribution in terms of how immature we are as children and how much investment as a group, as a species, we have to put into just keeping those children alive. Therefore, the sort of vague general idea to start out with was, well, just having more time to learn might be the advantage of childhood. 

But when you look especially at neuroscience, we see that it isn't just that children are sort of around for longer. They really have foundationally different kinds of forms of brain and forms of learning compared to adults. Many of these are actually things that might look like bugs, like not being very good at having focused attention, not being very good at long-term planning. Why would we do that? Why would we have this long period in our lives where we seem to be so incapacitated and why would that be connected to our capacities for learning? 

When I started doing the work in AI, one of the really very general ideas that comes across again and again in computer science is this idea of the explore/exploit trade-off. The idea is that you can't get a system that is simultaneously going to optimize for actually being able to do things effectively -- that's the exploit part -- and being able to search through all the possibilities. Let me try to describe it this way. (I guess we're a podcast, so you're going to have to imagine this. Usually, I wave my arms around a lot here.) Imagine that you have some problem that you want to solve or some hypothesis that you want to discover. You can think about it as if there's a big box full of all the possible hypotheses, all the possible solutions to your problem, all the possible policies that you can have if, for instance, you're in a reinforcement learning context. Now, you’re at a particular space in that box: that’s what you have now, that's the hypothesis you have now, those are the policies you have now.
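The explore/exploit trade-off Gopnik refers to has a standard textbook form in reinforcement learning, the multi-armed bandit. Here is a minimal epsilon-greedy sketch (an illustrative toy, not anything from the episode; the arm payoff probabilities are invented):

```python
import random

def run_bandit(epsilon, arms=(0.2, 0.5, 0.8), pulls=2000, seed=0):
    """Epsilon-greedy agent: explore a random arm with probability epsilon,
    otherwise exploit the arm with the best estimated payoff so far."""
    rng = random.Random(seed)
    counts = [0] * len(arms)
    values = [0.0] * len(arms)  # running mean reward observed per arm
    total = 0.0
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(arms))                        # explore
        else:
            arm = max(range(len(arms)), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < arms[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total += reward
    return total / pulls

# A little exploration (epsilon=0.1) earns more on average than
# choosing arms purely at random (epsilon=1.0).
mixed = run_bandit(epsilon=0.1)
pure_explore = run_bandit(epsilon=1.0)
```

The point of the sketch is only that neither extreme wins: a pure explorer never cashes in on what it has learned, while a pure exploiter risks locking onto a mediocre arm.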


What you want to do is get somewhere else, you want to be able to find a new idea, a new solution. The question is, how do you do that? The idea is that there are actually two different kinds of strategies you could use. One of them is you could just search for solutions that are very similar to the ones you already have and you could just make small changes in what you already think to accommodate new evidence or a new problem. That has the advantage that you're going to be able to find a pretty good solution pretty quickly. But it has a disadvantage, which is that there might be a much better solution that's much further away in that high dimensional space. 

Any interesting space is going to be too large to just search completely, systematically. You're always going to have to choose which kinds of possibilities you want to consider. It could be that there's a really good solution, but it's very different from where you currently are. The trouble is that if you just do something like what's called hill climbing, where you just look locally, you're likely to get stuck in what's called a local optimum. You're likely to get into a position where every small change you can make is going to make things worse. So it's going to look like you should just stay where you are. But a big change could have made things better.
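The local-optimum trap described here can be sketched in a few lines of Python (a toy illustration; the landscape values are made up):

```python
# A 1-D "hypothesis space": each index is a hypothesis, each value its score.
# There is a small local peak at index 2 and a much better global peak at 8.
landscape = [1, 3, 5, 4, 2, 1, 4, 7, 9, 6]

def hill_climb(start):
    """Greedy local search: only move to a neighbor with a higher score."""
    pos = start
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(landscape)]
        best = max(neighbors, key=lambda p: landscape[p])
        if landscape[best] <= landscape[pos]:
            return pos  # every small change makes things worse, so stop
        pos = best

print(hill_climb(0))  # climbs to the local peak at index 2 and gets stuck
print(hill_climb(6))  # starting nearer the global peak, it reaches index 8
```

Starting from index 0, the greedy searcher never sees the far better peak at index 8 because every path to it begins with a step downhill.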



The way that typically gets resolved in various kinds of forms is to start out with this big broad search through lots and lots of possibilities, jump around from one possibility to another, and then slowly cool off and narrow down. The metaphor that's often used is this one about temperature. You could think about a big box as if it had air molecules in it instead of hypotheses. A low-temperature search would be just a search where you weren't moving very much. A high-temperature search would be this big, much noisier, more random bouncy kind of search. I like to say sometimes to anyone who has a four-year-old at home, which of those sounds more like your four-year-old? Four-year-olds are both literally and metaphorically noisy and bouncing. So the solution is to start with this big broad search. The disadvantage of course is that you might be spending time trying out really weird strange things that aren't going to help you very much. Then when you see something that looks like it's in the right ballpark, narrow in with a cooler search. It's like what happens in metallurgy with annealing, where you heat up a metal first and then gradually cool it to end up with a more robust metal.


But of course if you're thinking about childhood from that perspective, from the perspective of that kind of explore/exploit contrast or from the perspective of high-temperature, low-temperature annealing, then a lot of the things that look like bugs turn out actually to be features. Having a lot of random variability, being noisy, having a broad focus of attention instead of a narrow focus of attention -- all those things that are really not good from the exploit perspective, when what you want to do is just implement a policy as quickly and effectively as you can, turn out to be real benefits from the explore perspective, where what you want is to learn as much as you can about the world and explore as many possibilities as you can. And what my lab and a bunch of other labs recently have been doing is showing that you can demonstrate even formally that children are making those kinds of high-temperature explore decisions compared to adults. 

SFI/Michael Garfield:

It's really interesting to press further into your question about why it is precisely that the human childhood is so much longer than the childhood of our fellow primates. There's an evolutionary backstory that you posit in this paper, saying “it is plausible that increased environmental variability was associated, in particular, with human evolution.” I'm thinking of the paper that David Wolpert and David Krakauer just recently wrote in response to the condemnation of noise by Daniel Kahneman and his co-authors. They’re saying, well, when you don't know what the problem is, you need to leave it open for a while. There's this thing that's going on where it seems like we have taken the shape of our ancestral unpredictable environment -- that in some way our plasticity is the consequence of our having been subject to climate shifts and forced migratory movement over the surface of the planet. Speak to that please. 

Alison Gopnik:

I hadn't really thought about that before, but I think it is interesting that someone like Danny Kahneman is anti-noise. Because the perspective of grown-up cognitive psychologists -- in the sense of people who mostly study the cognitive psychology of grown-ups -- and in fact the whole field that Kahneman was one of the founders of, judgment and decision making, is about exactly that: judgment and decision making. It's not about learning and exploring. It's about -- once you have the information that you have, how do you actually go out and make the right kind of decision? That's been the focus of so much energy and beautiful work in grown-up adult cognitive psychology. It's interesting, as you say. But of course from that perspective, noise is the enemy, and Danny's new book is about noise being the enemy. Again, this is a classic explore/exploit trade-off if you're thinking about it -- not from the perspective of judgment and decision making, not from the perspective of adult cognitive psychology where the job really is to go out and make good decisions quickly and effectively. But think about it from the perspective of child psychology, where the job is to explore as many possibilities as you can. Then having a lot of noise and variability actually turns out to be crucial. It turns out to make a lot of sense -- it makes sense to have a system that can do both. It can start out being noisy and variable, but also is capable of narrowing in later on.


Now in terms of your point about what was the evolutionary trigger for this -- I think as always with evolutionary explanations, the best answer, especially for humans who evolved so distinctively so quickly, is a kind of cascade effect. There is some evidence that there were changes in the ecological context because of things like climate variability. Now we cause climate change, whereas in the past, climate change caused us. But part of what happened is that as a response to that, you also get humans being able to do things like design their own environmental niche, to alter the environment in ways that other primates, for instance, don't so systematically do. And we also have humans being involved in cultural transmission, so that one group could find something out and pass it on to another group. 

As you mentioned, one of the things that's really distinctive about humans from the time we evolved is that we're very nomadic. Primates are still pretty much in the same places in Africa that they evolved in. As soon as we were human, we were going out and moving. Put all that together, and what it means is not only were there the ecological changes in the environment, but we humans ourselves change our environment with each generation because we alter the environment and then we move around from one environment to another. All that means that there is this kind of cascade of being sensitive to environmental variability.


Then finally, because humans are such a cooperative social species, again there's a kind of interesting cascade here. People like Sarah Hrdy have said it's precisely because we have to take care of those helpless children for so long that it requires a tremendous capacity for altruism and cooperation. Then, because of that, understanding all the different ways that your social world could be organized becomes another source of variability. 

So even if it started out with just the climatic variability, then you have the environmental variability that comes from humans altering your environment, you have cultural variability, and you have this nomadic variability, and that all makes for an environment in which early plasticity and learning are going to emerge.

SFI/Michael Garfield:

In a 2018 workshop here at SFI on developmental biases in evolution, there was a lot of discussion on metamorphosis -- a trend towards neoteny that has hooks in the fact that evolution is lazy and that it's easier to lose traits than it is to evolve new complex traits. 

I think about, for instance, how the difference between vertebrates and our sister group, like the tunicates or sea squirts, which start as free-floating swimming larval forms with a head. But then they eventually settle down and anchor themselves to something and filter feed. That also has to do with the fact that basically they're just trying to find a sort of stable, predictable environment within which they can be embedded and then just implant themselves and slurp up a flow of nutrients. 

But then there's this other strategy, which is you never settle down, you never develop that, and so vertebrates, being mobile adults, seem like we were already preconditioned toward a sexually mature juvenile or larval form of our ancestral creature. 

Alison Gopnik:

There's a couple of things there. One of them that I think is really worth emphasizing, relevant to the SFI workshop and other things that people at SFI have thought about, is the general idea that often evolution is selecting for what biologists call life history. Instead of selecting for what the adult form is going to be like -- the morphology of this animal -- evolution is selecting for changes in the developmental trajectory. And those often end up having consequences, obviously, for what the adult animal is like. I think that fits very well with this story about childhood.


Another point to make is that you mentioned the sea squirts. This general phenomenon of having plasticity early on and then less plasticity later doesn't seem to be applicable only to creatures that have sophisticated brains like ours. There's beautiful work by a biologist named Emilie Snell-Rood, some of which is in that same special issue of Phil Trans, that shows that among cabbage white butterflies, you see the same kind of difference. So butterflies don't learn very much. They're not terribly smart even as insects go. But there is nevertheless a difference between the ones who are just completely relying on their innate reflexes -- find a leaf that's green and just plant your eggs there -- and ones that are actually picking out different kinds of leaves depending on the concentration of different chemicals there or whether there's already competition, and making this more learned kind of decision about planting their eggs. It turns out that that's correlated with how long a period they take to mature. So even for butterflies, they're producing fewer young and giving them a longer chance to mature when they rely on learning. 

Emilie has made this argument even about oak trees. If you think about something like a plant's root system as being a way of trying to explore the potential in the environment, you see this relationship between the complexity of that and the fact that the oaks take a long time to grow. It takes a long time for an acorn to become a mighty oak, partly because it has to send out a root system that's going to be able to make the most of the environment it finds itself in. So there does seem to be a very general strategy across many different kinds of organisms. 

And then the last thing that you mentioned is that there's an old argument about human adults being like neotenous apes. The argument is that part of what makes even adults different from some of our primate relatives is that we are more childlike, we're more plastic, we can learn more, we vary more. But that's not true in comparison to human children. I think human children and adolescents are really the kind of cutting edge for that kind of distinctively human intelligence. 

SFI/Michael Garfield:

There's a point in this paper where you talk about a study where participants are told that there are different kinds of blocks, and there are combinations of blocks that they can place on top of each other, some of which will lead to rewards and some of which have costs. You say that after one negative trial, adults quickly assumed the most obvious rule and avoided the costly blocks. But then they never received evidence that showed that the actual rule was more complex. They failed to learn the correct rule and fell into a learning trap. Preschoolers, by contrast, continued to try all the blocks on the machine.


This strikes me as something like a scaling law going on here. Elsewhere in this paper you say that more than 60% of four-year-olds’ calories go to the brain at rest, compared to 20% for adults. So I'm thinking of the controversial paper by Johan Chu and James Evans, “Slowed Canonical Progress in Large Fields of Science.” Just to dip out of talking about individual humans for a moment and talk about the collective process of knowledge discovery or construction, it seems that we have reached a mass at which we are kind of fumbling around in the same way that you're seeing people in these experimental settings. I'm curious what your thoughts are on all that.

Alison Gopnik:

I think there's a really interesting question about how adult science works in relation to these ideas. And you know my first book was called The Scientist in the Crib and one of my slogans is that it's not that children are little scientists -- it's that adult scientists are basically big children. And I think there's some evidence for that. I think part of what may be happening is that in individual circumstances, we're able to use these kind of broader, more plastic learning mechanisms that we see. Often what you'll see is a sort of cycle, where within an individual scientist or within a scientific community for a scientific research program, you have this early period where everything's up for grabs. There's a revolution, everyone's trying different kinds of things, and then it narrows into something that looks more like normal science, where the problems have been pretty well specified and people work through those specific problems. Then again, there’s the classic Kuhnian idea about paradigm shifts. What a paradigm shift is, really, is a search in the broader space. What happens in a paradigm shift is instead of just filling in the details in your local search, you do this big broad search and you end up in a really different part of the space. You could see interesting parallels between what's happening in a social setting -- I think James Evans’ work is a really nice example of that -- and what you might see in an individual setting of an individual child, say. 

I've been working on a paper with Willem Frankenhuis, an evolutionary biologist who's done a lot of really beautiful modeling work. It's interesting that even though there are intuitions about the developmental progress of exploration and exploitation all over the place, there really needs to be a lot of work to try and specify in more detail just which environmental situations are going to lead to which developmental trajectories, with what sort of consequences. Because there certainly are counterexamples, like the cephalopods -- the octopus, for instance, is a very smart animal that doesn't live for very long, only about a year, and doesn't really have a childhood at all. 

So there's this question about what's going on, how are those kinds of creatures resolving these kinds of tensions? There are other ways that you can do it. And science, in a way, is a nice example of this. For example, instead of having a developmental division of labor where my young self is exploring and I'm exploiting, you could have a division of labor where some people are exploring and some people are exploiting. That seems to be what insects are doing. For example, with honeybees you have different kinds of roles of the scouts and the scouts are being fed by everybody else, like they're being nurtured by the workers. 

But you still have the same kind of trade-off there for the whole hive between the exploration and the exploitation. And I think to some extent in human societies, when you end up with institutions like science, the scientists are like the honeybee scouts. They're given this special role of functioning like the children and actually exploring. The developmental strategy has a nice advantage though, which is that you don't have to worry so much about free riders, because you are the same organism. Your exploration is going to be used to help you from an evolutionary point of view -- it's going to be used to help you continue to survive in the future. So you don't have to worry quite as much about some of the group selection problems that you have if you're trying to think about an entire society. 

SFI/Michael Garfield:

You say in your article that younger children also remember information outside the focus of goal-directed attention better than adults and older children do. I'm thinking of all of my adult neurodiverse friends who are just casting a very wide net. Within a few days I'll be interviewing Dani Bassett and Perry Zurn about their book Curious Minds, and the different strategies people take as they explore, the busybody or the hunter or the dancer, the ways that people move across these graphs of knowledge.

It strikes me that, as you said, even within our own childlike species there is a great deal of variability. For instance, I know that certain members of my family don't really consider dreams to be of any interest, and those people, for what it's worth, also seem to be the ones most capable of performing in the business world. They achieve results and have a very narrow focus, and yet they are completely uninterested in the hermeneutics of dream interpretation and that kind of thing. They say it takes a village....

Alison Gopnik:

Two things to say about that. I think there's a pretty good argument that dreaming is also serving this kind of explore/exploit function. There's an old perception, going back to Shakespeare, that childhood imagination, dreams, and poetry are all sort of similar enterprises, and in fact I think there's some functional reason to believe that. Just as the fact that children are sort of incompetent is actually a feature in terms of their being able to explore, the fact that you shut down your motor system when you're asleep is a bug in terms of going out and getting things done, but it's a feature in the sense that your brain is then free to do the kind of exploration and consolidation that happens when you're dreaming. So I think there's quite an interesting analogy, and other people have suggested this as well, between the cycles we have between waking and sleeping and this sort of developmental cycle between childhood and adulthood.


But another thing to say is that because I'm at Berkeley, I talk a lot to people in Silicon Valley and so forth, and almost invariably the question people ask is: well, how can we get adults to be more like kids? How can we be more exploratory, more creative? I think there's a real downplaying of the fact that people are getting out there and using logistics to actually make things happen effectively. That's an incredibly valuable skill. Maybe because writers and scientists tend to be the people who don't have as much of that skill, we downplay it compared to the skills of childhood exploration. But that exploit part of adulthood is really important. That's what actually enables the children to flourish, because someone has to go out there, actually focus, and get the resources to make things happen in the real world.

So even though in some sense my sympathies as a dreamy scientist are with the four-year-olds, we would not want to be in a world that was run by four-year-olds. You really need that capacity for long-term planning, for executive function. Those are all genuinely important skills. And the question is how you negotiate the trade-off between those skills and the complementary skills of exploration and possibility.

One thing I've been getting interested in historically is that you often see cultures setting up the idea of these kinds of cycles. An example I like is that in medieval Japan, the assumption was that you would be a shogun or a king and go out in the world and do things up to a certain point, and then you would retreat and become a monk. So then you could actually be in the monastery, doing this kind of exploratory thinking without having to make anything in particular happen. And then maybe you go back to being shogun again.

I think that's a nice model for human adults as well, where instead of doing just one thing or another, we do these things in cycles. Lots of scientists will report, and this is certainly true for me, that there's about a ten-year interval where you find a new problem, it's really interesting, and you do a lot of work on it. If you just found the new problem, that would not be useful, because you wouldn't have actually done the work of going out, doing the experiments, and checking to see whether the hypothesis was right. But then there's a point where you just get bored and feel: I don't want to do one more. Back in the eighties I was one of the first people to do what's called theory-of-mind research, which has become very popular now, and I can remember thinking, I don't want to do another false-belief task; please, let someone else do that. Which indeed they have been doing for the last thirty years.

So I think you have some of these same cycles even in your career as a scientist, where you start out exploring, you exploit for a while, and then you go back to exploring.

SFI/Michael Garfield:

With that question of trade-offs, something that I feel very viscerally is that our current world is one of profound, unprecedented, extraordinary novelty production. I don't want to jump ahead too quickly into the AI discussion, but I have a lot of artist friends who are staring down the barrel of image-generation tools, which some of them think are taking food out of their mouths and others are very excited about. The question becomes: what do you tell someone growing up now to study? How do you teach someone who is growing up in a world that is changing as fast as ours is? It's a curious question.


And appended to that, I'm thinking about a fabulous episode of the Stuff to Blow Your Mind podcast a few years ago on the dark side of neuroplasticity, talking about NMDA antagonists and people trying to reopen the critical learning window in adulthood so they could learn another language. They said it was basically like opening the hood of your car and just pouring oil everywhere: you don't know what you're rewiring; we don't have a targeted way of doing this. But at the same time, everywhere we look, the pressure is on to encourage lifelong learning, because people are being routinely displaced economically and shuffled around geographically.

I'm curious about your reflections on the challenges facing modern adults in a world that seems to demand a kind of paradox: both more executive function and more plasticity.

Alison Gopnik:

I think that is, of course, the challenge of the kind of work you do as an adult in general. Again, to take the example of science: you want to keep that plasticity, but you also have to organize, run your lab, go out and get grants, and exercise all those executive-function capacities. And the question is how you can manage to keep both of those things happening. I think it's interesting, again, that if you look across cultures, there are traditions of activities people perform that are really designed to induce a kind of plasticity.

You mentioned NMDA, and I think there's pretty general consensus that the mechanism by which, say, psychedelics are therapeutic is this induction of plasticity; that seems to be essentially what those chemicals do. Again, the problem is, once you've induced plasticity, once you're in this high-temperature state, how do you cool off, and where? Independently of chemicals, things like religious practices, mysticism, and meditation are all examples of things that people have done for as long as there have been adults, and they have the effect of putting people back into this kind of childlike state of plasticity.

But again, the problem is you can't be in that state indefinitely as an adult. You need to be able to also be in the cooler exploit state, which is why things like integration in psychedelic therapy seem to be so important: you need to make sense of the experiences you've had in order for them to actually be useful or helpful as you go on. But I do think the real work has always been done, and will continue to be done, by having new generations of young people who come in, see the new world and new environment from scratch with this kind of broad exploration, make sense of it, and then take the things they discover and apply them to the next set of problems they're going to have to solve.

SFI/Michael Garfield:

I'd like to take this opportunity to peg into this other article that you wrote for the Association for Psychological Science, "Love Lets Us Learn: Psychological Science Makes the Case for Policies that Help Children." You talk there about the role of childhood adversity in the variability of the rates at which a brain might age. What does this look like in terms of the ontogeny of an individual? There are kind of two threads here: one is adversity as a driver of neoteny, and one is adversity as curtailing childhood. I'd like to hear you unpack that for people.

Alison Gopnik:

I've gotten increasingly interested in, as it were, the flip side of this childhood plasticity, and that flip side is that you need to have adults who are caregivers. One of the things that's very distinctive about us as humans is that we have a much wider range of caregivers for children than any other species does. Not only do we have biological mothers taking care of children, we also have pair-bonded fathers, which is very unusual: only about 5% of mammals have pair-bonded fathers who are involved in caring for the young. We have what Sarah Hrdy, the great anthropologist, calls "alloparents," people who are not biological kin but who are involved in caring for the young.

We also have my personal favorite: postmenopausal grandmothers. We have this extra twenty years of life past around age 50, again very different from chimpanzees. And there's a lot of argument that those extra years, even though they're not spent directly producing young, are helping the young to survive. So we have this very wide range of caregiving. Not only that, we can extend that kind of caregiving not just to children but to elders, or to friends, or to the ill. That capacity to care for others seems to be very deep and important in human beings, particularly the capacity to care for children.

What kind of function does that serve? It seems to me that a kind of complement to childhood plasticity is having adults who are giving signals that the world is safe, that you don't actually have to go out and accomplish things, that you don't have to exploit. And that gives you the resources and capacity to explore. A number of people have recently been suggesting, and there's some data to support this, that when those signals aren't there, when the caregiving is unpredictable, when resources are scarce, or when the world is full of threats and difficulties, that actually affects this developmental transition.

In particular, and somewhat counterintuitively, what it seems to do is accelerate the rate of development, this explore/exploit shift. If you think about it, that makes sense. If you're getting signals that life is going to be short, there aren't a lot of resources around, and there aren't a lot of caregivers who are going to nurture you, it makes sense to move into the state of, okay, let me figure out how to make my way effectively in the world, rather than staying in the state of exploring and learning as much as I can. Empirically, that seems to be what happens: adversity seems to speed up the developmental process, both in terms of psychology and certainly in terms of brain and neural development.


At the Center for Advanced Studies, I'm involved in a group that's trying to think about caregiving. Just as children were for a long time neglected in thinking about a lot of these philosophical and computational problems, caregivers have been very neglected from an intellectual perspective. They've been undervalued and overlooked in general, which you can see by looking through books of philosophy or psychology. The example I particularly like is moral psychology, where people have done enormous amounts of work on the psychological origins of our morality, and yet this very central moral domain that most of us live in, how you care for the people you're close to, how you care for children, how you care for elders, is invisible in moral psychology. So thinking about how care works, how caregiving works, and how it would work if, for example, we had intelligent AI systems, is a very important and very under-thought-through set of problems.

SFI/Michael Garfield:

A key aspect of AI is its relationship, as a cultural technology, to the rest of us as living, sentient, sapient agencies in the world. I really appreciated the sobriety of your position on this in the talk you recently gave at SFI, and I'd like to hear you unpack that for folks as well.

Alison Gopnik:

One of the debates that comes up as soon as you're thinking about things like artificial intelligence is what it means to have an artificial agent. The model we understandably reach for is a model of us: that somehow we're going to have individual agents who are going out in the world and doing things. I think that's a very bad model for the big advances we've actually had in AI, which, I think most people in the field would agree, is misnamed. If we could go back and do it again, artificial intelligence really isn't the right term to describe the great computational and technological advances that we've had.

On the other hand, it might not have gotten as many column inches if we'd called it statistical learning from large datasets, which is actually what it is. What I've argued is that instead of thinking about something like ChatGPT as an agent and then debating whether it's an intelligent agent or not, the right way to think about it is as a kind of cultural technology. By cultural technology I mean things like writing, print, language, libraries: all these technologies we have that enable us to take information from many different people and give it to a new generation.

So lots of people have argued, and I think this is probably right, that cultural evolution is one of our most distinctive forms of intelligence. Again, children are the ones who are doing this. The fact that each generation of children can take all the information that previous generations have discovered and use it themselves, without having to rediscover it, is a great human superpower. And one of the interesting things we've done, really since the evolution of language, is find new ways of making that transmission of information from one person to another more effective. Think about the difference between writing and speaking, an oral language: when you have writing, you can get information not just from the people within your immediate purview but from people way off in the past, from people in many different places, distant in space and in time.


What's happened, and I think there's a very good case to be made for this, is that those cultural technologies have time and again had really deep, transformative effects on our society. An example I like is that there were changes in printing technology in the 18th century that made it much easier for basically anybody to get a printing press, print pamphlets, and distribute them. That technology was really responsible for the American Revolution: a lot of the ideas about democracy and the Enlightenment got spread through these pamphlets. On the other hand, as the great historian Robert Darnton has pointed out, in France that same technology led to an absolute spew of libel and obscenity, things that make Twitter and Facebook even at their worst look pretty tame by comparison. It also spread ideas about democracy and Enlightenment, but in France that ended up taking a much less beneficent form than it did in America. So that's an example where you have a new technology and it makes a big difference.


I think the way we should be thinking about things like GPT is as a kind of medium, more than as an agent that's going out and being intelligent. You mentioned the artists; I really like this. I was talking about exactly this sort of problem with my brother, who's an art critic, and he quoted an artist friend of his who said: oh, these things like DALL·E are wonderful, because they will immediately tell you, here's the cliché you should avoid. In other words, if DALL·E can generate it, that means it's found all the images that all those trite illustrations are using, and here's a summary of the most trite, boring, clichéd thing you can imagine. So if you're really an artist, make sure you avoid that. She was saying, of course, that the hardest thing to do is to avoid just copying the banal stuff everybody else has done, so she thought DALL·E was a really great aid from that perspective. And I think that's generally true.

Famously, Plato and Socrates thought that writing was going to be a really bad idea, because when you saw the thing that was written down, you'd think it had an authority that it didn't, since all it offered was a summary of what someone else had thought. Now, I don't think that's true for all of AI. Work on robotics, for example, is developing agents that look more like the kinds of agents that biologically developed in the Cambrian Explosion: agents that can actually interact with the world, that have eyes, that have claws that move around, that are embodied, as people say. I think that's a closer analogy to real intelligence. But of course, if you hang out with people in robotics, the roboticists are not even in the ballpark of getting something that can do even some of the simple things that humans can do. Although, again, they're using information about childhood, say, as a way of trying to solve that.


But I think the way to really think about the big things, the large language and large image models, is that they're as powerful as they are because they've crowdsourced massive amounts of human knowledge, human images, and human text. Those techniques are really just crowdsourcing what humans already know. It's not that they're going out, figuring things out, and knowing things themselves.

SFI/Michael Garfield:

So to zag back from that into the question of the implications for a modern person, I'm curious what you as a grandmother are recommending for your own grandchildren as far as their education into a world in which the landscape of these technologies is surprising us on almost a daily basis? 

Alison Gopnik:

There's a wonderful paper that came out recently in Psychological Science, which I wrote about in my Wall Street Journal column, and it's a scientific version of a point I've sometimes phrased by saying: the day before you're born is Eden, and the day after your children are born is Mad Max. Everybody seems to think that the things that happened before they were born aren't technology, they're just life, right? But of course the things that happened within your lifetime, especially after you're an adult, those are big technological changes and innovations.

In this paper they made up something called ["aerogel"?] and said, here's this technology and here's what it does. How harmful do you think it is? How beneficial do you think it is? And then all they did was change the date on which it was invented, so that it was either 15 years before the person answering the form was born or 15 years after. And sure enough, depending on where it fell relative to you, the things that happened before you were born seemed much more beneficial than the things that happened after you were born.

And you know, it's funny, because our first thought about this is that the anxieties, the moral panic about all the terrible things technology is going to do, especially to the children, are overblown. Which I think is actually true. But of course you could make the argument the other way around. Suppose you told someone: here's a means of transportation that's going to be a little more useful, it'll let you get around a little better, but it's going to lead to the end of the planet, and even independent of that, it's going to directly kill millions of people a year in accidents and more through pollution. You would say, no, that doesn't sound like a good idea, that doesn't sound like a good technology. So I think part of it is that the technologies we understand and learn about when we're young feel really different from the ones that arrive later on.


But there's another interesting thing, which is that if you think especially about these cultural technologies, these media, it's hard to think of a case where something that was important to people as a means of communicating or carrying ideas from one person to another has completely disappeared. We still have dance, we still have live theater, we still have live music. Everyone thought those things were going to disappear when you had film, for example, and they didn't. And then once you had film, everyone thought that when you had TV, film would disappear. Even if literal film has disappeared, the institution of movies hasn't, the form of movies hasn't.


And when I look at my grandchildren, I'm struck that we're always battling with them about get away from the screens, don't play that video game, read a book, but they love reading books. They spend a lot of time reading books, going to the theater, and playing musical instruments. And they also spend a lot of time playing Fortnite and doing things on their iPads. I have no doubt that they will very soon be using something like GPT, and they'll do it without quite thinking that that's what they're doing. They won't think of it as mastering a new technology; they'll just think of it as being in the world they're in in the first place.

SFI/Michael Garfield:

Interesting. So no real concern, then. I'd like to return to thinking of it as a scaling principle: that, in the coarsest sense, when you are an adult, it's time to put the things of childhood behind you. And yet we live in a world whose technological intermediation has created an enormous surface area of novelty. I heard David Krakauer describe the condition of modernity as one in which culture is learning ever faster than the individual, so individuals are falling ever further behind. There's a trade-off.


To go back to your piece on the explore/exploit tensions, you mentioned some research showing that "children appear to have the greatest advantage over adults when they must infer hypotheses that have an unusual abstract high-level structure. This makes sense from a computational perspective. High-level abstract schemas typically constrain lower-level hypotheses and shape learners' interpretation of the data." And of course now I'm thinking of Jessica Flack and her work on coarse-graining as downward causation, and Caleb Scharf talking about the dataome and how we're all serving this information architecture we're embedded in, as William Gibson put it, like polyps on a coral reef across the planet.

This is a speculative kind of final shot, but do you think that we are becoming, in general, more childlike by necessity? Do you think that we are maybe retreating into the embryonic in order to better network with one another and surf all of this change?

Alison Gopnik:

I'm torn about that. When you said, well, everything's fine, you don't have to worry, I want to say: of course you have to worry. You have to worry about anything that's coming up in your culture and in your time. I keep thinking about this quote from The Lord of the Rings, of all places, about how these may not be the times we want, but our job is to deal with the times that we have. So in any time, we have the responsibility of trying to work through what would be good and what would be bad in that time.


The cultural technologies of the past can, again, be quite illuminating from that perspective. Think about those 18th-century pamphleteers. There weren't really any newspapers yet, or magazines, or any of the apparatus we think of as part of our modern journalism and media. What had to happen was that people started inventing things like editors and journalists and newspapers, which took this great expansion, this sort of equivalent of the internet where anybody who wanted to could produce a pamphlet and spread it around, and turned it into: no, you don't want just anybody's pamphlet, you want the newspaper that you have some reason to believe is going to be more accurate and better supported.

That's just been the history of human technology all along. Another example I like is electricity. This is one of those cases where, again, it helps to put yourself in the past. Suppose someone came and said: we have this thing we want to put in everybody's house, except that we know it gets hot enough to burn houses down, but we still think it would be really useful, so let's go ahead and do it. Well, it turns out that the reason we can have that powerful force in everybody's house without houses burning down all the time is that the insurance industry said: no, we've invented this thing called a circuit breaker, and if you put electricity in your house, the only way to get insured is to make sure there's a circuit breaker there. I have a son who's a carpenter, and he'll show you the book of building code that's this thick about what you have to do to build a house.


So the problem is that we haven't yet got the code or the circuit breakers for something like Twitter or Facebook. I think we will, but it's not as though that will just happen without anybody trying. People are going to have to work very hard to figure out how to make those technologies productive rather than unproductive.

It does seem right that things are changing quickly now, but think about the difference between someone born in 1820 and someone born in 1860. You go from the fastest thing in the world being a fast horse to steamships, railways, and the telegraph. That's a much bigger shift in your actual lived experience than the shifts of the 20th century and computation.

Now, computation itself is a big deal; that's a big change. But I think it's just very hard to tell how much bigger the changes that come with something like ChatGPT really are than the ones we already live with. I was talking to someone the other day and thinking about how young folks don't remember typewriters. The shift from writing on a typewriter, let alone writing with a ballpoint pen, to writing on a word processor: that's a giant shift in the kind of intellectual work you can do. And the shift to having internet search available is a giant shift in the way we use information. Because most of the people thinking about this now are on the other side of those shifts, it doesn't seem surprising to think that, oh, I could take a big chunk of my article, of my paragraph, and move it somewhere else with a click of a mouse. That doesn't strike anyone as a big deal that's really changing the way they interact with the world. It is a big deal.


I think it's always very hard to judge exactly which kinds of things are going to make an impact and what kind of impact they're going to make. But the hope is that this framework of social norms, regulations, and laws, which human beings are very good at creating, is the counterpoint to our great capacity for technological innovation.


The other thing to say, and again this gets back to the logistics folks, is that I just saw an interesting piece about the four-day week today. Someone was asking, well, what about school? What happens when you're a mom and your kids have to go to school five days a week; how's that going to play out in a four-day week? And someone pointed out that most jobs in the United States are service-industry jobs: being a teacher, a childcare worker, a healthcare worker, taking care of other people, or for that matter working in restaurants or cutting hair. And because it's us talking, the people in the information economy who are worried about GPT coming and taking our jobs, I don't think we think much about the fact that a lot of the activities humans are engaged in are activities we've always been engaged in, especially taking care of and connecting with other humans, and that those are going to continue to be things humans are really good at in a way that artificial systems will find much harder. But again, we don't pay as much attention to them. We don't valorize those aspects of human intelligence as much as we do the sitting-at-our-desks, writing-things-down, playing-with-computers part.

SFI/Michael Garfield:

There are three little flags I want to plant in all of that. One is the conversation I had with Maria del Rio-Chanona about labor-market displacement and her network-science research at Oxford exploring that landscape; Penny Mealy has also done work on this, on where the islands will be as more and more things become automated. Another is the 2020 paper by Jaeweon Shin et al., "Scale and Information-Processing Thresholds in Holocene Social Evolution." There were a lot of SFI people on that one: Michael Price, David Wolpert, Hajime Shimao, Brendan Tracey, and Tim Kohler.

That's another angle that speaks to the diastolic-systolic kind of rhythm we see between a growing population and the need for the analog of the circuit breaker: the new norms, the new regulatory structures we put in place for this stuff. And then, to anchor that somewhat concretely, we have Matthew Jackson's recent research on Facebook showing that you can curb disinformation not through censorship but simply by throttling reach: if you limit the number of times a post can be shared, or the number of people it can be shared with, the problem largely takes care of itself.

So I wonder if we're going to find our level as far as the scale at which we are capable of thinking with one another, and that the circuit breakers we're putting into place will restore a kind of mesocosm in the way that Wendy Carlin and Sam Bowles talked about the return of the civil society during the Covid pandemic saying that it's not just state power and market power now -- it's neighborhood organizations, it's mutual aid networks. 

Alison Gopnik:

I think that's exactly right. There are interesting examples. Who knows what's happening with Twitter at the moment, but there have been some nice experiments by David Rand and Gordon Pennycook showing that if you just ask people, "did you read this article before you shared it?", that in itself reduces misinformation. And if you ask people, "could you just rate this other piece: how accurate do you think it is?" before they click the share button, it makes them much less likely to share misinformation.


Other people have mentioned this as well, but my hobby horse about this is that we have this really fascinating example in Wikipedia of something that we would not ever have thought would have succeeded -- that you could have this kind of crowdsourcing of information and knowledge and have it mostly be good, have it mostly be something that people can go to and have a sense of how to check it. It's mostly a really good resource. And I have to say, I suspect the big difference is that it's a nonprofit organization.

If you think back to the days of television programs, for example -- the BBC, the Canadian Broadcasting Corporation, PBS -- those produced better quality information by and large than the networks did. Not universally, of course -- sometimes they would be dull and the networks would produce something that was valuable. I think the real central problem is that we have a business model that, as people have pointed out, ends up amplifying misinformation, because the business model is catching attention through advertising. There's no reason why the net has to be designed according to that business model, and that business model ends up having a lot of negative consequences. Again, think about the internal combustion engine. The fact that there was a particular business model behind that has certainly had a lot to do with its consequences for good or ill.


I think an idea that a lot of people have had, and that we need to figure out how to implement, is to think about some of these things as being more like public libraries or a public utility kind of model for a lot of what we do with these cultural technologies -- rather than thinking about them as basically a big advertising agency.

SFI/Michael Garfield:

For listeners, that has distinct hooks into the episodes that we did with Diane Coyle and Eric Beinhocker, where Diane Coyle was talking about her argument for reconsidering social media as public utilities. And more recently with Glen Weyl and Chris Moore, where we heard about how Glen has written extensively on how to create technologies that encourage the funding of public goods. 

I just want to thank you so much for this. Just in closing, I would love to know what the questions are that are driving you right now? As you say, I'm young, but reading your work in relation to watching my three-year-old and my one-year-old, I have this distinct sense that I am the basalt and not the molten lava. Where do you feel still molten in your questing? 

Alison Gopnik:

I think this set of ideas about caregiving is really interesting. Some of it came from the policy questions about what we do to make sure that caregiving is available for children, given the data suggesting how important caregiving, especially early caregiving, is. But that just turns out to be a really interesting intellectual question. Even if you're thinking about something like the alignment problem in AI: how do you take another autonomous intelligent system, keep it autonomous, and yet give it the kind of structure and nurture that it needs to be able to grow in a beneficent way?

I think that's a deep human problem we haven't thought about nearly enough. And that's a big area that I'm actually working on, including trying to work on some of it empirically, trying to figure out exactly what we need when we think about caregiving. Another piece of that which is new for me is thinking about elderhood from that perspective. One of the things that we know is that, just as childhood is really distinctive, this last 20 years of elderhood for humans is really distinctive. And there have been arguments by people like Michael Gurven that those 20 years are really important for transmitting cultural information on to the next generation.

That's a lot of what those grandmothers and grandfathers are doing aside from working hard to keep the babies going. From a pragmatic point of view, we're going to have these giant demographic shifts with more and more older people in society. And while this may just be autobiographical, thinking about the kinds of distinctive intelligence that go with that period of life, I feel, is something that's really important and undervalued.


In general, sorry about this, Michael, but we've tended to have this model where the 35-year-old man is the apex of intelligence. For some reason, a lot of 35-year-old men have written things that sound like this: that childhood is just gradually building up to that 35-year-old man, and then elderhood is just gradually falling off from that state. That doesn't make a lot of sense from an evolutionary perspective. Instead, one of my other slogans is that basically we're human up till puberty and after menopause -- that's when we're doing the things that make us most distinctively human, like cultural transmission. In the meantime we're sort of glorified primates who are going out in the world and establishing our place in the dominance hierarchy and trying to find mates and doing all those things that are part of our broader biological inheritance.

I said this for a while in a kind of mean way, which I didn't really think about. But now, of course, I have 35-year-old children and find myself thinking, oh, those poor 35-year-olds, those poor dear things -- they have all this stuff they have to do and the kids and the grandparents are just hanging around and telling stories and figuring out things that are going on in the world and playing and experimenting. We get the good parts and the poor regular adults have to do all the hard work. On that note, I will thank you for your hard work in doing this podcast. 

SFI/Michael Garfield:

Well, thank you. I am right up there at the front of the line for the universal basic income or UBI check when the technological unemployability ship comes in and we can have machines doing this podcast and it won't matter anymore. Then I can be delightfully economically irrelevant and get back to just playing guitar all the time. That will be a day of glory.


Alison Gopnik:

Could be worse. 

SFI/Michael Garfield:
Thank you so much, Alison, for your work and for taking the time to talk to us today.


Alison Gopnik:

Thank you so much for having me, Michael. 

SFI/Michael Garfield:

Thank you for listening. Complexity is produced by the Santa Fe Institute, a nonprofit hub for complex systems science located in the high desert of New Mexico. For more information, including transcripts, research links, and educational resources, or to support our science and communication efforts, visit