COMPLEXITY: Physics of Life

Mirta Galesic on Social Learning & Decision-making

Episode Notes

We live in a world so complicated and immense it challenges our comparatively simple minds to even know which information we should use to make decisions. The human brain seems tuned to follow simple rules, and those rules change depending on the people we can turn to for support: whether we decide to follow the majority or place our trust in experts, for example, depends on the networks in which we're embedded. Consequently, much of learning and decision-making has as much or more to do with social implications as it has to do with an objective world of fact…and this has major consequences for the ways in which we come together to solve complex problems. Whether in governance, science, or private life, the strategies we lean on — mostly unconsciously — determine whether we form wise, effective groups, or whether our collective process gets jammed up with autocrats or bureaucrats. Sometimes the crowd is smarter than the individual, and sometimes not, and figuring out which strategies are better requires a nuanced look at how we make decisions with each other, and how information flows through human networks. Given the scale and intensity of modern life, the science of our social lives takes on profound importance.

This week’s guest is SFI Professor & Cowan Chair in Human Social Dynamics Mirta Galesic, External Faculty at the Complexity Science Hub in Vienna, and Associate Researcher at the Harding Center for Risk Literacy at the Max Planck Institute for Human Development in Berlin. In this episode we talk about her research into how simple cognitive mechanisms interact with social and physical environments to produce complex social phenomena…and how we can understand and cope with the uncertainty and complexity inherent in many everyday decisions.

If you enjoy this podcast, please help us reach a wider audience by leaving a five-star review at Apple Podcasts. Thanks for listening!

Visit our website for more information or to support our science and communication efforts.

Join our Facebook discussion group to meet like minds and talk about each episode.

Mirta’s Website.

Visit Mirta’s Google Scholar Page for links to all the papers we discuss.

Mirta’s 2015 talk at SFI: “How interaction of mind and environment shapes social judgments.”

Digital Transformation documentary about Mirta and her work.

Michelle Girvan’s SFI Community Lecture on reservoir computing.

Podcast Theme Music by Mitch Mignano.

Follow us on social media:

Twitter • YouTube • Facebook • Instagram • LinkedIn

Episode Transcription

Michael: Mirta Galesic! It’s a pleasure to be interviewing you for Complexity.

Mirta: It's a pleasure to be here. It's a pleasure to be interviewed by you.

Michael: So this is a topic I feel is of extreme importance, this question of social learning and social decision-making. We're at a point now where reading your work is actually kind of frightening, because it becomes apparent very quickly that the way we come to decisions, both individually and collectively, seems sort of mismatched with the scale of modern life. And I'm really looking forward to getting into that with you. But first I want to take us back a little bit and invite you to talk about how you became a scientist, and what got you into this kind of research and these kinds of questions in the first place.

Mirta: I guess I was always curious as a child and always wanted to ask difficult questions. And I was pestering my parents, who were both scientists. They were chemists. So for them it was…they were what I call true scientists. They were able to do beautiful experiments with chemicals that changed colors, and when I would ask them something about chemistry, they knew all the answers. And they had theories, they had equations, they had models. And so I thought, oh, science is great. It gives us some certainty about life. It answers questions. So maybe I'd be a scientist as well. But of course, because my parents were chemists, I didn't want to be a chemist. Of course, I wanted to be something as different as possible. And I had this really cool high school teacher in psychology who brought us to Amsterdam for our high school trip.

And he was cool. And so I said, I'm going to study psychology. And psychology was really cool in Croatia at the time; you had to pass intelligence tests to get in, and it was really prestigious for some reason, I don't know why. And I wanted to study psychology because I wanted to help people. Somehow in the process I started to feel this kind of empathy, maybe because my brother was a little bit disabled, and so I felt that everybody can be given a chance if understood and respected as a person. And so I wanted to be a clinical psychologist. I wanted to help people with problems, especially young people with low self-esteem or drug addictions and so on.

But it turns out that my department of psychology was not clinical at all. It was completely experimental. It was quantitative methods in psychology; it was a lot of math and statistics. And then, as I was poor, I started to work in marketing research agencies, inputting data, analyzing data. And so I completely lost the clinical part. I never became a clinical psychologist. I also realized how difficult it is to actually help people. I really respect people who do that. But I just saw that maybe my path is a little bit different. Maybe when I retire I'll be a clinical psychologist again. But for now, I learned a lot about how to measure the various attitudes and behaviors people have, how to make models of cognitive and social processes, how to test them statistically. And so I became a scientific psychologist, let's say. During that time, while I was studying, the war started in Croatia. I'm from Croatia.

This is one of the six republics of Yugoslavia. And we all lived happily together for my first 16 years of life. And then suddenly all hell broke loose. And that was very interesting for me to see. The whole society changed in a matter of a couple of years from relatively secular, not nationalistic, inclusive…I always say, give us one more generation and we would probably all have been atheists following scientific principles. But somehow something happened, and within a year people started killing each other over national and religious orientation. Religion came back in a big way. Nationalism came back in a big way. Suddenly everyone was judged by whether they were Croat or Serb. Suddenly it was important whether your grandmother was Catholic or Orthodox. And people became very angry, and manipulation of people started on a large scale. Until the 90s I was like any other teenager; we had a lot of American influences. I would listen to everything from The Cure to U2 or whatever. And suddenly all the music stopped in the 1990s, and for a decade you just got nationalistic songs on the radio, and nationalistic speeches, and only one side of the whole Yugoslavian coin was presented to us in Croatia. And the same happened in the other republics. And so some people just never recovered. And this division is present until now. So now, what is it, almost 30 years later, people still kind of hate each other. There is still enormous nationalism. Religion is stronger than ever. And people are judged by whether they are one of us or one of them. And it's easy to dismiss everything that anybody has to say, and any corruption, anything, can be justified as long as the person is one of us rather than one of them.

And so that's super interesting to me. How does a society that was carefully built over 40 years as a secular, non-nationalistic society suddenly collapse? And now we are seeing some similar processes in some much stronger democracies. I'm really curious about how this all works, and whether I can find some of the certainty, some of the beauty of the quantitative models my parents had in chemistry, with molecules and crystals. Can I find something like that in human societies? Of course, when people first think about it, the immediate answer is no. Of course not. This is too complex. We cannot model it with a few simple equations. On the other hand, didn't we once think about everything around us in that way? And gradually we managed to figure out how physics works, and we sent a man to the Moon, and we figured out many things. And so I'm wondering whether the next frontier could be to figure out a little bit better how we function as a society, and what are maybe some simple rules that guide us, that contribute to these enormous transitions you see in societies today. So that's what I'm doing here. God, this was a long monologue.

Michael: No, it's good. Don't worry about it. When I spoke to Rajiv Sethi earlier, we talked about his work on stereotyping, and reading your research, his work has been coming up a lot for me as well. There's this core issue in understanding how something like the Serbo-Croatian conflict can erupt. Like when Francisco Varela left Chile in 1973, he said, "You could turn on the radio and one station would say it was raining outside and another station would say it was sunny outside."

And what is going on here? How is it that the United States seems to be torn between completely different realities? And something he said, and something that has come up again and again in your work, is the challenge to this traditional idea, the so-called Homo economicus model of the rational decision-maker acting on perfect information with a sort of unassailable self-interest: that in fact we are always acting on imperfect information, that we have these cognitive limits. And so our strategies for making decisions are not uniform. Everyone kind of settles on their own strategies based on the conditions of their life.

But we're all limited by the time and attention and mental resources that we're able to devote to a decision. So now that we're in the meat of this, I would love to start by talking about one of the papers you coauthored for the Association for Psychological Science on social sampling, and how we have these biases about our own individual social environments, and how that can lead to radically different perspectives on what we would think of as an objective external reality, with people seeing it in completely different ways. So could you talk a little bit about that piece?

Mirta: Actually, I will disagree with you here a little bit. What we are claiming is that people are actually pretty good, pretty adapted to their immediate social worlds. We believe, after ten years of research after this paper…

Michael: Seven.

Mirta: Oh yeah. After seven years of research on these papers, we believe that people actually have quite a good idea about their friends, family, acquaintances, the people they meet on an everyday basis and with whom they need to cooperate, learn from, or avoid, and that they're actually not as biased as traditional social psychology would like us to think.

And we see that because when we ask people about their friends, we see that this predicts societal trends quite well. So in one line of research, we asked a national probabilistic sample of people to tell us who their friends are going to vote for. We averaged those reports across the national sample and got a better prediction of election results than when we asked people about their own voting intentions. This would not have happened if people were biased in reporting on their friends; they must have given us information that is accurate and that goes beyond their own behavior in order for the election predictions to improve.

And by now we have seen that in five elections altogether: the US in 2016, France, the Netherlands, Sweden, and the US in 2018, and we hope to predict 2020 again. Things like that tell us that people are actually pretty good at understanding their social circles. The apparent biases show up when people are asked to judge people they don't know so well. So when I'm asked to tell you something about people in another state or another country, or people from another socioeconomic cluster which I don't know well, then I am likely to have some biases.

But these biases, we show, can be explained by what I know about my friends. So if you ask me something like that, I would really try to answer your question honestly. And to do that, I will try to recall from memory everything that I know about my social world. But if I'm surrounded by rich people, like here on the east side of Santa Fe, it could be very difficult to imagine the poverty people can live in elsewhere. And so even if I'm trying my best to recall the poorest person I know, I might never recall the kind of poverty that actually exists in the world. And when asked about the overall level of income in the US, I'm likely to overestimate it.

And similarly, people who are poor might have problems imagining the wealth of really rich people, and they will typically underestimate the wealth of the country. So okay, let me summarize this. This piece suggests that people are not that biased when it comes to judging their immediate friends. They have a lot of useful information about their friends, and it's pretty accurate. The biases show up when people are asked about other populations that they don't know so well, and those biases can be mostly explained by the structure of their own personal social networks. The more biased your social network is, the more biased your estimates will be about the general population. Does that make sense?
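[For illustration, here is a minimal sketch in Python of the social-sampling idea described above. It is written for these notes and is not the model from the paper; the network construction, circle size, income distribution, and poll numbers are all invented assumptions. Honest averaging over a homophilous social circle tracks local reality well, and even supports an election-style "social circle" poll, yet it systematically biases estimates of the broader population.]

    # Illustrative sketch only: not Galesic and colleagues' actual model.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 2_000
    income = rng.lognormal(mean=10.5, sigma=0.8, size=N)  # skewed, income-like

    # Homophily: friends are drawn from people of similar income rank,
    # a crude stand-in for a realistic friendship network (assumption).
    rank = np.argsort(np.argsort(income))
    circles = []
    for i in range(N):
        lo, hi = max(0, rank[i] - 100), min(N, rank[i] + 100)
        candidates = np.where((rank >= lo) & (rank < hi))[0]
        circles.append(rng.choice(candidates, size=12, replace=False))

    # Each person honestly averages what they see among their friends.
    estimates = np.array([income[c].mean() for c in circles])
    poor, rich = rank < N // 4, rank >= 3 * N // 4
    print(f"true mean income:           {income.mean():9.0f}")
    print(f"poorest quartile estimates: {estimates[poor].mean():9.0f}")  # too low
    print(f"richest quartile estimates: {estimates[rich].mean():9.0f}")  # too high

    # Election version: ask a random sample of respondents what fraction of
    # their friends support candidate A, then average those reports.
    support = rng.random(N) < 0.52                 # true support: 52% (invented)
    sample = rng.choice(N, size=300, replace=False)
    poll = np.mean([support[circles[i]].mean() for i in sample])
    print(f"true support 0.520, social-circle poll: {poll:.3f}")

[The same honest averaging produces both the accurate poll and the biased population estimates; what differs is only how each person's circle relates to the whole population.]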

Michael: Yeah, totally. So there's something I found really interesting about this social sampling, which is that, as you mentioned, if you happen to be worse off and most of the population is also worse off, as is the case with income, for example, then your social sample will reflect the general population more accurately than if you're better off in a situation where most of the population is worse off. These biases are not all created equal. It has to do with how your circle stands relative to the broader population.

Mirta: What we show is that these biases in judgments of the broader population can be explained by the structure of the social network, and not by cognitive deficits or motivational biases: some desire to be better than others, some idea that everybody's like me, or the notion that people are too stupid to understand how other people live. It's really determined by the content of one's memory, which comes from one's social circle.

Michael: I may be rushing out ahead here, from scientific insight to policy advisement. But it sounds like this gives us a really clear pointer on how to correct for this handicap: that perhaps when it comes time to make decisions on behalf of everyone, we should really be listening to whoever the oppressed are in that population. We should be really paying attention, for example, to laborers and students and people who ordinarily, historically, have not been given a lot of political voice.

Mirta: In other words, what we need to do is broaden our social networks, to include in them the people who are typically not there. So the policymakers who are making these important decisions should know as many different people as possible. And we saw in related studies that people who have the most diverse social circles are also best able to predict societal trends and to understand how the overall population lives and what people want.

Michael: In this paper you surveyed the Dutch, and I found it really interesting that this isn't just a matter of personal or household wealth and income. You looked at the number of conflicts that people are having with their romantic partners. You looked at the number of friends people have. And it starts to get into this other thing, like the friendship paradox, and how again we tend to mis-estimate the life conditions of other people. I was surprised to realize just how wrong people were in their estimates of how frequently other people were fighting with their spouses, for example.

Mirta: We deliberately included different characteristics, from income and education, which are easier to observe, to less-observable characteristics that people often don't talk about, like conflicts with a partner, but also the frequency of depression, of pain, and these kinds of things. And yes, the less something is visible and the less it's talked about, the more people will mis-estimate it. However, we still see that there is some signal there. It's not that people are completely oblivious, or that they are only projecting their own frequency of conflict with their own partner.

People have some idea; people are actually smarter than we think. At least some scientists say that our big brains developed because of our need to collaborate with others, and that social cognition is perhaps the main driver of our intelligence. So it's not unusual for me to see that people are actually good at understanding who they are surrounded by, because they need to know a lot about other people to choose the best cooperation partners, to choose whom to learn from and whom to avoid.

Michael: You mentioned in the discussion of this paper this exact thing: the attunement to one's immediate social network can be considered adaptive. There's this trade-off between the computational cost of understanding the big picture and the ease and efficiency of just being able to take this local sample. So how do you think about this in terms of the broader questions about the evolution of human cognitive bias? Stuff like Dunning-Kruger or the Dunbar number, these cases where we have…I mean, those are pretty different topics, but when I think about the broader questions of complex systems science, the evolution of intelligence and this kind of thing, it seems as though there's a certain laziness to evolution. We talk about free-energy minimization and that kind of thing. And I'm curious just to hear you expound on that.

Mirta: This is off-topic, but as a woman with periods, I definitely understand that there are some boundaries that evolution cannot cross. There is still this, "Can't this be better?" [Laughs.] My impression is that a lot of the story about human cognitive biases comes from imperfect measurement and from an inadequate understanding of human cognition. I think that we're actually much less biased than classical cognitive psychology and social psychology tell us. If you Google cognitive bias, you'll get hundreds and hundreds of biases. Careers were made on these biases, and they're easy to show in certain laboratory experiments under certain constrained conditions.

However, once you take into account the complex situations in which we need to operate…once you take into account not only the cognitive process, but also the social network in which we operate, and the sources of information that our cognition operates on, then you see that many of these biases are actually not there. You can reverse them, you can show the opposite, you can completely erase them. For example, in this work on social networks and social judgments, we see that we can have apparently opposite biases. Depending on the structure of social networks, sometimes we get people to behave as if they're enhancing their own position in society, sometimes as if they're diminishing it.

Sometimes we get biases that look like people think that everybody's like them, and sometimes as if people think that nobody's like them. And then we show that there is nothing motivated in the mind. It's the same simple cognitive algorithm, one that samples the environment and maybe forgets some things, interacting with the network structure to produce these apparent biases. So I do believe that biases like Dunning-Kruger in particular are actually a product of imperfect measurement, and that we social scientists will learn how to better measure human behavior in many different real-world contexts.

And as we learn to build better models, models that acknowledge the complex social-cognitive system in which we operate, many of these biases will actually go away, and we'll be able to marvel, as biologists and anthropologists do, at the beauty of human cognition, at everything we can do and how well-adapted we actually are to our environments. And by doing that, we will learn that maybe there's some baseline level of bias that will still remain, for example the Dunbar number, which seems like a logical consequence of the way we lived through millennia and how our brains adapted. But many of these biases will go away. And once we understand ourselves better, we will also be able to deal with the remaining biases better, to focus on what's really important, and not just completely dismiss any possibility that we can operate in some adaptive way in our world.

Michael: Yeah. Linked to that, you had a paper that you coauthored with Daniel Barkoczi in Nature Communications, where you talked about social learning strategies and network structure, which was the last thing I read before this. And I think it's in some respects the most nuanced and intricate, in terms of its findings, of all the papers here. This thing about the heuristics that we use to make these decisions. I know that you left a kind of open question towards the end of this paper, about how different social learning strategies may have evolved to suit different network structures. But I'd like to get to that through this.

And ask these questions. Because it sounds to me like it's not as simple as just suggesting that if we were exposed to enough of the other, whatever that group is, we would have a better outcome. Because in some of these cases, larger sample numbers actually decrease the performance of social learning in that network. So could you talk a little bit about this particular research and the findings that you came to? Yeah, really, that was the whole thing. I could have just started with that. [Laughs.]

Mirta: First I should say that the paper was a collaboration. Daniel Barkoczi was my PhD student at the Max Planck Institute, who is awesome and great, and all the good ideas in this paper are his. You're completely right. Even if we had the most diverse network of friends and the largest group of friends, we still might fail to understand the broader social environment if we are using inadequate rules to make decisions. If we are very focused on the advice of a particular friend, or if we're following a particular leader…even in today's world, where we have access to all kinds of information and can be everyone's friend and learn from anyone, we still tend to use rules to integrate social information that exclude large parts of the social network.

We follow our leaders. We follow people we trust. We follow our spouses. And so we don't profit from all the diversity around us. And that's what the paper is showing, in a way, maybe in a more positive way; it's basically showing the interaction of the network structure and the decision rule that people are using. So, when people are following one member of their network…in this paper we investigated the rule "follow the best," that is, follow the one who currently has the best solution. That can be good when problems are simple, when there is a demonstrably best solution and all we need to do is find the best way to, I don't know, make a cake or go from point A to B. But when the problem is more complex, then following the one who has the currently best solution can backfire, because the whole group, the whole society, can get stuck in what we call a local optimum: a solution that seems all right, but that in the long run could have been much improved if we had been more open to other ideas.

So there is an interaction between the rule that we are using to make decisions, the network, like how many diverse opinions are in our network, and the problem we are solving. So more generally this paper talks about the issue of diversity, and I like to question everything in my work, including concepts that we all hold dear and love. Like diversity: everybody loves diversity. We think it should always be encouraged. And I'm talking about diversity of opinions, not, of course, about diversity of visible socio-demographic characteristics, which I always advocate for and which is certainly important. But the diversity of opinion in a group is sometimes good, sometimes bad.

Again, it depends on the task. Sometimes the task is so simple that we should just follow the one who seems to know the area best, who seems to have the best solution, and we will all be better off. Like simple mathematical equations, or finding the shortest way from A to B. But most things in life are more complex than that. There are many ways to bake a cake, or to make a new computer, or to write a scientific paper, or to arrange a political system. So zeroing in on the first solution that seems reasonable will often lead to a suboptimal overall solution for the society. Diversity is important in many real-life contexts in which we need to solve complex tasks, where there are many possible solutions and many possible ways to go. Then it's really important to surround ourselves with diverse people, and to use decision rules that enable us to open up and explore many different options. However, there are some situations where the solution is known, or easy to find, and there is no need to hear many different opinions about a simple thing like how much is two plus two, or what is the shortest distance between A and B. You should just follow the one who seems to have the best solution, and oftentimes we'll be better off than if we had discussed it for a long time.
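[To make the interaction of rule, network, and task concrete, here is a minimal sketch, again written for these notes and only loosely inspired by the Barkoczi and Galesic model: agents search a rugged NK-style landscape and learn socially either by copying the single best member or by adopting the majority answer in a larger sample. N, K, group size, and the update rule are all illustrative assumptions, not the paper's settings.]

    # Illustrative sketch only: not the published model or its parameters.
    import random

    def nk_landscape(N=10, K=5, seed=1):
        """A rugged NK-style fitness function over length-N bitstrings."""
        rng = random.Random(seed)
        tables = [{} for _ in range(N)]
        deps = [rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
        def fitness(bits):
            total = 0.0
            for i in range(N):
                key = (bits[i],) + tuple(bits[j] for j in deps[i])
                if key not in tables[i]:
                    tables[i][key] = rng.random()  # memoized random contribution
                total += tables[i][key]
            return total / N
        return fitness

    def simulate(rule, n_agents=50, steps=100, N=10, seed=2):
        rng = random.Random(seed)
        fit = nk_landscape(N=N, seed=seed)
        pop = [tuple(rng.randint(0, 1) for _ in range(N)) for _ in range(n_agents)]
        for _ in range(steps):
            scores = [fit(s) for s in pop]
            best = pop[max(range(n_agents), key=scores.__getitem__)]
            new_pop = []
            for s in pop:
                if rule == "follow_best":      # copy the current best member
                    model = best
                else:                          # "integrate": majority bit in a sample of 10
                    sample = rng.sample(pop, 10)
                    model = tuple(int(sum(b[k] for b in sample) > 5) for k in range(N))
                tweak = list(s)
                j = rng.randrange(N)
                tweak[j] = 1 - tweak[j]        # individual trial-and-error step
                new_pop.append(max([s, tuple(tweak), model], key=fit))
            pop = new_pop
        return max(fit(s) for s in pop)

    for rule in ("follow_best", "integrate"):
        print(f"{rule:12s} best solution found: {simulate(rule):.3f}")

[In runs of toys like this, follow-the-best tends to collapse the group onto one candidate quickly, the premature convergence on a local optimum described above; how much that hurts depends on how rugged the task is, which is the paper's point.]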

In these cases, diversity is actually not that good. Which brings me to a controversial issue, if you want, which is: once a society comes to a solution to a longstanding problem, such as "Is there a God?"…my Catholic family in Croatia is now giving up on me…or "Is anthropogenic climate change happening?", where there seems to be abundant evidence that it is happening, somehow in a society that values diversity we are still inviting people to have opinions about it. And I wonder, and this is a controversial issue, whether at some point, when we are close to a particular solution, some mechanism for reducing diversity might actually be better for society. So I think there is a delicate balance between more diversity and less diversity, depending on the complexity of the problem and on how close we are to the solution.

Michael: Yeah. So much of this…I'm reminded of Jessica Flack's work on collective computation, and how she talks about the difference between an actual ground truth, like "There are 972 beans in that jar," versus an effective ground truth, like "We're all going to drive on the right side of the street because that's what we all agree we're going to do." And something that's been coming up a lot is this distinction between the empirical and the social, in terms of truth, and how sometimes it's not easy to tell which is which. Not to dart around cavalierly through your papers, but you have this other one with Barkoczi and also Katsikopoulos on how small crowds can outperform large crowds. And it seems like the linchpin there, again, is the complexity of the problem. The way that you explain this is that if the problem is simple enough for an expert to get it, then at some point…

Mirta: A larger group is better.

Michael: Yeah. A larger group is going to converge on an average. But for really, really complex problems, adding experts just adds noise, or adds an energetic overhead, a cost that isn't actually improving the solution. And so the question of how complex the situation we're actually trying to solve for is seems to be the real killer question here.

Mirta: Yeah, this is a really crucial question. If we knew in advance whether a problem would be simple or complex, easy or difficult, then we would be able to optimize our networks and our decision rules and always behave in the best way. Unfortunately, we don't know; there is uncertainty. Oftentimes we don't know whether the next election will be easy or difficult to predict. Maybe we know for this next one, but we don't know whether the next, say, job hire will be an easy or a difficult decision.

So oftentimes we need some way of structuring our networks and our decision processes that will work in many different situations, in many different circumstances. And I think what is often called simplistic decision-making, simple heuristics, biased, unsophisticated cognition, is actually an attempt of the human cognitive system to find a way to be adaptive in many different situations without knowing in advance what will happen. You need to find the kind of lowest common denominator that works in many situations.

And so this paper with Katsikopoulos and Barkoczi shows that when you don't know whether the next task is going to be simple or difficult, it is actually better to make decisions in relatively small groups, rather than follow the kind of "wisdom of crowds" approach where you want as large a group as possible, or to follow a single leader. When you don't know, it's best to have a small group: the drawbacks of the large group when the problem is difficult, or of the idiotic leader, somehow cancel out, and you get the best performance across a range of situations.

Michael: That sort of begs the other question: who decides who is on that jury, or who decides who's going to be in that elite panel of tastemakers?

Mirta: Well, there is that. This paper is kind of neat because it shows that you can just select randomly. So let's say that you somehow selected your group of experts, whoever they are. You can profit by making the group a bit smaller. [Laughs.] Just fire some of them randomly and you'll be better off in most situations.

Michael: So is Congress too big?

Mirta: Actually, we're looking at this. Most of the decisions in Congress are actually made in smaller committees, which are, you know, 20 to 40 people, sometimes smaller groups, and not the whole Congress. Only rarely, for a minority of decisions, is something put in front of the whole Congress.

Michael: And then they're just following; there's a second heuristic of, like, well, they agree, so we're just going to follow that.

Mirta: That's not my area, but I would guess, and some work at SFI would suggest, that it probably happens in stages. You decide something in a smaller group, and then representatives of these groups meet and make decisions among themselves, and so on. So it's scaled. In essence, you're dealing with relatively small groups at every stage.

Michael: The thing I'm racking my mind trying to figure out is what the advice is for planetary governance in this. Because the real pain point is that we're all exposed to a global set of problems. We're all aware that we are in some way implicated in the burning of the Amazon or the overfishing of the Pacific. And obviously there are small things that people can do: they can opt out of the consumer lifestyle, etc. But when it comes to these massive global issues, the fact is we don't have random panels deciding how we're going to handle this, and we don't even have, in most cases, governance at all the scales at which governance is required. I wonder what it would look like…I don't know what the right word for this would be, some sort of Planetary Ecumenical Federation of Whatever. But then what does that look like? Are we just going to select people at random from all over the world? I mean, clearly stakeholders ought to be involved.

Mirta: So, two things. One is that there are different stages of the decision-making process. One is collecting information, and there we can profit from large groups: a lot of diverse information, a lot of diverse experiences. At some point the second stage comes, when a group needs to make a decision whether to go left or right, or to go to the Moon again or to Mars. In these cases we show that a smaller group can be better, and this assumes that the smaller group has collected all possible information about the problem. They are as expert as they can be. So it's not that they are deciding randomly or that they're completely ignorant. If they did not collect any information and they're actually more likely to be wrong than right, then it's actually better either to choose a dictator, just have one person, or to have as large a group as possible.

But if you have a relatively expert group of people who, across many different decisions, are likely to make better-than-chance decisions at least, then a smaller group will be better. So maybe to summarize: there are two stages of the decision process. Collecting information, where we want to gather as much information as possible and communicate with as many diverse people as possible. And making a decision: once we have collected enough information that we are pretty confident we make better-than-chance decisions, at least in the long run, then it pays off statistically, as we show in this paper, to have smaller groups of decision-makers, basically randomly selected from everyone who has sufficient expertise about the topic.

Michael: So voting is kind of out in this.

Mirta: Essentially, that's what it shows. So if you have 200 people who have studied everything about the world, and they're now fairly confident that they will make a little better than chance decisions, and they need to make 20 decisions about the world, it is actually better not to have all 200 vote on each decision, but to select smaller groups of them to vote on each one. Across the 20 decisions, they will achieve better performance.
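[Here is a minimal sketch of that statistical logic, a generic Condorcet-style toy written for these notes rather than the model from the paper: majority votes in committees of different sizes, where each decision is, unpredictably, either easy (members right 65% of the time) or misleading (right only 45% of the time). All the numbers are invented.]

    # Illustrative sketch only: not the Galesic, Barkoczi & Katsikopoulos model.
    import random

    rng = random.Random(0)
    DECISIONS = 20_000  # many binary decisions, each with one correct answer

    def majority_correct(k, p):
        """One committee of size k, each member right with probability p."""
        votes = sum(rng.random() < p for _ in range(k))
        return votes > k / 2  # tied committees count as wrong here

    for k in (1, 5, 15, 51, 200):
        correct = 0
        for _ in range(DECISIONS):
            # Uncertain difficulty: 70% easy decisions, 30% crowd-misleading ones.
            p = 0.65 if rng.random() < 0.7 else 0.45
            correct += majority_correct(k, p)
        print(f"committee of {k:3d}: {correct / DECISIONS:.3f} correct")

[With these invented numbers, the lone decision-maker is dragged down by the hard tasks, and the full group of 200 confidently amplifies its errors on them, so an intermediate committee size does best; where that sweet spot falls depends entirely on the assumed accuracies.]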

Michael: It's funny, I'm thinking of Michelle Girvan's community lecture on reservoir computing, and the notion that you can improve a machine learning system…I think I'm drawing this analogy correctly: machine learning has a habit of immediately finding the local optimum, settling there, and missing the bigger picture. And adding a chaotic system to the machine learning algorithm, just a camera on a bucket of water, is enough to add enough chaos into the process. I mean, that sounds a lot like the lottery selection.

Mirta: This is exactly what is happening there. The phenomenon we describe in this paper definitely has to do with the intentional introduction of noise by selecting smaller groups of people.

Michael: I want to go back to the Nature Communications paper quickly, because you get a little more granular there in terms of how people make decisions, and how in different networks people can emphasize exploration or exploitation. That at the individual level, different types of situations favor copying the best solution, and other situations favor spending more time to explore and try things out and see for yourself. And I found it really interesting how these different strategies, these different thresholds for exploration and exploitation, compare within simple environments and complex environments, within very inefficient networks and efficient networks. I would love to just unpack that a little bit.

Mirta: Actually, this paper was itself an exploration of one conundrum in the current literature. Some researchers find that people solve complex problems better if they're well connected to each other, if they work in well-connected communities where they constantly communicate. Other researchers find the opposite: that people actually solve complex problems better in networks that are not well connected, where people communicate rarely. And both kinds of findings are published in prestigious journals by prestigious authors. So we were wondering, what is the catch? How can this be?

And we think that the answer is in the way we integrate information from the group. If you're in a well-connected network but you're following the one or two people who currently seem the best, you're essentially not using all the information in the network, and so it is as if you were in a less-connected network. If you're in a less-connected network but you take care to listen to everyone and integrate information from everyone, you can actually receive more information than someone in a well-connected network who is only listening to one person.

And so by introducing these decision rules, this cognitive part, into this traditionally more machine-learning, sociological, computational problem, we were able to show that you can get both effects. You can be in a well-connected network, but if you listen to only one or two people, you can still be quite good on complex tasks. But if you're in a well-connected network and you also listen to everyone and integrate information from everyone, you can get stuck, maybe like we are today in this world. We are zooming from one solution to the other; everything is changing very fast, and you can get stuck in something that seems like a good solution but in the long run is not. So the studies that found the contradictory results so far had only two elements: task complexity and social network structure. And for complex tasks they found that either the more connected or the less connected network is better; they found it both ways.

But now we introduce the third element: human cognition, the way the information is integrated. And we see that if people use a decision rule that integrates all the information from a less-connected network, they solve problems as if they were in a well-connected network, and vice versa: people who are in a well-connected network but are not using all of the information from it are in a situation similar to people who are actually in a less-connected network. And basically, by seeing this whole complex social system together, the mind, the network, the task, we can explain these apparent contradictions in the literature.

Michael: So that reminds me of two things. One is the recent research by James Evans and his collaborators on innovation in scientific research, and how we have a problem right now in science, which is that the institutions are too densely connected. They're sharing researchers, they're sharing funding sources. It's practically an argument for places like SFI, where there's a little bit of a monastic gap, a little bit of isolation to encourage more divergent thinking. There's also, in this ongoing landscape metaphor, talking about how different strategies work depending on the efficient or inefficient network…it reminds me of leaving the city, where you're surrounded by people and the strategy is to kind of ignore most everybody, and going out into the country, where everyone may have a local network bias, where they don't understand the stranger in the way you might if you had a greater rate of exposure to a diverse population. But the other thing is the neighborliness of rural communities, who are like, "Oh! A visitor." And they're eager to hear your stories, and people spend longer leaning over each other's fences, catching up with each other. So it seems like that's a really grounded example of how these two strategies are expressed in these environments.

Mirta: Yeah, I think you're exactly right. And I think in scientific work, oftentimes teams will have a Slack or some way of frequently communicating, frequently talking to each other about every little detail of scientific work. And I don't find it so productive, because we all have many ideas all the time. If I share every possible idea with you and you with me, we will often get distracted, and we might think that something is a good idea when, if we had just put in a little more effort and exploration, we would see that it's not the best. And then there is something else: I personally find that my best ideas come when I'm alone for long periods of time. Not completely isolated…

So I need to have contact with others. But this individual exploration, combined with occasional social exchange and testing ideas across the fence with a neighbor, I think is very important for deeper scientific work. The easier scientific problems, like maybe some everyday lab stuff, or problems with the statistical analysis of some data, will certainly be solved faster if we have frequent contact, if we work in teams where we frequently communicate and work on things together. But more complex, more difficult problems, I think, are best solved when people disperse, spend some time on their own, and occasionally come together to inform each other about their progress. And then go from there.

Michael: Now that you talk about it in this way, this actually reminds me: you end this paper talking about the combinatorial nature of innovation. And it sounds to me like what you're talking about is the evolutionary incentive for sexual reproduction. That it makes sense, once you've reached a certain level of organismal complexity, to have "the boys and the girls," and then to have periodic mixing.

Mirta: Oh, that's interesting.

Michael: That you're going to get better solutions for moving complex organisms through a complex landscape if you partition strategy like this and then allow it to remix from time to time.

Mirta: It sounds very reasonable.

Michael: But you bring that up in this paper as a lead-in to a question about research which, at least at the time, hadn't been done. I'm curious, if you're willing to speculate: you mentioned that there's been very little attention devoted to the coevolution of innovation and the simultaneous diffusion of innovations. And this gets back to something you brought up earlier in this conversation, about why it is that good ideas sometimes don't spread. Why it is that we see, as you put it, interventions aimed at changing the social environment while disregarding strategies for social learning sometimes fail to produce the desired effects.

And I'm used to thinking about this kind of stuff in terms of inefficient networks. This paper has come up in the Santa Fe Institute Facebook group.

Mirta: How?

Michael: Well, there was, I forget who it was…you can probably remind me…work done on the spread of disinformation, which found that highly connected networks sort of prevented the spread of disinformation, but also prevented innovation.

And so it's this double-edged sword, where…that's sort of a separate thing; it gets back to the noise issue and the importance of risk in this process. But I'm thinking about this more broadly. Okay, so we took a poll in the Facebook group recently: I asked people what they thought the most interesting questions that could be asked by complex systems science are. And one of the more popular questions was, "If we even have a good idea, how do we get people to adopt it?" This issue of how we work with these diverse strategies and these diverse network structures. Because it's becoming increasingly clear…like the Al Gore crew, these people are stepping back from the climate science itself to ask this other question about the rhetorical failures of the climate movement, and why it is that people's minds aren't getting changed.

Mirta: Here's a chance to introduce some studies that I'm currently working on with my collaborators Tamara van der Does, Gizem Bacaksizlar, and Joshua Garland. It is all about how ideas spread in different social network structures, and how network structures themselves change as a consequence of ideas and of the threats people perceive themselves to be experiencing. With Tamara van der Does, we are working a lot on how and why people decide whether or not to accept a new scientific idea, a new scientific belief. And we are looking at beliefs about vaccination and genetically modified food and climate change.

And all of them are tightly related both to the semantic networks in our heads…so, the different values and other beliefs that we have around these issues…and to our social networks. Traditionally, science communication focused on providing facts to people. They would isolate the particular belief that people should now have, like that they should vaccinate their children or that anthropogenic climate change is happening. And then they would provide people facts only about that issue; ideally, we as scientists like to strip things of any context. So we would present the facts in a well-designed way, as a table or as a graph, and we would expect people to simply accept them. And I think, I mean, we are discovering this only now, that whether people accept scientific beliefs depends on two types of networks.

One is our social network: what do we think other people in our social network believe about this issue? We know that beliefs about vaccination tend to be quite homogeneous within different social circles. Parents are often surrounded by parents who have similar opinions about vaccination, I'd say. And vaccinating or not vaccinating your children, depending on your social circle, can be a reason for ostracism. People could tell you that you're a bad parent, harass you in some way, or reduce your opportunities to cooperate, to get a babysitter, to have friends, and so on. So being wrong about something might be less costly than losing friends over the issue. Especially when it comes to issues that, until relatively recently, did not have huge consequences for daily life, like climate change: it's fine to be wrong about climate change as long as you can keep your friends. So that is the social network aspect. But then there's also another kind of network, the semantic network, and these are all the different values you hold that surround this issue.

So climate change, famously, and after decades of political manipulation, is tightly related to political ideology. Democrats believe one thing, Republicans believe another, whereas the issue of climate change should not be related to any political ideology. It's a natural and human-caused phenomenon that has nothing to do with political ideology. But in our heads, it is related. You cannot be a good Republican if you believe in anthropogenic climate change, and you cannot be a good Democrat if you don't. In a similar way, beliefs about vaccination and about GM food are related to our moral values: fairness, whether something is natural, whether something is in line with our tradition, whether somebody is profiting off people without much power, whether we have the freedom to decide, and so on. And so changing beliefs about vaccination, for example, or climate change, might actually require first changing other beliefs around that issue, so that people can open up and accept the scientific fact. Or the fact needs to be packaged in such a way that it somehow resonates with other values people have.
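[One way to picture the two networks she describes, as a toy calculation written for these notes rather than the actual model from the work with Tamara van der Does: the pull toward accepting a belief comes partly from what your social circle holds and partly from how the belief fits your other values. Every weight and number below is invented.]

    # Illustrative toy only: not the van der Does & Galesic belief-network model.
    def acceptance_pressure(social_circle, semantic_links, w_social=1.0, w_semantic=1.0):
        """Net pull toward adopting a belief (positive) or rejecting it (negative).

        social_circle:  each friend's stance on the belief, +1 (holds it) or -1
        semantic_links: (value_strength, compatibility) pairs; compatibility is
                        +1 if the belief supports that value, -1 if it conflicts
        """
        social = sum(social_circle) / len(social_circle)
        semantic = sum(s * c for s, c in semantic_links)
        return w_social * social + w_semantic * semantic

    # A parent whose friends mostly reject vaccination, but whose values
    # (trust in medicine 0.9, naturalness concerns 0.4) mostly favor it:
    pull = acceptance_pressure([-1, -1, -1, +1], [(0.9, +1), (0.4, -1)])
    print(f"net pull: {pull:+.2f}")  # 0.00 here: the facts alone don't tip it

[In this toy, presenting more facts changes nothing unless it either shifts the semantic links, the "repackaging" she describes, or shifts the social circle itself.]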

Michael: That reminds me of George Lakoff's work on this. Commenting on the Clinton and Trump opposition in 2016, he talks about how the Democrats appealed to fact and the Republicans appealed to feeling, and that there is a deeper structure here. This may be a tangent, but it speaks to this question about semantic networks and their role in decision-making, and the way that things as subtle as a metaphorical entailment, the connotations of a particular phrase, can shift all of this. And the connotations themselves change.

Mirta: Definitely, because as the way something is said changes, people think of different things; different parts of the semantic network are activated. Words like "global warming" were often rejected by more Republican-leaning people as something that's not happening, so "climate change" became a way to talk about this. Anyhow, yes, definitely, different words will evoke different other beliefs, which will then make it more or less likely that someone reconsiders an issue.

Michael: Do you worry about this issue of ideology and the social value of agreeing with the people around you, of conforming to a collective decision? This is maybe a feisty question. It's a known problem, politically motivated innumeracy: we know that people who are exceedingly smart are actually better at lying to themselves in order to conform, that brilliant mathematicians are willing to overlook elementary mathematics in order to fit in. So how do you see this as a problem for the scientific community?

Mirta: First, I think that this kind of behavior is actually quite adaptive. Oftentimes it's better to be able to stick with your group than to be correct about some remote truth. It's just a fact of life. The problem for scientists is to reconcile this with our legitimate need to study facts and relationships in nature without personal bias. We must strip our science of our personal context, of our social context, of what our friends think; this is how we reach good conclusions. But at the same time, this is not how people operate. That's why science is so special. It needs to be taught over years in school; it does not come naturally. And so we need to find a way to communicate our science while respecting that other people are also thinking about what their friends will think, and how it all squares with their need to believe in things like God, or a leader, or the sanctity of nature, or whatever moral values are important to them. And I think this is going to be the next challenge for us: to learn how to communicate that way. Advertisers and some politicians are really, really good at it. They latch onto some value that people have and associate the fact they want people to believe with that value. And it works pretty well, from using people's preference for young, beautiful faces to advertise goods, to using people's fear of others to promote certain political ideas. It works. And maybe rather than shunning it, and thinking that we should somehow cure people of it, at least for now it would be good to better understand human nature and to try to find ways to present scientific facts in a way that respects this.

Michael: So, knowing what you know about decision-making and how you as a human being come to your own conclusions about the world, what are your self-assigned handicaps, or modifications to your process, that are keeping you from taking a Darth Vader turn and becoming a super politician, a rhetorical, extraordinarily viral communicator of inadequately examined conclusions?

Mirta: Sometimes I have these thoughts: oh, maybe I should just use all this and, you know, manipulate people. And yeah, I just think it's immoral. It's not in line with my personal moral values. Maybe because of my various experiences, I do believe that people should be given a chance, that they should be allowed to develop to the best of their abilities, and that manipulating them for my personal gain is just not something that is going to lead to that goal. In my moral value system, finding a way to allow people to grow and to develop to the best of their personal capacity is something that makes me happy. But if the NSF stops funding me… [Laughs.]

Michael: You're going to be an ad boss here before too long. [Laughs.]

Mirta: There is another thing. As a psychologist you kind of have to like humanity, and I think that people are interesting. I've had conversations with colleagues who tell me: well, you're studying all these different ways in which people can be manipulated; that gives you a powerful weapon. What if it falls into the wrong hands? And this is, I think, a very important question. I try to reflect on it often. But my answer is: I'm a scientist. My goal is to understand how something works and to share this knowledge with as large a part of humanity as possible, so that we can collectively know more about ourselves. And I believe that if the knowledge about our possibilities and failings is public, if everybody knows it, then it will be less likely that someone can exploit it on a large scale.

If it's something that's hidden, and if you don't know much about it, then people like certain politicians or certain companies can exploit it without people noticing that it's happening. But once we know to what extent, and through exactly what mechanisms, our social networks influence us, and our network of beliefs influences what we are willing to believe next, then we can both avoid manipulation better and also help ourselves grow: find ways to surround ourselves with the people and ideas that will best help us grow in the direction we really want to go. It sounds very esoteric, but that's approximately it.

Michael: Let's bring this home with something a little less esoteric. I love the esoteric, but…right? Something grounded, practical, and broadly applicable. We've spent most of this conversation talking about big problems, big issues, problems that really challenge our ability to even gather adequate information at that scale. But most people are living their lives making decisions on the basis of these local networks, and those decisions affect more or less just the people around them. What do you see as the really crucial takeaways from your research about how to actually engage in community decision-making processes at the small scale, within families, within neighborhoods? What is your advice to community builders in homesteads and villages? I feel there's a lot of movement of people into these smaller and more intentional organizations. What's the way to navigate that with grace in this world?

Mirta: I mean, it's important to know some regularities of human social behavior, and especially to understand the interaction of a few elements that are present in most of these human social situations: who communicates with whom, how decisions are made (by decree or by majority), and what task we are solving. And just by looking at that (and my work is full of simple models of these things), one can actually have a pretty good idea, or make some good guesses, about how to organize a community to face different problems. Even in families, some decisions are made by discussion and by taking a majority vote, and others one side just decides as they think best. So somehow people intuitively know these things. And in communities, too, people sometimes self-organize in ways that are best for different decisions.

But sometimes, because of tradition, or the authority of someone, or some religious belief, communities fall prey to following a certain way of decision-making too much, maybe following a local leader or a certain set of rules, rather than being more flexible and adapting to different tasks and purposes. So what seems like a complex social situation might actually be an interaction of a few simple things: the network structure, the decision rule, and the task structure. And knowing that can help us organize society. There is another thing I would like to mention, and that's having some idea of how people, maybe because of what we learned throughout our evolutionary history, react to certain things. For example, how we react to the threat of the unknown, or of another group. That's something I'm starting on in my work with Gizem Bacaksizlar. We are looking at how leaders emerge in discussion groups that feel more or less under threat. We are looking, for example, at right-wing and left-wing groups communicating before and after the 2016 election. Before the election, when everybody thought Hillary Clinton would win, the right-wing groups were feeling more under threat and under pressure; they were unsure about their future. After the election, it was the other way around.

Now the left-wing groups are feeling more threatened and kind of bewildered. And we see that in the discussions held before and after the election: clearer leaders emerge in the threatened groups. The inequality of influence is larger before the election for the right-wing groups, which felt threatened, and after the election for the left-wing groups. So it seems, and this is in line with some other work we are doing and with some older psychology findings, that when groups are under threat, they tend to restructure. And this does have some purpose, because a unified group will in most cases be better at defeating an enemy. A group that deliberates a lot, has long meetings, and never agrees will probably be less effective at defeating another, enemy group than a group that unifies behind maybe not the best, but the currently available, leader and just does something together. And so it seems that maybe such things still happen, and this has implications for the problems the group is solving. If a group feels under threat and tends to unify behind a common leader, it could be better at solving a simple problem, but it will be less good at solving a complex one. So having some sense of how a group will change under threat, when people are afraid, and maybe when they are experiencing other emotions, is also something that I think is important and should be considered by these community organizers.
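[Editor's note: To make "inequality of influence" concrete, here is a minimal sketch, an illustration rather than Galesic and Bacaksizlar's actual analysis pipeline. It uses each member's message count as a crude proxy for influence and summarizes its concentration with a Gini coefficient; real measures would also weigh replies, mentions, and who changes whose mind.]

```python
from collections import Counter

def gini(values):
    """Gini coefficient of non-negative numbers.
    0.0 means influence is evenly spread; values near 1.0 mean
    a few dominant voices (a more leader-led group)."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))  # rank-weighted sum
    return (2 * cum) / (n * total) - (n + 1) / n

def influence_inequality(messages):
    """messages: list of (author, text) pairs from one discussion group."""
    counts = Counter(author for author, _ in messages)
    return gini(list(counts.values()))

# Toy comparison: a group dominated by one voice vs. an egalitarian one.
threatened = [("leader", "...")] * 50 + [("a", "...")] * 5 + [("b", "...")] * 5
relaxed = [(user, "...") for user in "abcdef" for _ in range(10)]
print(influence_inequality(threatened))  # 0.5  (influence concentrated)
print(influence_inequality(relaxed))     # 0.0  (influence evenly spread)
```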

Even the best-organized, most diverse group with wonderful communication channels, where everybody cares about each other's feelings…once that group feels under threat, it could evolve into something else: a group of scared individuals following a leader, not taking into account all the available information, and being less good at solving a complex issue. And preparing for this might actually help in many situations: allowing people to differentiate what is actually a threat from what is just a perceived threat, and allowing everyone the freedom to express themselves and to still feel like a society. So those are the things.
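[Editor's note: The trade-off Galesic describes, that unifying behind a single leader helps a group act quickly but hurts it on hard problems, shows up even in a minimal simulation. This is an illustrative sketch under strong assumptions (independent, equally skilled agents, no network structure), not her actual models.]

```python
import random

def group_accuracy(n_agents, skill, rule, trials=10_000, seed=0):
    """Estimate how often a group picks the correct one of two options.
    skill: probability each agent's private signal is correct
           (lower skill stands in for a harder, more "complex" task).
    rule:  'majority' (everyone votes) or 'leader' (one agent decides)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        signals = [rng.random() < skill for _ in range(n_agents)]
        if rule == "majority":
            decision = sum(signals) > n_agents / 2
        else:  # the group unifies behind agent 0, ignoring everyone else
            decision = signals[0]
        correct += decision
    return correct / trials

# Hard task: individuals are barely better than chance.
print(group_accuracy(25, skill=0.55, rule="majority"))  # ~0.69: votes aggregate
print(group_accuracy(25, skill=0.55, rule="leader"))    # ~0.55: leader's odds only
```

[The majority rule pools 25 weak signals into a much better group decision (the Condorcet jury theorem effect), while the leader rule throws that information away. What the simulation leaves out is the leader's real advantage: speed and unity of action under threat.]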

Michael: We've probably jumped the shark here at this point. But this is such an interesting thing that you're talking about, because what it sounds like to me is that there's a meta-level problem here: the more complex the world appears, and the more numerous the problems we identify, the more likely we are to feel under threat, and the more likely we are to adopt strategies that are actually maladaptive.

Mirta: Exactly.  That’s exactly it.

Michael: So like right now, the climate issue is creating this situation where people are sort of electing charismatic autocrats and setting themselves against each other. And it all feels very much like a diversion from the kind of discussion that needs to be happening in order to address these issues.

Mirta: And it's actually a reaction that worked many times in our history, but it's not working now. So we as a society will just need to find ways to cope with new realities, with faster change, with more uncertainty overall, without turning into this scared mob, which is what seems to be happening now.

Michael: You got any prophylactics for unnecessary fear at the social level?

Mirta: Well, our whole relationship with uncertainty will have to change. I think one of the main problems in accepting science is this idea that things should be certain, that we should know 100% whether something is so or not. And oftentimes, I think, leaders and even scientists tend to present their work in a way that reduces the uncertainty, so that the audience perceives less uncertainty. They try to increase the certainty of their findings, because something that's more certain is more easily accepted. But it seems like another social skill that we'll all have to adopt is coping with uncertainty: understanding that nothing will ever be known for sure, and being okay with that, knowing that if you have some kind of good process in place — like the scientific process, which is self-correcting and not biased toward any particular group — then this is probably a good process that will lead us somewhere.

Michael: So negative capability, sort of a willingness to not know or willingness to have to revise our conclusion.

Mirta: Right. This is, I guess, also related to trust: trust that we have some processes and people in place that are looking out, or are designed to look out, for our best interests, which is also something that seems to be eroding rapidly.

Michael: Well I feel like we've gone full circle now because we're actually back at the point where we're talking about…you didn't use this word, but homophily: the desire to surround yourself with people like you, and how that skews our understanding of the bigger picture. And maybe it sounds like what we're really circling here is that there may be some way to offer…or it may just happen naturally out of necessity…that we find it suddenly crucially useful to intentionally associate with “the other,” to cross the aisle, to reach out to people who think very differently from ourselves because we realize that we're partners in this kind of collective decision-making process.

Mirta: I really agree. I almost see it as a new civic duty that we'll just have to learn. Like we learned to live together without attacking strangers, like we learned to obey certain laws, we've all learned certain things that we did not evolve, that don't come to us genetically. So I think we also need to learn this tolerance for uncertainty and tolerance for another point of view, and even, rather than closing ourselves in our little echo chambers, actively reaching out to others and communicating with them. It's not only because we will be smarter and will make better decisions; the moment we cut the link to someone, that person also loses and is also closed in their own echo chamber.

So we are achieving the opposite of what we want to achieve: we are losing a source of information, and we are also not convincing the other side to change their mind. So in this world where it's so easy to choose friends and alienate people, we will basically have a responsibility to reach out and to keep the connections open. Otherwise, we will just devolve into a series of isolated communities.

Michael: So do you have a scientific nemesis whose work I should also be reading?

Mirta: …Oh!  I see!  [Laughs.]

Michael: Anyone with whom you might violently disagree, whom I should be studying very carefully now?

Mirta: You know, we psychologists are very humble. We know that we don't know anything. And at the same time, there is a saying in psychology (which, I notice now in Santa Fe, is not present in many other communities) that theories are like toothbrushes: everybody has their own, and nobody wants to use anyone else's. See, you laugh! For me, that's just how it is; I grew up with that statement. It's such common knowledge in psychology. So everybody has their own personal theory, everybody knows it's probably not correct, and everybody knows there are a hundred other equally plausible theories. So I think we kind of all…we don't hate each other. We all disagree with each other, but we are also aware that we don't know much. So there are no big haters in psychology.

Michael: But overall it might be useful to your microbiome if you share a toothbrush every once in a while. Awesome. Mirta, it's been a pleasure to talk to you.

Mirta: It was a pleasure being listened to by you.

Michael: If you want to point people to anything in particular beyond this conversation — educational resources or that kind of thing…?

Mirta: I didn't advertise Joshua Garland's work on counter speech.

Michael: Do you want to dive back into that real quick?

Mirta: Yeah. This is basically one way of reaching out to the other side. Of course, we all experience a lot of this hate online; we are witnessing a lot of hate speech. And the easiest thing for us to do is, again, to block hate, to censor hate, and to disconnect from those who have a different opinion. But there are other options: trying to show the haters the error of their ways, or maybe empathizing with the victim of hate to help the victim survive the process and keep on fighting, or maybe just flooding the conversation with some irrelevant stuff, posting pictures of puppies or something. Many of these techniques (empathizing with the victim, empathizing with the perpetrator, providing facts, flooding the conversation) have been proposed and described. We know many of them, for example, from the traditional bullying literature, from schools and workplaces, and now people are starting to think about how they could be applied to online communication. But very little is known about it, and it's still very difficult to analyze, because we don't have good methods for analyzing the topics in large amounts of data online. Natural language processing, and all these techniques for classifying speech as hate speech, counter speech, or neutral speech, are still developing. And Joshua Garland, being the wizard of everything computational and a fantastic applied mathematician, is developing, together with colleagues, different algorithms to detect hate speech and counter speech, and to see what actually works: what kinds of counter speech citizens use that best either show the haters the error of their ways or, maybe more likely, make them stop, and that best support the victims.
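[Editor's note: As a rough idea of the classification step, here is a toy three-way text classifier. It illustrates the general NLP approach, not the actual algorithms Garland and colleagues developed; it assumes scikit-learn is installed, and the handful of labeled examples is hypothetical, whereas real systems need large annotated corpora.]

```python
# Toy hate / counter / neutral speech classifier (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples; real work uses large annotated corpora.
texts = [
    "you people should disappear",               # hate
    "that's a hateful thing to say, stop it",    # counter
    "here's a picture of my puppy",              # neutral
    "nobody wants your kind here",               # hate
    "please be kind, she's done nothing wrong",  # counter
    "what time is the game tonight?",            # neutral
]
labels = ["hate", "counter", "neutral"] * 2

# TF-IDF bag-of-words features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["stop attacking her, this is not okay"]))
```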

Michael: I always got myself out of getting my ass kicked in school by responding with complete nonsense.

Mirta: Uh huh! Yeah, there you go. That's the puppy strategy.

Michael: Interesting. Awesome. Well, yeah. Thanks again.

Mirta: Thank you.