Albert Kao on Animal Sociality & Collective Computation

Episode Notes

Over one hundred years ago, Sir Francis Galton asked 787 villagers to guess an ox’s weight. None of them got it right, but averaging the answers led to a near-perfect estimate. This is a textbook case of the so-called “wisdom of crowds,” in which we’re smarter as collectives than we are as individuals. But the story of why evolution sometimes favors sociality is not so simple — everyone can call up cases in which larger groups make worse decisions. More nuanced scientific research is required for a deeper understanding of the origins and fitness benefits of collective computation — how the complexity of an environment or problem, or the structure of a group, provides the evolutionary pressures that have shaped the landscape of wild and civilized societies alike. Not every group deploys the same rules for decision-making; some decide by a majority, some by consensus. Some groups break up into smaller sub-groups and evaluate things in a hierarchy of modular decisions. Some crowds are wise and some are dumber than their parts, and understanding how and when and why the living world adopts a vast diversity of different strategies for sociality yields potent insights into how to tackle the most wicked problems of our time.

This week’s guest is Albert Kao, a Baird Scholar and Omidyar Fellow here at SFI. Kao came to Santa Fe after receiving his PhD in Ecology and Evolutionary Biology at Princeton and spending three years as a James S. McDonnell fellow at Harvard. In this episode, we talk about his research into social animals and collective decision-making, just one of several reasons why a species might evolve to live in groups. What do the features of these groups, or the environments they live in, have to do with how they process information and act in the world?

If you enjoy this podcast, please help us reach a wider audience by subscribing, leaving a review, and telling your friends about the show on social media.

Thank you for listening!

Albert’s Website

Albert’s Google Scholar Page

Quanta Magazine’s “Smarter Parts Make Collective Systems Too Stubborn”

Visit our website for more information or to support our science and communication efforts.

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast Theme Music by Mitch Mignano.

Follow us on social media:

Episode Transcription

Michael: Awesome. Albert Kao, it's a pleasure to join you here amidst the complexity.

Albert: Hello. Yeah, thanks for having me.

Michael: I'd like to start these conversations by inviting you to talk a little bit about how you became a scientist in the first place, how you became interested in what you're researching, and what led you to the Santa Fe Institute.

Albert: Yeah, I don't know if I'm out of the ordinary here at SFI, but I feel like compared to the average scientist, I've had a pretty loopy trajectory to get to where I am today. I think I first started getting interested in physics in middle school. For some reason, I thought I was supposed to be a medical doctor. I thought that was a family expectation. Then I learned about the structure of the atom in seventh grade and then came home to my parents and was like, "I think I want to be a physicist. Is it okay if I'm not a doctor?" They were like, "We never said you had to be one." Yeah.

Then in college I majored in physics, got interested in biophysics and biology, started grad school in a biophysics program, and did some rotations in biomechanics labs and neuroscience labs. Then I came to the realization, or at least the belief, that neuroscience was too difficult for what I really wanted to study, and decades off from a full understanding of a brain. Then I learned about animal groups, and how the structure and the function of animal groups in a lot of ways mimic the structure and function of brains.

Obviously, there's a lot of differences as well. It seemed like a more tractable system to study decision-making in collective systems. You can control the group size, you can interrogate it and perturb it in different ways, which I thought was not feasible at the time in neural systems. And yeah. I did my PhD studying animal groups, and then learned that they're not that easy to study either.

Michael: Yeah. I think we'll probably keep coming back to this, as we do as a meta-theme on the show in general: a lot of these things may be easier to study in one system or at one scale, but then you end up taking this circuitous path through disciplines and realizing that you're studying something very similar to what you once thought was very different.

Albert: Yeah. I think my academic path has been very tortuous, but also, I think, I get a lot of insights from it. I'm glad I know something about neuroscience. I'm glad I know something about physics. And applying all that seemingly random knowledge to study different animal groups has been super useful.

Michael: Right on. I'd like to start as broadly as we possibly can, because your work takes a lot of different angles to this issue of animal sociality. It is an interesting question that you raise in your work about why organisms would end up channeled down the path of least resistance into evolutionary adaptation for sociality. Frankly, as long as I've been thinking about this, I was embarrassed to realize that I had been naive in assuming there was maybe only one reason why animals would choose a social organization. Your research suggests that there are multiple reasons for this, and I'd love to hear you talk a little bit more about that.

Albert: Yeah. I and other postdocs here at SFI, and also postdocs on the James McDonnell Foundation Fellowship, have been working on this project to look super broadly across the tree of life, from bacteria, to insects, to birds, to mammals, and look at the literature and see, what are all the different ways being social can benefit an organism? There's so many different ways, just dozens, probably hundreds of hypothetical ways in which being social can help you. Raising offspring together, you can provide more food for them, or defend against predators, or you can huddle and keep warm together. You can sometimes decrease the risk of disease by picking fleas off each other, but oftentimes you can also increase the risk of disease.

A lot of these benefits have to do with getting resources. Whether it's locating prey, or capturing prey, maybe as a group, you can capture larger prey items than you could alone. Once you've caught prey, you can defend it against other animals who might steal it from you. We documented just dozens and dozens of these different benefits. Then one thing that I thought was cool about this project is that we categorized them into fundamentally different kinds of benefits. In terms of getting resources, we found that there's basically six kinds of ways in which sociality can be beneficial. There are also lots of specific contexts in which each of those benefits can play out and manifest itself. Fundamentally, we found there's only six different ways in which being social can help you get resources.

Michael: Correct me if I'm wrong about this, but it seems like you can loosely say that it's either about metabolism, like you said, resource acquisition or energy saving, huddling for warmth, or that there is a decision-making benefit, and that's collective computation, right, broadly. Those two things seem, in a way, really intimately related. That's one of the things, again, that seems like an overarching theme about a lot of the work that's being done here, which is, how do we articulate and unify a physics of information and a physics of thermodynamics? Maybe I'm getting ahead of the conversation here, but specifically, when we're talking about collective biological computation, what do we mean?

Albert: I guess I alluded to it a few minutes ago, the mapping, potentially, between neural systems and animal groups. The bulk of the research that I've done so far is thinking about what unique computational abilities being social gets you, basically for free. And is it easier to evolve social interactions compared to evolving a larger brain, for example, or better individual sensory organs like eyes or ears? Can you just combine a bunch of crappy ears together and then have a lot better power compared to evolving a more precise ear?

Broadly, you can apply that in a lot of different cases: you can find food better if you're searching and looking for prey together, you can detect and run away from predators better, but also things like migration, detecting north or south, or following landmarks and things like that.

A lot of models assume that individuals are guessing independently of each other. We know in real life, a lot of estimates aren't independent. We all read a small number of newspapers or listen to a small number of radio shows, and so our opinions about certain topics are generally not independent, but correlated, because we read the same stuff. I asked, what effect does that have on collective decision-making? One pretty robust result that I got was that this leads to some optimal group size for decision-making. In contrast to the basic assumption that decision accuracy increases monotonically with group size, just increasing more and more as groups get larger, in fact, in these environments where you have correlations, you get some optimal group size, like 10 or 15 is best. Larger groups do worse and smaller groups do worse.
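Kao's correlated-cue result can be reproduced qualitatively with a toy Monte Carlo sketch. Everything below is an illustrative assumption, not the model or parameters from his paper: each voter follows a single shared (correlated) cue with some probability, otherwise consults an independent private cue, and the group decides by majority vote.

```python
import random

def group_accuracy(n, p_shared_cue=0.6, acc_shared=0.7, acc_private=0.8,
                   trials=20000, rng=random.Random(42)):
    """Fraction of trials in which a majority vote picks the correct option.

    Each of the n voters follows one shared (correlated) cue with probability
    p_shared_cue; otherwise they sample their own independent private cue.
    All parameter values are toy numbers chosen for illustration.
    """
    correct_decisions = 0
    for _ in range(trials):
        shared_is_right = rng.random() < acc_shared  # one draw, shared by all
        votes_for_truth = 0
        for _ in range(n):
            if rng.random() < p_shared_cue:
                votes_for_truth += shared_is_right
            else:
                votes_for_truth += rng.random() < acc_private
        correct_decisions += votes_for_truth > n / 2
    return correct_decisions / trials

if __name__ == "__main__":
    for n in (1, 3, 5, 21, 101):  # odd sizes avoid ties
        print(n, round(group_accuracy(n), 3))
```

With these toy numbers, accuracy peaks at a small group size and falls back toward the shared cue's accuracy in large groups, where the correlated cue dominates the majority: a hump-shaped curve rather than a monotonic one, which is the optimal-group-size effect in miniature.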

Michael: This is related to the conversation I had with Mirta Galesic on the show, where she was looking at related research on an optimal group size for decision-making. The thing that I like about both your research and her research is something you mentioned in this Quanta Magazine article, where you're commenting on another research paper about collective decision-making. You say, “It feels like what you're doing is a second wave of research. The first wave was naive enthusiasm for collective systems.” There has been a surge, even recently in the last few years, of interest and evangelizing about the wisdom of crowds. I think you're right to point out this particular nuance, which is that we ourselves are social creatures, and working online in social media, it's clear that certain people's voices have disproportionate impact, and that adding more people who just agree with someone due to charisma or whatever isn't actually making the collective any smarter.

It was easy, reading this paper you did with Iain Couzin, “Decision accuracy in complex environments is often maximized by small group sizes,” to see in it, like you said, a clear analogy to the way that our media landscape has changed over the last few decades: the way that we're no longer correlated in that everyone's reading the same newspaper. But now we have the opposite problem, which is that there are too many contradicting news sources. I'm curious how you understand the problems of communication at scale in modern society, in light of your research. Given the media landscape that we have, should we be making more of an effort to read the same things, or should we be making an effort to change the size of the groups in which we make decisions, based on the fragmentation of information?

Albert: Yeah, it's a dual problem as well, not to make it worse. Like you said, there's the media landscape: how many different news sources are we paying attention to? What is the influence of the most powerful ones? But secondly, there's also the fact that with social media, we talk to each other a lot more. So this is not my work, but a follow-up by other researchers found a similar phenomenon to what I did, except here the correlations came about because of social influence. Individuals in this model could look at the decisions of their previous group mates, and then make their own decision based on their own information, but also by looking at those previous decisions.

They found a similar thing where you get an optimal group size. And we think there's a strong mapping between my paper and their paper where there's something to do with correlations being generated, not by an external force here, but by social interactions. This global social media landscape ties these two papers together, where people are talking to each other a lot more, but also, maybe the topics of conversation are being dominated by a small number of influential people or news organizations.

So, what to do about that? It's a challenge as a theorist to try to map these simple abstract models to the real world, whether it's animals or humans, because ideally, that's what we want to do. We want to say something useful about the world and try to make the world a better place. At the same time, I think we have to be careful about how we interpret these models. They are very simple. Again, what features are we missing that we need to add in order to be more confident in the recommendations we make to policy makers or to the public or to companies? Like I said previously, with the work that's already been done, adding some of these features can really change the predictions of your models. So we have to be careful and make sure that we're incorporating all of the really important features and leaving out the less important ones.

Michael: So we're also talking not just about the paper I just mentioned, but also referencing this other paper on which you were the lead author, “Counteracting estimation bias and social influence to improve the wisdom of crowds.” Some of what you said is coming straight out of this paper. One of the things that I found interesting is that your team played a trick in this particular study, where you were giving people social information about other people's estimates and seeing if it would change their decisions, their estimates, and that over a third of the participants completely discounted social information: 231 out of 602 participants were immune to this hack. This might be a tangent, but I'm curious why you think some people were weighting social information more heavily than other people, and why some people don't consider it at all.

Albert: We don't have direct data on why those people decided to ignore social information, so this is all speculation. It could be that there's a certain fraction of the population that is just generally immune, regardless of context. It could also be specific to this particular estimation task, which was a simple jelly bean jar estimation task that we got a lot of mileage out of. We did some experiments here at SFI. You'd be surprised at how poorly some people guessed despite knowing all the theory about packing fractions and spherical objects and things like that. Anyway. Yeah, so they could have been really confident on this specific task, and therefore, because they were confident, they discounted social information. There are other papers in the literature showing that confidence can correlate with social influence, or the propensity to be influenced by social information.

There's a separate question of whether confidence correlates with actual ability. We know from the whole Dunning-Kruger literature that maybe it doesn't, or maybe there's even a negative correlation. Anecdotally, we interviewed a couple of people from this experiment and asked them why they guessed the way they guessed. Some people were very poor guessers, but very confident. If I could just quickly tell two anecdotes…

Michael: Please.

Albert: There were roughly 600 jelly beans in this jar. Guesses ranged from 80 to 10,000, so a super wide range of guesses. We interviewed one of the guys who guessed really low, like 80-something. He was an undergrad at Princeton, majoring in either physics or engineering. He did all his calculations and came up with 85 or something, and the actual number was like 600. Another dude guessed 10,000, and so we asked him why. He was a jelly bean delivery man. He's like, "I do this for a living. I know. I look at that jelly bean, I know the size. It's for sure 10,000, on that order." I was like, "Okay, you're off by one and a half orders of magnitude, but..."

Yeah. This was a seemingly easy task, but then very hard for humans to do. We had a super wide range of guesses. In classic wisdom-of-crowds fashion, we found that the average of those guesses was pretty close to the true answer.
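The jar anecdote can be sketched numerically. In this toy simulation (the noise model and all numbers are invented for illustration, not Kao's data), guesses scatter multiplicatively around a true count of 600, mimicking the 80-to-10,000 spread, and simple aggregates of many bad guesses land far closer to the truth than typical individuals do.

```python
import math
import random

def simulate_crowd(true_count=600, n_guessers=1000, spread=0.8, seed=1):
    """Toy wisdom-of-crowds demo: noisy multiplicative guesses around the truth.

    Each guess is true_count scaled by lognormal noise, so individual guesses
    are frequently off by a factor of two or more in either direction.
    """
    rng = random.Random(seed)
    guesses = [true_count * math.exp(rng.gauss(0.0, spread))
               for _ in range(n_guessers)]
    arith_mean = sum(guesses) / len(guesses)
    geo_mean = math.exp(sum(math.log(g) for g in guesses) / len(guesses))
    return guesses, arith_mean, geo_mean

if __name__ == "__main__":
    guesses, am, gm = simulate_crowd()
    print(f"guess range: {min(guesses):.0f} to {max(guesses):.0f}")
    print(f"arithmetic mean: {am:.0f}, geometric mean: {gm:.0f}")
```

Because the noise here is multiplicative, the arithmetic mean gets dragged upward by a few enormous guesses while the geometric mean sits near the truth; correcting for that kind of systematic estimation bias is the subject of the paper's title.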

Michael: This paper gets a little bit more into how you can correct for those kinds of intense biases that you're talking about. One of the figures shows a relation between the probability that an individual is affected by social information and their displacement within the society. This is interesting because it kind of segues into the question lurking behind this whole conversation for me, which is: when are larger and smaller groups adaptive? In what settings are groups of different sizes the best choice for the complexity of a particular problem, given a society of a particular structure?

Like you said, it's a simple jelly bean counting exercise. But the question about social influence seems to be related to the other work you're talking about on correlation of sensory inputs, which is related to this other paper you wrote with Iain Couzin on modularity, and how groups can naturally start forming modular subgroups in order to improve their decision-making ability.

So I'm curious… What a messy question. To add another fold to this, when we're talking about groups, we're talking about groups that are not homogeneous. They have a structure. People are in different locations within that group. They're not just seeing different things outside of the group. They're having different informational relationships within the group.

Possibly, the actual question I'm asking is for you to talk a little bit about this paper on modularity, and why you found this counterintuitive result, which is that sometimes, if you break a group up, you end up with a better decision at the level where everyone comes back together to compare notes.

Albert: Yeah. In that paper, instead of looking at one group making decisions, say, by majority rule, the group was structured so that individuals exist in subgroups. Subgroups make decisions, say by majority rule, and then the decisions of the subgroups get combined, and so on and so forth. You can have as many tiers as you want in this hierarchy until you get one consensus decision. We studied that because there's some evidence in the literature that a lot of animal groups exist in some sort of modular structure like that, whether it's something as simple as fish schools, where if you track the motion of each fish in a school, you find that subgroups of fish stay together longer than you'd expect if they were just randomly mixing, up to actual hierarchies like primate societies, or elephants that live in family groups where the family groups are connected to each other over large spatial scales.

The first thing that we found there was that these modular structures always result in a loss of information. The general idea is that the wisdom of crowds works because the group has more information than each individual, and having more information is better, and so the group makes a better decision. We found that modular structure always leads to a loss of information, so you would guess that it would be bad for collective wisdom, and in some cases that is true. But in the correlated context, we found the opposite: having the structure could actually improve things. It still leads to information loss, but paradoxically, it can lead to gains in decision accuracy.

This ties into the whole optimal group size result that we talked about earlier because what modular structure does is, it allows groups to behave as if they were a smaller group because of this loss of information. A group of 1000 with some sort of modular structure has effectively the information of, say a group of size 60, or something. If you have an optimal group size that's smaller than your actual group size, then what we think is that one strategy groups could use in order to make better decisions is to behave as if they were smaller by creating this modular structure. We think that, especially for animal groups, this could be really useful and interesting, because like we said, at the start of this talk, that there's tons of reasons why being social can benefit you.
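The information loss Kao describes can be demonstrated in a toy model; all numbers here are made up for illustration. With fully independent voters, a flat majority over 81 individuals beats an 81-voter hierarchy of nested triads, because each triad throws away its minority's information. Per the paper, it is only when votes are correlated that this loss can paradoxically pay off.

```python
import random

def hierarchical_vote(votes):
    """Reduce votes by majority within nested triads until one decision remains.

    len(votes) must be a power of 3. Each triad reports only its majority,
    discarding its minority's opinion at every tier.
    """
    while len(votes) > 1:
        votes = [sum(votes[i:i + 3]) >= 2 for i in range(0, len(votes), 3)]
    return votes[0]

def compare(n=81, p_correct=0.6, trials=20000, seed=7):
    """Accuracy of flat majority vs. nested-triad hierarchy, independent voters."""
    rng = random.Random(seed)
    flat_wins = hier_wins = 0
    for _ in range(trials):
        votes = [rng.random() < p_correct for _ in range(n)]  # independent votes
        flat_wins += sum(votes) > n / 2
        hier_wins += hierarchical_vote(votes)
    return flat_wins / trials, hier_wins / trials

if __name__ == "__main__":
    flat_acc, hier_acc = compare()
    print(f"flat majority: {flat_acc:.3f}, nested triads: {hier_acc:.3f}")
```

With these numbers, the nested hierarchy scores roughly like a flat majority over only a few dozen independent voters, which is one way to see the effective-group-size idea: the modular group of 81 behaves like a smaller flat group.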

For certain benefits, it might be better to be in a large group, for example, defending against predators. Then for other benefits, it might be better to be in a small group, like decision-making. Having this modular structure gets you the best of both worlds. You can actually be a big group and defend against predators, but then in a decision-making context, behave as if you're a smaller group. It just seems like a really interesting way in which animals and groups can tune what we call the effective group size for different contexts.

Michael: This seems somewhat related to research you mentioned elsewhere, that there's a negative correlation between the size of an ant colony and the size of the individual ants' brains. I remember David Krakauer talking at UBS for an ACtioN Network meeting earlier last year, where he was saying something similar: the more we embed ourselves in this information technology milieu, the more we offload cognitive resources and rely on these collective computations. There's a pretty common expectation that we're getting dumber as individuals as the collective of humanity is getting smarter.

Albert: Yeah. We just Google everything instead of remembering things.

Michael: The Quanta article that you were commenting on comes up a lot in conversations in the Facebook group and elsewhere, about a similar thing: if the memory of nodes in a network is too long, then the network loses adaptability. It becomes dumber. There are all of these different ways in which it seems as though the networks within individuals are shaped, and in some sense constrained, by the selection pressure on the networks of individuals. This is Jessica Flack's work on coarse-graining as downward causation. It seems like what you're talking about is that the modular structure allows the collective social organism to coarse-grain the information that it's getting and make a better judgment. What do you think of all this?

Albert: Yeah. I don't know. I think we don't know very much about the relationship between individual cognition and collective cognition. I think for, say, physicists studying collective behavior, there's a natural tendency, aesthetically, to ask the question: how dumb can the individual components be and still produce this interesting collective ability? Aesthetically, that's something that people in my field like to show. It's like, "Oh, you can have this awesome collective ability and look, the individuals can be so dumb and still have it." Right. It's interesting, you get so much at the collective level.

But I think that's not the right question to ask. I think the question should be, “What is the relationship between individual cognition and collective cognition?” Why should you have dumb components when you're in a group, or can you have smart components and even smarter collective abilities? Is there some trade-off between the two? Maybe there's an energetic trade-off, where by offloading computation from individual brains and doing it collectively through social interactions, you save metabolic energy or something like that. Or is there some computational bottleneck if you have individual components that are too smart?

I was talking to a visitor here a few months ago, and he actually brought up that observation from the literature showing that in a certain subgroup of ants, the larger the colony, the smaller the brain size of the individual ants. We were brainstorming: is there a fundamental reason why that might be the case? I think it's wide open. I think we know very little about it. I think it's a really cool area for future research, but also, to hearken back to an earlier point, it's another reason why I want to be cautious when applying some of these results to humans, which are, we think, quite smart, most of the time, sometimes.

They're not just simple automata making decisions through majority rule. They can have conversations with each other. They can influence each other. There are all these sorts of subtle signals that you send to each other. For people like Mirta, who are more on the sociology side, or others who are psychologists, I think linking that understanding of the human brain to these theoretical models of collective decision-making, and to animal groups, can be a really interesting intersection of these Venn diagrams of related but different areas of research.

Michael: I think now seems like a good time to ask the question we've been dancing around this whole time, which is, all of these different research vectors... I feel like you must have developed by now an intuition for the evolutionary context within which you're going to see large groups form that benefit from the wisdom of crowds, and the kinds of decisions the group has to make for that to be the way to go, and then when those large groups might want to specialize into some sort of hierarchical structures, so that there are differences in influence…when a group might want to correlate itself very strongly over a small geographic area, and when it makes more sense to spread out. When are crowds wise? When are they dumb? These deeper questions that are alluded to by all of your research.

If you were to create a handbook for this kind of thing... I know you've already made it clear you're very careful about policy advisement, but a bunch of SFI people, Doyne Farmer, Beinhocker, Rasmussen, along with non-SFI author Fotini Markopoulou, just wrote this great Aeon Magazine piece on the lag between our physical technologies and our social technologies. Most of the people I know who are smart, young folks working in technology are deeply concerned about how we can better organize ourselves. How can we organize ourselves more smartly, to handle the complexity of the problems that we're facing? That's the motivation for this kind of question.

Albert: Yeah. I've been prefacing everything with, I don't want to go on record making very specific recommendations. However, I do think it's super important. A friend of mine and collaborator just got a bunch of people, including me, together to write this perspectives piece, in which he calls collective behavior a “crisis discipline.”

I believe that term was coined for conservation biology. It's the same argument: in conservation biology, we don't have a full understanding of ecological systems, yet these habitats are being destroyed super rapidly, and so we need to make some decision about what to do, even with imperfect information. This perspectives piece makes the same argument, but for collective behavior: so many people are online for so much of the day, on Twitter, Facebook, whatever, and this is having meaningful impacts on political systems and public discourse, all these things. But at the same time, we have a poor understanding of collective behavior. The state space is so large. Right? There are different sizes of groups, different network structures, different decision contexts, all sorts of variables that we need to play around with, ideally in experiments.

And yet, we can't wait another 10 or 20 years before we start to make recommendations. And so this piece, I think, is really interesting and important, even though it makes me feel so uncomfortable. Okay, say in the next two years, what kind of experiments should we do? What kind of modeling should we do now, in the next two years, in order to say something concrete about how to regulate these companies? Or how should governments counteract bad actors on these networks, or other governments? Even if we're not completely confident in what we know, we still have to say something; not saying anything at all is also a decision. Despite, and because of, my reluctance, we wrote this piece, and I think it's really important, really urgent, as we know from every news cycle.

Michael: Just to anchor this a little bit: in the discussion of your paper on modularity, you and Couzin say, “Silencing the minority opinion within subgroups, modularity necessarily causes a loss of information. In general, modular structure is detrimental to collective decision accuracy in simple environments.” I was thinking about this in terms of the conversation I had with Andy Dobson recently about island biogeography, and how it seems like, if we want to take this another way, into a maybe more comfortable analogy in the biological rather than the social, genetic drift is helping to accomplish something similar in population biogeography. When you have a large population on land, through the grace of genetic drift alone, a lot of these edge opinions, if you will, mutations that don't necessarily have a benefit or a detriment, are lost in the wash. Whereas on a smaller island, you get a different kind of decision-making.

I'm curious how you think that this work on decision-making might shed more light onto processes that are not intentional or decisive in the way that we would understand them, how cognition might be recognized or understood at the level of ecological networks, rather than individual organisms working in association with one another?

Albert: Yeah. If I understand your question correctly, the thing that links the islands and the social systems together, especially the modular ones, is diversity. Right? On islands, since they're isolated from each other, you get diversity of genetic material, which can then be more fit than other variants. Similarly, with the social systems, what's really important is diversity of opinion. If you have some very influential person, even if that person is very smart, you miss out on the collective wisdom that you get just from diversity of opinion. A lot of that is noise. Each individual person could be very noisy and very inaccurate, but then there's some process, which we're still trying to understand, by which the average can be quite good. Creating modular structures can permit, in a similar way to islands, isolation from other opinions, and you can breed different opinions within each subgroup. Many of them might be bad, but then the average of them, or some combination of them, can be quite good.

Michael: I might be stretching this to the breaking point, but it seems almost as though... If you think about evolution as a cognitive process, then the diversity of biological computation is a reflection of, or an epiphenomenon of, geographic and environmental diversity. So there's almost a conservation argument here that's similar to the argument made about ethnobotanical conservation: we don't know what we're losing by homogenizing habitats, because then we're ultimately homogenizing the cognitive strategies available to us moving forward.

Albert: Yeah. There are pros and cons, obviously, to things like social media, which permit the formation of really niche groups. Say, globally, there might only be a handful of people, or 100 people or whatever, who have some niche interest. If they were to seek each other out geographically, it'd be impossible, but on the internet, you can find each other, make a subreddit, and now you're connected. That can be good and bad, right, depending on what those people are interested in.

So yeah. I think some of the conversation in the media focuses on the negative side: Nazis can find each other more easily and communicate online, and things like that. So how do we find them? How do we discourage those kinds of groups from forming?

But it can also be positive, by breeding diversity of thought and diversity of opinion. You just get more raw material to work with. Humanity gets a broader set of material to work with. And perhaps in some sense, that's not the worst thing. Once in a while, we might generate some awesome idea from it and make some forward progress, but then we also need to filter out some of the neutral or even actively harmful elements. It's a balance between taking a really forceful top-down approach to regulating things on the internet and still permitting a diversity of opinions to bubble up from the bottom.

Michael: Yeah. When I think about this stuff, I think about it in terms of the cost of innovation. This may be out of place, but there's that theological argument that you can't have choice without evil. Right? There's a very similar scientific formulation of the same thing: if we want to encourage creative solutions, then we have to accept that those same structures also empower individuals to raise the bar on existential risk to civilization, et cetera.

Albert: Yeah, I don't know. I think one way into the question is: can you differentiate between, say, good things and bad things on the internet in some way, so that you can control one but allow the other to persist? I think there is some evidence about that. There was a paper that came out a couple of years ago showing that false news spreads on Twitter differently than true news. It spreads faster and penetrates more deeply into the social network. So there might be signatures of good things and bad things. Then by detecting and classifying those two, maybe we could identify and target false news while allowing true news through. I don't know. It's very speculative, but there might be something there as a strategy.

Michael: Interesting. Actually, you're reminding me of work, currently unpublished, by Josh Garland and Mirta Galesic on the network structures of conversations on Twitter. They were talking about how an inflammatory post just generates responses to the original post, rather than a debate that branches. A branching debate has a higher fractal dimension: there are more branches because people are taking the time to respond to things in sub-points and sub-sub-points.

That work suggests that we can actually take an orbital view of different conversations and identify whether they're worth getting involved in. Like in your work that we were talking about earlier, there's a difference between when everything is correlated with that original comment and when everybody is coming at it from a different angle, looking at a different piece of it. I might be overstretching it here, but I look at Garland and Galesic's work on that stuff and I see that there might be a way for us to talk about good things and bad things online in terms of whether they actually facilitate collective decision-making in an effective way, or whether they are actively draining our mental resources and creating these self-replicating viral structures that just absorb our brains.

Albert: Yeah. I guess a thing to add on to what we're talking about, with modular structure permitting diversity of opinion, and maybe that being good, is the corollary that you also need the modules to talk to each other, eventually. These different modules, now that you've generated that diversity, need to talk to each other and then create your improved solution. Perhaps the counterargument is that, in our current state, that's not happening. Right? Factions are becoming more and more polarized, and there's no sign of that trend reversing. So maybe that's the missing piece: yes, these factions might serve some purpose, but only if, at some point in the future, you come back to the table and talk and make decisions together. That seems to be what's missing currently, in the state of play, either on the internet or in politics.

Michael: This seems the perfect place to bring up the last paper I wanted to talk with you about, the preprint you just published on “The wisdom of stalemates.” Obviously, most people find a stalemate immensely frustrating, but it sounds like you've got an argument here that, viewed from the level of the social organism, a stalemate actually has evolutionary value. That there are good reasons for us to get to these points. Could you unpack that a little?

Albert: Yeah. In most models, including the ones we've talked about in this conversation, it usually plays out like this: you have some decision scenario, individuals get information, maybe they talk to each other, and at the end of the day, the group has to make some sort of collective decision. We asked the question: what if they don't have to? Usually these are binary scenarios where you have option A and option B, and you have to choose one. We asked, what if there's a third option, in which you can have a stalemate and just make no decision? What effect does that have?

And so we analyzed the scenarios we've already talked about: simple environments in which the wisdom of crowds comes about, and then more complicated scenarios, like the ones with correlations. We found that stalemates are almost always good for collective accuracy, because you reach a stalemate more often in cases where, if you were forced to make a collective decision, you would have made a bad one, the incorrect one. So stalemates can save you from making a bad decision and put off the decision to another day, when you're more likely to make a better one.

The value of stalemates obviously depends on the cost of a stalemate compared to the cost or benefit of option A versus option B. If stalemates are super costly, then you don't want to use that strategy. For example, if you're a school of fish, and a shark is coming at you full speed, and you're trying to decide the best direction to run away in, it doesn't matter. Right? Any direction is better than the shark's direction.

Michael: Into the net.

Albert: Yeah. Stalemates are the worst there; pick any direction. You don't want to use them in that situation. But in a lot of cases, to go back to fish, stalemates might not be costly at all. A lot of fish hide out in weeds, avoiding predators, and only come out once in a while to look for food. If you're not sure where food is, it doesn't really matter. You can just hide out some more. It's pretty safe, and then you wait a little while before trying to make a decision again. And when you do make a decision, because you allow stalemates, that decision is more likely to be accurate, and you'll find food and avoid predators with a higher probability.

I thought this paper was really cool. It was spearheaded by Claudia Winklemeyer, who's now a PhD student in Berlin. It introduces this idea of stalemates as a possibly useful function in some cases for animals, and maybe humans; we haven't thought too much about that. But instead of casting stalemates in a uniformly negative light, in some scenarios they could actually be really useful for collective accuracy.

Michael: Well, I hope I don't offend anyone who loves me by saying this, but it sounds a whole lot like the way most people describe their marriage. To shoot from the hip yet again, it sounds almost like this work is analogous to other work suggesting something like the Lotka-Volterra equations: there are stable populations of predator and prey in balance with each other. If you regard each one as a sort of evolutionary model of its environment, then you don't want either of them to win the game. Right? If every type of organism is a proposal by the biosphere about how the world is, then the entire biosphere is the actual answer to that question, right? The wisest people I know are more than willing to cross the aisle and engage in synthetic discourse with people they violently disagree with.

If you start to see it the way I feel your research paints things, then political polarization may actually be adaptive at the level of the society. Or, like in philosophy, materialism and idealism are both robust positions, and neither has been able to win out against the other for thousands of years. If I'm going to be bold here, I feel that seeing your work a certain way allows one to restore their faith in humanity, because you realize that all the people you think are idiots, that you disagree with, are contributing to a more intelligent, more creative society, and that if everyone agreed with you, we'd probably go off the cliff right away.

Albert: Yeah. There's a collective cognition, and probably a collective memory as well, where culturally we're storing all of these different ideas and beliefs in our “cultural cloud.” Any one person in that society will disagree with the majority of those, but it's useful to have them as a library of ideas. Not just a stockpile of ideas, but a generator of new ideas and new directions of thought and inventions. Yeah. On an optimistic note, that could be a really useful function of social interactions, culture, society. And if we were homogeneous, and we all agreed with each other, then something's wrong. We're doing something wrong. We're not taking full advantage of what we could be, as a culture.

Michael: I guess just to send everyone out on a note of curiosity and intellectual adventure, I'd love to hear you talk a little bit more about the questions that are animating you right now in your research. What's on the horizon for you? What do you feel is naggingly, painfully unresolved?

Albert: Yeah. I think at the start of this conversation, I was talking about this paper we just finished, taking a broad view of “Why are organisms social?” I think that's very thought-provoking. In that paper, what we found is that how group sizes change under different conditions can be really informative about why a species is social. Say, in drought conditions, when food becomes really scarce, a lot of animal groups get smaller. Lions, things like that. That makes sense intuitively to us, but then other groups get bigger when food gets scarce. Some bacteria, and locusts, form these swarms when food runs out.

Our model found that whether groups increase or decrease in size, or don't change, really reflects the underlying reason why they're social. I think that's a really simple but big question: fundamentally, what's the major reason why this species or that species is social? We have guesses. For fish, a lot of times, we think it's predator avoidance; for locusts, it's to find food. But we don't really know that for sure, for any species.

When I was working on this paper, I was thinking about how important it is to know why. For example, if you had a trillion dollars of research money and 10,000 undergrads, you could scan across the tree of life and figure out specifically why each of a thousand species is social. It'd be so interesting to look at that catalog. Is there a certain benefit of sociality that dominates across the tree of life? Maybe it tends to be this one benefit, and only rarely some other benefit. Then you can think broadly about the evolution of sociality. If that were the case, maybe a lot of organisms became social for a similar reason, and then accrued other benefits secondarily, like flying in a V formation for energetics, or whatever. Maybe that's a secondary benefit.

So looking at the super-zoomed-out view of the tree of life and sociality, from bacteria, to elephants, to birds, and trying to think about what the distribution of benefits is, why they're social, and how they evolved to be social, has been really thought-provoking for me. Becoming social is a major transition in evolution. It's tied to multicellularity; it's tied to the evolution of eusociality. I think understanding very broadly why things are social takes us some distance toward understanding evolution in general and biology in general.

Michael: Awesome. Very important questions. Yes.

Albert: I hope so.

Michael: Right on. Well, Albert, thank you so much for indulging all of my freewheeling nonsense and telling people about your super-fascinating work.

Albert: Thanks a lot. Yeah, thanks for giving me the space to talk about my research. It's been really fun.