COMPLEXITY: Physics of Life

Paul Smaldino & C. Thi Nguyen on Problems with Value Metrics & Governance at Scale (EPE 06)

Episode Notes

There are maps, and there are territories, and humans frequently confuse the two. No matter how insistently this point has been made by cognitive neuroscience, epistemology, economics, and a score of other disciplines, one common human error is to act as if we know what we should measure, and that what we measure is what matters. But what we value doesn’t even always have a metric. And even reasonable proxies can distort our understanding of and behavior in the world we want to navigate. Even carefully collected biometric data can occlude the other factors that determine health, or can oversimplify a nuanced conversation on the plural and contextual dimensions of health, transforming goals like functional fitness into something easier to quantify but far less useful. This philosophical conundrum magnifies when we consider governance at scales beyond those at which Homo sapiens evolved to grasp intuitively: What should we count to wisely operate a nation-state? How do we practice social science in a way that can inform new, smarter species of   political economy? And how can we escape the seductive but false clarity of systems that rain information but do not enhance collective wisdom?

Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I’m your host, Michael Garfield, and every other week we’ll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

This week on the show we talk to SFI External Professor Paul Smaldino at UC Merced and University of Utah Professor of Philosophy C. Thi Nguyen. In this episode we talk about value capture and legibility, viewpoint diversity, issues that plague big governments, and expert identification problems…and map the challenges “ahead of us” as SFI continues as the hub of a five-year international research collaboration into emergent political economies. (Find links to all previous episodes in this sub-series in the notes below.)

Be sure to check out our extensive show notes with links to all our references at complexity.simplecast.com. If you value our research and communication efforts, please subscribe, rate and review us at Apple Podcasts or Spotify, and consider making a donation — or finding other ways to engage with us — at santafe.edu/engage.

If you’d like some HD virtual backgrounds of the SFI campus to use on video calls and a chance to win a signed copy of one of our books from the SFI Press, help us improve our science communication by completing a survey about our various scicomm channels. Thanks for your time!

Lastly, we have a bevy of summer programs coming up! Join us June 19-23 for Collective Intelligence: Foundations + Radical Ideas, a first-ever event open to both academics and professionals, with sessions on adaptive matter, animal groups, brains, AI, teams, and more.  Space is limited!  The application deadline has been extended to March 1st.

OR apply to the Graduate Workshop on Complexity in Social Science.

OR the Complexity GAINS UK program for PhD students.

(OR check our open listings for a staff or research job!)

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast theme music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

Mentioned & Related Links:

Transparency Is Surveillance
by C. Thi Nguyen

The Seductions of Clarity
by C. Thi Nguyen

The Natural Selection of Bad Science
by Paul Smaldino and Richard McElreath

Maintaining transient diversity is a general principle for improving collective problem solving
by Paul Smaldino, Cody Moser, Alejandro Pérez Velilla, Mikkel Werling

The Division of Cognitive Labor
by Philip Kitcher

The Unreasonable Effectiveness of Mathematics in The Natural Sciences
by Eugene Wigner

On Crashing The Barrier of Meaning in A.I.
by Melanie Mitchell

Seeing Like A State
by James C. Scott

Jim Rutt

Slowed Canonical Progress in Large Fields of Science
by Johan Chu and James Evans

The Coming Battle for the COVID-19 Narrative
by Wendy Carlin and Samuel Bowles

Peter Turchin

In The Country of The Blind
by Michael Flynn

82 - David Krakauer on Emergent Political Economies and A Science of Possibility (EPE 01)

83 - Eric Beinhocker & Diane Coyle on Rethinking Economics for A Sustainable & Prosperous World (EPE 02)

84 - Ricardo Hausmann & J. Doyne Farmer on Evolving Technologies & Market Ecologies (EPE 03)

91 - Steven Teles & Rajiv Sethi on Jailbreaking The Captured Economy (EPE 04)

97 - Glen Weyl & Cris Moore on Plurality, Governance, and Decentralized Society (EPE 05)

Episode Transcription

Paul Smaldino (0s): In social theories and cognitive theories so often our theories are about relating constructs and then we have proxy measurements. But the theory isn't about the relationship between the proxy measures. The theory is about the constructs and the relationships between the constructs that are social in nature, that are cognitive in nature, but aren't the things that are being measured. So there's this gap, and I don't know the extent to which that gap can be overcome.

C. Thi Nguyen (30s): I think we are very much on the same page. One way of putting it is you can say math can explain everything if you've restricted the scope of everything to the kinds of things that math is good at explaining. This is kind of my background worry about the evidence-based outcomes world, which is if you insist that the only outcomes we're gonna pay attention to are the ones that are amenable to large scale measurement and mathematization, then you're gonna leave out any of the outcomes that aren't the kind of thing that admit of that.

Michael Garfield (1m 23s): There are maps and there are territories, and humans frequently confuse the two. No matter how consistently this point has been made by cognitive neuroscience, epistemology, economics, and a score of other disciplines, one common human error is to act as if we know what we should measure and that what we measure is what matters. But what we value doesn't even always have a metric. And even reasonable proxies can distort our understanding of and behavior in the world we want to navigate. Even carefully collected biometric data can occlude the other factors that determine health, or can oversimplify a nuanced conversation on the plural and contextual dimensions of health, transforming goals like functional fitness into something easier to quantify, but far less useful. This philosophical conundrum magnifies when we consider governance at scales beyond those at which Homo sapiens evolved to grasp intuitively. What should we count to wisely operate a nation-state?

How do we practice social science in a way that can inform new, smarter species of political economy? And how can we escape the seductive but false clarity of systems that rain information but do not enhance collective wisdom? Welcome to 

Complexity

, the official podcast of the Santa Fe Institute. I'm your host, Michael Garfield, and every other week we'll bring you with us for far ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

This week on the show we talk to SFI External Professor Paul Smaldino at UC Merced, and University of Utah Professor of Philosophy, C. Thi Nguyen. In this episode, we talk about value capture and legibility, viewpoint diversity, issues that plague big governments, expert identification problems. And we map the challenges ahead of us as SFI continues as the hub of a five-year international research collaboration into emergent political economies.

Be sure to check out our extensive show notes with links to all of our references at 

complexity.simplecast.com

. If you value our research and communication efforts, please subscribe, rate and review us at Apple Podcasts or Spotify and consider making a donation or finding other ways to engage with us at 

Santafe.edu/engage

. If you'd like virtual backgrounds of the SFI campus to use on video calls and a chance to win a signed copy of one of our books from the SFI Press, help us improve our science communication by completing a survey about our various scicomm channels linked from the show notes, and thank you in advance for your feedback.

Lastly, we have a bevy of summer programs coming up. Join us June 19th through the 23rd for 

Collective Intelligence Foundations and Radical Ideas

, a first ever hybrid event open to both academics and professionals with sessions on adaptive matter, animal groups, brains, AI, teams, and more. Space is limited, but we've just extended the application deadline to March 1st. Or apply to the graduate workshop on complexity in social science, or the Complexity GAINS UK program for PhD students, or check our open listings for a staff or research job.

You can also join our Facebook discussion group to meet like minds and talk about each episode. Thank you for listening. Paul Smaldino, C. Thi Nguyen, welcome to 

Complexity Podcast.

C. Thi Nguyen (5m 1s): Hooray.

Michael Garfield (5m 4s): So this was a real catch, getting the two of you on together for our emergent political economies sub-series of this show. This was planned over a year ago, but what wasn't on the docket over a year ago was, you know, Elon Musk buys Twitter and becomes the bad emperor. And as excited as I was to talk with you about your respective works on social media and all of that, here we are in 2023 and it's a very different animal, and I think your respective work stands the pressure test.

C. Thi Nguyen (5m 49s): Hey Michael? Yeah, for my amusement, would you tell me in a brief summary why in your mind you thought Paul and I belonged on the same show together? I mean, I have my own guess, but I want to hear your version of this.

Michael Garfield (6m 4s): Well, I mean, so Paul, you work on the evolution of covert signaling, and Thi, you talk about how transparency is surveillance, and that's the morsel. That was the first thing that made it obvious that the two of you should be in a room together.

C. Thi Nguyen (6m 24s): So when I heard about Paul's work, I heard about it through my friend the philosopher Cailin O'Connor, Paul's paper, the 

Cultural Evolution of Bad Science

, which I won't try to explain here. Paul can explain it. But I immediately thought that we shared the same obsession with the thinness and gameability of outcomes. And in particular, I mean, I would say that your work is a very careful study into something that I'm currently obsessed with, which is the way that large scale institutions tend to thin out what they can measure, and the way that thinness is gamed and exploitable, and the way that our vision can become kind of deeply narrowed when we're bound by these very thin institutional outcomes measures.

Paul Smaldino (7m 9s): In preparation for this I read a couple of your papers, Thi, and that was the thing that really caught my attention, this sort of shared interest in what sometimes is called perverse incentives, but really just the way that gamifying things and metricizing things changes the nature of the game and selects for things that aren't necessarily the outcomes that we want.

C. Thi Nguyen (7m 36s): There's a really interesting gap here, because an interesting thing for us to talk about is that you are really interested in the stuff that's on the outside, the perverse incentives, and I'm really interested in the things that are on the inside, the ways that people change their values in the background. For the listeners, Paul read a paper of mine that's in draft called 

Value Capture 

and 

Value Capture 

is when the world presents us with external simplified versions of our values and they swamp our system, they take over what we value. I just wanna start by saying I think there's something really different between perverse incentives and value capture, because you can be perversely incentivized without changing what you deeply care about, right?

So if I'm a philosopher and I care about wisdom and the world is handing me the incentives to write shitty stupid papers on poppy topics, I might still hang on to what I care about. But I know in order to make money and survive professionally, I have to pursue this other stuff. That's enough to have a perverse incentive. And I kind of think that people like you are the best people to study perverse incentives, because you're studying the changes in the way people behave and the changes in outcomes. Maybe I'm wrong, but I feel like the weird thing that the philosopher here is caring about is what happens, like, to our souls when that swamps us. How do we change in a deep way when we are not merely incentivized, but when our core values are captured by these simplified metrics?

Paul Smaldino (9m 0s): Oh, so I think that's a really interesting take. I hear what you're saying, and I think your characterization of perverse incentives is the way that it's usually talked about. I think the way I think about these things is that these two things are basically the same, and I think it's partly because I think about these things generally not from a perspective of strategic behavior, which is how a lot of social scientists think about things, but from a perspective of cultural evolution, in which the ways that people behave and the strategies that they use for engaging in the world, which include the values that they have that lead to those strategies, are shaped over time by the nature of the kind of social and cultural and institutional environments that they're in.

So I think of these things as being deeply interconnected.

C. Thi Nguyen (9m 48s): Is your view that perverse incentives always leach into the center of our hearts?

Paul Smaldino (9m 52s): I'm always hesitant to say always, and I know there's an irony in that I just said it. I'm hesitant to say always, but yeah, I mean I think that we use heuristics to make sense of the world all the time. And so people often try to come up with the simplest model that allows them to engage successfully in their world. And so value capture is a way of shaping and changing heuristics so that it gives us a new set of rules for engaging with the world that aren't necessarily the things that we would've started out with.

C. Thi Nguyen (10m 26s): Wait, I think at this moment, you might be one of the few human beings I've talked to recently that might be more cynical than me on this point. So let me give you two examples and see what you think about why I still wanna maintain this difference. 

Paul Smaldino (10m 38s): The spoiler is I'm very cynical.

C. Thi Nguyen (10m 40s): I can tell. When I read your cultural evolution of bad science, I just laughed in delight, because well-put, profound, empirically supported cynicism is beautiful. Anyway, so here's two examples about why I still think they're different. One comes from one of my favorite books I've read in the sociology of quantification, 

Engines of Anxiety

. Do you know this book?

Paul Smaldino (10m 60s): I've only heard you mention it in a talk you gave, but no.

C. Thi Nguyen (11m 4s): So Wendy Espeland and Michael Sauder are sociologists who work in quantification, the culture of quantification. 

Engines of Anxiety 

is a book that's a carefully documented study of what happens to legal educational culture when the US News and World Report starts issuing university rankings. And one of the interesting things is they talk about two stages of impact. The first stage of impact is what I kind of think of as the perverse incentive stage. So first of all, the US News and World Report rankings are really simple, really brute; they just measure a few things, as far as we can tell: incoming class GPA, outgoing class employment rate. It doesn't track a lot of the values that we care about in legal education and it doesn't track any plurality of values.

So stage one, they say, is when the US News and World Report came out, a lot of the deans and administrators immediately saw that the only way for their law school to survive was to stay in the rankings, but they didn't care immediately. Their first goal wasn't to rise in the rankings. And so I think they engaged in trade-offs. They were like, well, I need to do this in order to get the school to rise in the rankings, but still, we're not gonna do everything we can, because we still care about this other thing. We're gonna save something, hold something back.

What they say is stage two is what happens later on. The original deans and administrators burn out because of how much they hate the US News and World Report rankings, and they get replaced by people who are all in, who think the only point is to rise in the rankings, and those people don't hold back. They only have one target. I think something similar is the difference between realizing I need a decent amount of money to support my family, but not thinking money is the point of life. And similarly, realizing that getting a decent number of publications and citations is necessary for a job versus thinking the goal of my life is to max out citations.

And for me there's a huge gulf between those things.

Paul Smaldino (12m 54s): Well, here's where I think they're connected, because I see the difference and I understand the difference you're talking about, but I think the difference is this temporal dynamic, where you start out with, let's say, perverse incentives and people saying, well, I don't necessarily value these things but I have to shape my behavior in order to succeed in this system. But the thing is, the system being the way it is creates a filter, and the people who are the best at figuring out how to operate in that are the ones that then end up being successful, and they're the ones that teach the next generation or are emulated by the next generation.

And over time the people who, for whatever reason, psychologically or behaviorally or whatever their path is, are best able to exploit the system are gonna be able to thrive in it. And I think that because of that you end up selecting for people with certain kinds of values, because they're gonna be the people who, there are always exceptions, but are gonna be best able to thrive in this kind of thing.
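
A minimal toy simulation of the selection dynamic Paul describes here (not any published model, with invented numbers and functional forms): agents differ only in how much effort they put into chasing an institutional proxy metric, the institution rewards only the metric, and newcomers emulate whoever got rewarded.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 200        # population size in each generation
N_GENERATIONS = 50    # rounds of selection and emulation
MUTATION_SD = 0.05    # noise added when a newcomer copies a role model

# Each agent has one trait in [0, 1]: the share of effort spent chasing
# the proxy metric rather than the underlying goal they actually care about.
traits = rng.uniform(0.0, 0.3, N_AGENTS)  # start out mostly goal-oriented

for _ in range(N_GENERATIONS):
    # The institutional filter only "sees" the proxy, so payoff
    # (hiring, promotion, attention) rises with metric-chasing effort.
    payoffs = traits + rng.normal(0.0, 0.1, N_AGENTS)
    # Newcomers preferentially copy high-payoff individuals, plus copying noise.
    weights = np.exp(5.0 * payoffs)
    role_models = rng.choice(N_AGENTS, size=N_AGENTS, p=weights / weights.sum())
    traits = np.clip(traits[role_models] + rng.normal(0.0, MUTATION_SD, N_AGENTS), 0.0, 1.0)

print(f"Mean metric-chasing effort after {N_GENERATIONS} generations: {traits.mean():.2f}")
# Typically climbs toward 1.0, even though no individual ever "chooses" new values:
# the filter plus emulation does the selecting.
```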

C. Thi Nguyen (13m 51s): I have something to say in response, but I recently had a thought, see what you think of this thought. Sometimes I think that, like, Silicon Valley is doing really well right now and people think it's very successful, but one of the reasons it's successful is that Silicon Valley has been really good at seizing the terms of what counts as success. So there are two stages here. One is there's a kind of success that people target, and the people that are willing to target it are the people who will succeed. And there's another level that's like, no, no, we're gonna redefine success to be the kind of thing that we can target easily with tech.

I just wanted to know what you thought of those two stages because the work I've seen of yours seems to be more about people that successfully target the outcome as given. And I'm wondering if you have thoughts about how people can kind of redefine or seize control of the outcomes.

Paul Smaldino (14m 39s): Well, briefly I'll say I think that distinction makes sense. One of the things I really enjoy about your work is that as a philosopher you're able to sort of get deep into the nuance of these things. And because of whatever it is about my trajectory or philosophy or the way I operate, I tend to gravitate toward, the things I write about tend to be, things I can formalize or figure out how to formalize. And it's easier, there's a meta-structure here where it's incentives itself, but I don't necessarily think they're perverse, but it's easier to formalize things that are on that surface level, these kinds of responses to incentives, and harder to formalize things like values.

But you know, I would love to go there more and you know, doing my best. But I think that what you're talking about is really interesting and really important and I think I know what you mean but do you have an example?

C. Thi Nguyen (15m 35s): Let me lay some cards on the table. I'm actually kind of worried in the background about a kind of seizure of our targets by operationalizability or formalizability. So I mean a lot of the things I've been worried about lately are what we can measure at scale is quite limited. There are all kinds of things we can't measure easily at scale, and when we get obsessed with kind of large scale metrics or outcomes, or even the kinds of things that are easy for us to scientifically measure right now. So here's one version of the concern: education.

It's really easy to measure student graduation rate, graduation speed, satisfaction scores, and employment salaries after college. It's really hard to measure whether they became more wise, whether they became more curious, whether they are more reflective. Similarly, and this is maybe a little bit above my pay grade, I'm a little worried that the medical world, for example, targets things that are easily measurable, like lifespan and saving lives, and doesn't target things that are harder to measure, like various forms of rich and complicated quality of life.

Paul Smaldino (16m 43s): So I see a problem with your having the two of us on, Michael, which is that we might be too much in agreement and we're just like, yes to all of that. This is something actually that I've been thinking about more, which is there's often a kind of hand waving in a lot of institutions about goals. Not all; there are certainly some institutions. And I think this is one of the things where the corporate world is actually much better than the academic world or the educational world, because their goal is profit.

So it's very clear. It's much harder to say what the goal of an educational institution is. It feels like it should be obvious, but within the sort of general goal of like we want to produce successful well-rounded people, there's a lot of disagreement about what the goals are. And so shaping the institutional incentives around those goals becomes extremely difficult because not only do we have to worry about perverse incentives, but we have to worry about vigorous disagreement about the kinds of things that are valued in the first place.

And I think exactly what you're talking about, Thi, is something that if you went to a bunch of university administrators, let's say, or medical school administrators or doctors, and you said, what is the point of what you're doing? Is it to produce wise, well-rounded people? Is it to minimize costs to insurance companies? Is it to increase donor contributions? What is it? And there are all these competing goals. And so there's this constant infighting among different people who have different versions of what the best version of their institution is, and it's so difficult to articulate what that is.

C. Thi Nguyen (18m 24s): I wonder if we're in different places on this, cuz are you worried about the hardness of it? It sounds like you think it's a problem that it's hard to come to agreement and articulate a goal, where I actually prefer the university that disagrees, that has many plural goals, and I worry that when it articulates an outcome clearly and starts orienting around that outcome, that's when it starts shedding a lot of what was good about the kind of pluralistic mode. So let me just give you, this is like from my life, right?

So a university I've been employed at has started moving towards orienting everything around student success where student success is defined as graduation rate, graduation speed, salary after graduation. When you define that outcome, it becomes really easy to target and the people that are targeting it, as you say, the people that target it well tend to rise. People that are willing to go all in on targeting that stuff instead of caring about all the other weird shit that education might be for tend to have better recordable outcomes and tend to rise in the university structure.

So I actually am happier with something as complicated as education, in which different groups have different conceptions of values about what they're doing and we don't actually try to settle it, and we don't hold them all to a high articulability constraint, because I think the business school and the CS department have more easily articulable outcomes than the creative writing department or the art history department. A lot of the stuff that I'm writing right now is about, like, this defense of the inarticulable.

Paul Smaldino (20m 4s): It's a hard question to answer because I think that there are multiple levels of organization going on here. There's like a top administrator level because these institutions tend to be pretty hierarchical. I think at the top of the hierarchy there has to be some sort of reasonably well defined goal even if it doesn't specify what every individual component of the organization or institution is doing. And I think that that trickles down to those levels though and creates incentives. Regardless of whether or not it's a good thing I think there has to be some sort of coherence at the very top level, even if it doesn't dictate what each individual component is doing.

C. Thi Nguyen (20m 45s): What's the force of that "have to"? What are the grounds of the "have to"? Why does it have to?

Paul Smaldino (20m 49s): Assuming that institutions tend to be reasonably hierarchical, there's going to be a top level, which is, let's say, a chancellor or a dean or a board. They're gonna have to make decisions which affect the entire rest of the organization. And there are sort of two options there, right? One is that they have coherent goals, which is going to shape the choices they make that affect the rest of the institution. And the other is that they have incoherent goals. Now you might argue, well, this is the point of having a board, where it's not one person but multiple people, and you can have different factions arguing for different things, and this sort of weeds out terrible decisions and allows for diversity.

And I think that there is a good argument to be made for that. But often there's selection even then within these boards and stuff, and sometimes that leads to certain kinds of directions, certain kinds of coherence of decision making, and possibly I'm making your argument here, which is to say that it may be better to have diversity even at the top to avoid it going down into some sort of terrible perverse incentive direction. I just wanna say before we move on that I've been increasingly studying the literature on things like collective problem solving and norms in structured populations, and over and over and over again this literature points out, through empirical studies as well as lots and lots of different kinds of formal models, the importance not just of diversity but of a kind of structure where everyone is not always interacting and talking to everyone, but you have different groups that are able to pursue different paths and different goals and do different things, so that, for one, you then have a population of different solutions which can then be compared, and so you constantly have a breeding ground for new ideas and new innovations. One thing that most of those models don't look at, but it is something that is important that you've been getting at, Thi, is that individuals within different groups have different goals, and if goals are defined vaguely enough at an institution, it may allow for the pursuit of different kinds of values to be simultaneously coherent with some sort of overarching organization.

Yes, on the surface that seems okay to me. I think it depends again what the sort of larger goals of the kind of society we want to be living in are.

C. Thi Nguyen (23m 14s): Part of this literature comes from a bunch of philosophers of science who ended up moving into, like, modeling, and I think interacting with a lot of the people from your world, Paul. So for me, from my background, the key figure here is this guy named Philip Kitcher, who's a philosopher of science and amazing, and he has these papers on the division of cognitive labor and the importance of cognitive diversity. It's maybe the point that kind of changed my life as a philosopher and an epistemologist. His point was this. Imagine you have a population of scientists and they're all identically rational in the same way: they believe whatever theory has the most evidence behind it. This seems good, right?

They're all behaving rationally, they're believing the evidence. Kitcher says this is actually a terrible way to structure science. So imagine there are three theories, theory A, theory B and theory C. Theory A has 60% of the evidence behind it. Theory B has 30% of the evidence behind it. Theory C has 10% of the evidence behind it. If all scientists are operating on the same rational procedure, believe whatever has the most evidence, they'll all hop on board theory A, they'll all pursue theory A, and that's actually terrible. You don't want that, cuz we're not actually sure of theory A. What you actually want is most of the scientists but not all pursuing theory A, some pursuing theory B, some taking long shots on theory C, or like maybe a few taking a real long shot on a theory D that has like only a tiny bit of evidence for it.
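
A toy rendering of Kitcher's scenario, assuming, purely for illustration, that a theory's share of the current evidence is also its chance of ultimately panning out, that a theory can only pan out if someone actually works on it, and that theories succeed or fail independently:

```python
# Hypothetical theories and their (assumed) probabilities of panning out.
theories = {"A": 0.60, "B": 0.30, "C": 0.10}

def community_success_prob(allocation):
    """Probability that at least one pursued theory turns out to be right.

    `allocation` maps theory name -> number of scientists assigned to it.
    A theory only counts if someone is working on it; independence across
    theories is a simplifying assumption for the sketch.
    """
    p_all_fail = 1.0
    for name, p_true in theories.items():
        if allocation.get(name, 0) > 0:
            p_all_fail *= (1.0 - p_true)
    return 1.0 - p_all_fail

everyone_on_A = {"A": 100, "B": 0, "C": 0}
spread_out    = {"A": 70,  "B": 20, "C": 10}

print(community_success_prob(everyone_on_A))  # 0.60
print(community_success_prob(spread_out))     # 1 - 0.4 * 0.7 * 0.9 = 0.748
```

Under these made-up numbers, the community that hedges across theories does better than the one where every individually rational scientist piles onto theory A, which is the division-of-cognitive-labor point in miniature.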

You want a diverse environment in which people have different sensibilities. Maybe some people are conservative and follow most of the evidence, and maybe some people just fall in love with weird fucking theories, and once in a while those pay off and that turns out to be your general theory of relativity or your hyper string theory or whatever, right? So I think the way you just put it now, Paul, really helped me think this through. So I've been thinking a lot about why it might be good to have inchoate values. So I had an individual argument that I've been working up.

So the individual argument is something like this, when you highly specify what your values are, it's very easy to dismiss from your attention things that fall outside of that. So if you highly specify your value is money or lots of publications, it's really easy to dismiss anything that doesn't fall inside of that. And so you're functionally close-minded. You won't explore the kinds of things that might change your mind. Like someone who thinks that all that's valuable is money won't, I don't know, read poetry or listen to weird fucking music that might change their mind. So that's the individual level.

You just gave me an argument that I hadn't heard before that I love, and that argument is when you inchoately specify something at the institutional level, that permits a greater diversity of interpretations, which permits the kind of thing that Kitcher wants. So the university, I think it's actually awesome if the university says we care about student education and student intellectual wellbeing, what's that mean? Every department figures that out for themselves. That's actually really good. That gets the kind of diversity you want. What's not good is we care about student education and that means high salary.

That is the thing I'm worried about.

Paul Smaldino (26m 4s): First of all, yes, I think what's cool about this kind of division of cognitive labor stuff that Kitcher talked about, and other sort of philosophers who've gotten into modeling like Kevin Zollman and Cailin O'Connor and others, a lot of their work has been really taking a deep dive into this. It echoes similar ideas that have been popping up over the last few decades in organizational science and business and psychology and sociology and psychophysics, from totally different backgrounds and different modeling frameworks. People have come up with almost the exact same set of ideas.

We wrote a paper recently called 

Maintaining Transient Diversity

 as a general principle of collective problem solving. The point is just that any of these mechanisms that allow for a diversity of opinions or beliefs to persist and be maintained over time are going to be a benefit in the long run for coming up with better solutions at the group level. There are many different mechanisms, but they're all doing it through the same route. What you were just talking about before, though, about this kind of ambiguity: a paper that hugely influenced me when I read it a decade ago, and I'm gonna get the title wrong probably, was something like "Ambiguity as a Strategic Communication Device."

C. Thi Nguyen (27m 19s): I know this paper.

Paul Smaldino (27m 20s): It's Eisenberg 1984, in like Communication Monographs or something. It's this great rambling paper and this idea has been massively influential to me, but he's basically arguing that it would seem like the point of communication should be clarity: to be as clear as possible, for me to say I mean this and for you to know exactly what I mean, and that's the goal, and ambiguity is therefore a bad thing. And he argues that actually, no, ambiguity is a really important thing, and other people have expanded on this.

So now the way I think about this is like a blend of Eisenberg and then other people who've come a bit later. But in a lot of ways, if you're trying to get, let's say, a coalition together, you don't wanna say this is exactly what our goal is and this is what we're trying to do. You wanna use vague terms so that a bunch of people can sort of map whatever they think the goal is onto it and say that's consistent. It also leads to a reduction in accountability, cause after you do something and someone says, you said you were gonna do this, you say no, listen to what I said.

It's consistent with what I did because what I said was ambiguous. So it's pernicious in a way too. It's used nefariously in a lot of ways by let's say politicians and other kinds of leaders to avoid accountability. But it's also just a general principle of communication I think.

C. Thi Nguyen (28m 48s): So there are two major things I wanna talk about that are in two totally different directions. So let me just name them, so maybe Michael can hold me to this, cuz I'm dying to talk about both of them. The first is kind of about large scale coherence and the second is about the metaphysics of ambiguity. So let me do large scale coherence first. This is going back to our discussion about what the grounds are of the claim that there has to be coherence. So lemme give you my most cynical worry. This is my most pessimistic nightmare. Here's the pessimistic nightmare. It is really good and healthy for human beings to live in an ambiguous environment with a pluralistic set of goals, many of which are inchoate.

That is in essential tension with the methods of large scale collective organization. If it's true that for an organization to cohere it needs to have clear policies so it can act coherently, then we should not expect that kind of ambiguity to survive at scale. And I think, what you are describing, so I tend to think about, since I'm a philosopher, like what makes something constitutively coherent, and what you're describing is a kind of evolutionary process. You know, some organizations are gonna be more coherent than others, and some people are more interested in coherence, and the people that are more interested in following the strict outcome are gonna rise in the organization, and the organizations that have clear outcomes are gonna be better at achieving those outcomes.

And so our world's gonna be full of large organizations staffed with people that have very, very clear specifications of outcomes, and there's something inhumane and bad about that for individuals. But that's what happens when we need to organize in large scale collectives. I've probably oversimplified everything.

Paul Smaldino (30m 23s): This gets at something I think really important, because it's forced me to think really hard about something. So in a few papers I argued that sort of reducing ambiguity and being as clear as possible is a really important goal for science, not for living in society, but for being a scientist and doing science. And I think these are different things. If we're gonna do science, we're gonna say I have a theory about how things work. I have a hypothesis that I want to test. You need to be able to specify really clearly the scope of what that theory is, when it holds, when it doesn't, what you mean by the hypothesis. That's how you falsify something: you need to know exactly what you're talking about. And it's a problem when scientific culture tolerates too much ambiguity. There's always a caveat there, which is that at the early stage of theory development, sometimes you need ambiguity, cuz you don't actually know really what you're talking about yet. And so you need to allow for multiple interpretations to be possible until you can figure out what you mean.

But a mature theory should be minimally ambiguous. This is at odds with things like metrics, in terms of, let's say, how to evaluate something, because people think, oh well, it's scientific, therefore I want to use this to impose a value judgment on something. It's better because it has a higher score on this. But that's not what science is actually able to do, right? Science can say it has this score and it measures this thing, because what it measures is this. If you say what it measures is this, and therefore it means this other thing, that's a problem, cause that's a false mapping. And it's not really about ambiguity versus precision, it's about, I think, the imprecision of the mapping between the measure and the term.

So if you wanna measure something like happiness or economic prosperity, you can say, well, we'll measure the Gini coefficient, we'll measure GDP. But those are rigorous, clearly unambiguous measures. They have a meaning: this is what they are, this is how we measure them, and we can compare things on this measure. And that's not problematic until you then say, and it is better to have a higher GDP, full stop.
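
To make the point concrete, here is a minimal sketch, with made-up incomes, of the Gini coefficient as a computation. The measure itself is unambiguous; whether a lower Gini or a higher GDP is "better" is the separate value claim Paul is flagging.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient of an income list: 0 means perfect equality and the
    value approaches 1 as one person holds everything."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard closed form using cumulative sums of the sorted incomes.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Two made-up societies with the same mean income but different spreads.
print(gini([40, 40, 40, 40, 40]))   # 0.0
print(gini([5, 10, 20, 45, 120]))   # ~0.53
```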

C. Thi Nguyen (32m 44s): We're on exactly the same page. Like I think I've gone through this process so many times where I've read some piece of pop science journalism that says, you know, science has proved that people are happier when they do X. And you look at the study and it says people report having more productive work hours when they do X, and you're like, that's not it, there's a big-ass gap, right? This is leading into, like, one of the things I've been thinking about obsessively. So I keep seeing this gap where there's the deep human value and then there's the metric, the operationalization, the precise thing we study, and I've always wondered why there's a gap there.

One possibility for me is the gap is just contingent. We can find a way to precisify our value. We just haven't gotten there yet. Another possibility is it's just an institutional thing. You could offer a precisification of happiness or wellbeing but we're just unlikely to get it in the kind of bureaucracies we have. But you could do it. And the thing I've been really interested in is maybe the possibility that there's some deep metaphysical way in which values resist explicit precisification that they can't be.

And that's what I've been trying to chase lately.

Paul Smaldino (33m 54s): This is great because I write about this a little bit in the textbook on modeling that I have coming out. So there's this old paper from the, I think 1960s by Eugene Wigner, the Nobel Prize physicist. It's called something like on the 

Unreasonable Effectiveness of Mathematics

, the fun paper. And he is like, there's no good reason why mathematics should work as well as it does and there's no good reason why there should be a tool that allows humans to predict things as well as math does.

There's no good reason, it's kind of nuts, and we should all just be grateful. And then he says some other things, but he's basically just kind of in awe about how great mathematics is and how there's no good reason why it should be, and it's pretty cool that it does work so well. I think that there's a counter to that, which is that not everything is that easily described by mathematics. And there are lots of things which mathematics is not that effective at describing, and it's actually just that the things that were well described or easily described by mathematics are the things that were discovered using mathematical tools.

They're the things that lend themselves to it, that were amenable to mathematical inquiry. And a lot of the things that we're interested in, in terms of social science and cognitive science and the related philosophical inquiry, are things that are much less tangible in terms of this kind of specification. And you can see it in, like, a physics equation, a physical theory, whether it's about mass or electricity or something else. You have a theory about how things work and then you can write out equations, and all the terms in the equations have units and they're all directly related to the things that are measurable.

The theories are directly about relationships between things that are measured and in social theories and cognitive theories so often our theories are about relating constructs and then we have proxy measurements. But the theory isn't about the relationship between the proxy measures. The theory is about the constructs and the relationships between the constructs that are social in nature, that are cognitive in nature but aren't the things that are being measured.

And so there's this gap and I don't know the extent to which that gap can be overcome. I think that we can get some distance otherwise I wouldn't be a modeler of social systems. I think that we can do things but it's necessarily going to be a bit more abstract.

C. Thi Nguyen (36m 32s): That was one of the more amazing things I've heard in a very long time. Let me try to say some of that back to you to see if I got it, cuz I think we are very much on the same page. One way of putting it, and this relates back to the Silicon Valley thing: Silicon Valley looks good if you let it define what counts as good. You can say math can explain everything if you've restricted the scope of everything to the kinds of things that math is good at explaining. This is kind of my background worry about the world, the evidence-based outcomes world, which is, if you insist that the only outcomes we're gonna pay attention to are the ones that are amenable to large scale measurement and mathematization, then you're gonna leave out any of the outcomes that aren't the kind of thing that admit of that.

Okay, I'm gonna go into something that I'm unfamiliar with, and then you can kind of correct me. My kind of like rough thinking here is that math and science are really good at tracking qualities that are abstractable, and abstraction involves a kind of similarity across different contexts. So the place I've thought about this the most is in medicine, because I talk to people in philosophy of medicine. So there's this guy Jacob Stegenga, who I find really interesting, and he's a philosopher of medicine, and one of the things that he convinced me of in a talk is that the gold standard in medicine right now is the double-blind study.

And the double-blind study works really well when you have an invariant fix for an invariant problem that works the same across contexts. So antibiotics are great for this, because the same bacteria is infecting lots of different people, and the same antibiotic is gonna fix it. And so that's the kind of thing that we can pick up on easily using the tools of math and science. The things that are really hard to pick up, from what I can tell, are, like, problems and fixes that are incredibly variable and different between people in different contexts.

So say complex psychological problems that vary between cultural contexts and for people in general, and fixes that require a lot of fine tuning and adaptation to the particular context, are not the kinds of things that are gonna be picked up by the methodology of the double-blind study. So for example, if there's a complex variable psychological problem where the intervention is often something like yoga, where the yoga isn't something that everyone has to do in exactly the same way but has to be heavily adapted to different people's psychologies, the success of that is not gonna be picked up well by a double-blind study.

Paul Smaldino (38m 55s): There are a number of studies that report that all mechanisms, all strategies for psychotherapy are about as effective as any other on average. And what that doesn't pick up is the fact that certain methods work better for certain people.

Michael Garfield (39m 11s): I wanna anchor this again in the frame of emergent political economy. I want to think about again this question of, okay, so we know that there's this tension between precision and ambiguity. Like, I'm just gonna cut away most of what has been discussed here because it's been done very well. There has to be a way, and I'm curious based on your respective channels of research, there has to be a way to assess contextually what kind of portfolio of metrics are most dynamic and adaptive and, like, holding one another in balance, because obviously, like, if you winnow it down to just one or two things we're screwed.

But clearly, as you've both noted, what we witness in the wild is a remarkable diversity of different approaches to this, and you're right to note that economies at scale seem to prefer what some might call a more sociopathic strategy as something gets larger and larger. And then this is where the issue of covert signaling and transparency gets kinda interesting to me, because you have partitioning of the social space, like you're all standing in the same room but you're still somehow finding a way to cut it up so that you can speak to each other.

This is my attempt to articulate the two different axes I recognize in this conversation, both of which are about the fact that there is clearly not a single regime that works for everything, and then also simply that we must, we must have a regime.

C. Thi Nguyen (41m 5s): So you said that there must be a package of metrics, and I really wonder where that "must" comes from, if it's really true that there must be. Now we're distinguishing two questions: that there must be some way of evaluating something, and that there must be a metric for evaluation, cuz metrics are really different. So here's a worry, and this is a little detour through a literature from science and technology studies: what makes a metric? So if you listen to people like Theodore Porter, this historian of quantification, the difference between just an evaluation and a metric is that a metric proceeds from the shared application of some kind of measurement procedure that can be executed by different people across different contexts.

So I can evaluate all kinds of things for which there is no metric, right? I have complex intuitive aesthetic evaluations about things I love that, when I try to articulate them, fall apart, and with those evaluations, different people can't perform the same kind of evaluation, cuz that evaluation requires some kind of subtle sensitivity. So if you buy this view that what it is to be a metric is for the measurement procedure to be exportable across contexts, that requires that the criteria for the metric be something that many people can share and understand.

But one worry might be that there are a lot of standards of evaluation that don't admit of this kind of cross contextual universalization.

Michael Garfield (42m 28s): This is akin to Melanie Mitchell's barrier of meaning with artificial intelligence, and the fact that the trans-contextual is the holy grail. It's elusive in machine learning.

C. Thi Nguyen (42m 40s): So there's a whole branch of social philosophy and critical theory that says something like look, there's this tension between localized knowledge and hyper universalized knowledge and the metric is the darling of cross contextual universal knowledge but it misses out on the kinds of things that require a huge amount of subtle situational awareness or context. My first question is like why must there be a metric by which we can judge success?

Paul Smaldino (43m 10s): I think that there is a metric regardless of whether or not we want there to be. And I think there's a metric regardless of whether or not we can articulate it. I respond a lot to what you're saying, and I generally agree, unsurprisingly, right? I agree with most of what you just said, Thi, but I think that, to try to get it back to the question you asked, Michael, I'm gonna get a little bit speculative and big picture here, and I'm also gonna try to bring it back to what we were talking about earlier about diversity and multiple goals coexisting. What we have now in a lot of our society, especially in the U.S. and some of the West, now we have things like value capture and we have certain selective filters that shape the incentives and therefore the values and the conversation. They change the conversation and the metrics of success and the strategies that are allowed for success in the business world, in the economic world, in the political world. There are certain strategies that do not work and there are certain strategies that are very effective. And very effective at what, right?

Very effective at getting your voice heard, at getting economic success, at getting social influence, so that other people then see these strategies work and emulate them, whether by design or by just virtue of the filter also applying to them. Now I think what would be good is for there to be a diversity of opinions and viewpoints in the conversation, in the economic conversation, in the political conversation, right?

I think most people want this. There are some people who say I only want my side to dictate whatever everyone else does, and that's awful, it's fascism. But I think a lot of people want the diversity, they just don't know how to make it happen. And you get this extreme polarization, this ever more insane sort of clown show that we have in our political world, and ever more extreme inequality and kind of insanity in the business world.

And it's by virtue of the fact that we don't have a good diversity of viewpoints being promoted in the conversation. We don't have an incentive structure that allows for multiple competing viewpoints. The filters that are in place, because they've been captured by certain interests, steer conversations in certain directions and not in others. I know I'm sounding a little bit like a crazy conspiracy theorist here, but I don't mean it in that way, like there's a dark cabal.

What I mean is just by virtue of the fact that the media is controlled by a small number of companies, political organizations are dictated, in the US at least, by two parties that are able to shape the conversations that are happening. We see this happening now in Congress, where the competition for the House Speaker is just being shaped in these insane ways by virtue of the fact that certain kinds of conversations are just off the table, and it's because the people having those conversations aren't picked up by news outlets, they're not supported by donors, and therefore they get shut down, and the landscape of conversations is heavily skewed in certain directions.

It seems to me the incentives need to change, and how we get there from here is continually the big question.

C. Thi Nguyen (46m 44s): I think I'm starting to pick up on a profound difference in our approaches that I think is really interesting. I mean, obviously there's, but like I think I'm starting to see it better, and I think it's a difference between you saying there always is a metric and me saying, like, there's not always a metric, and I think we mean very different things here. So my guess is what we're saying is actually quite compatible. I think what you're saying is something like, the world will always find a metric. There's something that it'll judge you on, and that something will rise to the top and be effective and function as a filter. And what I'm saying is something like, sometimes you value something and there isn't a good public metric that will capture what you actually value.

Paul Smaldino (47m 19s): Yeah, I agree with that.

C. Thi Nguyen (47m 20s): There might be something out there, there might be a public metric that is judging you for your actions on some scale that is public but it's not gonna capture the thing that you care about. I was realizing that when you were saying like it's inevitable that we arrive at a certain coherence, you're really studying these kinds of like large scale social forces and what bubbles to the top. And a lot of the times what I'm thinking about is like how does an individual survive in this fucked up world? And one of the answers that I get from the older philosophers I read is like a sense of irony and a sense of irony is like not necessarily believing in the metrics and filters that bubble to the top and being like, wait, that's part of this fucked up thing that's going on that's driving this kind of evolution of these like gross metrics.

And I think you're like, but that's not gonna matter, that's not gonna drive the large-scale filters.

Paul Smaldino (48m 14s): I agree with that. I think that a sense of irony is massively important. It's a hugely important tool for successfully navigating this world without going crazy, and also for sort of seeing things in multiple lights. And I think that also these kinds of conversations are important, because whatever the terms of the conversation are, that's what gets pushed around. So if you can figure out what's wrong with a metric, then you can point out that it's wrong, and therefore the conversation shifts to, is there something better?

C. Thi Nguyen (48m 47s): This is I think again a core nugget of difference between us. It seems to me that you're like look these incentive systems are worse, we need to find a better incentive system.

Paul Smaldino (48m 59s): I think that there are always bottlenecks, and the bottlenecks are rarely random, and so to some extent people have the ability, or some people have the ability, to influence those bottlenecks and who's more likely to get pushed through. There's also, I guess, a larger question, which is can we change where the bottlenecks are and what the bottlenecks are, but there are always gonna be bottlenecks, and therefore I think there will always be certain sets of characteristics that lead individuals to be more likely than others to succeed in certain domains.

So in that sense I think that there will always be incentives, but it doesn't necessarily mean that there have to be quantitative metrics, and it doesn't necessarily mean that there has to be one big bottleneck that everyone's trying to get through. Maybe there are many small bottlenecks, and that might be better.

C. Thi Nguyen (49m 47s): One of the most influential ideas for me recently has been from James Scott's book 

Seeing Like a State 

and Scott has this idea that what large organizations want is legibility, and legibility is a kind of clear coherence that's aggregatable to a kind of higher level view. So a simple version might be like, look, if you're a CEO, you can't have every department have its own obscure little value system. You need a single collective value system, or something close to it, so you can get production and profit measures and aggregate them and, as Scott says, bring the whole organization into view.

So one way to put my worry is that what would be good for human life is an incredible diversity of bottlenecks which work on different, often non-metrified systems. If Scott is right, what large scale institutions will tend towards is a kind of monolithic measurement system that moves towards, let's have a small number of bottlenecks and let's have a unified measure. And so, like, the heart of my worry is that organized behavior at scale is inevitably in tension with what a diverse population of individuals needs, and that's just an unfixable problem.

Let me just give one quick example. In the educational system, the dominant measure is GPA. You can add other stuff, like I can write in my notes all kinds of other shit about what students are good at. That barely matters, cuz that's not aggregatable when a law school admissions officer is doing their spreadsheet to do the first main cutoff. Nothing in my weird little notes is gonna make it into that first-level cutoff. The big moving forces just look at GPA.

Paul Smaldino (51m 24s): Yeah, so one of the things that got me into social science, and then later cultural evolution, was that I used to have a lot of conversations back in my twenties about what would be great, what would be better, what would make society better: it would be better if everyone did this, it would be better if things were different in this way. And one question that kept haunting me and still haunts me is: let's pretend that you've come up with a better solution. How do you get there from here? It's not that easy to change individual behavior, to change institutional structure.

You could say, look, if we all did this other thing, it would be better for 90% of people, and then the 10% for whom it wouldn't be better are definitely gonna be like, no, no, no. But the rest of the people are gonna be like, well, maybe, but if everyone else isn't doing it, I'm not gonna do it either, because it's costly, or I'm gonna look like an idiot, or I don't trust you, or whatever. And so the question of how you move from something in one direction to another is always kind of humming in the background in all these conversations.

C. Thi Nguyen (52m 27s): It's funny, I think I can again feel the difference between the eye of the social scientist and the eye of the philosopher. Your concern is: here are some fixes that might work, but how do we implement them? What I'm chasing is: what if there's an unsolvable problem baked into our nature, which is that social organization at scale will never adequately serve individuals?

Paul Smaldino (52m 49s): Well no, I mean I think that that's an important problem too.

Michael Garfield (52m 52s): We need Jim Rutt on this conversation, right? Because ultimately this is about whether we have actually overshot the scale at which we can effectively coordinate, and all these studies, you know, I know it's controversial, but like the slowed canonical progress of science, these kinds of questions seem related in a way to the sigmoidal curve of population growth. Have we risen above a level at which intelligibility can actually happen, and if so, where was that level?

I mean, I remember, you know, Sam Bowles is another person who has been looming large for me over this whole conversation, not only for his work on the problems of viewing humans as agents that can be governed through behavioral engineering via incentives, but also because of the article he wrote with Wendy Carlin in VoxEU in 2020 on the battle for the COVID-19 narrative, which talked about the return of civil society, you know, meaning the mesoscopic world of guilds and church groups and sports clubs and pubs and neighborhood organizations, mutual aid networks, and all of these other human-scale, sub-Dunbar-number structures that we found ourselves suddenly very much in need of and yet were eroded by the radical success of both state power and market power. In every way it feels like we are in a kind of clash of the titans right now, where we watch institutions going up against large institutions and people are struggling to remain unpulverized underfoot. At some point something has to give, right?

C. Thi Nguyen (54m 51s): I think the problem is even worse than what you're describing. I'm gonna try to pessimize what you just said. I mean, when you ask a question like, have we gone past the ideal scale of humanity, that implies that there is an ideal scale that we could plausibly hit if we could somehow convince people to scale back. For me the real worry is that there's no ideal scale of humanity, cuz different things we wanna be involved in demand different scales. Science works really well at a huge scale; problems like climate change are massive-scale problems that everyone has to get together on.

And then there are other things that work at medium or small scales, and there's just this unsolvable scale clash. My real worry is that different parts of us and our needs call us to different scales, and there is not an optimal scale. And so I have to participate in these different scales that are in tension with each other, and also the big scales tend to win, because they get really powerful and so they squash out the small scales.

Michael Garfield (55m 50s): Over short time scales, though, right? Because over long time scales, this is the can-a-large-complex-system-be-stable question. It's like, at some point those things tend to implode. So it's not about an equilibrium so much as it is about a dynamic balance, or a zone in which these different forces are able to coexist. How do you deal with all of this in light of both the need for global coordination and bio-regional organization and neighborhood-level personal relationships, et cetera?

Paul Smaldino (56m 33s): The thoughts I'm having right now are, I think, maybe even too cynical for a podcast. I'm gonna hold off on some of them. What you're talking about is important because it's complex systems embedded in complex systems embedded in complex systems, right? I mean, we have thousands of years of human history where, you know, since the agricultural revolution and the dawn of city-states, it's just been constant change, and one could argue that on a long-ish, you know, say century time scale, we haven't been at equilibrium in 10,000 years.

What's next? Right? How are all these nested feedback loops churning around between, you know, societal structure and environmental structure to change the shape of society in the next couple hundred years? Peter Turchin probably knows this better than I do, but this is where thinking about these things at population scales rather than individual scales really helps me, because when I think about things at the individual level, it's like, what can I do? How do I live in this society, right?

I find myself slightly distraught, like, well, I don't know, I'm just a speck in the wind getting blown around by this maelstrom of society. By trying to sort of think about the way the whole system is evolving, I can see it's not that I'm hurtling through space, it's that we're all hurtling through space together in similar ways, and that creates patterns that can then be identified. What do you do with those patterns?

Well then, you know, you get a professorship and you get to talk about it. That helps sometimes.

Michael Garfield (58m 11s): So let me just ask the both of you then: we've done a very good job over the course of this conversation articulating concerns. What, if anything, do you recognize as affordances, in the sense that the curse often has its inherent gift or opportunity? These tend not to be separable in that way. What is it about the nature of this particular insane pickle that may ultimately prove of benefit?

And I'll just volunteer something, which is that I think concerns regarding the possible subsumption of the entire world under one despotic regime are unfounded. If you think about it like that, then we can say that there is a sense in which we can return to a valuation of, if not a quantification of, endeavors and communities of manageable scale, rather than simply being worried that we are growing beyond our capacity to manage this stuff.

I mean, we live in a wilderness at this point, but one of our own construction. So I'd love to hear your thoughts on that as a kind of closing: what do you see as the handle on this that people can take hold of and actually use to refine the way they work within, strategize for, and lead organizations, and the way people conduct the inquiries of science and philosophy and the arts?

Paul Smaldino (1h 0m 0s): I think there's never been a better time in terms of the affordances of information gathering, in terms of the opportunities to gather information. There's a dark side to this, which is that lots of people are controlling what kinds of information go in and out in different places, and uncritical thinkers can get sucked into really terrible information zones. But I think that for those of us who are interested in inquiry, I mean, it's amazing the fact that we're all in three different cities talking to each other right now.

I'm able to have this conversation, and I was able to instantly read, you know, one of Thi's papers this morning because I was like, let me just get that. I can collaborate with people across the world, which I do all the time. And this is not just in science, right? I mean, I have art projects where I collaborate with people in different cities, and I send them files over the internet instantly and disseminate that widely. And that's pretty cool, and it's, I think, a really nice benefit of the world that we live in.

But also, to get to the specific problems you're talking about, I think that you're right, which is that a sort of global, or even, you know, let's say U.S. national or European national or large-scale, despot taking hold in a really strong way is gonna come up against a lot of resistance because of this kind of information flow. Because controlling information flow is really difficult, and because the affordances that we have are really amazing.

Now this is not universal, right? There are ways to cut those things off, but there are also ways to, you know, short-circuit those controls, right? We see this, let's say, in China, where the government has a lot of top-down control over what information flows over various networks. But we also see continuously the rise of various kinds of encrypted signals or covert signals being used by people, or the use of various sorts of networks or even in-person mechanisms for getting together and sharing information, that are able to bypass these systems in a lot of communities.

It doesn't mean that there's not a lot of oppression, there's a ton of oppression, and globally, sort of averaged over, right, that oppression is not gonna end anytime soon, and it's really dark and sad. But there are still opportunities: communities continuously, continuously, continuously rise up that balk at these kinds of top-down controls, that balk at these kinds of oppressive means, that are able to find more meaning in a more, let's say, egalitarian or convivial lifestyle.

What kind of system wins out in the long run? You know, maybe I'm less optimistic there, but in the shorter run, I think there's a lot of cool stuff happening, and maybe that's enough.

C. Thi Nguyen (1h 2m 56s): If you're talking about affordances for survival in a highly scaled-up environment, I think actually we kind of already know what those are, and they're kind of old-fashioned. I think about things like the play attitude, the aesthetic attitude, irony. I think there's a long-standing recognition that various attitudes of kind of personal, self-guided, weird navigating towards your own joy or whatever are resistant attitudes towards large, scaled-up, simplified value systems.

So that's not a social-level solution; it's more like, here's what might help you survive.

Michael Garfield (1h 3m 41s): Well, for a show that's supposedly intended to sing the praises of an institution committed to parsimonious quantitative models, I hope this episode doesn't get me canned, but I think that this is the fray at the edge of the weave of the scientific enterprise. And as such, it is precisely what keeps people motivated to continue doing research, whether that research be quantitative or otherwise.

Paul Smaldino (1h 4m 14s): I think that actually myself and others, a lot of people, are working on precise quantitative models of these kinds of big-picture social dynamics. The thing is that it's very hard, and I think that having these more ambiguous verbal conversations is really important for grounding and figuring out what's important to build into our more formal models. And I think the back-and-forth matters, right? You can't only be precise and quantitative. This gets back to what we were talking about earlier. You also have to be vague and ambiguous, especially when you don't know exactly what you're talking about.

But that may help us kind of circle back in on the more quantitative aspects, the things that we really care about, and try to build models that capture some of those things. And it's gonna take lots of families of models and different perspectives and things, but I think that increasingly there are really smart people from different fields talking to each other to try to get at quantitative descriptions of the social world. And I'm optimistic that in the next few decades there are gonna be some really, really cool advances in our understanding of these kinds of things.

Michael Garfield (1h 5m 23s): I called out Thi's paper Transparency Is Surveillance at the beginning of this conversation, in which he says in the abstract that by forcing reasoning into the explicit and public sphere, transparency roots out corruption, but it also inhibits the full application of expert skill, sensitivity, and subtle shared understandings. So the question for me, I guess in parting, would be: if we have good ideas, what do the best complex systems science and the best philosophy tell us we can do to disseminate them? Because I witness this on a daily basis at SFI, that these people have brilliant ideas and no hooks into the machinery to actually implement them. The situation is moving faster than the explanation for how to course-correct can even be communicated. This is my quandary.

Paul Smaldino (1h 6m 21s): Yeah. So I agree with that. What I said I think was an attempted optimism, and I believe it at times, but I think that what you say is also correct, right? And it's something that, so to speak, keeps me up at night, which is that we are not there now. We don't currently have very good, coherent theories about social dynamics that work at scale. We know almost nothing about how cultural evolution works in a modern, wired, large-scale, diverse society, because so many of the models were developed for sort of pre-industrial societies, cuz those systems are simpler.

There's also, you know, the communication aspect, right? Let's say I have an amazing theory, that me and my team and collaborators have come up with a really coherent formal theory about the way things work. One, such a theory is gonna be flawed and potentially dangerous in the wrong hands, because if we really have a good formal theory about the way social systems work, it can be immediately exploited by, you know, nefarious actors. Also, it's gonna be really difficult to understand, but it's also important to disseminate that understanding, because people should know what to look out for, right? If you know how, let's say, certain actors are behaving and what the consequences of those actions are, you're more likely to be able to say, wait a second, I know the consequences of you doing that, I don't want you to do that. Whereas if you don't know why some actor is doing something, you're like, well, I don't know why they passed that law, I guess it's fine.

Thi's transparency argument I find actually extremely compelling, which, if I can badly summarize it in part, is that one problem with complete transparency is that most people are not experts in how to use the information that is being made transparent, and therefore they're going to draw flawed conclusions.

And it doesn't mean that there's a class of experts and a class of non-experts. It means that all expertise is domain-specific. So if I'm doing such and such in some domain, then I know how to use these theories of social behavior. If my plans based on those theories are then accessible to the public, I think the concern is possibly that people are going to misunderstand what the intentions are, misunderstand our motivations, misunderstand the reasoning behind them. And this can lead to problems.

I don't know how to solve this problem, but luckily we don't have those theories, so we don't have to make those decisions yet.

Michael Garfield (1h 8m 56s): Isn't the answer to act in secrecy? Like, I mean, that's messed up, right? Because that's what everyone assumes is happening, but perhaps they're not wrong, inasmuch as a good idea sometimes cannot be explained, I mean, this is the critique of social engineering through nudges, right? Rather than laying it all out. I actually found Peter Turchin's work through the science fiction novel of Michael Flynn, who wrote about two warring secret organizations that had figured out a way to quantify predictive future histories.

And they didn't discover each other for over a hundred years, because they didn't understand that the actions of the other organization were the X factor that was jamming their own calculations. I've really done it this time. Anyway, thank you, thank you both. Yeah, I wanna give you the last word.

C. Thi Nguyen (1h 9m 54s): Oh yeah. I'll give you the ultra-cynical argument that's in my head right now, the one that is kind of the secret heart of Transparency Is Surveillance. So this is again about the worries about government at scale. For a long time, I would say that the problem I've been most obsessed with is something I call the expert identification problem. That's: how does the non-expert figure out which expert to trust if they don't have the expertise? And one of the worries about a democracy is that it runs straight into the expert identification problem, right? Like, if we're democratically voting on what to do, we are, in aggregate, non-experts.

I mean, I'm not talking here about, oh, we are the experts and you all are not. Even if you are the world expert in X, you're a non-expert in a million other fields, right? So as an aggregate, we are non-expert. So here's the real worry for me: if you have the right solution, how would that get democratically approved? Helene Landemore is this political theorist I really like. She's part of a movement of epistemic democrats, and they think that democracy is the best way to harness the intellectual power of diversity. And the basic model is something like: diverse people will come up with a better set of solutions, and when you put them together, the best solutions will rise to the top.

And my worry is, how will the democratic entity recognize which are the best solutions? Because if the best solution requires expertise to recognize, and the democratic entity as an aggregate is not an expert, how will they figure it out? And that's a problem I'm not sure there's a solution to, and I also can't think of a better way to organize the world than democratically. So have a great day, everyone.

Michael Garfield (1h 11m 33s): I wanna thank you both for making it so clear why this enormous research collaboration, of which the Santa Fe Institute is at the center, is so urgent and so important, and why this is a noble goal to pursue, but ultimately one that will challenge us to our very marrow.

So thank you both for the work that you do.

C. Thi Nguyen (1h 12m 6s): Thank you. Thank you.

Michael Garfield (1h 12m 7s): Thanks for taking the time.

Michael Garfield (1h 12m 10s): Thank you for listening. Complexity is produced by the Santa Fe Institute, a nonprofit hub for complex systems science located in the high desert of New Mexico. For more information, including transcripts, research links, and educational resources, or to support our science and communication efforts, visit Santafe.edu/podcast.