COMPLEXITY: Physics of Life

Glen Weyl & Cris Moore on Plurality, Governance, and Decentralized Society (EPE 05)

Episode Notes

In his foundational 1972 paper “More Is Different,” physicist Phil Anderson made the case that reducing the objects of scientific study to their smallest components does not allow researchers to predict the behaviors of those systems upon reconstruction. Another way of putting this is that different disciplines reveal different truths at different scales. Contrary to long-held convictions that there would one day be one great unifying theory to explain it all, fundamental research in this century looks more like a bouquet of complementary approaches. This pluralistic thinking hearkens back to the work of 19th century psychologist William James and looks forward into the growing popularity of evidence-based approaches that cultivate diversity in team-building, governance, and ecological systems. Context-dependent theory and practice calls for choirs of voices…so how do we encourage this? New systems must emerge to handle the complexity of digital society…what might they look like?

Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I’m your host, Michael Garfield, and every other week we’ll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

This week on the show we dip back into our sub-series on SFI’s Emergent Political Economies research theme with a trialogue featuring Microsoft Research Lead Glen Weyl (founder of RadicalXChange and founder-chair of The Plurality Institute), and SFI Resident Professor Cristopher Moore (author of over 150 papers at the intersection of physics and computer science). In our conversation we discuss the case for a radically pluralistic approach, explore the links between plurality and quantum mechanics, and outline potential technological solutions to the “sense-making” problems of the 21st century.

Be sure to check out our extensive show notes with links to all our references at complexity.simplecast.com. If you value our research and communication efforts, please subscribe, rate and review us at Apple Podcasts or Spotify, and consider making a donation — or finding other ways to engage with us, including our upcoming program for Undergraduate Complexity Research, our new SFI Press book Ex Machina by John H. Miller, and an open postdoctoral fellowship in Belief Dynamics — at santafe.edu/engage.

Thank you for listening!

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast theme music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

Referenced & Related Works

Why I Am A Pluralist
by Glen Weyl

Reflecting on A Possible Quadratic Wormhole between Quantum Mechanics and Plurality
by Michael Freedman, Michal Fabinger, Glen Weyl

Decentralized Society: Finding Web3's Soul
by Glen Weyl, Puja Ohlhaver, Vitalik Buterin

AI is an Ideology, Not a Technology
by Glen Weyl & Jaron Lanier

How Civic Technology Can Help Stop a Pandemic
by Jaron Lanier & Glen Weyl

A Flexible Design for Funding Public Goods
by Vitalik Buterin, Zoë Hitzig, Glen Weyl

Equality of Power and Fair Public Decision-making
by Nicole Immorlica, Benjamin Plaut, Glen Weyl

Scale and information-processing thresholds in Holocene social evolution
by Jaeweon Shin, Michael Holton Price, David Wolpert, Hajime Shimao, Brendan Tracey & Timothy Kohler 

Toward a Connected Society
by Danielle Allen

The role of directionality, heterogeneity and correlations in epidemic risk and spread
by Antoine Allard, Cris Moore, Samuel Scarpino, Benjamin Althouse, and Laurent Hébert-Dufresne

The Generals’ Scuttlebutt: Byzantine-Resilient Gossip Protocols
by Sandro Coretti, Aggelos Kiayias, Cristopher Moore, Alexander Russell

Effective Resistance for Pandemics: Mobility Network Sparsification for High-Fidelity Epidemic Simulation
by Alexander Mercier, Samuel Scarpino, and Cris Moore

How Accurate are Rebuttable Presumptions of Pretrial Dangerousness? A Natural Experiment from New Mexico
by Cris Moore, Elise Ferguson, Paul Guerin

The Uncertainty Principle: In an age of profound disagreements, mathematics shows us how to pursue truth together
by Cris Moore & John Kaag

On Becoming Aware: A pragmatics of experiencing
by Nathalie Depraz, Francisco Varela, and Pierre Vermersch

The Beginning of Infinity: Explanations That Transform The World
by David Deutsch

[Twitter thread on chess]
by Vitalik Buterin

Letter from Birmingham Jail
by Martin Luther King, Jr.

The End of History and The Last Man
by Francis Fukuyama

Enabling the Individual: Simmel, Dewey and “The Need for a Philosophy of Education”
by H. Koenig

Encyclical Letter Fratelli Tutti of The Holy Father Francis on Fraternity and Social Friendship
by Pope Francis

What can we know about that which we cannot even imagine?
by David Wolpert

J.C.R. Licklider (1, 2)

Allison Duettmann (re: existential hope)

Evan Miyazono (re: Protocol Labs research)

Intangible Capital (“an open access scientific journal that publishes theoretical or empirical peer-reviewed articles, which contribute to advance the understanding of phenomena related with all aspects of management and organizational behavior, approached from the perspectives of intellectual capital, strategic management, human resource management, applied psychology, education, IT, supply chain management, accounting…”)

Polis (“a real-time system for gathering, analyzing and understanding what large groups of people think in their own words, enabled by advanced statistics and machine learning”)

Related Complexity Podcast Episodes

7 - Rajiv Sethi on Stereotypes, Crime, and The Pursuit of Justice

51 - Cris Moore on Algorithmic Justice & The Physics of Inference

55 - James Evans on Social Computing and Diversity by Design

68 - W. Brian Arthur on Economics in Nouns and Verbs (Part 1)

69 - W. Brian Arthur (Part 2) on "Prim Dreams of Order vs. Messy Vitality" in Economics, Math, and Physics

82 - David Krakauer on Emergent Political Economies and A Science of Possibility (EPE 01)

83 - Eric Beinhocker & Diane Coyle on Rethinking Economics for A Sustainable & Prosperous World (EPE 02)

84 - Ricardo Hausmann & J. Doyne Farmer on Evolving Technologies & Market Ecologies (EPE 03)

91 - Steven Teles & Rajiv Sethi on Jailbreaking The Captured Economy (EPE 04)

Episode Transcription

Glen Weyl (0s): You know, I think some people have as their image of science, you know, imagine we're sitting on the surface of a sphere, and they think they're kind of digging down to the core of the truth. They're like discarding the earth beneath them, the falsities, and they're gonna hit the truth. And I think that the image I have instead is there's an infinite vacuum outside of that sphere and there are trees growing out from the surface of the sphere in all directions. And as they grow out, more space is available and they branch and expand.

And that just goes on and it like gets more and more complex the further you get out. And that's kind of how I think of the search for the truth.

Cris Moore (41s): If everyone you're talking to is very similar and 10 people tell you the same thing, you have not received 10 bits of information. You've received somewhere between one and 10 bits of information. And if 10 people who are of the same ideological background, who are in the same political party, who watch the same media, tell you something, you should not take it as seriously as if you heard it from 10 different people who are really from 10 different backgrounds, different walks of life.

But of course the problem is that we're social primates. We seem to do a very simple kind of arithmetic intuitively when we add up the opinions of the people we see and have that affect our own opinion. And the problem is that the signals we are getting are very correlated.

Michael Garfield (1m 54s): In his foundational 1972 paper More is Different, physicist Phil Anderson made the case that reducing the objects of scientific study to their smallest components does not allow researchers to predict the behaviors of those systems upon reconstruction. Another way of putting this is that different disciplines reveal different truths at different scales. Contrary to long-held convictions that there would one day be a great unifying theory to explain it all, fundamental research in this century looks more like a bouquet of complementary approaches.

This pluralistic thinking hearkens back to the work of 19th century psychologist William James and looks forward into the growing popularity of evidence-based approaches that cultivate diversity in team-building, governance and ecological systems. Context-dependent theory and practice calls for choirs of voices. So how do we encourage this? New systems must emerge to handle the complexity of digital society. What might they look like? Welcome to Complexity, the official podcast of the Santa Fe Institute.

I'm your host, Michael Garfield. And every other week we'll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe. This week on the show we dip back into our sub-series on SFI's Emergent Political Economies research theme with a trialogue featuring Microsoft Research Lead Glen Weyl, founder of RadicalXChange and founder-chair of The Plurality Institute, and SFI Resident Professor Cristopher Moore, author of over 150 papers at the intersection of physics and computer science.

In our conversation we discuss the case for a radically pluralistic approach, explore the links between plurality and quantum mechanics, and outline potential technological solutions to the sense-making problems of the 21st century. Be sure to check out our extensive show notes with links to all of our references at complexity.simplecast.com. If you value our research and communication efforts, please subscribe, rate and review us at Apple Podcasts or Spotify and consider making a donation or finding other ways to engage with us, including our upcoming program for undergraduate complexity research, our new SFI Press book Ex Machina by John H. Miller and an open postdoctoral fellowship in belief dynamics at santafe.edu/engage.

Thank you for listening. Glen, Cris, it's a pleasure to have you on complexity podcast.

Cris Moore (4m 31s): It's a pleasure to be here. It's an honor, Michael.

Michael Garfield (4m 34s): So there are a few things that I want to walk through here with the both of you. It's a pleasure in particular to really get to use this show as a platform to discuss how we might actually implement complex systems thinking in the code of some of these emergent political economies that this particular sub series is devoted to. And Glen, some of the prep reading I did for this has made it clear that you've thought through this more than almost anyone.

And Cris, this work addresses concerns that you have raised in your own research and in your life as a responsible citizen. And I intend to draw on both of those in this conversation. I'd like to start with this piece, Glen, that you wrote called Why I Am a Pluralist because I think starting with the widest possible aperture and then tunneling down into concrete applications will be fun. But actually before we do that, can I ask you to give people a little background about yourself as a person and how you got into doing the kind of research and technological work that you do?

Cris Moore (5m 48s): I'm a physicist by training and I got into issues at the border of computer science and physics in my PhD thesis. And then I came to the Santa Fe Institute to be a postdoc. And so I had worked for a long time on things like quantum computation and phase transitions in computer science and statistical physics approaches to problems in computer science and machine learning. But I had this other side of my life that you were alluding to that I was also on the Santa Fe City Council for two terms.

I like to say I served two consecutive four year sentences and especially in the last few years, I think a lot of people like me have been wondering what does it mean to be a scientist in this moment and a citizen in this moment with the rise of authoritarianism, the climate crisis, the fragmentation of society and so on. So I had a bit of an existential crisis a few years ago like, oh, should I drop everything and work on climate? Should I drop everything and work on fighting authoritarianism, whatever that means.

What does that mean as a scientist? I have done some work on use of algorithms in criminal justice and we can talk about that a little bit if you want. And there's a fascinating debate there about the relationship between automated and human decision making. So having learned more about Glen's stuff and his collaborator's stuff, I'm a little inspired. I'm starting to sip the Kool-Aid because I find it, it is a more dynamic kind of optimism I think than what I sometimes find to be a kind of naive desire to remake the world on the part of people coming out of Silicon Valley.

And I confess that from too great a distance at first I thought that's kind of what Glen was doing, but now I realize he's doing something much deeper and much more engaged with thinking about the forces of history and the nature of society and the nature of humanity and I'm now quite excited.

Glen Weyl (7m 45s): That's very generous, Cris. In terms of my background, Michael, I think the way I describe it is leaning on a phrase that I often use to succinctly describe some of the work, which is drawn from Star Trek and the Vulcan philosophy of infinite diversity in infinite combinations. And it says that all truth, beauty and progress comes from the union of the unlike. And I think that that's a good description of my career. I was a socialist campaigner before I was 10 and I was head of the National Teenage Republican organization a few years after that. I was a technocratic economist and total basher of the Web3 space and now I'm something of a figure in that space and I've been connected to populist political movements of various stripes and also to like the neoliberal establishment.

I'm into these contradictions and to trying to make something of that. Now I'm very much sort of balanced across a few of these different spheres, leading possibly the largest Web3 research consortium in the world, which is a partnership between Microsoft, MIT, Harvard, EY, Pond Ventures and Protocol Labs, which you know well, Michael. But I also founded a civic organization called RadicalXChange, and I just set up a new academic research network that I invited you guys to, called the Plurality Research Network.

And if anyone listening feels called to academic research in those areas, definitely encourage you to reach out to me at glen at pluralitynetwork.org. I'm also involved in some things that are adjacent to politics, so I'm sort of in different spheres and try to weave the contradictions of those different spheres together into something better than the materials if possible.

Michael Garfield (9m 37s): Wonderful. Well, now that we've already been invited on a second date, let's back up and actually get to know each other a little better. You've got this really in-depth self-declaration, Why I Am a Pluralist, and in it you talk a little bit about the empirical and philosophical basis for pluralism. This is one of my favorite topics on the show. We had a great conversation with Brian Arthur a while back, I think episodes 68 and 69, in which he advocated for a methodological pluralism in his approach to economics.

You know, and of course Cris, when we had you on in episode 51, we spent a lot of time talking about related issues in terms of false optimization problems and like not knowing whether you're actually climbing the right hill to begin with. So I would love to hear both of you starting with you, Glen, if you could outline a little bit of your thinking on plural thinking and then I'd love to pass it back to you Cris to bolster that in whatever ways you see fit from your own angles on this.

Cuz I think that one of the most common mistakes I hear people make in their understanding of what it is that goes on at SFI specifically is that everyone here is questing after sort of the One Ring, you know, like one approach to dominate them all. And you know, fixing that I think is a matter of importance.

Glen Weyl (11m 14s): I think rather than be deductive, I'd like to be evocative with a few metaphors. One metaphor I like is that I think some people have as their image of science, imagine we're sitting on the surface of a sphere and they think they're kind of digging down to the like, you know, core of the truth. They're like discarding the earth beneath them, the falsities and they're gonna hit the truth. 

Michael Garfield (11m 39s): We're carving away everything that isn't science, you're saying.

Glen Weyl (11m 41s): Yeah and I think that like the image I have instead is there's an infinite vacuum outside of that sphere and there are trees growing out from the surface of the sphere in all directions and as they grow out, more space is available and they branch and expand and that just goes on and it like gets more and more complex the further you get out. And that's kind of how I think of the search for the truth that strikes people maybe initially as a little bit weird.

I guess that's how I interpret, like, The Beginning of Infinity, you know, David Deutsch's phrase. But another way to see that is ecology, like the way the species work. Species are all after some abstracted fitness landscape, I guess is one way to conceive of it. But somehow we don't end up with like one solution to that problem. In fact like we get a bunch of solutions to the problem and as that problem gets solved, it actually like changes the problem because like now there's all these other species you gotta deal with and there's other species that you can eat. There's all kinds of stuff going on, right?

That's how I think about it. Like, reflecting infinite diversity in infinite combinations. I think that there's just like a lot of things going on and you can build a lot of complexity from a small set of ingredients and you shouldn't expect to like get down to the core. You should expect to like branch out from the core. Another analogy I like a lot that Vitalik Buterin draws on is chess. He says there's no right way to play chess. I mean, and chess is a really simple game. I mean let's be honest, compared, y'all, like compared to any other thing that we're gonna talk about in this conversation, chess is really simple and yet still there's no right way to play chess.

Like there's different styles of play and some will beat others and whatever, but you can't just like optimize the game of chess otherwise there would be no game that would be worth playing there.

Cris Moore (13m 29s): I really like both analogies, but especially the first one, because I remember one point when I was pretty stressed out and I was saying to my wife, oh my gosh, I've got all these different projects and I have to work on this one and I have to work on that one and I have to go to work in the mines, you know, I have to go chip away at this project and it might work out, it might not work out. And she said to me, you should think of all your projects as more of a garden. You're planting lots of ideas, some of them will come up, some of them won't. That's a little unpredictable, but you should think of it that way rather than going down with your hard hat and your pick and toiling away in the pit to find the seam of truth.

So I, I wanna give a shout out to my wife on that. I think that was a, that was a very nice point and the point about everything good coming from uniting the different, of course that's a very Santa Fe Institute theme as well and the idea that, you know, we don't even have departments, we just throw people together who are from all these different disciplinary backgrounds. And it's interesting for me that, so for instance, when I think about algorithmic justice, there are lots of people who are trying to address that from a computer science point of view or an economics point of view by designing algorithms, by designing mechanisms and so on.

And that's good work, and yet the more you get into it you realize that there are very different other modes of thought that are very important as well. The ways that ethicists think, the way that legal scholars think, the way that historians think. So for me there's been a broadening: not just a desire to work with scientists of all stripes, but a recognition that there are a lot of important kinds of reasoning out there that are not quantitative, that are not always easily mathematized, and yet they are reasoning. You know, legal reasoning is reasoning even though it looks very different from the kind of reasoning that I like to do, and it has of course its foibles, but it is reasoning. And even contributions from the humanities and the arts are important, and we've been having more writers and artists and philosophers spend time in Santa Fe and it's been very eye-opening. I think the idea of not just a plurality of methodology but, you know, even a plural notion of what does it mean for a question to be a good question and what does it mean to arrive at a good answer to that question, and including broadening that for me beyond what I'm used to thinking of as scientific reasoning.

Glen Weyl (15m 58s): Yeah, I would reinforce that by saying I think one thing that even places that are as broad as Santa Fe often miss is things like continental philosophy and religious thinking. I think one of the most underrepresented minorities in the tech industry and in the academy is deeply religious people. There's a lot of interesting stuff there. I think, you know, one of my favorite thinkers today is Pope Francis.

Cris Moore (16m 23s): I read the recent encyclical and I really enjoyed it, could use an editor, but you know,

Glen Weyl (16m 28s): I do think that there are many types of thinking and we really gained a lot from them. I mean, one tradition that in the last few years I've gained a tremendous amount from is Catholic social thought and conservative political thinking, not libertarian, which is usually what people think conservative means, but actually conservative political thinking. There's a lot of depth there. Anyway, so both the sort of intersectionality and continental tradition and the conservative tradition are ones that I think folks with my type of background don't usually engage with seriously.

And I've gained a lot from both.

Cris Moore (17m 8s): I have to share another anecdote. I have a wonderful former student Alexander Mercier, who's just recently started graduate school in the Harvard School of Public Health and we had a paper together on epidemiology but we were talking about all sorts of things and I sent him a copy of Martin Luther King's letter from Birmingham Jail because I've been trying to educate myself partly about the history of racism and segregation in this country. And of course in that letter King talks about the idea that some laws are unjust and don't need to be adhered to and Alexander said yes, this comes right out of Aquinas and I read it every year and I always get something new out of it.

He could quote which volume of Aquinas it was, and it's like, okay, yeah, a lot of people have been struggling with these issues about what is good and what is right for many years, and it would be silly to pretend that we can't learn from them.

Michael Garfield (18m 5s): So Glen, in this paper, in your mention of religions as part of this tour of plural thinking, you've got a couple great zingers in this. One is “there does not appear to be any steady, peaceful flow of existing faiths into a universal attractor” and then later attempts at Grand Unity have not simply failed historically. It is hard to “imagine what a coherent attempt at success would even look like.” And then lastly, “why not aspire to the increasing speciation and differentiation of knowledge as well as to active investment in the bridging across such specialties to develop specific applications and technologies.”

And so this is the spirit in which I really feel that starting to get a little bit more into the details of how we imagine actually structuring our systems to support this kind of thinking and this kind of practice. This is where I'd like to take it because you know, it strikes me that a lot of what you propose in your work is a more kind of explicit and formal approach to the way that knowledge is already kind of structured and coded.

You know, and I'm thinking about, just like, you know, Francisco Varela, who some listeners might know as the co-author of the theory of autopoiesis with Humberto Maturana, you know, the way that living systems kind of make sense of their environment. Varela co-authored this book called On Becoming Aware, which is more a book on phenomenology but establishes this argument that validity claims start as first-person insights and then are confirmed, validated by the second person, which is kind of where I feel like religious thinking and philosophy and community practice do, you know, have a lot to offer in terms of, you know, that intersubjective domain, and then things are bolstered with a third-person approach.
So this is the bridge I'd like to make into the way that we think about some of these ideas about consilience and intersectionality and the way that you apply like a quadratic voting kind of thinking to voting but also to funding and other ways of sense-making. You've got a quote here and then I'll just let the two of you kind of rip on this. You mentioned, you know, that the principle of consilience suggests that a course of action supported by socially disparate groups that are unlikely to be correlated deserves greater relative credence than any symmetric or exchangeable function of the credence of individuals.

You know, so I think about this in terms of, again, you know, the way that modern thinking, you know, the movement from looking at the intersubjective to the objective, or intersubjective agreement, in terms of how we understand truth and the relative weight that we give to different claims. Again, like all of this strikes me as a way of just instantiating in code the ways that we already are likely to believe very different strangers a little bit more than we are likely to believe strangers that we suspect are all very similar people. I would just love to hear you give people a little bit of background on quadratic voting, and maybe here you wanna lean also on the connections that you've discussed with Michael Freedman and Michal Fabinger on how there's a kind of a wormhole, if you will, or an analogy between quadratic voting and quantum mechanics.

Glen Weyl (21m 45s): One story I like to tell is about networks as a way of thinking and their role in sort of thinking and the technology of the 20th Century. I think to me networks, and you might call it complexity or I don't know what term you wanna use exactly, because most of these things aren't literally described by like a flat graph or something like that, but I think they were really the fundamental idea of the 20th Century. I think they lie at the core of ecology, the core of quantum mechanics, the idea that we should move beyond discrete sort of optimizing particles and to a notion of partially entangled collections of things that have a distinctness to them and yet are defined by their interactions rather than by their separateness.

To me that is like the core of what gets you from sort of Darwinian survival of the fittest to ecology. It's the core of what takes you from classical to quantum physics. It's the core of what takes you from the homunculus in neuroscience to the neural network. And there was a vision of how to do social science in a similarly ambitious way that I would trace to thinkers like Georg Simmel and John Dewey. It's my view that the internet was originally envisioned as an attempt to build technologies for social interaction that mimic that model, in the same way that so much of the physical technology of the 20th century was an attempt to, using these more accurate models of things going on biologically and physically, build systems that are more consistent with that.

So yeah, to me there's a project of building what I'd call a network society, which the internet was just a first proof of concept of, that goes along exactly with what you're describing, and I think we're just starting to come to grips with what that might require. I've come to believe recently that quadratic voting is really just a very special application of a much broader principle, which is currently named degressive proportionality, but I think that's way too clunky of a name for something that's so important and fundamental.

It was originated by a guy named Lionel Penrose, the father of Roger Penrose the physicist. And it was the observation that if you want to give a certain amount of power to different people, it's important that you not give votes in proportion to that power. You have to instead give votes in a way that accounts for the correlations and down weights correlated signals. The original application of this was to how to represent subunits in a federal body on the assumption that the subunit participants were correlated.

So like this was actually used in some European treaties to determine the voting weights of countries based on their populations. But of course countries are just one correlating factor. Quadratic voting is the application of this notion to individuals who might want to express stronger preferences on one issue versus another. But of course individuals aren't the only site of correlation either. There are many sites of correlation and coordination in systems, and the quadratic rule here actually has a very simple statistical explanation that can even be brought back to acoustics.

The statistical explanation is that uncorrelated signals grow only as the square root of their aggregate size because they on average cancel each other out. And so the average size of that signal will be as the square root of the number of signals, whereas correlated signals grow linearly in their strength. And this is something that shows up in acoustics all the time. So if you are in a room with lots of voices, most of them will kind of cancel out and just become noise in the background.

And if one is just a bit louder than the others, you'll hear it far louder. And this really is partly driven by human focusing but to a significant extent driven by literally just these statistical features of how the acoustics work, because if something's a bit louder and it's all correlated, a lot of uncorrelated stuff just cancels each other out and becomes noise. And so we can, you know, apply that metaphor to like thinking about how we have to hear voices in a fair way.

And I think that the, you know, quadratic voting and this application to countries are all just very, very special cases of a far broader rule that we're only beginning to understand how to apply, which is how do we take seriously all the sources of coordination and correlation and make these adjustments to them so that we can hear all the voices fairly.
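
To make the square-root-versus-linear point above concrete, here is a minimal Python sketch (illustrative only; the sample sizes, trial counts, and variable names are not from the episode). It sums n voices that each contribute a plus-or-minus-one signal: independent voices mostly cancel, so the aggregate grows roughly as the square root of n, while n perfectly correlated voices add up linearly.

# A minimal sketch (illustrative only) of the scaling described above:
# n independent, zero-mean signals partially cancel, so their sum grows
# roughly as sqrt(n); n perfectly correlated copies of a signal grow linearly.
import math
import random

def aggregate(n_voices, correlated, trials=2000):
    """Average absolute size of the summed signal over many random trials."""
    total = 0.0
    for _ in range(trials):
        if correlated:
            # every voice repeats the same +1/-1 signal
            summed = random.choice([-1.0, 1.0]) * n_voices
        else:
            # each voice contributes its own independent +1/-1 signal
            summed = sum(random.choice([-1.0, 1.0]) for _ in range(n_voices))
        total += abs(summed)
    return total / trials

for n in (10, 100, 1000):
    print(f"n={n:4d}  independent ~ {aggregate(n, False):6.1f}"
          f"  (sqrt(n) = {math.sqrt(n):5.1f})"
          f"  correlated ~ {aggregate(n, True):7.1f}")

Running it, the independent column tracks sqrt(n) up to a constant while the correlated column tracks n itself, which is the statistical cancellation Glen describes with the acoustics metaphor.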

Cris Moore (26m 49s): A couple riffs on that. I mean, one of the interesting things about quantum computing is that when you design a quantum algorithm, what you're trying to do is literally make the wrong answers cancel out, like things that are out of phase with each other, like with noise-canceling headphones, and you're trying to make the right answer, all the different ways to arrive at the right answer, add up in phase like a laser, so that the right answer is the one that comes out. But to continue the riff: in social media, and in a lot of what I would call the naive network society, you talked about how the internet was a hope to create a network society, and that doesn't seem to be what happened.

And instead what happens is it's not just the echo chamber, which is maybe not the right metaphor here, it's that if everyone you're talking to is very similar and 10 people tell you the same thing, you have not received 10 bits of information. You've received somewhere between one and 10 bits of information, maybe square root of 10 for instance. And if 10 people who are of the same ideological background, they're in the same political party, they watch the same media, if they tell you something, you should not take it as seriously as if you heard it from 10 different people who are really from 10 different backgrounds, different regions of the country, different races, different classes, different walks of life.

But of course the problem is that, you know, we're social primates, we seem to do a very simple kind of arithmetic intuitively when we add up the opinions of the people we see and have that affect our own opinion. And the problem is that the signals we are getting are very correlated. And you know, Danielle Allen, one of your collaborators and a wonderful writer and academic: one of her essays which I just read, on the idea of a network society, talks about these bridging ties which we need.

And you know, sometimes in networks we hear about the idea of the strength of weak ties, which is a kind of similar idea, but you know, what we need is more links in our network to people unlike ourselves. And then, you know, frankly if you get information from that type of link, you should take it much more seriously than the information you get from people just like yourself. But unfortunately as social primates we seem to be wired the other way around. We tend to take more seriously the signals we get from people like ourselves.

And so probably when people who are not that fond of mathematics hear quadratic voting, it might create a cytokine storm, a histamine reaction like, oh my god, I did not enjoy memorizing quadratic equations in high school. But what it really is, is a compromise: if more people think something, or if people with more devotion to a topic care about something, that should be taken a bit more seriously.

But we have to compensate for mob rule. We have to compensate for some of our primitive ways in which we engage in group think. And of course even in the United States we have a bicameral legislature, we have the House of Representatives, which is roughly speaking proportional to the population of each state. And then we have the US Senate, which for better or worse gives every state an equal vote, whether there are many people living there or very few. And you can view quadratic voting as a compromise between the two that we would, you know, have a weight which does increase with population but less than linearly.

So I think people who are interested in this idea shouldn't necessarily even view it too strictly as mathematically quadratic, even though there are lots of good reasons to adopt that mathematical form. They should view it as a way to give larger groups, or more passionate groups, yes, more influence, but not as much more as they have nowadays. One of the fascinating things, by the way, I find, and this came up in our conversations at the Santa Fe Institute, we had a symposium this past weekend with some great speakers including from Protocol Labs.

The idea of social media was that, oh well, everybody will express their opinion anonymously, for better or worse, and then that'll be this public square and out of that will emerge some kind of truth. And of course that doesn't seem to be how it works. And one of the problems with social media is that the content, to put it charitably, that many people emit is meant to seem like the autonomous, grassroots, bottom-up expression of the organic feelings of the participants.

But in reality a lot of it is just stuff that people are retweeting without thinking very much about. And a lot of it is powerful central actors shining out a laser of badness and then lots and lots of their followers reflecting that without very much independent thought. And even when you see the authoritarians in our society and in other societies, they even code much of what they say deliberately in this way. They say things like, lots of people are saying that, or lots of people are asking whether, or people out there wanna know if. In other words, they're coding their own centralized venom as if it were arising in a natural way from the population when it is doing nothing of the sort.

So I like the idea of quadratic voting because it's trying to push back against some of that, the ability of powerful central figures to frankly collect a mob and then to direct that mob, frankly, as if that mob should be taken as seriously as if each of its individual members had sat down and calmly and openly thought about things and had arrived at that opinion. If that were true, well, maybe a mob of a million people should be given the same respect accorded to a million individuals.

But if that's not true, and it's clearly not true, then a mob of a million people should not have the power of a million individuals. It should have less.
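
To put rough numbers on the compromise described above, here is a minimal Python sketch of one common reading of the quadratic rule (a simplified illustration, not the formal mechanism from Glen's papers): casting v votes on an issue costs v squared voice credits, so an individual's influence grows only as the square root of what they spend, and a perfectly correlated bloc of n people counts roughly like the square root of n independent voices rather than n.

# A rough sketch (illustrative only) of the quadratic rule discussed above:
# casting v votes costs v**2 voice credits, so influence grows as the square
# root of credits spent, and a fully correlated bloc of n people counts
# roughly like sqrt(n) independent voices rather than n.
import math

def cost_of_votes(votes):
    """Voice credits needed to cast a given number of votes on one issue."""
    return votes ** 2

def votes_from_credits(credits):
    """Votes an individual can cast from a given voice-credit budget."""
    return math.sqrt(credits)

def bloc_weight(n_members):
    """Down-weighted influence of a perfectly correlated bloc of n people."""
    return math.sqrt(n_members)

print(cost_of_votes(10))        # 100 credits buys 10 votes, not 100
print(votes_from_credits(100))  # 10.0
print(bloc_weight(1_000_000))   # a correlated "mob of a million" ~ 1000.0 voices
print(bloc_weight(1))           # an independent individual still counts as 1.0

On this reading, a correlated mob of a million voices counts like roughly a thousand independent ones: more than any individual, but far less than a million.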

Glen Weyl (33m 5s): A few things to react to there. But there are so many, again, wonderful intertwined threads. The first thing I would highlight is that the internet has not turned out as well as one might have hoped, but it was fairly predictable, I believe, and in fact predicted by the people who had the best version of the vision for it, that that was going to happen. J.C.R. Licklider, I think, was probably the most important figure in taking those early ideas of Dewey and Simmel and like turning them into a technical substrate.

He was the ARPANET program officer who funded the first five computer science departments in the country and who built the ARPANET. In 1979, as TCP/IP was coming together, he wrote a wonderful piece called Computers and Government in which he said, look, you know, we did some proof of concept of the basic information structure here, but for this thing to actually work, here's all the other things we're gonna need. And if you don't build them through public-private partnerships, like how we built TCP/IP, and instead just leave it to the private sector, he said, you know, IBM's gonna own the whole thing and here's how they're gonna ruin it.

Now of course it wasn't IBM, it was, you know, the future IBMs, right? But you got exactly what he predicted, and in fact many of the pathologies are like literally described in graphic detail, as they've turned out, in that piece. So I think it was, to people who were really focused on it, quite predictable that there were other elements that were needed to build a network society and that they couldn't be supplied by the capitalist process on its own. So I think it's very sad we've ended up there, but I don't think it's right to think of it as kind of mostly a failure of those original visionaries.

I think it's mostly a failure of the capitalists, you know, the tender graces of capitalism to which the process was eventually left.

Cris Moore (34m 54s): You mentioned the notion of flat networks and networks as being a central idea of the 20th century, and especially I would say of the late 20th century, in terms of the study of social networks and food webs, and everything was a network all of a sudden, right? It used to be in the 1850s everything was a steam engine, and then for a little while everything was a computer, and then everything was an economy, or if you were a bit more into cooperation and symbiosis, everything was an ecology, you know, even the brain was an economy of neurons exchanging attention or something, and then all of a sudden everything is a network.

And each of these metaphors of course makes it easier to see some aspects of reality and harder to see others. And I often reflect, especially lately that the Santa Fe Institute was founded, you know, in the late 1980s and really got going in the early 1990s. And this was a heady time. You know, this was a time when the Soviet Union had fallen, it looked briefly as if the former Soviet republics, most of them were going to become democracies instead of kleptocracies, at least some of them might.

And you know, the European Union was coming together to form this new voluntary confederation and the internet was getting going. And I think as you say, some of the people closest to it did predict the fact that many of these things would not turn out to be the decentralizing and democratizing force that we thought they would be. But I, along with a lot of people at the time, was very naively optimistic, and it seemed as if authoritarianism could never happen again. There could never again be a dictator who would deceive their citizens because now there was gonna be all this free communication. It seemed as if, you know, there were silly books getting written about the end of history, and it seemed as if pluralist liberal democracy were historically inevitable.

And that may still be what we want, but it doesn't seem to be historically inevitable anymore. And so what I think you and your collaborators are doing in a lot of ways is not holding onto the naive optimism that all these networks will make us wonderfully decentralized, not being blind to the fact that it turns out that many of these networks are quite easily captured and weaponized by powerful central players, and at the same time not doing what I often do, which is just grump about it and say, oh I wish, you know, we were back in, let's see, choose a decade, but before Twitter, before, you know, maybe before the internet or something.

And just dismiss it all as horrible modernity, yuck. You know, so I think that trying to take the dynamic ability of modern technology, including digital technology, and seeing it as a way to harness the diversity of opinions that we have, and not have it be sucked into these black holes of the Twitterverse and so on, I think that's a really good thing to try to do. I don't know if you'll succeed, but I admire the optimism and the dynamism of it.

Michael Garfield (37m 54s): Thank you, cuz I agree with that. At this point I feel like I would like to dig into a paper that you, Glen, co-authored with Puja Ohlhaver and Vitalik Buterin, Decentralized Society: Finding Web3's Soul. Because I feel like this is where we can actually get into some of the specifics about how you and your co-authors, and now a kind of widening ring of other people, are thinking about how we can address some of these issues that now seem so inescapable in the kind of traumatic moment where we realize this whole thing is not operating as it was advertised.


In particular, to just loop back to the questions about how we check knowledge and how we can think about viewpoint diversity specifically. You know, this is related, Cris, to points that you and other members of the Algorithmic Justice project have mentioned about biases in data collection, and my suspicion is, in talking about how to weight different claims according to correlations between those claims and between the actors making those claims, that we can start thinking about how to, again, transfer more of the practices that work to establish trust and consensus into digital spaces without assuming any knowledge on the part of our audience.

Glen, if, if you can kind of introduce the problem and the proposed solution here about what we see web3 doing today, what it's not doing, and then how you and your co-authors imagine we can improve on this. And I think that at that point we can get into the more granular stuff.

Glen Weyl (39m 51s): Yeah. So Web3 is very much built around an imagination of these pseudonymous or, in some imaginations, anonymous accounts that hold assets; the imagination there is that they're financial and fully transferable in character, and that has some relevance at least within a capitalist economy. But almost all the things that you'd like to trade or transfer in that way are founded in things that are not transferable and are much more social in nature.

So like to give an illustration, most of the value of most of the tech companies, which are of course traded on financial exchanges, lies not in any set of physical capital they have, but in the relationships that exist among their employees. And in fact, there's wonderful literature on this about intangible capital. So there's something in the culture, there's something in how they work, how they work together that is the value of that organization. And so even though Web3 wants to trade value, the value itself is relational and not financial.

It can't be decomposed into its constituent parts and traded. So I submit that if Web3 wants to actually, as it imagines itself, be self-contained, it needs to have a way to represent these non-transferable and socially connected elements. It needs to do that even just to do a rental contract, because you don't rent to just anyone and just be able to sell that; you rent to a person. So you need to have a notion of who that person is and what the relationships between them and other people are that cause you to entrust them with a rental and not someone else.

If you want to have an NFT that actually has any value to begin with, that person needs to have a representation of their standing. Who is it that's making this thing and how does it relate to other objects that are out there? So like many of the just basic functionalities the space needs depend on this social substrate, which requires things not to be fully transferable, to encode social relationships, to encode some form of collective human ownership, et cetera.

Michael Garfield (42m 2s): I was gonna kick this one back to you, Cris, because you know, you've done work on the use of algorithms to determine someone's fate in the rental market as with predictive policing, you know, and you've brought people in, hosted folks at SFI who have critiqued this and this is a, you know, a, a subclass of a much bigger problem that has to do with the way that we train algorithms on limited datasets.

They become overfit to those data sets and then that bias becomes destiny. And so the gap between the unfair world that you're describing here and the kind of fairness that Glen is proposing here I find really interesting. Before I invite you to riff on that a little bit, I would just say that, you know, today is an election day. We're recording on the eighth, and you know, as we've already mentioned in this call, questions of political funding are again, sort of in a similar way, fundamentally related to, you know, questions about the way that people trust research nowadays.

You know, and this notion that so much of research is private or privately funded, you know, with an agenda. You know, I look at this stuff and I see very deep epistemological questions. Actually, you've said this elsewhere in this paper, Glen, that markets and governance are perceived as kind of two separate areas, but that they have very deep things in common and that there's a lot of kind of overlapping terrain.

Cris Moore (43m 42s): Yeah, that's a sweater full of threads there. I think one interesting thing which might connect with what Glen is doing, the debate about the role of algorithms in housing, criminal justice, all of these high stakes decisions, this debate continues to rage and it should rage, but for me the debate has been shifting a little bit because the one attitude is to say, well algorithms are bad and human decision making is good and so-called actuarial decision making where we assume that people's behavior or that their trustworthiness can be guessed or approximated based on their belonging to a bunch of other individuals that they seem similar to, for instance, with similar criminal records or similar demographics and so on.

The idea that that's bad and that human decision making is individualized, and a judge can look into your soul and see that you, you know, yes, you have a prior conviction but you've turned your life around, or alternately that you don't have a prior conviction, but that you are looking forward to killing the witness who is going to testify against you. You know, the idea that one of these things is soulless and biased and the other is warm and human and individualized and therefore represents due process.

Well, I mean, of course that's a simplistic line to draw, because historically human decision making has been a garbage fire too. And we know that lots of human judges are capable of terrible bias, and we know that human landlords and human lenders have traditionally been capable of terrible bias, and we know that there are still things going on where people will assess property at a much lower value in black neighborhoods, even if it is a comparable property to those in white neighborhoods.

So there's a lot of human bias going on. And so then the algorithmic folks say, well look, let's try to be more objective and look at data and look at statistics and set aside our stereotypes and that will be more fair. And then the response comes back saying, yes, but if you're basing it on historical data, then you're feeding in biases of the past which you are going to propagate into the future. There is a kind of new attitude about all this, which is kind of orthogonal to these two axes, which I personally find pretty compelling, and it's come up from a couple of different places independently.

I could drop a few names, but let me just say that the attitude is that algorithms at their best offer a new way for decision making to be transparent and accountable. That's at their best. So you know, if an algorithm is something that everyone understands how it works, everyone understands why we chose to use this algorithm, how it was trained, and it's something which can be independently audited. It's even something which could be tinkered with to see if it could be made more fair and more accurate.

That kind of algorithm could raise the standard of decision making in many areas and let us detect bias where it crops up and also help us detect where historical patterns are being perpetuated and what we might do to fix that. But the big but is they have to be transparent, they have to be independently audited. They can't be proprietary and opaque and hidden behind veils of intellectual property and they also can't just be snake oil.


So there is a lot of snake oil out there. There's a lot of products being put out to market which have not in any sense been independently verified or validated and where their users and customers frankly don't really know whether their results ought to be interpreted the way they ought to be interpreted. And so there needs to be a lot more critical thinking aimed at these. For instance, you mentioned tenants, you know, there's a lot of products out there.

They might be called AI or machine learning, but in many cases they're just glorified background checks where they look at publicly available databases to see if there was an eviction order placed against me at some point in the past. And there's all sorts of problems with this. One is, was it actually me? In many cases these things are based on approximate matches to people's names.
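
As a toy illustration of the approximate-name-matching failure mode just described, here is a short Python sketch (the names, records, and similarity threshold are hypothetical, not drawn from any real screening product) showing how a naive string-similarity match can flag an applicant for someone else's eviction record.

# A toy illustration (hypothetical names, records, and threshold) of how
# approximate name matching in a background-check database can flag the
# wrong person for someone else's eviction record.
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude string similarity between two names, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

eviction_records = ["Jon A. Smith", "Maria Gonzales"]  # hypothetical database
applicant = "John Smith"                               # hypothetical applicant

THRESHOLD = 0.8  # an aggressive cutoff a careless vendor might use
for record in eviction_records:
    score = similarity(applicant, record)
    if score >= THRESHOLD:
        # flagged even though the record may belong to a different person
        print(f"FLAGGED: '{applicant}' matched '{record}' (similarity {score:.2f})")

Here 'John Smith' clears an aggressive 0.8 similarity cutoff against a record filed under 'Jon A. Smith', which may well belong to a different person; the unrelated record for 'Maria Gonzales' does not match.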

And then the question is, well, am I in the tiny fraction of tenants who have the resources to take my landlord to court? And if so, maybe the court found in my favor. But if there's just this kind of crummy, shabby use of these databases being passed around from one data broker to another, well, yes, it's true, there was this eviction order. So does that mean that I'm an untrustworthy tenant? So this is a good example of, you know, maybe the data just doesn't mean what we think it means, and maybe there should be an independent study to check to see does this actually have predictive value in terms of whether I will pay my rent on time, assuming my landlord does their job.

That wouldn't be an easy study to carry out, but that's the kind of critical thinking that should be applied to some of these things. So I think it's a fascinating question about in which domains this type of thinking works. I think there are domains where you do want a human decision maker. I think for instance in the criminal justice system, I don't think there is any good statistical way to predict that well, whether you will be a trustworthy defendant in the sense that can we release you between your arrest and your trial?

Not that most things go to trial nowadays, but can we release you pretrial knowing that you probably are not gonna then commit a serious crime, cause it's expensive and disruptive to your life and kind of a big waste of resources to jail you in the meantime. And jailing pretrial defendants is a big part of why we have so much incarceration in this country. So this is one of these domains where people have tried to look at it statistically, and the problem is, you know, the best statistical methods people have come up with still have a great deal of uncertainty to them.

This is not Minority Report where we can predict with 90% accuracy. So my colleagues and I have looked at this and found that pretty much the best signals you can find, if you try to narrow in on a set of defendants and say, oh, these defendants look really dangerous, the chances that those defendants are going to commit a serious crime on release, it's like 7%, it's not 20 and it's certainly not 90, it's like seven. And by the way, the chances that they'll commit like a first- or second-degree felony: a first-degree felony is about one in a thousand and a second-degree felony is about one half of 1%.

So on the other hand, there are probably cases where a prosecutor could say, look, judge, we have a witness who has heard this defendant say that, first chance they get, they're gonna kill this witness who's gonna testify against them. And if that is the case, I think a judge could be justified in detaining that defendant. It's also true on the other side that even if somebody has a bunch of prior convictions, if it's clear that they have really turned their lives around and that they have a pattern of complying with conditions of release and that they've completed rehabilitation programs, well then that should override these statistical algorithms in the other direction.

And yes, the judge could be biased, but well then they should be accountable and we should collect data about judges. So data and algorithms shouldn't just be something used on us, the mass of citizens, by government and corporations. There's also this great idea of surveillance and, you know, sunshine laws and transparency. If judges show a clear statistical pattern of detaining people who don't need to be detained, or detaining people specifically because of their race or other factors even though they're similar to other defendants that that judge released, well then that should be called out.

So the point is that it's not that data and algorithms are bad, it's that they need to be applied in a way which is transparent and which is democratic and which empowers all of us to carry on these debates, rather than simply being tools which, accurately or inaccurately, are being used by the powerful to control the rest of us. It's silly to argue about which is better, you know, computer decision making or human decision making. That's really not the point.

I mean, the point is we should have accountable, transparent decision making instead of b.s. There's human b.s., which comes in the form of stereotypes and ideology, and there's algorithmic b.s., which comes in the form of naive machine learning without thinking enough about its applications, snake oil being pushed out there and sold as fancy AI even when it's kind of crap. That for me rotates the debate about 90 degrees and I think that helps clarify what matters.

Glen Weyl (53m 15s): To further pull on that and elaborate on what Cris is saying, I would say that I actually think that the original debate, the one before the rotation, was in many ways responsible precisely for the problem. And the reason is that if the whole discourse is about more accurate, less accurate, good, bad, rather than here's the color of this AI, here are its biases, here is the social background that it represents, here's where it shines, here's where it doesn't.

That's precisely when you get into that opaqueness and obfuscation. I think we need to understand that there are goals that our technology has, that there are roles that we want it to play in our society. So just to give you an example of this contrast that's close to those examples: one way to think of AI is, oh, this is better or worse than humans at doing this. But another way to think about it is that right now, in order to live with each other with some amount of order, we all have to take standard courses and have private property and do all these practices that were imposed by either colonialism or some barbarian bureaucracy in the 19th century, practices that are not particularly well founded in any reasonable historical or psychological theory of what actually allows people to enjoy themselves and be happy. And we have to do all that stuff so other people can make sense of us, so that there's some form that we can fill in. And you can imagine a world where we have really high-powered statistical processing tools, with a lot of neural whatever going on, that enable us to make sense of each other while living much more diverse and flexible lives.

And that would be pretty cool. The universal translator in Star Trek lets people sort of get along reasonably well, cooperatively, while being very different from each other.

Cris Moore (55m 25s): Or at least having differently shaped ears. But yeah.

Glen Weyl (55m 28s): Yeah, well, in the original series, and you know, in Discovery and Deep Space Nine it starts getting more and more different. And that's actually cool. Actually, that's a great example: technology of various kinds that we've developed has allowed us to imagine much more different things over time. And to me that's an inspiration worth having. We could use statistical systems to live more flexible and diverse lives, rather than to replace the existing thing and do it 7% more accurately or whatever.

And I hope that we together will aspire to that kind of sociotechnical change, rather than having debates over who does a little bit better at something that's really boring and problematic to begin with.

Michael Garfield (56m 19s): Thank you both for your inspiring riffs there. At this point, I wanna create a little garden around this conversation where we can cast back some links to other episodes of the show and other pieces of work that have come out of SFI. Reading this paper on the Decentralized Society, within the first sentence I'm thinking about Herbert Simon's famous 1971 quote that what information consumes is attention. It feels like such a crucial point that I made it my email signature.

You know, because like you said earlier, Glen, the value is really in the relationships, and there are differential scalable qualities here. I think a lot about the way, as Doug Rushkoff and others have pointed out, that you can receive, you know, indefinitely many emails a day, but you only have so much time and attention to read them. And this is part of the argument for the importance of not just following the sort of logic of the internet as a great copying machine off a cliff, right?

Where we're imagining an abundance that is nonetheless still founded in real material scarcities. You know, like David Wolpert talks about the thermodynamics of communication and there being a theoretical limit to how effective that can be. And while we still have plenty of room, you know, orders of magnitude to improve on that, there are real-world limits that we're eventually gonna bump up into. And so one of those limits, again, to kind of just repeatedly stress the importance of what I think it is that you and your team are doing here, calls back to the conversation I had with Rajiv Sethi and, you know, his work on violent crime

Glen Weyl (58m 24s): An old friend and collaborator of mine.

Michael Garfield (58m 25s): Yeah, yeah. So Rajiv's book with Brendan O'Flaherty on stereotypes and criminal justice, Shadows of Doubt. Fantastic book. Yes. Episode seven, we talked about it. And in that episode, one of the things we did was compare meeting people online to something kind of like meeting someone in a dark alley. You know, you have very limited information about a person and a very narrow kind of timeframe within which to make a decision.

And you know, this gets back again to all the trade-offs and conflicts you just enumerated, Cris, in how we use decision-augmentation approaches to try to make better decisions under conditions of great uncertainty. So I think about that, and I think about the paper that was led by Jaeweon Shin on, you know, information-processing thresholds in the social evolution of human beings.

I want to just use this as the opportunity to invite you to talk a little bit more about something you mentioned a couple times in the papers we've discussed here: the work of James Evans at the University of Chicago, who I know is a friend and collaborator of yours. He talks about using a kind of similar thinking to try to find fruitful differences in, you know, the expertise of one researcher and another, to identify really promising potential avenues of research. And that connects to this paper on soulbound tokens.

You know, this notion that you can have a kind of ledger-based CV that can then be computed over and, you know, correlated with other people's. And I like how you put it here: "Correlation discounting could be extended to structured deliberative conversations. For example, organizations susceptible to majoritarian capture could compute over soulbound tokens to bring maximally diverse members together in conversation to ensure minority voices are heard." Again, this seems like a much more substantial and rigorous way to think about diversity.
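To make the "correlation discounting" idea quoted above a bit more concrete, here is a minimal, purely illustrative Python sketch. The participants, their token affiliations, the Jaccard similarity measure, and the discount rule are all invented for the example; the Decentralized Society paper does not pin the mechanism down at this level of detail, so treat this as one possible reading of the general idea rather than the paper's method.

```python
# Toy "correlation discounting": down-weight votes from participants whose
# (hypothetical) soulbound-token affiliations overlap heavily, so a bloc of
# socially similar voters counts for less than the same number of
# independent voices. All names, tokens, and formulas are made up.

# Hypothetical participants and the affiliation tokens they hold.
holders = {
    "ana":   {"uni_A", "coop_X", "dao_1"},
    "ben":   {"uni_A", "coop_X"},
    "carla": {"uni_A", "dao_2"},
    "dev":   {"coop_Y", "dao_3"},
}
votes = {name: 1.0 for name in holders}  # everyone casts one raw vote

def overlap(a: set, b: set) -> float:
    """Jaccard similarity of two token sets, used here as a crude correlation proxy."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Discount each voter by their average correlation with everyone else.
discounted = {}
for name, vote in votes.items():
    others = [overlap(holders[name], holders[o]) for o in holders if o != name]
    avg_corr = sum(others) / len(others)
    discounted[name] = vote / (1.0 + avg_corr)  # more correlation, less weight

for name, weight in discounted.items():
    print(f"{name}: raw 1.0 -> discounted {weight:.2f}")
```

In this toy run, dev, who shares no affiliations with anyone, keeps full weight, while ana and ben, who overlap heavily, are each discounted, which is the qualitative behavior the quoted passage is after.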

Glen Weyl (1h 0m 40s): Another example, circling back to what I was riffing on in Cris's comments: why not, rather than mostly focusing on a debate over how biased or not some particular algorithm is, instead say, how could we build algorithms to maximize diversity? Which is a really complicated thing once you take intersectionality seriously, right? Like once you take intersectionality seriously, you realize that there is no meaning to a diverse class or whatever.

You can't ever have a diverse class; it's conceptually impossible. There's no way you could, from an intersectionality perspective, possibly have a group of a thousand students who are representative of the population. That's meaningless, right? But you can ask the question: how could we approximate something like an optimally diverse class subject to other criteria that we also have? You know, that's a really interesting intellectual question.

And why don't we work on algorithms that help us approximate that optimum? We just need to shift the frame from playing defense, from being like, here are these algorithms and they're coming and they're just gonna impose this very standard totalizing thing, and instead say, no, pluralism is on the march. You know, let's actually be really serious about pluralism and let's pursue it as a technological trajectory.
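As a rough illustration of what "approximating an optimally diverse class subject to other criteria" can look like as an algorithmic problem, here is a small Python sketch. Everything in it is invented for the example: the attribute vectors, the score threshold standing in for the other criteria, and the greedy farthest-point heuristic, which is just one of many ways to chase diversity, not something Glen specifically proposes.

```python
# Greedy "maximize diversity subject to a constraint" toy:
# keep only candidates who clear a score threshold, then repeatedly add the
# eligible candidate farthest (in attribute space) from those already chosen.
import numpy as np

rng = np.random.default_rng(1)
n_candidates, n_attributes = 50, 4
attributes = rng.normal(size=(n_candidates, n_attributes))  # hypothetical features
scores = rng.uniform(size=n_candidates)                     # the "other criteria"

eligible = np.where(scores > 0.3)[0]                    # satisfy the non-diversity criterion first
chosen = [int(eligible[np.argmax(scores[eligible])])]   # seed with the top scorer

while len(chosen) < 10:
    remaining = [int(i) for i in eligible if i not in chosen]
    # Distance from each remaining candidate to its nearest already-chosen member.
    dists = [min(np.linalg.norm(attributes[i] - attributes[j]) for j in chosen)
             for i in remaining]
    chosen.append(remaining[int(np.argmax(dists))])

print("selected group:", chosen)
```

Farthest-point greedy selection is a standard heuristic for dispersion-style objectives; swapping in a different distance, a different objective, or hard quotas changes the answer, which is exactly the kind of design question being pointed at here.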

Cris Moore (1h 2m 10s): I think there are also some analogous ideas just on the human side. For instance, in the American justice system, people tend to specialize either as prosecutors or as defense attorneys, and judges therefore have come up from one side or the other. And in a society where, for better or worse, probably worse, a lot of judges are elected, and people are concerned about crime and want to, you know, be tough on crime, well, it's often easier for a judge who has spent their life as a prosecutor, rather than as a defense attorney, to get elected.

And part of the problem is here. One approach to this, besides maybe we just shouldn't elect judges, but that's, you know, kind of another conversation, is to say, well, gee, maybe the legal profession should not be so specialized into prosecution and defense. Maybe everybody should really spend multiple years of their lives on both types of cases, either alternating or in parallel. And maybe then you would have judges who had more of a respect for both sides of a debate, whether that's around detention or sentencing, and for that matter parole.

And there's a thing wrong with our society, which is that even the institutions that work reasonably well are still often built around adversarial advocates, where the idea is: I will argue as passionately as possible for one side, you'll argue as passionately as possible for the other side, we will deploy whatever resources we can, rhetoric, money, et cetera, and somehow we like to think that by, I don't know, interpolation we will arrive at the truth.

And that's totally not true, right? I mean, we know that there are lots of types of decision-making where that's a disaster. Where what you need is not these two sides, each of which is deliberately undercutting the other as effectively, and sometimes as viciously, as they can. You want everybody to be willing to change their mind openly, publicly, to be willing to publicly acknowledge the point that the other person is making.

And you want a sense that people are cooperatively working together toward the truth. But that's not how most civic organizations work. It's not how our legal system works, it's not how our political system works nowadays. Maybe there was some golden age in the past when it did; probably not. It's not even how neighborhood associations work, right? I mean, there may be some diversity in how homeowners associations work internally, although I regret to say I don't think that's usually true, because they're usually very self-selected groups of people who are quite vocal.

But once they arrive at a decision, they're like good old-fashioned Maoist democratic centralists, you know: well, we represent the neighborhood and this is our monolithic opinion. And if somebody shows up and says, well, I live in that neighborhood but I actually don't agree, then they get piled on and punished. And if somebody says, I'm an environmentalist but this environmental organization doesn't speak for me, or, I belong to this racial or ethnic group but I don't necessarily agree with what the claimed representatives of that group say it wants, they get punished. Again, a lack of pluralism.

But I think it's not just a lack of diversity; it's this notion that the way to make decisions is for everybody to hammer their stake as firmly into the ground as they can and to be as unwilling as possible to budge. And you know, it's interesting that there are corners of the internet, not Twitter, but little corners of the internet that are more moderated, that have funny little microcultures that I kind of admire. You know, so there's the whole Slate Star Codex world. I mean, it's a funny little world.

I'm not saying it's representative of the population in any way, shape, or form, but they have some pretty good cultural practices. Like, there's the notion of steelmanning rather than strawmanning, right? So steelmanning is: if you express an opinion, then rather than kind of caricaturing it with the stupidest possible example, which is a common debating trick, I will, with you and publicly, try to form the most convincing version of your argument. And then I will try to argue against that.

So, embracing these practices where, you know, we recognize what is good about the other person's point of view. When a politician changes their mind, they're often attacked as a flip-flopper. And you know, you rarely hear a defense attorney say, actually, your honor, ladies and gentlemen of the jury, I think my client might be guilty after all. That's not their job. So coming up with institutions where we actually want, together, to figure out the truth, as opposed to defending our initial positions, is a great, though difficult, job.

And I think part of it is being less performative. And I'm worried that, and this conflicts with these ideals of transparency, it might mean in some cases being less publicly visible. So there are these interesting cases of citizens' assemblies doing this thing of deliberative democracy, where you get a bunch of people, sometimes randomly chosen; Ireland did this to update their laws about abortion, for instance. And you know, these people get together and they express their views. But the point is they're not there to earn points.

They're not doing this in front of a big audience, saying, oh, look at the clever thing I just said. It's not performative. And I think this lets us kind of call on the better angels of our nature, in being willing to come together and to change our minds. Now, is that something which is scalable? I'm not sure. I had some good conversations about this this past weekend. I'm a little worried that, kind of like a jury trial and a decent espresso, it's just not something which scales well, but maybe there's a way to do it.

I don't know, maybe. But the question is how we uphold those cultural norms of being willing to admit that you are wrong and being willing to change your mind, which, I have to toss in, even though we don't always live up to it, is obviously one of the cultural norms of science, and which I think we can offer to the larger society. We are supposed to be willing to change our minds when the evidence suggests it and to let go of our ideas; of course, in reality it's often just that the old scientists die, but you know, we're supposed to be willing to change our minds.

Glen Weyl (1h 8m 56s): Two things to highlight there. The first is that what you're describing is precisely the reason why I am a bit of a skeptic of prediction markets. Not to say that they don't have a role, but I don't think that they are nearly the solution that many believe them to be. And it's because they set us up in an adversarial relationship with regard to determining the truth. It's not at all to say that I don't think incentives have a role, or that it isn't worth eliciting information; I believe in all those things. But there's the notion that the way we should do it is by betting against each other, so that we want everyone else to be as wrong as possible so we can be right.

And we want to get one big payoff for the person who's most right. Anything that can be too easily analogized to some sort of a dick measuring contest is not something that excites me as a mechanism for coming to good social outcomes, and I think that prediction markets have an important element of that. The second thing I was gonna say is that I do think that things like what you're describing can be scaled. I don't think we've invested in scaling them, but I think they can be.

And I think the leading example of that is Pol.is. I don't know if you guys are familiar with that, but it's a system used in Taiwan. It's a Twitter-like format, but it deliberately guides conversations towards consensual or partially consensual outcomes while highlighting the differences that exist in the conversation in a nonjudgmental way. And it's just a wonderful system. And at the same time, it's like the most simplistic proof of concept of the general direction. It uses k-means clustering of stated opinions.

It doesn't use any natural language processing. It's like the bargain-basement version of what it's trying to achieve. But it still has been transformatively effective for these types of conversations at scale in Taiwan, and it is being adopted, if it survives, by the Twitter Birdwatch folks as the foundation of what they're trying to do for fact-checking. So I do believe that there is a science here that can advance dramatically. I think that we have not chosen to apply ourselves to it cuz we've been seduced by, oh, we're gonna do the unbiased algorithm that's gonna predict the truth the right way.

Rather than saying, no, people are diverse, you'll have a lot of different opinions; how do we actually help people navigate that complexity? So I really am hopeful that this science, what I would call plurality, really can advance and help us do these things much better. And again, I'll put in the plug: if you're a researcher interested in these things, we're trying to build an academic community that really wants to work on them. Write to me@glennpluralitynetwork.org.
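For readers curious what the "k-means clustering of stated opinions" mentioned above looks like in practice, here is a toy Python sketch in the spirit of what Glen describes: participants vote agree, disagree, or pass on short statements; k-means groups them into opinion clusters; and you can then surface statements every cluster leans toward (rough consensus) versus the ones that split the clusters. The vote matrix, cluster count, and consensus rule are all invented for illustration, and the real Pol.is pipeline differs in many details.

```python
# Toy opinion clustering in the spirit of Pol.is-style systems.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical vote matrix: 12 participants x 6 statements; +1 agree, -1 disagree, 0 pass.
votes = rng.choice([-1, 0, 1], size=(12, 6))

k = 3  # number of opinion groups to look for
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(votes)

# Mean agreement with each statement, per opinion group.
cluster_means = np.array([votes[labels == c].mean(axis=0) for c in range(k)])

# Rough consensus: statements every group leans toward agreeing with.
consensus = np.where((cluster_means > 0).all(axis=0))[0]
# Most divisive: statements where the groups' average positions differ the most.
spread = cluster_means.max(axis=0) - cluster_means.min(axis=0)
divisive = np.argsort(spread)[::-1]

print("opinion group for each participant:", labels)
print("statements with rough consensus:", consensus)
print("three most divisive statements:", divisive[:3])
```

The divisive statements are the ones a facilitator might surface to show where real disagreement lies, while the consensus ones are candidate common ground, which is roughly the "guiding conversations toward consensual outcomes while highlighting differences" behavior described above.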

Michael Garfield (1h 11m 43s): Yeah. So just to tie a bow on this conversation, it strikes me there's a section in the piece on soulbound tokens about programmable plural privacy, you know, reconciling all this stuff. The theme that I hear in what you've both just said, and in a lot of what we've talked about in this conversation, is the importance of this situatedness, the context of knowledge. You say that, as scholar Helen Nissenbaum highlights, the concern is not privacy as such but a lack of integrity to context in the sharing of information.

I remember years ago I heard John Perry Barlow of the Electronic Frontier Foundation make a similar point about information requests and surveillance. As someone who grew up in a small town in Wyoming, for him, everybody knowing everyone else's dirty laundry was not really the issue. It's not privacy per se, right? It's power imbalances in information access: not knowing when the government is requesting that Google give them all of your personal data.

You know, you talk about a more promising approach, treating privacy as a programmable, loosely coupled bundle of rights to permission access, alter, or profit from information. So, just cuz I'm personally obsessed with this particular issue, I would love to link that to this other thing, kind of revisiting a topic we've already discussed: you mention specifically the possibility of using tools like this to address the issue of deepfakes, which is another place where stripping something from its context is already causing us as a society an enormous amount of grief.

I remember thinking about how we were gonna have to put cameras and microphones on the blockchain so that you can stereoscopically correlate timestamps from multiple different devices on a distributed ledger and, like, confirm things. But you take it a step further. The philosopher Regina Rini has talked about how deepfakes present a challenge to the epistemic backstop, you know, the convenient comfort and security we had, this historically bizarre privilege, for about a century, of feeling like we could trust an objective recording device. And you say that present technology decontextualizes cultural products, which is very similar.

And maybe I'm stacking too much on this all at once, but again, it's very similar to the way that, as you've mentioned earlier, natural language models and other large AI projects strip the data from the people who actually created it. One of the things that I would love to hear about, and you spend a good deal of the paper talking about this, is areas for follow-up research. You specifically mention that to what degree correlated votes should be discounted is something that hasn't really been formalized quantitatively.

So I mean that, and what else? Where should we be directing the attention of researchers?

Glen Weyl (1h 14m 52s): My main closing remark, cuz we didn't get to it, would be that I really loved the piece that Cris sent me on pluralism and mathematics. I've been looking for all of these different places where pluralism was transformative in 20th century thinking. And I loved finding that way of thinking about Gödel and many other advances in the 20th century, and tying that to the things I was saying earlier about ecology and quantum mechanics.

And I hope that we can bring that to the way that we imagine and design the technologies that govern our societies more and more in the 21st century.

Cris Moore (1h 15m 35s): Well, maybe those two pins should form another episode.

Michael Garfield (1h 15m 39s): I’m for it.

Cris Moore (1h 15m 40s): I would just say that I spend a lot of time worrying about the future, and I admit that I sometimes think in rather apocalyptic terms. And as one blogger who I enjoy pointed out, as we go into this election and the next election, which hopefully won't be the last one in the United States, you know, history isn't ending. There's a bit of a copout in saying, oh, well, the whole earth is gonna be destroyed by climate change, or society is gonna be destroyed by the next dictatorship or whatever.

It's kind of a copout because history continues, and we've learned that one of the things that humans need is institutions. We need institutions. That's why authoritarians attack them and attempt to capture them. So we need institutions, and we need creative thinking about how to build them in ways that benefit us: not just diversity for diversity's sake but, ultimately and even more importantly, ways that help us make good decisions together and help us live with each other. As Glen said, even if we choose to live in very different ways, that's important, and it's important to be creative about that.

So I'm glad that Glen and his collaborators are doing that and I'm looking forward to attending that conference that he's organizing and to learning more about their work.

Glen Weyl (1h 16m 54s): Probably a common friend, but certainly a friend of mine, Allison Duettmann, often uses the phrase existential hope, which is to say that, yeah, we're already in a place of existential fear, but what we now need is existential hope: how can we survive together? So hopefully we can figure that out.

Michael Garfield (1h 17m 12s): That's a beautiful and enabling place to end it. Thank you both so much for indulging this frayed yarn ball of a conversation and bringing yourselves to some really important questions. Thanks a lot.

Glen Weyl (1h 17m 26s): Thanks Michael. It was really a pleasure.

Michael Garfield (1h 17m 29s): Thank you for listening. Complexity is produced by the Santa Fe Institute, a nonprofit hub for complex systems science located in the high desert of New Mexico. For more information, including transcripts, research links, and educational resources, or to support our science and communication efforts, visit santafe.edu/podcast.