COMPLEXITY: Physics of Life

Tina Eliassi-Rad on Democracies as Complex Systems

Episode Notes

Democracy is a quintessential complex system: citizens’ decisions shape each other’s in nonlinear and often unpredictable ways; the emergent institutions exert top-down regulation on the individuals and orgs that live together in a polity; feedback loops and tipping points abound. And so perhaps it comes as no surprise in our times of turbulence and risk that democratic processes are under extraordinary pressure from the unanticipated influences of digital communications media, rapidly evolving economic forces, and the algorithms we’ve let loose into society.

In a new special feature at PNAS co-edited by SFI Science Board Member Simon Levin, fifteen international research teams map the jeopardy faced by democracies today — as Levin and the other editors write in their introduction to the issue, “the loss of diversity associated with polarization undermines cooperation and the ability of societies to provide the public goods that make for a healthy society.” And yet humankind has never been more well-equipped to understand the problems that we face. What can complex systems science teach us about this century’s threats to democracy, and how to mitigate or sidestep them? How might democracy itself transform as it adapts to our brave new world of extremist partisanship, exponential change, and epistemic crisis?

Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I’m your host, Michael Garfield, and every other week we’ll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

This week on Complexity, we speak with SFI External Professor Tina Eliassi-Rad, Professor of Computer Science at Northeastern University, about her complex systems research on democracy, what forces stabilize or upset democratic process, and how to rigorously study the relationships between technology and social change.

If you value our research and communication efforts, please subscribe to Complexity Podcast wherever you prefer to listen, rate and review us at Apple Podcasts, and/or consider making a donation at santafe.edu/give. Please also be aware of our new SFI Press book, The Complex Alternative, which gathers over 60 complex systems research points of view on COVID-19 (including those from this show) — and that PhD students are now welcome to apply for our tuitionless (!) Summer 2022 SFI GAINS residential program in Vienna. Learn more at SFIPress.org and SantaFe.edu/Gains, respectively. Thank you for listening!

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast theme music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

Related Reading & Listening:

Tina’s Website & Google Scholar Page

What science can do for democracy: a complexity science approach
Tina Eliassi-Rad, Henry Farrell, David Garcia, Stephan Lewandowsky, Patricia Palacios, Don Ross, Didier Sornette, Karim Thébault & Karoline Wiesner

Stability of democracies: a complex systems perspective
K. Wiesner, A. Birdi, T. Eliassi-Rad, H. Farrell, D. Garcia, S. Lewandowsky, P. Palacios, D. Ross, D. Sornette & K. Thébault

Measuring algorithmically infused societies
Claudia Wagner, Markus Strohmaier, Alexandra Olteanu, Emre Kıcıman, Noshir Contractor & Tina Eliassi-Rad

1 - David Krakauer on The Landscape of 21st Century Science
7 - Rajiv Sethi on Stereotypes, Crime, and The Pursuit of Justice
35 - Scaling Laws & Social Networks in The Time of COVID-19 with Geoffrey West
38 - Fighting Hate Speech with AI & Social Science (Garland, Galesic, Olsson)
43 - Vicky Yang & Henrik Olsson on Political Polling & Polarization: How We Make Decisions & Identities
51 - Cris Moore on Algorithmic Justice and The Physics of Inference

“Stewardship of global collective behavior” - Joe Bak-Coleman et al.

Michelle Girvan - Harnessing Chaos & Predicting The Unpredictable with A.I.

Transmission T-015: Anthony Eagan on Federalism in the time of pandemic
Transmission T-031: Melanie Moses and Kathy Powers on models that protect the vulnerable

Also Mentioned:

Simon DeDeo
Elizabeth Hobson
Danielle Allen
Alexis de Tocqueville
Stewart Brand
Safiya Noble
Filippo Menczer
Jessica Flack
Rayid Ghani
Scott Adams
David Brin

Episode Transcription

Tina Eliassi-Rad (0s): Safiya Noble, who wrote the book Algorithms of Oppression, says you have no business designing technology for society when you know nothing about society. What has happened is that you have technologists, mostly tech bros, designing technology for society when they don't know how society works. And then it's like a post hoc fixing of things. And in fact, one of the things that has been interesting is that, as representative democracy is not doing very well, people are looking at other forms of democracy, like citizen assemblies or lottocracy, or they are looking at liquid democracy, where you can have delegated democracy, or this notion of quadratic voting, where there will be markets for voting.

But again, when one form of democracy doesn't work, people go to others. And again, there's this notion of who's in charge and who has power, and the incentives are around the power. So in terms of AI and machine learning and our data-driven world now, one has to take a complex systems view of it. It's not just, oh, here's an algorithm that will maximize some objective function. That algorithm works within a complex system that has randomness, feedback, memory, hierarchy, and so on and so forth.

And so one of our jobs is to teach computer scientists about complex systems.

Michael Garfield (1m 47s): Democracy is a quintessential complex system. Citizens' decisions shape each other's in nonlinear and often unpredictable ways. The emergent institutions exert top-down regulation on the individuals and orgs that live together in a polity; feedback loops and tipping points abound. And so perhaps it comes as no surprise in our times of turbulence and risk that democratic processes are under extraordinary pressure from the unanticipated influences of digital communications media, rapidly evolving economic forces, and the algorithms we've let loose into society. In a new special feature at PNAS co-edited by SFI Science Board Member Simon Levin, fifteen international research teams map the jeopardy faced by democracies today. As Levin and the other editors write in their introduction to the issue, "the loss of diversity associated with polarization undermines cooperation and the ability of societies to provide the public goods that make for a healthy society."

And yet humankind has never been more well-equipped to understand the problems that we face. What can complex systems science teach us about this century's threats to democracy, and how to mitigate or sidestep them? How might democracy itself transform as it adapts to our brave new world of extremist partisanship, exponential change, and epistemic crisis? Welcome to Complexity, the official podcast of the Santa Fe Institute.

I'm your host, Michael Garfield, and every other week we'll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe. This week on Complexity, we speak with SFI External Professor Tina Eliassi-Rad, Professor of Computer Science at Northeastern University, about her complex systems research on democracy, what forces stabilize or upset democratic process, and how to rigorously study the relationships between technology and social change.

If you value our research and communication efforts, please subscribe to Complexity Podcast wherever you prefer to listen, rate and review us at Apple Podcasts, and/or consider making a donation at santafe.edu/give. Please also be aware of our new SFI Press book, The Complex Alternative, which gathers over 60 complex systems research points of view on COVID-19, including those from this show, and that PhD students are now welcome to apply for our tuitionless Summer 2022 SFI GAINS residential program in Vienna. Learn more at SFIPress.org and SantaFe.edu/Gains, respectively. Thank you for listening.

Tina, it's a pleasure to have you on Complexity Podcast. As I'm sure you know, having listened to a few of these, I like to start out by giving people a bit of your background as a mind: your journey into where you are now, how you got here, where you started developing the interests that you're now pursuing professionally, and how you got affiliated with SFI.

Tina Eliassi-Rad (5m 4s): Excellent questions. The beginning story: I'm a computer scientist. My interest in computer science started when I was a child at home. My dad would get these popular computing magazines. My dad has a PhD in electrical engineering; way back in the late sixties, early seventies, he was working on autonomous vehicles. Back then, the Department of Transportation was actually funding work on autonomous vehicles, even though the computers weren't fast enough. And I was intrigued by computers, and I was really good at math. And so I decided to go into computer science, because it's the bastard child of math and electrical engineering.

I didn't mean to go into AI or machine learning, which is where my expertise is now. And in fact, when I entered graduate school, I knew I didn't want to do AI. And then I took a class in machine learning, and I loved the material. This was back in the mid-1990s; nobody cared about machine learning. The class was like 12 people. We were reading the textbook that Tom Mitchell, who is also an SFI External Faculty Member, was writing. We were the guinea pigs. My dissertation was actually about artificial neural networks.

Can we do theory refinement on artificial neural networks for personalized web search? Basically, we would learn one artificial neural network about what pages you like, and one about what kind of links you like. And then when I graduated, I just didn't want to go into academia. I thought academics were weird. So I went to work at Lawrence Livermore National Laboratory. For those of you who don't know, Lawrence Livermore National Laboratory is a sister lab to Los Alamos National Laboratory. It's where the hydrogen bomb was invented; for Los Alamos, it was the atomic bomb.

And when I got there, I was working on basically things that go boom. So if a bomb were to explode, what happens? Basically, the physicists were putting up these differential equations on these large computers, and they weren't really looking at the output of their simulations. And so I was building statistical indices over the outputs of the simulations of things going boom. All of my examples were like a wall crushing a can, or a supernova, or something like that. And then 9/11 happened.

And a lot of the funding got shifted to "if we could have only connected the dots." So the resurgence of network science, of studies in complex networks and social networks, was in part due to 9/11, and also, of course, the rise of Google and the effectiveness of PageRank. So then I shifted completely to working on machine learning, data mining, and complex networks. So what are some of the problems that we would be interested in there?

However big your complex network is, it's not complete, because you're not omniscient. So if you want to observe more of the phenomenon that you're interested in, can you learn which nodes to query to get the most information out of them? And then about five years ago, my friend Danielle Allen at Harvard said, come over and give a public talk on AI and ethics. And so that's where I basically got started on fairness in machine learning. And my project on machine learning and the study of the connections between that and complex networks and democracy started when my friend asked me to go to Bristol.
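
A minimal sketch of the node-querying problem described above, under toy assumptions (the hidden graph, the query budget, and the ask-the-hubs-first heuristic are our illustrations, not Eliassi-Rad's published methods):

```python
# Toy active network discovery: spend a small query budget to reveal as much
# of a partially observed network as possible.
import random
import networkx as nx

random.seed(0)
hidden = nx.barabasi_albert_graph(300, 3, seed=0)                 # the "true" network
observed = nx.Graph()
observed.add_edges_from(random.sample(list(hidden.edges()), 50))  # our partial view

def query(node):
    """Reveal all of one node's true edges (costs one unit of budget)."""
    observed.add_edges_from((node, nbr) for nbr in hidden.neighbors(node))

queried = set()
for _ in range(20):                                   # a 20-query budget
    candidates = [n for n in observed.nodes() if n not in queried]
    node = max(candidates, key=observed.degree)       # heuristic: query hubs first
    queried.add(node)
    query(node)

print(f"observed {observed.number_of_edges()} of {hidden.number_of_edges()} edges")
```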

This was back in January of 2018, and she was interested in the stability of democracy. So yesterday there was this report that came out that, for the very first time, put the U.S. on the list of democracies with backsliding going on. And so we were studying this back in 2018, and we had a couple of papers there. And then my connection with SFI was through Cris Moore. Cris Moore was visiting Boston, and I had known Cris Moore from before. He's a computer scientist, I'm a computer scientist. And then we started talking about algorithmic justice, and that's how I kind of became part of the SFI family.

Michael Garfield (9m 0s): I think we'll be sort of braiding around the conversation that I had with Cris Moore earlier this year, and the conversation I'm anticipating having with Melanie Moses and Kathy Powers in a couple of weeks, with this conversation, because I want to take us through the papers that you've written taking a complex systems perspective on democracies. And then I think that takes us actually kind of all the way back to how this ties into AI and, as you put it, the algorithmically infused society.

So let's start with this piece that you led with a number of other researchers (we'll link to it in the show notes), What Science Can Do for Democracy: A Complexity Science Approach. And although I doubt anyone listening to this needs to be sold on the idea that the political system is a complex system, and therefore we should understand it thusly, this is an eloquent and detailed argument, and I would love for you to just give us the 50,000-foot view.

Tina Eliassi-Rad (10m 5s): So first, I guess we should think about what democracy is. A base definition would be collective decision-making plus equality among participants. Now, of course, a democratic society makes a lot of assumptions: that citizens are moral and rational, that society desires order and cooperation instead of chaos and conflict, that politics is about compromise, and that power is not concentrated in individuals and groups. And then what has happened recently is that people who track democratic backsliding, in terms of civil liberties, political participation, et cetera, have noticed that we are going downhill.

And this is not just in the U.S.; we also see this in Poland and Hungary and other EU countries. And one of the things which is interesting is what political scientists have published; this would be Waldner and Lust in 2018. Before, they believed that achieving democracy is a one-way ratchet: that is, when you get to democracy, you don't go back. Of course, this is not true, right? In Germany, for example, they had a democracy, and then it led to the Third Reich.

And so what we were thinking was: if we think about a democracy as a complex system, then can we explain its backsliding as instability? People who study complex systems are interested in features of that complex system, like feedback and memory and randomness and so on and so forth. Can we think about it in terms of instability in a complex system? And that's why that workshop was organized by my friend Karoline Wiesner, who at the time was at the University of Bristol, and Karim Thébault, who is a philosopher. I actually think Karoline was a postdoc at SFI, because Karoline Wiesner is a physicist.

They brought in people from different disciplines to describe what stability is to them. So, I'm a machine learning person. In machine learning, we say an algorithm is robust if its performance does not change when the input is changed. Some of your listeners may know about all this work on adversarial machine learning, one-pixel attacks, where there's an image and a machine learning algorithm says this is a panda, and then you change it slightly, and then it says it's a gibbon, for example. You change one pixel, you add some noise.

And so obviously in machine learning, we want robust algorithms. And then, of course, physicists have studied stability and resilience and robustness in networks a lot. And so part of it for us was to separate stability from robustness. Stability would be persistence over time: resistance to change or perturbation. For example, a ball on top of a peak is not stable. And then robustness is more about insensitivity, or independence of the behavior of the system to changes that happen at the microscopic level of that system.

So, for example, for the folks here who know about the nearest neighbor algorithm: nearest neighbor is not robust. So that's a high-level view. And then, if you like, I can go on.
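
To make the nearest-neighbor point concrete, here is a minimal sketch (our illustration, not from the workshop papers): a microscopic change to the input flips a 1-nearest-neighbor prediction, which is exactly the insensitivity that robustness demands and that 1-NN lacks.

```python
# Why 1-nearest-neighbor is not robust: a tiny perturbation of a query point
# near the decision boundary flips its predicted label.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two training points from different classes, close together.
X_train = np.array([[0.0], [1.0]])
y_train = np.array([0, 1])

clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

x = np.array([[0.49]])           # query point just on class 0's side
x_perturbed = x + 0.02           # microscopic change at the input level

print(clf.predict(x))            # [0]
print(clf.predict(x_perturbed))  # [1] -- the prediction flips
```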

Michael Garfield (13m 19s): I definitely do want to go into more detail about stability, because you sent a whole other piece specifically about this. But before we get there, there's an interesting link that you just brought up between your work in machine learning and the idea of democracy. And for me, it draws a line through a community lecture that Michelle Girvan gave at SFI, I think back in 2018 or '19 (we'll link to that too), where she was talking about reservoir computing. And as I recall, the talk was basically saying that there are hard, known mathematical limits to the horizon of prediction, based on overfitting to training data: basically, the world being more complex and chaotic than that.

And what researchers have found they can do is inject noise into that process, by, say, training a camera on a bucket of water, something that seems shockingly low-tech. But in fact, if you're modeling chaotic fluid dynamics, then that's kind of what you want. And they were able to extend the horizon of meteorological predictions dramatically with this approach. And so in this paper, you say randomness of interactions is often important to self-organization, and you cite Shirado and Christakis (2017) on how the collective performance of human groups can be improved by the insertion of a few autonomous agents that behave randomly.
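
A hedged sketch of the Shirado-and-Christakis-style effect (the graph, dynamics, and parameters here are simplified assumptions, not the paper's experimental protocol): agents try to solve a coloring coordination game by greedy local moves, and a little per-move randomness tends to help runs escape locally stuck states.

```python
# Graph-coloring coordination game: reach a state where no neighbors share a
# color. Compare purely greedy agents with agents that occasionally act randomly.
import random
import networkx as nx

def conflicts(G, color):
    return sum(color[u] == color[v] for u, v in G.edges())

def play(G, noise, colors=3, steps=3000, seed=0):
    rng = random.Random(seed)
    color = {v: rng.randrange(colors) for v in G.nodes()}
    for _ in range(steps):
        v = rng.choice(list(G.nodes()))
        if rng.random() < noise:
            color[v] = rng.randrange(colors)          # noisy move
        else:                                          # greedy best response
            color[v] = min(range(colors),
                           key=lambda c: sum(color[u] == c for u in G.neighbors(v)))
        if conflicts(G, color) == 0:
            return True                                # globally coordinated
    return False                                       # stuck or out of time

G = nx.random_regular_graph(4, 30, seed=1)
for noise in (0.0, 0.1):
    solved = sum(play(G, noise, seed=s) for s in range(20))
    print(f"noise={noise}: solved {solved}/20 runs")
```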

It's funny: this links up with the conversation I just published on the show with Simon DeDeo, about when a virtue becomes a vice. You know, everything that we're talking about with stability, chaos, collapse, and so on is about what happens when there's too much perturbation, but you're also making the case that there's a lower limit. And I'm curious if you could speak a little bit to the complex systems view of where that limit is. How do you find it? What are the techniques being brought to bear on determining when it's time to inject a little noise?

Tina Eliassi-Rad (15m 23s): So that's really interesting. If you think about it in terms of our democracy, we need some consensual norms. And if there's too much randomness, then you get this instability, and then the consensual norms go away. And that's when people are susceptible to lying demagogues, because they're like, oh, well, the system doesn't work for me. And in fact, this is one of the issues in terms of using algorithms to make high-stakes decisions, because people treat algorithms as if they are somehow objective, but the algorithms are also being trained on data that is full of societal bias.

So there is that aspect of it. And so then the question is, going back to what you were saying: how can I predict that I have too much information, that I will tip over into this chaotic kind of instability, and then my consensual norms are going to go away? And is adding randomness the right thing to do, or should you actually, in the language of complex networks, separate people out, not letting communication happen, to try to stabilize the system?

And so it's not clear. I feel like it depends on the context. Some of your audience may know about these large language models, like GPT-3 or BERT, where you start from text and then generate another piece of text. They're trained on uncurated data, and that data is actually very random, if you want to think about it: everybody around the world puts up something. So it's not curated at all, but it's very homophobic, sexist, misogynist, et cetera, et cetera.

And that's why, if you go and type into one of these large language models, for example, "two Muslims walk into a bar," it will finish it off with violence: "and they blew it up," for example. And so with those kinds of things, it's not clear that adding more randomness would help. It's more about: what am I learning on? What kind of data am I learning on? It's a case where you need more editors, as opposed to adding randomness.
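
For listeners who want to poke at this themselves, here is a minimal probe sketch (our illustration; the model, prompt template, and group terms are arbitrary choices, and one-off prompts are nowhere near a rigorous audit):

```python
# Probe sketch: compare a masked language model's completions across group terms.
# Requires `pip install transformers torch`; downloads bert-base-uncased.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for group in ["christians", "muslims", "jews"]:
    preds = fill(f"Two {group} walked into a [MASK].")
    print(group, "->", [p["token_str"] for p in preds[:5]])

# Skews in these top-5 lists hint at what uncurated training text has baked in.
# Systematic audits use many templates and careful statistics, not one prompt.
```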

Michael Garfield (17m 36s): So there's this other piece of it, which you talk about as flooding, citing Roberts in 2018: the flooding of social media with multitudes of quarreling perspectives, increasing the diversity of perspectives in the system to an intolerable level, leading consumers of news either to political apathy or to converge on crudely simplified propaganda. And this is something I found really interesting: looking at democracy itself as an unfinished project, acknowledging the fact that it's evolving, taking new forms as society and media continue to adapt to changing circumstances.

And you say we cannot exhaustively specify the basis of stability within which democracy can subsist. You give the example of early classical liberal claims that democracy could not survive because the less wealthy majority would vote themselves benefits at the expense of the wealthier minority, and how instead democracy has actually been structurally constrained by the need to placate wealthy elites. So the dominant model in political science and economics views democratic institutions as self-enforcing, game-theoretic equilibria, internally stable. And here's the kicker: by implication, democratic institutions are thought to provide an external set of guarantees that foster more dynamic and unpredictable activities, but they will not themselves be affected by those activities.

My God, does that sound like every sort of office bureaucracy that everyone is familiar with. Ultimately, it seems like the question of political voice, and how much political voice people are actually guaranteed, is unresolved, sticky, and a moving target.

Tina Eliassi-Rad (19m 11s): Yes, indeed. And the rise of social media has obviously not helped that, because now information, whether you're happy with something or unhappy with something (usually you're unhappy with something), goes around the world a lot quicker than before. So, for example, during the Iranian Revolution, people would disseminate information in their social networks with cassettes, for the audience who remembers cassettes. And then during Tiananmen Square, it was with fax machines.

So, faxes. And then during the Arab Spring, it was Twitter and Facebook, et cetera. And this goes back to the randomness idea. For example, Facebook now has a board that looks at what they have done and the decisions that they've made, but it's post hoc. It's not before the water has already been spilled, and it's very difficult to collect that water. So we need serious regulations on that, for democracy. I think this all goes back to what I was saying: the assumption with a democracy is that you have moral and rational citizens, and that people are going to compromise, and somehow we're not there.

Clearly our politicians aren't there. We as people aren't there. And that's why, for the very first time, the U.S. is on the backsliding list for democracy.

Michael Garfield (20m 33s): There's, again, the sort of wickedness of this problem. You have two really cool quotes in this piece. One is from David Runciman, who says, "the randomness of democracy, which remains its essential quality, protects us against getting stuck with truly bad ideas. It means that nothing will last for long, because something else will come along to disrupt it." And then you quote Alexis de Tocqueville, who says, "more fires get started in a democracy, but more fires get put out, too." This is, in theological terms, a theodicy. It's not necessarily a justification, but it's an acceptance of a certain amount of this spill that you're talking about here.

I mean, this seems to point to a more fundamental strain in complex systems thinking about the relative roles of, to borrow from conversations I've had with David Krakauer on the show, machine learning on one side and theory on the other, or prediction on one side and understanding on the other. There's a sense in which we are always going to be on our back foot with this, trying to understand what happened.

I think that this is where we can lean into questions about the piece that was lead-authored by Karoline Wiesner and others, Stability of Democracies: A Complex Systems Perspective, and get into the question of design for a given context, understanding these things the way that we understand ecosystems. To quote former SFI Trustee Stewart Brand, we are as gods and might as well get good at it.

Now we have to design these built environments that we live in, and what does that mean? And for later topic-modeling purposes, this definitely plugs into a much broader conversation SFI is having about emergent engineering, and the relationship between, basically, playing with fire: playing with these processes that are innately out of control. But can we sculpt them? Can we make a kiln? What can we do with them? So I'd love to hear you unpack this paper. You gave a little bit of this earlier, but you talk about multiple definitions of stability.

I'd like to revisit that in more depth.

Tina Eliassi-Rad (22m 44s): Before I get to that, since you mentioned something, I think this is a good interlude for one of the quotations that I really like. It's by Safiya Noble, who wrote the book Algorithms of Oppression, and who was on the MacArthur fellows list this year. She says you have no business designing technology for society when you know nothing about society. So what has happened is that you have technologists, mostly tech bros, right? Designing technology for society when they don't know how society works.

And then it's like a post hoc fixing of things. And in fact, one of the things that has been interesting is that, as representative democracy is not doing very well, people are looking at other forms of democracy, like sortition, which some of your audience members may know as citizen assemblies, or lottocracy. They're kind of like our jury system here, and they have their own issues and problems. Or they're looking at liquid democracy, where you have delegated democracy, where you delegate your votes to somebody, or this notion of quadratic voting, where there will be markets for voting.
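
For reference, the quadratic voting mechanism she mentions is simple arithmetic: casting v votes on one issue costs v squared voice credits, which makes expressing intensity possible but increasingly expensive. A tiny sketch (the 100-credit budget is a made-up illustration; real proposals differ in their details):

```python
# Quadratic voting in miniature: v votes cost v**2 credits.
def qv_cost(votes: int) -> int:
    return votes ** 2

BUDGET = 100  # hypothetical per-voter credit budget, for illustration only
for v in [1, 2, 5, 10, 11]:
    affordable = "yes" if qv_cost(v) <= BUDGET else "no"
    print(f"{v:2d} votes -> {qv_cost(v):3d} credits (affordable on {BUDGET}: {affordable})")
```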

To me, all of this is kind of horrible. But again, when one form of democracy doesn't work, people go to others. And again, there's this notion of who's in charge and who has power, and the incentives are around the power. So in terms of AI and machine learning and our data-driven world now, one has to take a complex systems view of it. It's not just, oh, here's an algorithm that will maximize some objective function.

That algorithm works within a complex system that has randomness, feedback, memory, hierarchy, and so on and so forth. And so one of our jobs is to teach computer scientists about complex systems, basically, and that you're not just independent by yourself. And then, in terms of the stability you were asking about: that workshop that we had in Bristol was very interesting, because we had people from philosophy and physics and economics and machine learning and mathematics and the social sciences and political science.

And to all of them, this notion of stability, or robustness, or resilience, meant something slightly different. In many fields, it seems to me that people use them interchangeably, as if one means the same thing as the others: if your system is stable or robust or resilient, they're all the same kind of thing. And that's why, for us, it was more about what we're really thinking about: do you have stability in your system of democracy? So one of the things that happened in America was we did have a fair election, but the sitting president did not say that we had a fair election.

And so that's something that adds chaos to the system and increases instability, because we never had that before. And thinking about stability at all levels is very interesting. At the federal level, at the state level, at the local level, what does stability mean? What happens when one administration comes in and another administration is out the door? And why is this important? It's important because of how they implement the policies, right?

How the policies get executed under different administrations is extremely important. And we saw that. We saw that with the power of the executive pen: you can change a lot of people's lives, which reduces stability in the system.

Michael Garfield (26m 21s): So to that point, actually, maybe it makes sense to hairpin back to the first paper we were talking about, because in that piece you propose six policy recommendations. And I was so excited to see this, because SFI, as a theory-producing institute, is not typically in the practice of issuing policy recommendations. We don't like doing it. And yet these are, I think, basically bumpers that you admit in this piece are extremely context-dependent.

The balance between these six things to keep in mind is going to shift depending on where you are considering deploying them. So it's not a one-size-fits-all thing, but you give three top-down and three bottom-up considerations. And if we can enumerate and unpack those a little bit, I would love that, because I feel like that's something very concrete that people can take away and consider in the governance of their own lives.

It's not just this lofty kind of theoretical piece.

Tina Eliassi-Rad (27m 28s): So one is to improve diversity by regulation. One of the things that these citizen assemblies are good at is that they randomly pick citizens to discuss a problem that's important to society. This is in fact how, in Ireland, which is a very conservative Catholic country, they were able to make abortion legal, I believe, and same-sex marriage is no longer illegal. So you can change public opinion if you run these kinds of sortition bodies.

So that's one; you really need diversity. In fact, one of the unfortunate things now among all the people who work within AI and ethics (and I keep going back and forth, because I feel like they all fall under the same umbrella) is that there are the marginalized people who are saying, look, these algorithms, these systems are harming people, they're harming our communities. And then there are the privileged people who are like, oh, well, we're advancing science, right? I have yet another definition of fairness in machine learning. And so you really need to have diversity.

And this goes back to what Safiya Noble was saying: you need to know how society works if you want to make good change. So one is diversity. The other one is to monitor feedback. The feedback between mainstream media, social media, politicians, the economic inequality that we have, and the dissatisfaction with the politicians: these all go together. There are a lot of feedback loops. In fact, in the paper that you mentioned on algorithmically infused societies, that's one of the things that we talked about.

So, what you buy: Amazon has a lot of power based on what they show you and what you buy, and how that affects the stock market. The matching process on these online dating apps shapes who we date online and offline. So this feedback between online and offline and the news cycle, they're all meshed together, and one needs to monitor this feedback. And then the other one is to ensure connectivity, that is, we want transparency. One of the things that really helped the U.S. in terms of democracy is that we did have paper ballots.

This idea of, okay, let's recount. So you really want to have transparency. Those are the top-down ones. For the bottom-up ones: you really need to recruit people from the other parts of society. In a way it's similar to the diversity one, but you need to have credible communicators to the people that you don't necessarily agree with. And I know there are lots of folks working on filter bubbles; I think you may have had Filippo Menczer at Indiana University on.

He works a lot on those kinds of things. But we need to really talk to each other, because if we don't, then we don't see the other person's point of view. The next one is to recognize the limits of message control. So if you're working on, let's say, climate change: a few years ago at the University of Arizona, some of the emails of the scientists got released, and then a spin was put on them. So you have to be careful in terms of communication. And we've noticed, for example, in the pandemic, the communication has been extremely bad.

So we have to recognize what the limits of message control are, and what the best way is of going out there and saying, look, democracy is a good thing, even if you believe it's not a good thing, and trying to convince people of that. And then the last one was to emphasize persistence: basically, you need to keep going, you need to persist for the things that you care about. Just hoping for the sake of hope is not good enough. We need to be active, we need to be out there and try to make the world a better place. And one of the things which was great, actually: after we published it, I believe the American Academy of Arts and Sciences also had a report on how we can reinvent American democracy for the 21st century.

And their recommendations were very similar to ours. They also had six recommendations.

Michael Garfield (31m 28s): Validating, yes. So I'm going to give you a very SFI question. Basically, it sounds like all of this hinges on the sort of topology of that society: who's connected to whom, who has agency over what. There's lots of work coming out of SFI on scaling laws and natural limits of various kinds. This has been a hot conversation since the beginning of democracy.

I think about Kurt Vonnegut in Bluebeard saying something to the effect of, I think the quote was, "any country larger than Denmark is a damned fool's mistake," which, I mean, is a hypothesis that can be empirically researched. So when it comes to all of the problems we recognize, the ones brought into focus by papers like the stewardship of global collective behavior piece that came out earlier this year: what have we done? We've created all of these affordances for people to influence one another in ways that everyone seems to acknowledge are destabilizing.

And it's a question of whether or not you want that instability. But ultimately, what are your thoughts on the natural limits of different kinds of governance? There's one piece here, which is that you can look at this through the lens of work like that of Jessica Flack, who thinks about these kinds of processes as collective computations, and thinks about the state as having to coarse-grain information; it only sees individuals at a given resolution.

And so things are falling through the cracks all the time. And then if you look at the work that Simon DeDeo and Elizabeth Hobson were writing about last year, on the dimension at which a system understands itself, you get to a point where suddenly, beyond this point, everything is organizing according to one principal component. Like in academia, suddenly prestige reshapes the entire institution; or in New York, your distance from Hudson Street determines the price of real estate.

And things are distorted in this way that seems to happen as a natural consequence of the system's inability to understand itself at a finer level of resolution. And so this is just sort of a question into: is there a natural size limit for democracies? Or are the protocols upon which democracy is founded so general and so adaptive that we just don't know? Maybe we haven't seen the biggest amoeba yet; we're not yet quite at the point where multicellularity makes more sense than just scaling one cell bigger and bigger.

Tina Eliassi-Rad (34m 20s): I think we don't know. Having said that, in terms of the size you were asking about: if you have a very homogeneous society, it could be as large as it wants. And if you are controlling what kind of information people are getting, then let it be as big as it wants. Part of the issue with a democracy like America's is that it's so heterogeneous. At the workshop in Bristol, we had a thought experiment: if you were to model democracy as a mathematical model with one parameter, what would that parameter be?

And it was very interesting to hear what people would say, depending on where they were raised. For some people, freedom of speech was the most important parameter for modeling democracy. For some, it was separation of powers; for some, it was equal treatment under the law. So I think, if we think about democracy and say, okay, I'm going to model it as a complex system, and I'm going to start small: what are the key parameters that are important here? Some parameters that you may think of, for example equal treatment under the law, seem extremely important.

But you don't necessarily need a democracy for that; you can have a dictatorship and have equal treatment under the law. So I think that's an interesting side experiment to do here. And then, if you add heterogeneity in terms of your population, in terms of where they get their information, then you make your complex system even more complex, and then you can prove fewer things about it.

Michael Garfield (35m 57s): This is back to number four, recruit credible communicators in estranged regions of the network. So this is the thing that keeps me up at night: the crisis of social epistemology and narrative. If you're a non-expert, how do you know you're talking to an expert? It's kind of a recursive nightmare of a situation. This links to the conversation I had with Rajiv Sethi, where we were talking about his work on stereotypes, and your brain creating these equivalence groups; it's very natural for us to do this.

It becomes a problem when it's a high-risk situation in which you have very limited time to make your decision. And so you could break bread with your neighbor, and they may come from a different race and ethnicity and culture, but then you've gotten to know them. And there's something about the web: its ultimate effect is the autocorrelation of society, in such a way as to ramp up these disruptive network effects, such that people who are encountering one another online don't have time to get to know one another and establish the common priors necessary for what we were thinking of as democracy in the United States in, like, the 1950s. I don't know, this is sort of a vague question, but I want to get us from here into questions about how we think about democracy in a society that is increasingly determined by algorithmic processes, as well as by the mediation of electronic media.

Tina Eliassi-Rad (37m 34s): So I feel like part of it is because our shared realities are very different. One of the things about the pandemic that happened is that people like me were able to bring up the drawbridge and continue to do our work. But a lot of people were not able to do that. I had a safe home where I could work and teach and research, et cetera, but a lot of people couldn't do that. My shared reality is very different from that of a single mother who works three jobs.

And I feel like, as our economic inequality gets bigger and bigger, our shared reality goes away. And as I'm sure you and your audience know, to have a healthy democracy, you need to help the middle class. And so as the middle class goes away, as the policies that our governments enact squeeze the middle class, our shared realities totally change. And because of that, our opinions on social media are so different.

It's like, which world do you live in? And of course, the economic inequality also touches on unequal influence in the political system. This notion of not having the same shared reality is why, for example, some of our marginalized communities believe that the cops aren't for them. And so the algorithms that are used on data from, let's say, arrests are obviously not going to be fair to them. And so trying to at least have some base of shared reality is what we really need to have stable democracies.

And I thought that maybe the pandemic would be one of them, but unfortunately it was not. The financial crisis of 2008 also was not. I guess 9/11 was the closest we got in modern times to having some shared reality. But that's one of the biggest issues, and we see it in the data that's being fed to the algorithms that make these high-stakes decisions: whether you go to jail, or whether you go home while you wait for your trial.

Michael Garfield (39m 49s): Now we're really in it, talking about all three of these papers at once, and I'm glad we got here. Because again, in the first one, speaking to what you were just saying, you note that high inequality under some circumstances will spur unhappy citizens to counter-mobilize; however, such a stabilizing counter-reaction requires sufficient political knowledge and access to the public space. And around and around we go. And I like to think about this in terms of: how many layers of self-determination does a system of a given size really need?

It's not how big is too big for democracy. It's more like Geoffrey West talking about circulatory systems and saying an elephant has more branch points between the aorta and the capillaries because it's a bigger animal. And so, to your point about policing and inequity and unfair treatment, I don't know what you might say to this at the level of fundamental theory. But I do hear a lot of people asking questions about, if we could re-imagine the police, then what kind of emergency should be escalated to what level?

It seems like most people agree that things like a domestic dispute do not require a SWAT team. I would love to know a little bit more about that piece of it, and where you see research on this question of, basically, tissue layers forming, and how many are appropriate to a given system and a given task.

Tina Eliassi-Rad (41m 20s): So these are all really interesting questions, and very difficult to solve with technology, though people think, oh, I can solve these with technology. My friend Rayid Ghani, who's a professor at Carnegie Mellon, does data science for social good. And he was saying, I go to a police department and the police chief says, tell me if my department is racist. And it's like, okay, well, how are you going to formalize whether your department is racist? Or my friend Tracey Meares, who is a law professor at Yale: she has found that in criminal justice, people feel like they're being treated fairly if they are treated with dignity, if they are told why things are happening to them, and if they feel like they have said their piece, that you gave them enough time to tell their story. How do you put these in an objective function? Whatever data you have, how do you put it in an objective function where you're like, oh, I believe this person feels like they have been satisfied and have told their story, or that they feel like they're being treated with dignity? So there's some disconnect in terms of how you actually formalize these things.

There's a lot of data, and people are trying to find associations in it. And unfortunately, associations usually just work for the average case, not for the people who are on the tail end of the distribution. But even if we don't think about, for example, criminal justice or policing, let's take medicine. During the pandemic, we all wanted to buy pulse oximeters to measure, for example, how much oxygen you have in your blood. Well, last December there was this paper, and it made all the news, showing that these oximeters don't work well on darker-skinned people.

And then I did some research, and it was clear that people knew this way back in the eighties. People knew that pulse oximeters don't work well on darker-skinned people, and nobody did anything. So it's about: we don't care about the marginalized people, so nobody did anything. And imagine all of this data that was collected over time and is now being fed to machine learning systems. Or, you know, these are all anecdotal, but a friend of mine who's African American goes to the doctor, and she has a history of osteoporosis in her family.

And they were working through a decision tree; the doctor was going through a decision tree from a machine learning classifier. And basically, when the classifier was like, oh, you're Black, they had nothing to say. It stopped; it was a leaf. And so she's like, okay, assume I'm white. And then there were questions that were relevant, like, how many members of your family have had osteoporosis? How many times have you broken your limbs in the past couple of years? And so some of it just has to do with who's valuable, who our society values.

And who does our society not value? And so these things then persist over time, which is unfortunate, and which leads, again, to: the system doesn't work for me. So the police are not for me. If I'm a dark-skinned person, the police are not for me; I'm never going to call them, because bad things may happen, even if the police department is a great police department. It becomes very difficult to break those cycles. So we just have to work on it. One of the solutions would be that when politicians and policymakers enact policies, there's supposedly an intent to that policy.

When the policy gets executed, there's data that comes out. So then you can use machine learning to see if you can reconstruct the intent of the original policy from that data, and if there's a mismatch, then you can make noise. A concrete example would be stop-and-frisk in New York City. Clearly the policymakers were not going to the town square and saying, yeah, this is to stop and harass young people of color. But when you actually look at the data, when the policy was executed, it seemed like it was to just harass young people of color.
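
A hedged sketch of that audit idea, with entirely hypothetical numbers (this is a simplified disparity check, not a reconstruction of intent from real stop-and-frisk records):

```python
# Does who gets stopped track a group-neutral stated intent?
# Illustration only: the groups and shares below are made up, not NYPD data.
import pandas as pd

stops = pd.DataFrame({
    "group":      ["A", "B", "C"],
    "pop_share":  [0.45, 0.30, 0.25],  # share of residents (hypothetical)
    "stop_share": [0.10, 0.55, 0.35],  # share of recorded stops (hypothetical)
})
stops["disparity"] = stops["stop_share"] / stops["pop_share"]
print(stops)

# A disparity far from 1.0 for some group is the kind of mismatch between
# stated intent and execution that, as she says, you can then make noise about.
```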

And so then it's like, maybe we should change this policy, or maybe we should change the implementation of it. Which goes back to, for example, what you were saying: if there's a mental health situation, maybe we shouldn't send the cops with all their riot gear. But, you know, all of this requires courage from policymakers, and I feel like our politicians and policymakers are not that courageous.

Michael Garfield (45m 33s): Well, as much as I want to come up with something funny to say to that, I have nothing. I do, however, want to press on, because in this piece, Measuring Algorithmically Infused Societies, led by Claudia Wagner, which we will link in the show notes, you identify three key challenges to basically understanding what's going on: insufficient quality of measurements, the complex consequences of mismeasurements, and the limits of existing social theories.

And it's funny, because in reading this, it occurs to me that the first challenge takes us all the way back to chaos theory and indeterminacy and the so-called butterfly effect. And what we have here, what you've mapped in a way, is this sort of ratchet. This comes up a lot in terms of questions I have about the details of our models: it's inevitable that something is going to fall through the cracks. In some cases that's good.

You know, certain things remain uncorrelated, in financialization and herd-following behavior and this kind of stuff. I would love it if you could go a little bit deeper into how you and your co-authors saw opportunities to address these three challenges, and where you see promise in our theoretical understanding. We just talked about Andreessen Horowitz and software eating the world, and the degree to which we are becoming more and more dependent on these kinds of Lovecraftian, discarnate things out there.

How do we make sense of it?

Tina Eliassi-Rad (47m 20s): So, one of the things that we do not have: I teach a class to undergraduates, first-year students, on algorithms that affect lives, and the students are awesome. One of them asked, do we have tools, do we have software, that tells me how I'm being manipulated? We do not. Now, there are people, including myself (I'm part of this big foundation grant), who are looking at: can you nudge or boost people to do better online? So before, for example, Tina posts something, it would say, well, Tina, are you sure you want to post this?

Some of your Jewish friends may find it anti-Semitic. And then you can see if the person still posts it or not. But as we all know, data is political. There's a really nice site called The Library of Missing Datasets: how come, for certain things, we don't have any data, even in this day and age? And then there's the idea of having basically a birth certificate for data. There was a really nice paper, finally published recently, called Datasheets for Datasets, by Timnit Gebru, who was one of the co-leads of Google's ethical AI team before she was ousted last year, where they say you have to have a long-form birth certificate for whatever data you're using.

Because that way you also have to put in your values: what are your norms? What was the motivation? How did you collect it? How are you maintaining it? For what purposes is it intended? And so on and so forth. And then there's actually another paper, by Meg Mitchell (Margaret Mitchell), who was the other co-lead of ethical AI at Google, and who was fired, called Model Cards for Model Reporting, which is about which populations a model works on and which ones it does not work on. Again, having these kinds of long-form birth certificates.
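
A minimal sketch of what such a "birth certificate" could look like in code (the field names paraphrase themes from Datasheets for Datasets; this is our illustration, not the paper's official template):

```python
# A toy "birth certificate for data," in the spirit of Datasheets for Datasets.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    motivation: str                # why was the dataset created, and by whom?
    composition: str               # what is in it, and who is represented?
    collection: str                # how, when, and with what consent was it gathered?
    preprocessing: str             # what cleaning or labeling was applied?
    intended_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    maintenance: str = "unspecified"

# All values below are hypothetical, for illustration only.
sheet = Datasheet(
    motivation="Study link formation in an online social network.",
    composition="Public posts by adult users of one platform, 2019-2021.",
    collection="API crawl under the platform's terms of service.",
    preprocessing="Deduplicated; usernames replaced with salted hashes.",
    intended_uses=["network-structure research"],
    known_limitations=["skews toward one platform's demographics"],
)
print(sheet.known_limitations)
```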

But if we are going to live in these kinds of algorithmically infused societies, where some of these algorithms can have a huge impact on your life (whether you get hired, whether you get the loan, whether you can go home while you wait for your trial), then in a way we should treat the algorithms like drugs, like prescription drugs, and we should have warning labels for them. Because, just like prescription drugs, they will have side effects; they work differently on different subpopulations, et cetera, et cetera. And actually, the FDA is further along on this than other regulatory institutions, in terms of algorithms being used in medicine or medical devices.

And I think that's because people have successfully sued medical companies and were able to collect a lot of data, versus other high-stakes decisions. But a lot of this is just being honest and saying, these are my values: I'm building this system, I'm collecting this data, because I only care about white males, say white males between 30 and 65. But nobody is going to go say that. Nobody's going to say, oh, I just want to build a system that works for this subpopulation.

And in fact, this is why there's a whole group of machine learning folks, led by Charles Isbell, who are thinking about these systems as basically engineering systems. You would not build a bridge that only works for white people; you build a bridge that works for everybody. So we need to build algorithms that work for everybody. And these algorithms are part of the complex system. And it could be that you can't have one master algorithm that works for everything; clearly it won't be like that. You would need different algorithms for different subpopulations, because our democracy is so heterogeneous.

So I'm not quite sure that we can have the nice theoretical formulations that we love to see, given just how messy data is, but it would be nice to develop new social theories, as you were saying, that look at data as a first-class citizen. So, some of the social theories: for example, the strength of weak ties that Mark Granovetter came up with, which is about, if you want to find a job, do you talk to your best friend or do you talk to an acquaintance?

And he found that talking to an acquaintance is the better way to go, because your acquaintance has access to other information, whereas your best friend has the same information that you have. And do those kinds of theories hold in, let's say, LinkedIn? I believe there was a paper about it showing that it still holds true that you want to talk to your acquaintance.
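
Granovetter's point is easy to see on a toy graph (our illustration, not the methodology of the LinkedIn study she mentions): a bridging acquaintance reaches contacts, and therefore information, that your tightly knit friends cannot add.

```python
# Strength-of-weak-ties intuition: who reaches people I don't already know?
import networkx as nx

G = nx.Graph()
G.add_edges_from([("me", "friend"), ("me", "pal"), ("friend", "pal"),  # my tight cluster
                  ("me", "acquaintance"),                              # weak bridging tie
                  ("acquaintance", "x"), ("acquaintance", "y"), ("x", "y")])

def novel_contacts(G, ego, via):
    """People reachable through `via` whom `ego` doesn't already know."""
    mine = set(G.neighbors(ego)) | {ego}
    return set(G.neighbors(via)) - mine

print("via friend:      ", novel_contacts(G, "me", "friend"))        # set(): nothing new
print("via acquaintance:", novel_contacts(G, "me", "acquaintance"))  # {'x', 'y'}
```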

Michael Garfield (51m 45s): So that works because, again, we have network heterogeneity: there are things that I know that you don't, and vice versa. And this gets to the last question I have for you. First of all, thank you, this has been awesome. And second, the question is about the relationship of transparency to everything that we're talking about. You stress the need for transparency, and I don't think any proponent of democracy would deny that transparency is important. But then again, you get people like Scott Adams or David Brin who have written very loud pieces about the idea of a truly, totally transparent society.

I think a lot of people these days are burned by surveillance capitalism and have actually given up that rhetoric and are fleeing from that kind of thing. So it looks like there are upper and lower bounds on this. And of course, one of the things about transparency is that it distorts behavior; people know they're being watched. So it allows for things like virtue-signaling your tribal affiliation by demonstrating large campaign donations, or it leads to cold wars, or it leads to people feeling like they are stakeholders in something that is going on in some town five states away from them, that they should be weighing in on it.

And they don't have the expertise or the cultural history for that to actually matter. But we've created this thing where there's transparency in ways that are disastrous. So I'm curious: based on all of this work, in what ways and to what degree do you regard transparency as optimal for a democracy?

Tina Eliassi-Rad (53m 29s): So, in all the important ways, things are not transparent. When you say "I agree" to Google's terms of service, what are you consenting to? I have a friend who actually has read it all. She's a lawyer and a law professor and works on consent, and she's like, they have covered themselves perfectly well. So what am I consenting to? Because my data then gets aggregated with other data and goes into this machine learning pipeline, and who knows what is happening to it?

And so we don't have those kinds of regulations, in terms of: Tina, what is your data doing here, and how might the inclusion of your data put you in a group where, for example, you will not get good credit? And so to me, it seems like in all of the important ways, things are not transparent. And some of it is because the companies say, this is our secret sauce; I can't tell you what my secret sauce is, because then my competitors will use it. But to me, who gets what information when is extremely important, and we don't really have transparency there.

So perhaps we are transparent in all the superficial ways, but not transparent in the more important ways: how your data is adding quote-unquote value to the companies, or to whatever government programs there are, et cetera, et cetera.

Michael Garfield (54m 50s): Just as a brief follow-up to that, where do you stand? Earlier in this conversation, you looked at this proliferation of new candidate voting mechanisms with a bit of a side-eye, not to overtly talk smack on quadratic voting, which I know SFI friend Glen Weyl researched. But my last thought is: what do you make of what is going on right now in quote-unquote Web3 space, with all of these people bringing governance onto these experimental, distributed-ledger-based self-governing systems?

There's a proliferation of political possibility here, but it also has levers into the system as it is, and it's contributing, I think, quite a bit of anxiety to the incumbent structures about accelerating the already unmanageable disruption. I don't know; any lightning-round thoughts on that?

Tina Eliassi-Rad (55m 48s): I feel like the biggest thing is: who will benefit from such a system? Will the marginalized benefit from such a system? And it's often not them. So one really needs to look at the incentive structure. And in fact, this also goes back to, for example, using algorithms in the justice system: why is the judge using an algorithm? What are the incentives for the judge to use an algorithm, given that people do believe that algorithms are objective, and so they put more weight on them without even cross-examining them?

And so with all of these different forms of civic participation, my biggest issue is: who is it going to benefit? Is it going to benefit the people who are already privileged, or is it going to raise up the folks who are marginalized and help them? And I feel like, in this case, it's not going to help the marginalized. They have no power; they have no voice.

Michael Garfield (56m 45s): Hopefully this makes it through to the people building in this space, and they're eager to heed what you have to say here. But anyway, Tina, this has been awesome. Thank you so much for taking the time.

Tina Eliassi-Rad (56m 59s): Thank you so much. I very much appreciate it. And I'm sorry to end on a downer, but the divide between the privileged class (and I am part of the privileged class) and the marginalized class is as wide as the Grand Canyon. And so the question is: with all the technology, with all the advances that we have, with all the data that we have, how can we close this gap? And if you want to close this gap, then you have to put yourself in the shared reality of the marginalized people, as opposed to, oh, if I do this, it's going to help my life, or give me more power, or whatever your incentive structure is for yourself.

Michael Garfield (57m 30s): That's a fine, strong place to end this. Thank you for listening. Complexity is produced by the Santa Fe Institute, a nonprofit hub for complex systems science located in the high desert of New Mexico. For more information, including transcripts, research links, and educational resources, or to support our science and communication efforts, visit santafe.edu/podcast.

Transcript generated by machine at podscribe.ai and edited by Aaron Leventman at SFI.