COMPLEXITY: Physics of Life

Cris Moore on Algorithmic Justice & The Physics of Inference

Episode Notes

It’s tempting to believe that people can outsource decisions to machines — that algorithms are objective, and it’s easier and fairer to dump the burden on them. But convenience conceals the complicated truth: when lives are made or broken by AI, we need transparency about the way we ask computers questions, and we need to understand what kinds of problems they’re not suited for. Sometimes we may be using the wrong models, and sometimes even great models fail when fed sparse or noisy data. Applying physics insights to the practical concerns of what an algorithm can and cannot do, scientists find points at which questions suddenly become unanswerable. Even with access to great data, not everything’s an optimization problem: there may be more than one right answer. Ultimately, it is crucial that we understand the limits of the technology we leverage to help us navigate our complex world — and the values that (often invisibly) determine how we use it.

Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I’m your host, Michael Garfield, and every other week we’ll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

We kick off 2021 with SFI Resident Professor Cristopher Moore, who has written over 150 papers at the boundary between physics and computer science, to talk about his work in the physics of inference and with The Algorithmic Justice Project.

If you value our research and communication efforts, please consider making a donation at santafe.edu/give — and/or rating and reviewing us at Apple Podcasts. You can find numerous other ways to engage with us at santafe.edu/engage. Thank you for listening!

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast theme music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

Related Reading:

Cris Moore’s Google Scholar Page

The Algorithmic Justice Project

“The Computer Science and Physics of Community Detection: Landscapes, Phase Transitions, and Hardness”

The Ethical Algorithm by SFI External Professor Michael Kearns

“Prevalence-induced concept change in human judgment” co-authored by SFI External Professor Thalia Wheatley

“The Uncertainty Principle” with SFI Miller Scholar John Kaag

SFI External Professor Andreas Wagner on play as a form of noise generation that can knock an inference algorithm off false endpoints/local optima

Related Videos:

Cris Moore’s ICTS Turing Talks on “Complexities, phase transitions, and inference”

Fairness, Accountability, and Transparency: Lessons from predictive models in criminal justice

Reckoning and Judgment: The Promise of AI

Easy, Hard, and Impossible Problems: The Limits of Computation. Ulam Memorial Lecture #1.

Data, Algorithms, Justice, and Fairness. Ulam Memorial Lecture #2.

Related Podcasts:

Fighting Hate Speech with AI & Social Science (with Joshua Garland, Mirta Galesic, and Keyan Ghazi-Zahedi)

Better Scientific Modeling for Ecological & Social Justice with David Krakauer (Transmission Series Ep. 7)

Embracing Complexity for Systemic Interventions with David Krakauer (Transmission Series Ep. 5)

Rajiv Sethi on Stereotypes, Crime, and The Pursuit of Justice

Episode Transcription

This is a machine-generated transcript produced by podscribe.ai with edits by Aaron Leventman. If you would like to volunteer to help edit future SFI transcripts, please email michaelgarfield[at]santafe[dot]edu. Thank you and enjoy:

 

Cris Moore (0s):

Often, the question is not, how do we solve this math problem? It's, is this the right problem to solve? And does the data mean what we think it means? And in a lot of the data that people train these criminal justice algorithms on, there's a single bit in the data set that says you were rearrested. It doesn't say whether that was because you tried to kill somebody or whether it was graffiti. So a lot of the data sets that are actually being used don't make that distinction. Or let's say you missed your court hearing. Is that because you tried to drive over the Texas border to escape justice, or is it because you were terrified of getting fired from your job, because then you wouldn't be able to feed your kid, and you didn't understand that if you didn't show up, there'd be a warrant out for your arrest, or because you couldn't afford transportation?

 

Cris Moore (45s):

And so on. Maybe what we should do is help you show up to court: make sure that you have transportation, make sure that you have childcare if you need childcare, and make sure that you get a text message reminder instead of a physical postcard that arrives at the apartment you lived in six months ago.

 

Michael Garfield (1m 24s):

It's tempting to believe that people can outsource decisions to machines, that algorithms are objective, and it's easier and fairer to dump the burden on them. But convenience conceals the complicated truth: when lives are made or broken by AI, we need transparency about the way we ask computers questions, and we need to understand what kinds of problems they're not suited for. Sometimes we may be using the wrong models, and sometimes even great models fail when fed sparse or noisy data. Applying physics insights to the practical concerns of what an algorithm can and cannot do, scientists find points at which questions suddenly become unanswerable.

 

Michael Garfield (2m 5s):

Even with access to great data, not everything's an optimization problem. There may be more than one right answer. Ultimately, it is crucial that we understand the limits of the technology we leverage to help us navigate our complex world, and the values that often invisibly determine how we use it. Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I'm your host, Michael Garfield, and every other week we bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe. We kick off 2021 with SFI Resident Professor Cristopher Moore, who has written over 150 papers at the boundary between physics and computer science, to talk about his work in the physics of inference and with The Algorithmic Justice Project. If you value our research and communication efforts, please consider making a donation at santafe.edu/give, and/or rating and reviewing us at Apple Podcasts. You can find numerous other ways to engage with us at santafe.edu/engage. Thank you for listening. Cris Moore, it's a pleasure to have you on COMPLEXITY Podcast.

 

Cris Moore (3m 23s):

Thanks for having me.

 

Michael Garfield (3m 25s):

Before we started this call, we were calling the shots, mapping out how we would like this to go. And I think you make a good point: spending roughly half the time on the theoretical background and foundation for some of the questions that you're asking in your research, and where that takes us into broader scientific questions, and about half the time looking at the way that algorithms are applied in society, the questions they raise and the problems they create, so that we can land this with a nice sort of everyday practical consideration. Seamless.

 

Michael Garfield (4m 8s):

I'd like to start, though, by inviting you to talk a little bit about your background as a human being, as a scientist, because without fail, everyone on this show has a personal story about how they got into asking the questions that animate and inspire them in their work. How did you get into computer science in the first place? And how did that ultimately bring you into the orbit of the Santa Fe Institute?

 

Cris Moore (4m 37s):

Well, I started in physics. And so, I think, growing up, I was raised on the Carl Sagan version of Cosmos, which locates me generationally, and that was a fantastic TV show at the time. And what I loved about this show was not just that it let you hear the angels sing about the fundamental nature of reality, from elementary particles to black holes and the big bang and evolution, but that it painted this very humanistic picture of what science is for, and how the amazing glory of the world around us helps us transcend our national and cultural boundaries and biases.

 

Cris Moore (5m 20s):

And what I remember very clearly about it, because this was when something called the Cold War, which some of your listeners have heard about in their history books, was still going on: Carl Sagan painted this picture that the United States and the Soviet Union and the rest of the world would destroy each other in nuclear hellfire, or we would join hands and travel together to the stars. Either nuclear hellfire or interstellar travel. Those were the two possible futures for humanity. Then the Cold War ended, and now the world seems more complicated, and we've passed over the past few decades from a focus on the threat of terrorism to now the threat of authoritarianism, and at the moment the pandemic, and so on.

 

Cris Moore (6m 18s):

And so the world seems more complicated, but I think that still imbued in me this strong sense that science is fundamentally a force for good, and that fundamentally, we all have to decide what kind of future we want to fight for. And for me, it made me very much want to be on the side of transcending, like I said, the boundaries that divide us, and working as hard as possible to join with other human beings for a better future, both in terms of abstract understanding and in terms of building a better life for everyone.

 

Michael Garfield (6m 57s):

And so that utopian goal is what brings us to computer science?

 

Cris Moore (7m 4s):

Well, sort of. The next thing that happened was, while I was getting my PhD in physics, I read Gödel, Escher, Bach by Douglas Hofstadter, which was another amazing touchstone. And I think it was very, very formative for many people in my generation. And it's still a fantastic book. There's a bizarre lack of female characters, but besides that, it's a fantastic book, and it's a very playful exploration of loopiness and self-reference, from how DNA self-replicates, to how computers can simulate other computers or themselves, to how logicians, including the great 20th-century logicians like Bertrand Russell and Kurt Gödel, thought about proving things, including proving things about proofs, and how there are unsolvable problems and unprovable truths, and so on.

 

Cris Moore (7m 58s):

So that blew my mind. And I learned about 1930s computer science, when a lot of these things were happening, and Alan Turing and his heroism in breaking the Nazi Enigma code, and so on. And then I decided I wanted to catch up on theoretical computer science. And so working at the boundary between physics and computer science became a big part of my research experience as a postdoc at the Santa Fe Institute, and then later on as a professor at the University of New Mexico, and then back at the Santa Fe Institute. And at that boundary between physics and computer science, there's all sorts of fantastic stuff.

 

Cris Moore (8m 39s):

There's quantum computing, there's network theory, the theory of social networks and biological networks. So that's been kind of a big part of my life. And then more recently, I think partly because of all the crises that society is facing right now, including crises of inequality and social justice, I got very interested, like a lot of other computer scientists, in the impact of algorithms on society, and the way that algorithms are being used in criminal justice and in housing to determine who gets a lease, who gets a mortgage or a loan, as well as who gets imprisoned and who gets released.

 

Cris Moore (9m 21s):

And so I found myself working in a much more applied way, a much less theoretical way, on how these algorithms really affect people's lives. And here I think the key issue, and we'll talk about this later, I guess, is transparency. Do the people whose lives are affected by these algorithms understand why they were given the yes or the no they were given? Do they understand what data these algorithms use and how these algorithms work? And do they have the opportunity to contest these results and to ask questions like, why is this data being used about me, and is this the right data, and why are we using this algorithm anyway?

 

Cris Moore (10m 4s):

So that's another fascinating topic. I guess from theory to practice, that's been part of my trajectory. I still love theory, but, you know, especially as these algorithms get used more on the ground for these decisions that really affect people's lives and livelihoods. I want to demystify algorithms. There's this air of mystery around them. Vendors of algorithms are telling government officials that they should use these, that they're marvelously sophisticated and marvelously accurate in predicting people's behavior.

 

Cris Moore (10m 46s):

When in many cases they're neither of those things. And so I want to empower people so that they can understand both the strengths and the weaknesses of algorithms, what's good about them and what is maybe not so good, so that everyone, including policy makers, can make better and more informed decisions about them.

 

Michael Garfield (11m 8s):

Let's start, then, in theory, as I think my goal state for this conversation would be to send the listeners of this podcast into court settings to ask precisely the kind of personally empowered, and terrifying to a judge or prosecutor, kinds of questions that you're asking, questions that seem to undermine the BS that we've allowed to creep into these systems. In that regard, where I'd like to start is that you gave a series of talks, the Turing Talks in Bangalore, and you had a related paper up on the arXiv, "The Computer Science and Physics of Community Detection: Landscapes, Phase Transitions, and Hardness."

 

Michael Garfield (12m 1s):

So I think, just for people who are as clueless as I am about even the fundamentals of what you're talking about here, it would be great to introduce the ideas of how we detect patterns in data, how we detect them in particular in noisy data, and what kinds of problems that creates. And then what kinds of insights from physics can be, and have been, leveraged in order to make sense of these things, looking at the relationship between probability and energy and that kind of thing. I think that's usually a good place to start.

 

Cris Moore (12m 40s):

I got into this partly through this idea of finding communities in networks. This is a popular problem in network theory. People, of course, have all sorts of rich histories and rich facts about them, what some people might call metadata. You have a location, you have demographics, you might belong to a particular religion, you might belong to a particular political party. And then, for better or worse, who you link with in the network might have a lot to do with these things. We have this habit, as social primates, of forming social relationships mostly with people who are similar to us in many of these ways.

 

Cris Moore (13m 27s):

So now imagine that as a scientist, you get to observe a whole lot of links between thousands or millions of people. And now you're trying to figure out, what are the communities here, just based on those links, perhaps without any additional information about those underlying facts about people. So this is the community detection problem. And we might assume, for instance, we might define a community as a group of nodes in this network with a higher density of links within that community than between it and other communities. Although, by the way, that's not always the case.

 

Cris Moore (14m 9s):

In food webs, for instance, predators eat prey more than they eat other predators. And in economic networks, buyers might connect more to sellers than to other buyers. So a community isn't always something with more links within it than between communities, but for a lot of social networks, that's a reasonable assumption. So there are a host of different algorithms, different computational methods, for taking a massive network and picking out these communities. But it's fundamentally a high-dimensional problem. If you have a million people in this network, then you have a million guesses to make, or conclusions you want to reach, about who's in which community.

 

Cris Moore (14m 49s):

So you're not trying to just compute a single number or three different numbers. You're trying to compute a million different things, and this puts it in what people call high-dimensional statistics. And it's good to remember that even in this age of massive data sets, in a high-dimensional problem like this, where we have a lot of data but we're also trying to learn, or we say infer, many, many different things, the amount of data we have per variable we're trying to figure out might not be so much. So in the social network, it may be that you only have six real friends that strongly influence you in the network.

 

Cris Moore (15m 36s):

So maybe you have a million people, but we only have a small amount of information per person through a small number of connections. This happens in genomics too, by the way, I mean, maybe you're looking at the activity of 10,000 different genes, but you only have 100 different rats because keeping rats is expensive and time consuming. So per gene, you only have a hundred different data points. So again, you have a lot of data, but per thing you're trying to figure out it's actually not so much. Anyway, so in these high dimensional problems, there are a couple of different situations you could be in.
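
To make that "lots of data, but not much per variable" point concrete, here is a minimal sketch (my own illustration, not from the episode); the gene and rat counts echo the example above, and everything in it is simulated noise:

```python
import numpy as np

# A minimal sketch (illustrative, not from the episode): 10,000 noise "genes"
# measured in only 100 "rats". Nothing here is related to the outcome, yet the
# best-looking gene shows a sizeable correlation purely by chance.
rng = np.random.default_rng(0)
n_rats, n_genes = 100, 10_000

expression = rng.normal(size=(n_rats, n_genes))   # pure noise "data"
outcome = rng.normal(size=n_rats)                 # an unrelated outcome

# Pearson correlation of each gene with the outcome, computed by hand
x = expression - expression.mean(axis=0)
y = outcome - outcome.mean()
corr = (x * y[:, None]).sum(axis=0) / np.sqrt((x ** 2).sum(axis=0) * (y ** 2).sum())

print(f"largest |correlation| among {n_genes} noise genes: {np.abs(corr).max():.2f}")
# Typically prints something around 0.4: with many variables and few samples,
# noise can look like signal unless you account for the search you just did.
```

Even though no gene is actually related to the outcome, the best-looking one typically shows a correlation around 0.4, which is exactly the kind of mirage that high-dimensional inference has to guard against.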

 

Cris Moore (16m 16s):

There could be a great shortcut that helps you immediately zoom in on the best solution, or at least a very good solution, to this community detection problem. And there are a bunch of fast algorithms that, you know, sometimes work very well on some networks, and maybe on others they don't work so well, but there are lots of algorithms out there in practice that you can download and try out. But to find the best solution, or to deal with a kind of noisy data set, where maybe some of the links are wrong, some of them are missing, maybe there's some randomness where people unexpectedly form links with people who are quite different from them, which would sort of throw the algorithm off...

 

Cris Moore (17m 0s):

Then in principle, you might have to, in the worst case, go through all combinations of who's in what community, and with a million people, that's two times two times two times two, two to the millionth power, which is an astronomical number. So this is an example of an exhaustive search, which we would really like to avoid. And many of the hardest problems in computer science, the so-called NP-complete problems, where you're trying to solve a hard optimization problem with many different variables which are interacting in complicated ways, as far as we know, many of these problems require that kind of exhaustive search. The solution is out there, but you are wandering in this astronomical space of possible solutions.

 

Cris Moore (17m 50s):

It gets very hard. What my coauthors and I found in this community detection problem and in many others is that as the problem gets noisier, as the connections get more random, or as more of the links become unobserved, say, or in our rat example, maybe as the ratio of the number of genes to the number of rats that you're trying to figure things out about goes up, so it's getting to be a higher-dimensional problem as well as a noisier one, there are these examples of phase transitions, which is a term from statistical physics that some of your listeners know. The jump from small outbreaks of a disease to a huge epidemic is a phase transition.

 

Cris Moore (18m 34s):

The fact that water melts or boils when it passes a certain temperature is another example. The fact that a block of iron suddenly ceases to hold a magnetic field when you heat it up above a certain temperature is yet another example. So here the noise in the data is like the temperature. And as you heat the system up, you're adding more randomness to it, more thermal noise with things kind of randomly jumping around. And at a certain point, suddenly you can no longer find the pattern in the data. It's almost as if nature is trying to tell you about the pattern, but nature can only do that through this noisy channel.

 

Cris Moore (19m 16s):

It's like transmitting a message over a channel with a lot of static in it. And when there's too much noise, suddenly we can no longer find the pattern. It's just impossible. The information isn't there. There are other cases where the information is there, but there's some evidence from physics and mathematics and computer science that it becomes astronomically hard to find, that you have to start searching through this vast space of possible patterns in order to find one that fits the data. So for me, this is a great cautionary tale. I mean, our theorems, our calculations, are carried out in kind of clean mathematical models where we actually have the right model of the data.

 

Cris Moore (20m 0s):

We have the right model of the noise. And yet even in that very clean setting, there are cases where suddenly we're no longer able to find those patterns. And that's important to keep in mind: our ability to find patterns in noisy data is not infinite. There are going to be times when we just need to know more, or times when our entire model might be wrong.
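
As a rough illustration of the kind of phase transition being described here, the following is a small sketch (an illustrative construction, not Moore's own code or data) of community detection in a two-group stochastic block model using a naive spectral method; the network size and connection probabilities are arbitrary choices. As the within-group and between-group connection probabilities get closer, the recovered overlap with the true communities collapses toward coin-flipping:

```python
import numpy as np

# Illustrative sketch (not Moore's code): two hidden communities in a stochastic
# block model. p_in is the probability of a link inside a community, p_out the
# probability of a link between communities. A naive spectral method recovers
# the communities when the gap is large, and degrades to coin-flipping as the
# network gets noisier.
rng = np.random.default_rng(1)

def recovered_overlap(n=1000, p_in=0.06, p_out=0.02):
    labels = np.where(np.arange(n) < n // 2, 1, -1)          # true communities
    same = np.equal.outer(labels, labels)
    prob = np.where(same, p_in, p_out)
    A = (rng.random((n, n)) < prob).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                              # undirected, no self-loops

    vals, vecs = np.linalg.eigh(A)
    guess = np.sign(vecs[:, -2])                             # second-largest eigenvector
    agreement = np.mean(guess == labels)
    return max(agreement, 1 - agreement)                     # labels only defined up to a swap

for p_in, p_out in [(0.06, 0.02), (0.048, 0.032), (0.041, 0.039)]:
    print(f"p_in={p_in}, p_out={p_out}: overlap ~ {recovered_overlap(p_in=p_in, p_out=p_out):.2f}")
# Expect roughly 1.0 for the well-separated case, sliding toward 0.5 (pure
# guessing) as p_in and p_out approach each other.
```

Fancier methods, such as belief propagation or non-backtracking spectra, can push detection down into much sparser networks, toward the threshold Moore and his coauthors identified, but the qualitative picture is the same: below a certain signal-to-noise ratio, no algorithm can recover the communities.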

 

Michael Garfield (20m 27s):

I may be getting ahead of our conversation here by asking this, but I remember in episode seven, I had Rajiv Sethi on the show, and in our conversation around stereotypes, one of the things that came up that seems relevant here, and I'm curious whether this holds water as an analogy that people can apply to daily life, was that we come into an interaction with a stranger with a set of prior assumptions based upon models we've built about people who look or act in a particular way. And yet the human world is so rich and diverse and full of exceptions, and people are full of surprises.

 

Michael Garfield (21m 11s):

And so we get into this problem where, to speak to that, we could solve this if we had unlimited time and computational resources. What Rajiv was saying in his book is that we see these issues where stereotypes rear their head when we're under pressure to make a rapid decision about someone, and we have to fall back on these faulty heuristics.

 

Cris Moore (21m 38s):

Like Kahneman's Thinking, Fast and Slow: when we have to think fast, we often do it rather badly.

 

Michael Garfield (21m 44s):

I'm curious if you think that we're talking about the effect of electronic media, and in particular social media, as an instance where we're turning up the noise, because we're having so many very low-information-density interactions with people so rapidly. And at that point, there's a phase transition where we can't actually solve the problem of who this person is that we're actually talking to, or the opinions that we then develop to fit the data of the interactions that we're sort of correlating inappropriately.

 

Cris Moore (22m 22s):

I think there are a couple issues there. And some of my colleagues study this more closely than I do, including hate speech on social media, but it's that example of clustering, where we connect to others similar to ourselves. Of course, it's very visible on social media. And I think actually the way that a lot of social media platforms gamify things like likes and upvotes and so on actually makes that problem even worse than it is in our natural human nature, to the extent we have one. You know, we are very tribal. We do form strong connections with our friends. We do often initially assume that someone who disagrees with us or who looks different from us can't be trusted.

 

Cris Moore (23m 4s):

And so social media does not, in my opinion, seem to be helping us cross these boundaries. It seems to be accentuating them. And I guess, going back to that whole Carl Sagan thing, I'm still very attached to Enlightenment values and to the idea that we should all be a little bit tentative in our opinions. We should be willing to change our minds in the face of the evidence. We should celebrate changing our minds. We shouldn't be ashamed of it. And of course, scientists have plenty of ego and tribalism just like other people, but at least the norms that we're taught to live up to, which we try to live up to, are to embrace other opinions and to cheerfully admit that we were wrong when that's what the evidence says, which is not something I think most people in society even recognize as a good value.

 

Cris Moore (24m 0s):

It's regarded as, "Oh, you're flip-flopping, or you're not sticking to your guns." And we're used to the idea, at least in our society, of this advocacy, that the way to come to good decisions is for each side to have an advocate who will be completely one-sided in advocating for their side, never admit that the other side has a point, and use dirty tricks if necessary. That seems to be how political parties work. And I'm not sure, actually, that that's a good way to reach decisions. So I don't know if the people who designed social media platforms have thought about how to gamify listening to the other side.

 

Cris Moore (24m 42s):

If somehow someone who always agrees with you likes your posts, that should be worth a millionth of a point. But if someone who usually thinks you're an idiot thinks you've made a pretty good point, that should be worth a hundred points. You should get lots of gold stars for that. And I'm not sure if that would help, but it would be worth trying. But in terms of stereotypes, let's look at how algorithms, including algorithms based on machine learning, are being used now in the criminal justice system. Because here, for better or worse, the most fundamental assumption in all of machine learning, and almost all of artificial intelligence, is that the data you have not seen yet is going to look like the data that you already have seen.

 

Cris Moore (25m 34s):

The way machine learning works is you have training data, stuff where you know the right answer, data from the past or whatever. And you train your algorithm on that until it gets most of those answers right. And then you show it the test data and see how well it does on that. Well, now imagine that your training data comes from criminal records in the past, about who was arrested, and whether, when they were released, they were arrested again for committing another crime. So consider an algorithm which looks at your criminal record and which, based on that, makes a recommendation to a judge about how you should be treated.

 

Cris Moore (26m 15s):

For instance, let's say you have been arrested, but you haven't had your trial yet. Actually most arrests don't go all the way to trial, but you have not yet been found guilty of this crime. And in our society, you're innocent until proven guilty. So do we let you go or do we detain you in the meantime? And to be fair, if you were found standing over your victim with a bloody ax, maybe we ought to detain you. Maybe there's strong evidence that you would be a terrible danger if we released you. On the other hand in the United States, there are an enormous number of people who are sitting in local jails who have not yet been found guilty of the crime for which they've been arrested.

 

Cris Moore (27m 2s):

And as they sit there in those jails, it costs the taxpayers a lot of money, but meanwhile, it also destroys those people's lives. They lose custody of their children. They lose their jobs. They can't keep up with their rents or their mortgage payments, and so on. So it's tremendously disruptive. And so presumably, if we do this at all, which is an interesting question about constitutional rights and freedoms, we should do it only with the small fraction of people who are truly a danger to the public. But who are those people, and how do we find out who they are? Well, human judges have traditionally made this decision by telling people, we'll let you go if you post a hundred-thousand-dollar bail bond, which is kind of a weird way to do this, because it ends up being based not so much on whether you're actually dangerous, but on whether you can afford to post a hundred-thousand-dollar bail bond.


 

Michael Garfield

Maybe you're more dangerous if you can. 


 

Cris Moore

Exactly. So as a result, the white-collar criminals all go free, and then a lot of low-income people sit in jail. The good news is, a lot of states want to move away from this system, and for better or worse, the way the debate is currently organized is: instead of these bail bonds, we'll have an algorithm which looks at, for instance, whether you've been arrested in the past, and then makes a guess about how dangerous you are to the public, and then makes a recommendation to the judge about whether you should be released or detained, or maybe released with an ankle bracelet, or maybe released with the requirement to check in with the police every month or every week or something.

 

Cris Moore (28m 44s):

So human decision-making is extremely flawed, and there's an enormous amount of statistical evidence that human judges have all kinds of biases, including racial biases, and that these are both explicit and implicit, both overt and covert. And it's a huge problem. And there's abundant evidence that people of color in this country and low-income people get arrested much more often than someone who looks like me. And when they're arrested, they're more likely to get longer sentences. They're less likely to be paroled, which means to be released before their sentence is over, and so on.

 

Cris Moore (29m 25s):

So our current criminal justice system is clearly very biased. So the advocates of algorithms say, well, let's make this objective. Let's make this a mathematical process that just looks statistically at past data and tries to make guesses about future data. And this has led to fascinating debates, not just in computer science. There's a book out by one of our external faculty, Michael Kearns, called The Ethical Algorithm. But also within law and political science and ethics and civil rights. Again, there's this assumption that the future will look like the past.

 

Cris Moore (30m 6s):

Now, on the one hand, looking at data from the past can teach us a lot. The fact is that most people who commit a lot of minor crimes, and even some major crimes, are very unlikely to commit a violent act and actually hurt someone while awaiting their trial. If you look at the fraction of people who get rearrested for a violent offense while awaiting their trial, it's actually quite a small fraction. And it's pretty clear that many of the people we're detaining now could be safely released. That way they can get on with their lives, take care of their kids, and most of them will in fact show up to court and face the music in the court system.

 

Cris Moore (30m 51s):

So looking at the data from the past tells us that, and that's good, and that's important. On the other hand, looking at data from the past and assuming the future will be like the past obviously has the potential to perpetuate these biases. So for instance, suppose I pay attention to how many times you've been arrested in the past, and I assume that that's correlated with you being dangerous. Well, you know, homeless people get arrested all the time, and young black men get arrested for hanging out with their friends on street corners, whether or not they're drug dealers. I have never been arrested for heroin with my friends on the street corner. So there is a consensus in this business that arrest in itself is a very noisy, not just noisy, but biased signal.
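
To see how "the future will look like the past" can bake a policing disparity into a model, here is a toy simulation (invented numbers, not real criminal justice data or anything from the working group): two groups with identical underlying offense rates, but one group's offenses are far more likely to end up as arrests in the training data.

```python
import numpy as np

# Toy simulation (not real data): two groups, A and B, with the SAME underlying
# rate of offending, but group B is policed more heavily, so offenses in B are
# more likely to show up as arrests in the training data.
rng = np.random.default_rng(2)
n = 100_000
group_b = rng.random(n) < 0.5                 # half the population in each group
offended = rng.random(n) < 0.10               # identical 10% offense rate

p_arrest_given_offense = np.where(group_b, 0.60, 0.20)   # unequal enforcement
arrested = offended & (rng.random(n) < p_arrest_given_offense)

# A "risk score" trained the naive way: P(arrested | group), using arrest as
# if it measured dangerousness.
for name, mask in [("group A", ~group_b), ("group B", group_b)]:
    print(f"{name}: offense rate {offended[mask].mean():.3f}, "
          f"learned 'risk' {arrested[mask].mean():.3f}")
# Both groups offend at ~0.10, but the learned risk is ~0.02 for A and ~0.06
# for B: the model reproduces the policing disparity, not the behavior.
```

A model trained on the arrest label faithfully learns the enforcement disparity and reports it back as a difference in "risk."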

 

Cris Moore (31m 37s):

So what do you use? Whether you've actually been convicted? It's complicated, because there are biases in all of these signals. But many jurisdictions, including here in New Mexico, are starting to use these algorithms to make recommendations to judges. And I've been working with people in the law department and the political science department at the University of New Mexico. We have this group we call the Interdisciplinary Working Group on Algorithmic Justice, and it's us, and it also includes several people at the Santa Fe Institute and part of the Santa Fe Institute's larger network of external faculty. And it's great talking with people in law, for instance, because the questions they want to answer about this are not so much about statistics.

 

Cris Moore (32m 25s):

They're not necessarily statistical questions about whether white people and black people are treated with equal accuracy, or equal fairness in some sense, by these algorithms. And even defining that, by the way, is very tricky. In law, they care about the process. So we have all these things in our Constitution and our Bill of Rights. You're supposed to be able to face your accuser. You're supposed to be able to cross-examine witnesses. You're supposed to be able to question the evidence that's brought against you. And so the theme that we've settled on is actually not a technical theme or a mathematical theme, although it overlaps with mathematics and technical issues in computer science. It's transparency.

 

Cris Moore (33m 9s):

And, if you wish, contestability. So do you, as a defendant, know why an algorithm recommended to the judge that you should be detained? Do you know what data about you was used? And do you know why we use that algorithm? Why is this the right algorithm, if it is? Do you have a right to question that data? Maybe some of the data about you is wrong. It may be that there's something on your record where a charge was actually dropped. Well, that shouldn't count against you. But more broadly, you should be able to question why this is the right procedure. And in order to question that, and in order for independent analysts to measure whether these algorithms are actually as accurate as they claim to be, or as fair as they claim to be, we need transparency.

 

Cris Moore (34m 0s):

They should not be proprietary. They should not be black boxes where they're hidden behind a veil of intellectual property. They should be things that everyone can look at and see how they work, which is not just open source, by the way. Open source might be neither necessary nor sufficient here. What really matters is: what's the mathematical structure behind the algorithm? What assumptions is it making? What data does it use, and how much weight does it place on each kind of data? How many points does it give you for each past arrest or each past conviction, and so on? This is fascinating, because of course there are a lot of private actors and vendors and providers of these algorithms, many of whom view these as their intellectual property and don't want their inner workings to be known.
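
For contrast with a proprietary black box, here is what a fully transparent, point-based score can look like. This is a hypothetical toy, not the Arnold PSA or any deployed tool, and the factors, caps, and weights are invented purely to show the kind of structure a defendant could inspect and contest:

```python
from dataclasses import dataclass

# Hypothetical point-based risk score. The factors and weights below are
# invented for illustration only; they are NOT the Arnold PSA or any real tool.
# The point is that every input and every weight is visible and contestable.
WEIGHTS = {
    "prior_convictions": 2,          # points per prior conviction
    "prior_failures_to_appear": 1,   # points per prior failure to appear
    "pending_charge_at_arrest": 3,   # flat points if a charge was pending
}
CAPS = {"prior_convictions": 6, "prior_failures_to_appear": 2}   # counts capped here

@dataclass
class Defendant:
    prior_convictions: int
    prior_failures_to_appear: int
    pending_charge_at_arrest: bool

def risk_score(d: Defendant) -> tuple[int, dict]:
    breakdown = {
        "prior_convictions": min(d.prior_convictions, CAPS["prior_convictions"])
                             * WEIGHTS["prior_convictions"],
        "prior_failures_to_appear": min(d.prior_failures_to_appear,
                                        CAPS["prior_failures_to_appear"])
                                    * WEIGHTS["prior_failures_to_appear"],
        "pending_charge_at_arrest": WEIGHTS["pending_charge_at_arrest"]
                                    if d.pending_charge_at_arrest else 0,
    }
    return sum(breakdown.values()), breakdown

score, why = risk_score(Defendant(prior_convictions=0,
                                  prior_failures_to_appear=1,
                                  pending_charge_at_arrest=False))
print(score, why)   # 1, with a line-by-line explanation of where each point came from
```

Every input, every weight, and every cap is visible, so a person who is told their score can check each line item, and an independent analyst can test whether the weights are actually justified by the data.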

 

Cris Moore (34m 48s):

But I think in the justice system, that is a real issue. I think that you should not be told by a black box that we think you're dangerous and you shouldn't be released. You should at least have an explanation, at the very least, and the ability to appeal it. So, you know, this is a big issue in Europe. They have their GDPR, the General Data Protection Regulation, which has a lot of parts that are written very badly, by people who clearly didn't understand the technical issues involved, but they tried. People here in the United States are trying to formulate law around this. And honestly, I guess, and here we're really getting into the social side, I'm hoping that by demanding more transparency from the algorithmic side of the system, the computerized part, the automated part, this will lead to more transparency in the system in general, because we need to gather data about police behavior, about judges, about prosecutors, about defense attorneys.

 

Cris Moore (35m 48s):

We need to gather data about the systems in which we live, so that we can all look at it in as clear a way as possible and say, is this fair? And if it isn't, how could we make it better? We often think of data gathering as something that the powerful do about us. We think about that with privacy and facial recognition and Amazon gathering our buying habits and so on. But data is also something that citizens can gather about the powerful systems in which we live, more of the sousveillance rather than surveillance idea. So in a nutshell, in a rather large nutshell...

 

Michael Garfield (36m 28s):

To that point about surveillance, there's a line in that lecture I mentioned earlier from the ICTS, the International Centre for Theoretical Sciences, where, talking about overfitting, you say we want to understand the coin, not the coin flips. I was just having a conversation with somebody, because, to again date us chronologically, we're seeing a market correction after a large pump in the cryptocurrencies over the last several weeks. And I just heard someone on Facebook say that they were tracking the weekly, and things look like they're going to be really bad by the middle of March.

 

Michael Garfield (37m 11s):

Why would you base your assumption on that? That's exactly what you're saying not to do. And at any rate, when you're in a system like that, and arguably in any large market in this world where large players have disproportionate impacts, then you have to understand the composition of the ecology of players in the market in order to understand the market activity and the relative contributions of the various agents involved. And at any rate, that's just over in a corner. We have to understand the coin, the actual build of this machine, and not just the last few weeks of output.

 

Michael Garfield (37m 54s):

And so, and this might be kind of a hairpin turn, but in hearing you talk about all of this stuff, one of the things that I wanted to bring up here, that you discuss regarding computer science literacy, is that most of the ways that I hear and read about machine learning being deployed for various purposes treat it as a simple optimization problem. And in this talk, you point out that in many cases, the landscapes that we're trying to drape over and fit to this data have multiple optima.

 

Michael Garfield (38m 35s):

You're looking at an entire mountain range.

 

Cris Moore (38m 38s):

And that's even if we could agree which way is up, right.

 

Michael Garfield (38m 41s):

We were bouncing back and forth here. I guess what I'm requesting here is a little bit of deconstruction of this simple point of view on optimization, and then how statistical physics comes in. You use that to kind of talk about the relationship between probability and energy landscapes in these models, how you can kind of think about this in terms of thermodynamics, and how far you are from the right model is sort of like how much it's going to cost you. And so, if I'm getting that right, that does seem like it has more of a direct bearing on the costs we as a society are facing by applying the wrong models.

 

Cris Moore (39m 26s):

Well, let's do theory and then practice, like we did before. So on the theory side, there are a lot of beautiful mathematical analogies between statistical physics, which is the branch of physics that studies fluids, gases, crystals, materials, things with many, many atoms or particles interacting with each other, what many of us call complex systems. That's statistical physics, as opposed to, say, particle physics, where you're looking at a couple of quarks or something, which is also awesome. So in physics, things are trying to minimize their energy. Generally, rocks like to fall down. Although, when things are not at absolute zero, which nothing ever is, when things are at some non-zero temperature, you also have noise in the system, because you have atoms and molecules randomly bouncing around. Lots of little air molecules in the room are knocking you around in tiny but random ways, which you feel as an overall pressure from the gas.

 

Cris Moore (40m 24s):

But it's the sum of many microscopic collisions, each of which has a lot of randomness. So there's a very rich theory here. And for instance, in our study of communities in networks, we were able to use this theory a lot. In a block of iron, little iron atoms are, in a sense, tiny magnets on their own, and they're trying to line up with their neighbors. They would prefer to agree with their neighbors. Well, that sounds a lot like that social network. People like it when their friends agree with them on things. And in fact, it goes both ways: people tend to make friends with people who already agree with them. So we were able to use these models of magnetism and use their physics to think about how an algorithm might try to figure out who's in what community. It might do that by thinking of each person as a little magnet and asking whether they agree with their neighbors or not.

 

Cris Moore (41m 16s):

And indeed, that kind of magnetic system, which is called the Ising model, is a popular toy model of opinion formation in networks. Then the noisiness of the network is a lot like the temperature. And as I mentioned earlier, Pierre Curie, you may have heard of him, he was married to Marie, found that if you heat a block of iron above a certain temperature, then very suddenly it no longer holds a magnetic field. And the reason is that neighbors are still pretty well correlated with their neighbors, but when you go to larger distances, the information doesn't get through. There's too much noise.

 

Cris Moore (41m 56s):

And so two atoms that are far apart in the block of iron might as well be completely unrelated to each other. And they basically cancel each other out and it's not magnetic anymore. And it turns out that that's very similar mathematically to an algorithm that's trying to identify who belongs to what community in a network or what data point belongs to this cluster of mice, where that cluster of mice says, "I can't figure out anything here. I might as well just be flipping coins." So there's a beautiful theory there. And you mentioned getting stuck in the wrong place. So the idea of a bumpy landscape, a bumpy fitness landscape with multiple Optima, which if you're trying to go up there mountain peaks, and if you're trying to minimize energy, their valleys, but whichever way you cut it, thery're Optima, and then they have some kind of barrier in between them where to get from one to the other.

 

Cris Moore (42m 48s):

You have to make things a lot worse in order to make it better. Again, you're right, a lot of things in society feel like that. And in physics, this happens even in window glass. If you were to look at the atomic level of glass, you would see the atoms in a rather jumbled arrangement, even though the true optimum, the lowest energy state, is a perfect crystal. But the problem is it gets stuck in this jumbled state, this amorphous state, because a couple of atoms over here are lined up this way, a couple over there are lined up with each other that way, and you would have to do a very painful rearrangement at larger and larger scales to fix it all and turn it into a crystal.
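
Here is a small sketch of the toy model just described: a Metropolis simulation of the two-dimensional Ising model (my own minimal version, with illustrative parameters). Below the critical temperature the little magnets line up and hold a global pattern; above it, thermal noise wins and the magnetization collapses, the same mathematics that lies behind the community detection threshold:

```python
import numpy as np

# Minimal Metropolis sketch of the 2D Ising model (illustrative parameters).
# Each site is a little magnet (+1 or -1) that "wants" to agree with its four
# neighbors; the temperature T injects random flips, like noise in a network.
rng = np.random.default_rng(3)

def magnetization(T, L=20, sweeps=300):
    s = np.ones((L, L))                              # start fully aligned
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        neighbors = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                     + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        dE = 2 * s[i, j] * neighbors                 # energy cost of flipping this spin
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    return abs(s.mean())

# The exact critical temperature for this model is T_c = 2 / ln(1 + sqrt(2)), about 2.27.
for T in (1.5, 2.27, 4.0):
    print(f"T={T}: |magnetization| ~ {magnetization(T):.2f}")
# Well below T_c the lattice stays almost fully magnetized; well above it the
# magnetization falls toward zero; near T_c it sits in between, and the jump
# gets sharper as the lattice grows.
```

And if you start the same simulation from a random configuration at a low temperature, it often freezes into mismatched patches rather than a single aligned state, the analogue of the window glass getting stuck far from its true optimum.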

 

Cris Moore (43m 30s):

And I'm sure you're thinking of problems in society that are in fact like that. But I think the problem in society is that it's even worse than this, because we don't agree what it is that we're trying to optimize. If we all agreed on what math problem we were trying to solve, then we could sit down together and say, you know, let us calculate. But the problem is we don't agree on what we're trying to optimize. And we also have a lot of uncertainty about what the consequences of our actions are. And so, unfortunately, not everyone agrees that we need to work hard to avoid catastrophic climate change.

 

Cris Moore (44m 12s):

Not everyone agrees that inequality in society is a bad thing. Not everyone agrees that it would actually help our economy flourish if people were less terrified of not being able to feed their children or provide healthcare for them if they lose their job, partly because we don't agree on what it means for the economy to flourish. And so this is hard. And you see this, let's talk about machine learning and let's talk about AI, because when we do the training process of a neural network or a simpler algorithm, we treat it as some kind of optimization problem, and we've gotten very good at solving these optimization problems. But what are you trying to optimize? Even in facial recognition, for instance, what does that mean?

 

Cris Moore (45m 1s):

So as I'm sure many of your listeners know, there's a huge controversy about the fact that facial recognition systems that overall have a pretty good accuracy turn out, when you look at them, to do much better on one demographic group than another. And so when you read a breathless web article that says this is 90% accurate or 99% accurate, well, what does that even mean? Because you cannot generally encapsulate the behavior of these things with a single number. I mean, one example is, you know, suppose that you have a society where 80% of the people belong to one group and 20% belong to the other group.

 

Cris Moore (45m 43s):

Well, let's say an algorithm does all this perfectly on the 80%, on that majority group, and only 50-50, no better than flipping a coin, on the minority group. 80 plus half of 20 is 90. So then it could be 90% accurate, woo-hoo. But if you're in that minority group, it's wrong about you as often as it is right. Here's another example. There are false positives and false negatives. We've all been hearing about this from COVID diagnostic tests, and this happens in criminal justice as well. There was a great jurist, William Blackstone, a great English jurist, who said, "I would rather release 10 guilty people than capture one innocent person."

 

Cris Moore (46m 26s):

And so he was very concerned about false positives, in the sense of thinking that someone is guilty when they're not. Then again, Dick Cheney, talking about Guantanamo: "I don't care that we detained a couple of innocent guys. What matters is that we caught the bad guys." So he's much more concerned about the false negatives. He doesn't want to let someone go if they're actually dangerous. And these are different types of errors. Some people call them type one and type two; anyway, who cares about the terminology. But obviously they have very different costs to society. And depending on the setting, you might be trying to minimize one or the other, or some combination of both.
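
Both of the numerical points above are easy to check directly. Here is a short sketch with illustrative numbers (the 80/20 split is Moore's example; the confusion matrix and cost weights are made up):

```python
# Worked numbers for the two points above (illustrative only).

# 1) Overall accuracy hides group differences.
majority_share, minority_share = 0.80, 0.20
acc_majority, acc_minority = 1.00, 0.50          # perfect vs. coin flip
overall = majority_share * acc_majority + minority_share * acc_minority
print(f"overall accuracy: {overall:.0%}")         # 90%, despite 50% on the minority

# 2) False positives and false negatives are different mistakes with different costs.
# Hypothetical confusion matrix for 1000 people, 100 of whom are truly "high risk".
true_positive, false_negative = 60, 40            # high-risk people caught / missed
false_positive, true_negative = 90, 810           # low-risk people flagged / cleared

fpr = false_positive / (false_positive + true_negative)   # innocent people flagged
fnr = false_negative / (false_negative + true_positive)   # risky people missed
print(f"false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")

# Blackstone-style weighting vs. Cheney-style weighting of the same matrix:
print("cost if false positives weigh 10x:", 10 * false_positive + 1 * false_negative)
print("cost if false negatives weigh 10x:", 1 * false_positive + 10 * false_negative)
```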

 

Cris Moore (47m 8s):

And that's really a policy decision. That's a philosophical decision. So you can't really wrap up these things with a single notion of accuracy. You always have to look under the hood and find out what people mean when they say the algorithm is accurate. Another example is predictive policing. There's an article that says a certain predictive policing algorithm is twice as accurate as human analysts, humans who are trying to predict when and where crimes will occur. Well, you look at the paper and it turns out what that means is that the algorithm predicted say 6% of the crime and the human analysts predicted 3% of the crime.

 

Cris Moore (47m 48s):

Well, that is twice as good, sort of. You could also say that for the humans, 97% of the crime didn't happen where and when they thought it would, and for the algorithm, 94% didn't happen where and when they thought it would. So you always have to look under the hood and find out what it is people are optimizing and what lies behind these claims, especially when a private vendor is making them.
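
The predictive policing claim works the same way; a few lines make the framing explicit (the 6% and 3% figures are the ones cited above, used only to show how the same numbers support two very different headlines):

```python
# "Twice as accurate" vs. the same numbers stated as miss rates
# (the percentages are the ones Moore cites, used only to illustrate the framing).
algorithm_hit_rate = 0.06
human_hit_rate = 0.03

print(f"ratio of hit rates: {algorithm_hit_rate / human_hit_rate:.1f}x")   # 2.0x
print(f"algorithm miss rate: {1 - algorithm_hit_rate:.0%}")                # 94%
print(f"human miss rate:     {1 - human_hit_rate:.0%}")                    # 97%
# Both statements describe the same performance; only the framing differs.
```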

 

Michael Garfield (48m 14s):

This isn't one of your papers, but I'm reminded, in you talking about this, that SFI External Professor Thalia Wheatley was a coauthor on a piece in Science in 2018 on prevalence-induced concept change in human judgment. One of these issues is that, say you're looking for red balls, and then as you pick up the red balls, you become more and more sensitive to red balls. And so why do some social problems seem so intractable? Well, they show that people respond to decreases in the prevalence of a stimulus by expanding their concept of it.

 

Michael Garfield (48m 55s):

So on top of everything else you've said, it's the thresholds at which we are gathering data to feed into these. I mean, again, that's perhaps tangential, but I think it speaks to the sticky intricacy of this kind of issue.

 

Cris Moore (49m 10s):

Well, it's true that often the question is not, how do we solve this math problem? It's the question of whether we look more deeply into, is this the right problem to solve? And does the data mean what we think it means? In a lot of the data that people train these criminal justice algorithms on, there's a single bit in the data set that says you were rearrested. It doesn't say whether that was because you tried to kill somebody or whether it was graffiti. So a lot of the data sets that are actually being used don't make that distinction. Or let's say you missed your court hearing. Well, is that because you tried to drive over the Texas border to escape justice, or is it because you were terrified of getting fired from your job, because then you wouldn't be able to feed your kid, and you didn't understand that if you didn't show up, there'd be a warrant out for your arrest, or because you couldn't afford transportation, and so on.

 

Cris Moore (50m 3s):

So if you look more closely at the data, this also offers up a lot of opportunities for positive interventions. And I think, again, to a machine learning person, it's all about prediction. I'm going to predict whether you'll commit a crime. I'm going to predict whether you'll show up for court or not. And then I'm going to take some action based on that prediction. Well, maybe what we should do is help you show up to court. Make sure that you have childcare if you need childcare, make sure that you fully understand what the consequences to you are if you don't show up, and make sure that you get a text message reminder instead of a physical postcard that arrives at the apartment you lived in six months ago.

Cris Moore (50m 44s):

So there are a lot of common-sense things we can do to improve these systems that are a little different from the sort of arm's-length "Oh, let's predict more accurately." So I think the algorithms do have a positive role to play, but I want everyone to feel fully informed about what they can do and what kinds of mistakes they can make, so that we can all participate in a democratic way in deciding why we're using them and whether we should use them. And of course, to a technical person, the question is, how do I solve this problem as well as possible? But for the larger society, should we be doing that at all?

 

Cris Moore (51m 24s):

There might be problems where the best solution is still a human being, warts and all, with all our foibles. But we should also, like I said, collect data about different judges to see how fair they are, different prosecutors to see how fair they are. And we need to hold up as clear a mirror as we possibly can. Sometimes data is not as clear a mirror as it claims to be, and we should look at it more closely.

 

Michael Garfield (51m 54s):

So, related to the issue of our, meaning Western society's, apparent desire to buy into the hype of machine learning, and, you know, there's something about it that's sort of, "finally, I don't have to think about this right now." This is a problem that you addressed in a piece with SFI Miller Scholar John Kaag. You wrote a piece together last spring, right before campus closed, on the uncertainty principle and the pursuit of truth. And the two of you quote Goethe, talking about how nothing is sadder to watch than the absolute urge for the unconditional in this altogether conditional world.

 

Michael Garfield (52m 46s):

This seems related to the issue of the hardness of a computer problem relative to the affordances that are available to us as individuals or as institutions, and then also maybe how each of us has different heuristics that satisfy the criteria of what feels to us like certainty, what feels to us like a parsimonious or simple enough answer to a problem to suit our purposes. And so, you know, if we pull out a little bit, a lot of the questions you're asking seem to me to be questions that have deep implications for the philosophy of science, and whether we are actually looking for the kind of elegant, simple math that so many people think of physics as being in quest for. What are your thoughts on the desire for certainty, for simplicity, versus the reality of our pluralistic world, in all of this discussion here?

 

Cris Moore (53m 56s):

How much time do you have?

 

Michael Garfield (53m 57s):

Infinite time, but limited time to edit.

 

Cris Moore (54m 0s):

I have several dinner parties' worth of thoughts. I really enjoyed writing that article with John Kaag, who's a well-known philosopher now with a number of very popular books. And I think our goal was to say, and this was last spring, keep in mind, Spring 2020. Actually, we started over cocktails in the fall of 2019, back when there were still bartenders serving cocktails. And we were looking at the rise of authoritarianism and the fact that human beings in many cases want to be released from the responsibility of thinking. Thinking is hard, right?

 

Cris Moore (54m 43s):

Thinking that maybe you're wrong about a lot of things is terrifying. I was on the city council of Santa Fe for eight years, and I cast some votes that I am proud of. And I cast some that, when I look back, I think, "That was really dumb." And you know, there I was, actually in a position of some small amount of public responsibility. And of course, you know, sometimes in the wee hours of the morning, I think, "Oh my gosh, why did I do that? Why did I say that?" So facing the possibility that you might be wrong, it's not an easy thing to do. And as we talked about at the beginning, I think in many walks of life, it's not even publicly valued as a norm to be willing to pay attention to other points of view, to be willing to change your mind.

 

Cris Moore (55m 25s):

It's often viewed as a lack of conviction to be willing to do that. And so in this article, we were thinking maybe science and even mathematics have something to say to the larger society about how to live with uncertainty. Because mathematics, you think of it as just a crystalline realm of utter certainty, but it's no such thing. And especially over the course of the 20th century, we realized that there are multiple ways to view the mathematical world. There are multiple ways to view infinity within the picture of transfinite numbers.

 

Cris Moore (56m 6s):

You get almost theological disagreements, right? Where some people believe in certain types of infinite sets and other people don't. And in mathematics, this has actually led to a certain degree of pluralism, of willingness to say, "Oh, well, there are different points of view here." These are, if you will, different parallel worlds, and both of them have beautiful mathematics, and let a thousand flowers bloom. So people still argue and discuss what the fundamental axioms should be, but they're also sort of willing to say, "Oh, you have your model, I have my model. There are different, interesting, beautiful things that are true in these models." That's okay. You know, non-Euclidean geometry is another example. If you drive in a straight line on the earth, well, you'll fall into the ocean, but bear with me. If you drive in a straight line, eventually you'll come around to where you started. Well, if you do that on a flat Euclidean space, that doesn't happen. And if you and your friend both start in different directions and drive in straight lines, you'll cross each other twice: where you started, and at a point on the opposite side of the earth. Whereas, again, in Euclidean geometry, two lines are either parallel or they cross exactly once. So mathematics actually has some experience in living with each other and not killing each other and not necessarily demanding a single right answer, but rather embracing that there are different sets of axioms, different assumptions, from which we can proceed, both of which might lead in interesting directions.


 

Cris Moore (57m 51s):

It's a pretty distant analogy. But if we don't figure out how to live in a pluralistic society with institutions in which sometimes my side wins and sometimes your side wins, and that's okay; if we don't figure out how to do that, if we insist on winning every argument or burning the whole system down, then the system in which I live is just going to completely disappear. And 50 years from now we'll just have authoritarian states, and maybe a few pseudo-democratic states sort of pretending, or banyan-style things.

 

Michael Garfield (58m 30s):

But how robust are those really?

 

Cris Moore (58m 33s):

Well, I don't know. Then you have to look another 50 years into the future after that. I don't know. But even though I think human beings are easily deceived and easily led, I still think some kind of democracy is a good idea. I think we need much more robust discussions, with more than 140 characters per utterance, to understand each other and to figure out together what we should do. But I think it would be very sad if the idea of a pluralistic society, a liberal society in the classic sense, were to disappear. I guess if people just studied more non-Euclidean geometry, then they'd all agree with me.

 

Michael Garfield (59m 17s):

I've been thinking a lot about a paper Simon DeDeo recently coauthored where he was talking about the multiple different heuristics for simplicity. And to bring it back to your work on algorithmic justice, perhaps the question is: why don't we have, in the courtroom, multiple different algorithms that are like the five blind men feeling the elephant? It seems like this is a perfect example of a place where accepting a pluralistic approach would take us a long way towards solving some of these more abstract problems, or routing around them.

 

Cris Moore (59m 57s):

That's a good point, because of the way these things actually play out in practice quite often. I can give you an example. There was somebody arrested in Albuquerque who had never been convicted. The algorithm used in Albuquerque, which is called the Arnold Public Safety Assessment, named after the Arnold Foundation, now called Arnold Ventures, quite rightly in my opinion does not count past arrests, because, as we said, some people are arrested much more often than others for reasons of bias. It does look at past convictions. This person had never been convicted: zero convictions.

 

Cris Moore (1h 0m 37s):

So the prosecutor said to the judge, "Look, I know this person got a low score according to the algorithm. The algorithm thinks they're low risk, release them. But let me convince you why they're actually very dangerous." And so the prosecutor brought incriminatory evidence and argued that the only reason this person wasn't convicted in the past was that they succeeded in intimidating a witness, and that they should have been convicted, and so on. And the human judge listened to the human prosecutor and to the human defense attorney and made a judgment call, and in fact did detain that person. That's how legal reasoning works. You have an advocate, the prosecution has an advocate, and they can bring in evidence on both sides, which the judge can weigh.
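As a concrete illustration of the structure Moore is describing, here is a deliberately toy scoring function in Python. The factor names, weights, and caps are assumptions made up for this sketch, not the actual Arnold Public Safety Assessment items; the only point it captures is that a tool built this way can count prior convictions while ignoring prior arrests, so a defendant with zero convictions scores low regardless of what a prosecutor later argues to the judge.

```python
# Toy illustration only: factor names and weights are invented for this sketch,
# not the actual Arnold Public Safety Assessment items or weights.

from dataclasses import dataclass


@dataclass
class Record:
    age: int
    prior_convictions: int        # counted by the tool
    prior_arrests: int            # deliberately ignored (arrests reflect policing bias)
    prior_failures_to_appear: int


def toy_risk_score(r: Record) -> int:
    """Return a small integer 'risk' score; higher means the tool flags more risk."""
    score = 0
    if r.age < 23:
        score += 1                              # youth weighted slightly
    score += min(r.prior_convictions, 3)        # convictions counted, capped
    score += min(r.prior_failures_to_appear, 2)
    # Note: r.prior_arrests is never used -- the design choice Moore praises.
    return score


if __name__ == "__main__":
    # The Albuquerque example: many arrests, zero convictions -> low score.
    defendant = Record(age=30, prior_convictions=0, prior_arrests=7,
                       prior_failures_to_appear=0)
    print(toy_risk_score(defendant))  # prints 0: the tool recommends release;
                                      # any override is a human judgment call.
```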

 

Cris Moore (1h 1m 22s):

And the whole point is that it's individualized, for better or worse. It's individualized justice. It's not statistical. It's not based on the assumption that, because people who look similar to you in some statistical sense typically acted a certain way, you will act the same way. And of course, this can cut both ways. It can give the judge the opportunity to be biased against you because they don't like how you look. It can also give the judge the opportunity to give you a second chance, to say, "Hey, I see that you have this terrible record, but I also see that you're trying to turn your life around, and I'm going to give you a second chance."

 

Cris Moore (1h 2m 1s):

And I think that there needs to be space for this type of decision-making, especially where people's civil liberties are involved. On the basic question of whether you should be denied your physical freedom, I think a human should always be in that loop, again warts and all. I think that should be a human decision and not a statistical one. Maybe other things, like whether you should be recommended for a certain type of follow-up program, could be algorithmic. So legal reasoning is fascinating. I've started reading Supreme Court opinions, and it's not math, it's not logic.

 

Cris Moore (1h 2m 45s):

And yet it does have these internal norms, which are hundreds of years old, and I guess there are other systems in other parts of the world. We inherited a lot of our law from England, of course. And it does try to uphold some norm of internal consistency. I think that's quite fascinating. I don't think it's enough, but it's interesting. I have more respect for it than I used to, now that I've actually started reading some of it.

 

Michael Garfield (1h 3m 16s):

Well, it sounds like the rugged fitness landscape of an optimal legal argument is rather convoluted, I guess, is what I'm meant to take from that.

 

Cris Moore (1h 3m 24s):

Well, and which legal arguments does the judge in front of you actually respect?

 

Michael Garfield (1h 3m 31s):

I don't know how much more there is to say about this, but I just feel like it's key to draw the connection between a lot of the points that you're making here and identify where they exist in other areas. One being, Melanie Mitchell and Jessica Flack wrote this piece for Aeon a while ago that was talking about a similar problem in the metrics that we choose. They mentioned that when you try to test students for particular aptitudes, the students optimize for the tests, and then you end up testing the wrong things because the goalposts have shifted. KPIs in the workplace are the same. And then it's related to these issues about...

 

Cris Moore (1h 4m 13s):

Or university rankings where deans can increase the ranking of your university by spending a hundred million dollars on an athletic center.

 

Michael Garfield (1h 4m 22s):

Right. And so, to sort of end it here and throw you this and see what kind of bank shot you want to make with it: I find myself, perhaps because I am an uncredentialed wild animal living in the temple of SFI, in a lot of these conversations about how we rethink the nature of our agreements around authority and around consensus in an age when the world is changing so fast. It's sort of like this ferromagnetic model in which things fall apart when you pump the rate of change up too high.

 

Michael Garfield (1h 5m 2s):

And the fact that you got a degree in a field 10 years ago doesn't necessarily mean that that degree is an effective compression of your expertise in that field. And I'm just curious, given the amount of time you spend with these problems, how you look at problems like the insufficiency of the Gini coefficient, or the way that we think about gross domestic product, the sense that our economic measures are not adequate to the complexity of the world that we live in, and where your thinking on phase transitions and computational hardness weighs in on how we might steer ourselves toward some inkling of an answer to problems like these.
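The "ferromagnetic model" Garfield gestures at is presumably something like the Ising model, the standard physics example of a phase transition in which collective order survives only below a critical level of noise. The sketch below is a generic Metropolis simulation with illustrative parameters, not anything discussed in the episode; the temperature knob stands in, loosely, for his "rate of change": push it past the critical point and the global agreement (the magnetization) collapses.

```python
# A standard 2D Ising-model Metropolis sketch (illustrative parameters only).
# Below the critical temperature the spins largely agree (|m| near 1);
# above it, agreement collapses -- the "things fall apart" phase transition.

import math
import random


def simulate(L=20, T=1.5, sweeps=300, seed=0):
    """Return |magnetization| per spin after equilibrating an L x L periodic lattice."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]          # start from full agreement
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb            # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] *= -1                # Metropolis acceptance rule
    m = sum(map(sum, spins)) / (L * L)
    return abs(m)


if __name__ == "__main__":
    for T in (1.5, 2.27, 3.5):                   # below, near, and above the critical point
        print(f"T = {T}: |m| ~ {simulate(T=T):.2f}")
```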

 

Cris Moore (1h 5m 47s):

I am not an economist, but certainly there's a lot of disagreement about what economic indicators are relevant. And people try to invent various economic indicators that, for instance, say something more about how the lives of people who are not stock market investors are going. The stock market is doing great, but the stock market is not the economy, so what is the economy, and how do we measure its success? I know a number of people work on that kind of thing. And I don't know if some of those alternate measures are really catching on and entering the general parlance. We can't even agree in this country whether the deficit matters or not.

 

Cris Moore (1h 6m 31s):

These things are at the heart of passionate disagreements that I don't expect to get resolved anytime soon. And I'm not an economist myself, so I'm not sure how informed my opinion is.

 

Michael Garfield (1h 6m 45s):

I guess maybe the desperate question here is: have we crossed the Rubicon as a society, or are we going to be able to reconstruct reliable metrics amidst this kind of exponential change that we're undergoing? Basically, are we now sort of chronically and fatally overfit to a lost world? Or what is it in your work that gives you hope that we will find the nuance with which to answer this kind of question?

 

Cris Moore (1h 7m 22s):

I don't know. And I don't know what fraction of human beings were actually involved in building that consensus in the past. I don't know what fraction of human beings were actually enamored of nuance and subtlety in the past. We may be looking partly at fictional golden ages that never actually existed.

 

Michael Garfield (1h 7m 48s):

Make consensus great again.

 

Cris Moore (1h 7m 51s):

I think many of the things that we call societal consensuses were actually, in reality, just consensuses of the elite at the time. And opening up society to people who are not elite is probably a good thing. But that also means, alluding to what you're saying, trying to build broader norms of thinking about things and actually listening, actually learning, and being willing to admit that you might be wrong. In high school, we teach our students that the scientific method is all this hooey about how you have a hypothesis and then you design an experiment, and that's not the scientific method.

 

Cris Moore (1h 8m 34s):

Maybe a handful of sciences have actually worked that way. Maybe particle physics works that way. But most fundamentally, the scientific method is being willing to admit that you might be wrong, that you don't know everything yet, and that you want to learn more. That's the scientific method. And it would be great to have that be a more broadly accepted goal in society. I'm not sure how to get that across. There are some glimmers that I like. There's this thing called deliberative democracy, where rather than asking everybody to vote, and rather than having everybody post a little thing on social media, you get, say, a hundred people from your town, from different walks of life, together in a room for three days, give them access to experts, and let them ask those experts questions.

 

Cris Moore (1h 9m 28s):

Now, of course, I guess they have to believe to some extent in the experts and in expertise, but at least if these are questions like, "How much does it actually cost to run the sewer system?" or "What would happen if that pipe falls apart?", those are factual questions that maybe they can answer. And I think there have been some experiments along these lines where people who, frankly, might not be that thoughtful in some of the standard settings we have now in society, when they get together in this setting and have the time and the luxury to really go in depth on a topic, often achieve more consensus than you would expect.

 

Cris Moore (1h 10m 10s):

So I think that's interesting. I think there is still some hope. We are small-minded, tribal primates, it's true, but we are also, when dragged kicking and screaming, capable of learning from each other when we have the time to do so, when we're not too stressed out, when we have food in our bellies and a little bit of relaxation. We do have the ability to listen to each other and maybe to learn something. And I'm hoping that if we do more of that, and maybe if we create institutions that encourage that kind of thing, we'll all become a little bit less certain about our opinions, and instead of two polarized poles we'll have more of a spread of opinion that can talk to each other.

 

Michael Garfield (1h 11m 5s):

That's beautiful, Cris. That was a good place to end it. Good. Thanks for being on the show. Thank you. Thank you for listening. COMPLEXITY is produced by the Santa Fe Institute, a nonprofit hub for complex systems science located in the high desert of New Mexico. For more information, including transcripts, research links, and educational resources, or to support our science and communication efforts, visit santafe.edu/.