COMPLEXITY: Physics of Life

David B. Kinney on the Philosophy of Science

Episode Notes

Science is often seen as a pure, objective discipline — as if it all rests neatly on cause and effect. As if the universe acknowledges a difference between ideal categories like “biology” and “physics.” But lately, the authority of science has had to reckon with critiques that it is practiced by flawed human actors inside social institutions. How much can its methods really disclose? Somewhere between the two extremes of scientism and the assertion that all knowledge is a social construct, real scientists continue to explore the world under conditions of uncertainty, ready to revise it all with deeper rigor.

For this great project to continue in spite of our known biases, it’s helpful to step back and ask some crucial questions about the nature, limits, and reliability of science. To answer the most fundamental questions of our cosmos, it is time to bring back the philosophers to articulate a better understanding of how it is that we know what we know in the first place. Some questions — like the nature of causation, where we should look for aliens, and why we might rationally choose not to know important information — might not be answerable without bringing science and philosophy back into conversation with each other.

This week’s guest is David Kinney, an Omidyar Postdoctoral Fellow here at SFI whose research focuses on the philosophy of science and formal epistemology. We talk about his work on rational ignorance, explanatory depth, causation, and more on a tour of a philosophy unlike what most of us may be familiar with from school — one thriving in collaboration with the sciences.

DavidBKinney.com

On the Explanatory Depth and Pragmatic Value of Coarse-Grained, Probabilistic, Causal Explanations. Philosophy of Science 86(1): 145–167.

Is Causation Scientific?

Visit our website for more information or to support our science and communication efforts.

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast Theme Music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

Episode Transcription

Michael: David Kinney, it's a pleasure to have you on Complexity Podcast.

David: It's absolutely great to be here.

Michael: This is a very different conversation, or I'm expecting it to be, from the kind of discussions that we've had on the show so far, because you're a philosopher of science.

David: Right, yeah.

Michael: So this is going to be, I think, for a lot of people, a lot more of a meta-level look at the way that we can start to think about the kind of research that's being done here and the kind of philosophical considerations that have to be made in steering research at SFI. It's really cool that you're in the mix here.

Let's start with your own personal background and how you got into doing this work, and then how you came to work here. That's two stories, really.

David: Great. Yeah. I was born in New York City, Queens, moved up to the suburbs of New York when I was about nine or 10. In undergrad, I was pretty wayward. I took a lot of different classes in the American liberal arts style, where you don't have to settle on a major right away. But I eventually got into philosophy, and specifically philosophy of science, largely as the result of two really good professors, Christine Thomas and Sam Levey. They steered me towards writing this undergrad thesis on the philosophy of probability. When I read it now, it's a little cringe. I think for most people when they read their undergrad work, it can be a bit cringe, but I was super keen. I was super into it.

But I wasn't sure if I wanted to have an academic career or anything like that at the time. So I ended up working as a paralegal for two years and then got a scholarship to go do a master's at the London School of Economics. It was there that I really started to get into decision theory and thinking about probability in a less metaphysical way than I had been as an undergrad, and in a more rigorous way, although in my master's it still wasn't very rigorous. Then after one more year out in the private sector, they pulled me back in for good, and I started the PhD there in 2015. That was also at London School of Economics.

At LSE, listeners might not know, the philosophy department was founded by Karl Popper, obviously a super famous philosopher of science. So it had always been on my radar. And it's a very specialized place where we only really do philosophy that's connected to the rest of the world, whether that's more ethics and political philosophy or decision theory, game theory, formal epistemology, which is more of what I do, and which involves the mathematical representation of belief and knowledge and maybe some other epistemic attitudes, if there are such attitudes.

And then of course philosophy of science. I ended up writing this thesis about causation, and that's more or less my graduate career. Where SFI comes in is, I was reading some computer science papers by a guy called Krzysztof Chalupka, who's written some really good stuff about coarse-graining and causation. And he's a coauthor with a philosopher named Frederick Eberhardt, whose work I also really like.

And they cited some stuff by Shalizi and Crutchfield, Cosma Shalizi and James Crutchfield. I went to read that and I saw, where people put their little address under their names in academic articles, Santa Fe Institute. I had kind of heard of the Santa Fe Institute before, but I didn't really have a sense of what it was. And I thought, "Hey, I wonder if they have postdocs?"

I saw that they did, and I applied, because listeners might not know this, but the philosophy job market's really tough. I applied to about 80 jobs last year.

And this was one of two that interviewed me, and the one that I got. In that sense, I'm a little different from some people who come here who are steeped in complexity science, maybe since their undergrads, and have always wanted to come here. For me it was a little more accidental, but it was really cool. I got to meet Cosma Shalizi at an event here recently and tell him that his papers, very serendipitously, ended up with me coming here. And I've got to say, it's a really great job. So I'm very lucky to be here.

Michael: Right on. It sounds like we can just dive directly into the deep end here.

David: Okay, great.

Michael: And talk about how you understand causality and probability, and how that shapes the kind of work that you're doing, and what kind of work you've done in this space. People can go to the show notes and find a link to your website and the links to the papers, at least the published papers we'll be talking about.

David: Yeah.

Michael: There's one in particular about “Explanatory depth and the pragmatic value of coarse-grained probabilistic causal explanations.” Maybe that's a great place to start. I'll leave that up to you.

David: Yeah, that title could have been better, but I don't know how to do it, the title of the paper that is. So in terms of thinking about causality, I'm deeply influenced by an approach to thinking about causation that started in the 80s and has always been a collaborative project between computer scientists, statisticians, people working in the various sciences, and then also philosophers.

This is the causal modeling program. If you want to look for some names that are really associated with this, Judea Pearl is probably the number one name that listeners might be familiar with. It's also very big within the Carnegie Mellon philosophy department, where, famously, Spirtes, Glymour, and Scheines have their whole book, Causation, Prediction, and Search.

Without going too much into the details, it uses a mathematical formalism, developed again by some of these same people but also in other contexts, that uses elements from graph theory and probability theory to represent causality. So you basically draw these graphs between random variables, these sort of arrow-and-blob diagrams, where the blobs are the variables and the arrows indicate causal relations between those variables. And then you define a probability distribution over all the ways that the variables in the graph could be, basically saying here's the probability of each possible setting of this system. And your causal graph is only going to be adequate if that probability distribution satisfies certain properties, the most famous of which is the Causal Markov Condition.

So what this does is take this idea of causality, which had been really bound up in metaphysics, and make it testable. You can look at the data, and if your data doesn't fit a probability distribution that satisfies the Causal Markov Condition, then your graph is going to be inadequate.
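In symbols, the condition David is describing is usually stated as a factorization requirement on the joint distribution (this is a standard textbook formulation, not anything specific to his papers): for variables X_1, ..., X_n arranged in a directed acyclic graph, the joint probability must factor according to each variable's parents in the graph.

```latex
P(X_1, \ldots, X_n) \;=\; \prod_{i=1}^{n} P\!\left(X_i \mid \mathrm{Pa}(X_i)\right)
```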

You mentioned how I'm thinking about what probability is. To me, probability is really a mathematical concept. It has a very standardized mathematical definition, at least classical probability, going back to the Russian mathematician Kolmogorov. And I don't really get into what a lot of philosophers spend a lot of time thinking about, objective versus subjective probabilities. That's not something I've ever really published on. For me, I'm content with the mathematical definition of what a probability is.

Michael: I feel like my job in this conversation is to ask the very naive questions. And the first question, I think you might've actually just answered, and it also touches on your work with Chris Kempes on astrobiology, where there's this question that you chew on: probable according to what? And that's, I think, what you were saying, that you kind of don't care about this sort of subjective versus objective probability. Because I've always had this thing, specifically when we're talking about the likelihood of life, trying to solve the Drake Equation or whatever, that you don't know the denominator. So how are you actually dealing with that, maybe not in that particular case, but in trying to distill causal relationships, how does probability in a mathematical formalization get around that kind of issue?

David: All right. Well, to be diplomatic to some of my metaphysician friends, I wouldn't say I don't care about what probability is or what causality is in the ontological sense, ontological meaning the study of being, like what there really is. It's not that I don't care. I mean, I think you'd be crazy not to care in some way. It's just that I haven't worked on it. And I wouldn't be comfortable saying anything really intelligent about those kinds of distinctions. Not that I'd be comfortable with the idea that I'd say anything intelligent about anything, but in my approach, I tend to think about these things more in terms of claims made in probabilistic language, or in this language of causal graphs, as being conditional on some set of modeling assumptions, right?

So, in probability you usually have a probability space where you have a set of possible outcomes and then a set of subsets of those outcomes that has to satisfy certain conditions and then a probability function that's going to tell you the probability of all those outcomes. That function has to satisfy certain conditions. And then you can say, "Okay, given this mathematical setup, what can we derive? What can we say that's sort of well-supported and logically valid?"
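The setup David sketches here is the standard Kolmogorov probability space; spelled out, again as textbook material rather than anything from his own work, it looks like this:

```latex
% A probability space is a triple (\Omega, \mathcal{F}, P):
%   \Omega        -- the set of possible outcomes,
%   \mathcal{F}   -- a collection of subsets of \Omega (the events), closed under
%                    complement and countable union (a sigma-algebra),
%   P : \mathcal{F} \to [0,1] -- a probability function satisfying:
\begin{align*}
  & P(A) \ge 0 \quad \text{for every } A \in \mathcal{F},\\
  & P(\Omega) = 1,\\
  & P\Big(\textstyle\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i)
      \quad \text{for pairwise disjoint } A_1, A_2, \ldots \in \mathcal{F}.
\end{align*}
```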

And then you can ask questions about, "Okay, well how empirically valid is the setup?" And to me this is a lot of what science does, right? You build a model. And then the model might entail some interesting results. But if that model is a model of nothing, it's only going to be so interesting. But then the models might seem at first glance to be plausible models of some systems.

So you mentioned the astrobiology paper, that's not published yet and it might change. But in there we're thinking about some approaches to the ways in which people thought about formalizing inferences about whether planets are hosts of life, exosolar planets. And we note that some of these approaches, specifically what are called Bayesian approaches, make specific assumptions that then can lead to some mathematically kind of funky results that you might want to avoid.

And that's the kind of thing too where you can say, "Okay, look, here's what looks like a compelling model." Maybe it's a probabilistic model, maybe it's a graphical model, maybe it's something else that I don't know anything about. But then you notice that it entails some weird mathematical consequences. And then maybe that forces you to go back and revise your model even though it seemed empirically adequate, or seemed theoretically well-motivated, as Bayesian models in astrobiology can seem. I mean, Bayesianism is in a sense a really compelling inductive logic. But it has its limitations. That's what we're pointing out in that work. But that work is still ongoing. I should caveat that it may change, and we may find that all of our mathematical musings are just not worth anything. But we'll figure that out, hopefully.

Michael: Okay. So, I wanted to go back to, you mentioned the Causal Markov Condition.

David: Right.

Michael: And this is a really interesting way to formalize thinking on… for me, for years, I was thinking about this. And I talked about this a little bit with David Krakauer in the first episode, about the difference between complex systems thinking and science driven by machine learning. And how in that space, it's maybe not strictly true, but it's interesting that there is controversy about the theory-less-ness of machine learning results and how these massively dimensional, number-crunching correlations are not giving us the same kind of truth claims as certain other scientific methodologies. And so I would love for you to go into a little bit more detail about the Causal Markov Condition.

Because I was sitting there listening to your talk about this, and I was having trouble deciding whether or not this laid to rest my concerns that you never actually know all of the possible hidden variables in a system. And again, it sounds like what you're actually saying is, "Well, we never do. And so the work is always provisional. The model is constantly subject to expansion." But if that's true, then how is this particular setup that you're describing, with reasonable degrees of confidence about a directed acyclic graph, useful to us in accepting the idea that science can say with any reasonable certainty that one thing causes another?

David: Right.

Michael: So that's a mouthful, but…

David: Yeah. So just to start off, in a very non-mathematical, informal way, what the Causal Markov Condition says is really two things. One is what's called a screening-off condition. And that means that if A causes B and B causes C and there's nothing else going on, there's just a causal chain, then B is going to give you all the information that you need to know about C. And once you've accounted for B, A doesn't tell you anything else. So it's basically saying, if I cut off a causal chain and say, okay, I've got all the information at this point, going further back in that chain and looking at the causes of those causes isn't going to give you any new information.

The second really contentful thing the Causal Markov Condition says is that where two variables are correlated, they must either be causally related in one direction or another or have a common cause.

So, if people with yellow fingers tend to have lung cancer, that's a correlation. Now you think either there's a direct causal relationship there or there's a common cause. In that case it's smoking. Going back to screening off, to stay on these sort of epidemiological examples, if stress causes smoking, and smoking causes lung cancer, really all you need to know about is the smoking. Once you know that, you don't need to go back and say, "Okay, was this person stressed?" when you're trying to assign a probability that they'll get lung cancer.
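A small sketch of the screening-off point, with invented numbers: for a toy chain Stress → Smoking → Cancer whose joint distribution is built from the factorization above, conditioning on stress in addition to smoking changes nothing.

```python
from itertools import product

# Toy joint distribution over (stress, smoking, cancer), built from a chain
# Stress -> Smoking -> Cancer, so it satisfies the Causal Markov Condition by
# construction. All numbers are invented for illustration.
p_s = {True: 0.3, False: 0.7}                         # P(stress)
p_m = {True: {True: 0.6, False: 0.4},                 # P(smoking | stress)
       False: {True: 0.2, False: 0.8}}
p_c = {True: {True: 0.3, False: 0.7},                 # P(cancer | smoking)
       False: {True: 0.05, False: 0.95}}

joint = {(s, m, c): p_s[s] * p_m[s][m] * p_c[m][c]
         for s, m, c in product([True, False], repeat=3)}

def prob(predicate):
    """Probability of the event picked out by predicate(stress, smoking, cancer)."""
    return sum(p for outcome, p in joint.items() if predicate(*outcome))

# P(cancer | smoking) versus P(cancer | smoking, stress):
p_cancer_given_smoking = prob(lambda s, m, c: m and c) / prob(lambda s, m, c: m)
p_cancer_given_smoking_and_stress = (prob(lambda s, m, c: s and m and c)
                                     / prob(lambda s, m, c: s and m))

# Screening off: once smoking is known, stress adds no further information.
assert abs(p_cancer_given_smoking - p_cancer_given_smoking_and_stress) < 1e-9
print(p_cancer_given_smoking, p_cancer_given_smoking_and_stress)
```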

So that's the Causal Markov Condition. In terms of whether we've ever accounted for all the hidden variables, no. There's nothing about assuming the Causal Markov Condition that is going to give you that. And in fact, when Pearl sort of derives the Causal Markov Condition from assumptions, he specifically assumes that there are no sort of unmeasured common causes of two or more variables when he says that any sort of system, any model that ought to be called a causal model has to satisfy this condition.

Going to the point about machine learning, I don't know too much about machine learning, and so I'd be hesitant to speak for them. I do know that Pearl is often very critical of the deep learning community, basically alleging that they're essentially playing this correlation game, and what they're discovering are correlations and not causal structure. There is a very active program within machine learning doing causal inference from data, and their algorithms explicitly assume the Causal Markov Condition when building models. I'm thinking in particular of pioneering work by people like Heckerman at Microsoft Research and things like that.

Michael: Right on. So actually, I feel there're two places we can go from here.

David: Yep.

Michael: And I'll let you decide which you think is the lowest entropy move conversationally. One is that you brought up the smoking example, and this feels like a good spot where we could get into a little bit more about coarse-graining and explanatory depth and information pricing, and how we can start to actually look at different explanations through a lens of their thermodynamic costs.

I think that one connects to a lot of the other work that's going on here at SFI about the evolution of cognition in different systems and how evolution and adaptation actually function in conditions of uncertainty. The other one is a little bit more specifically about how you choose the assumptions that you're making. In your astrobiology work, you make this distinction between Socratic modes of inquiry and Euclidean modes. And I think both of those are really interesting places to jump off. But I'll let you decide where you want to go first.

David: Let's do the first paper first. And then maybe we can come back to the Socratic versus Euclidean stuff. So that paper, I should say while we're talking, was the first chapter of my dissertation that I wrote and the one that I spent a ton of time on. And I got a ton of help on it, both in the initial formulation of the ideas and in writing it up and going through the revision process, from all of my advisors, but especially my advisor, Katie Steele. Also Luc Bovens and Jonathan Birch as well. But definitely Katie, from the beginning, really had a lot to do with that project. The basic idea there is, I've been talking about these causal models and saying, "Oh, they involve variables," right? Well, a variable here is just a random variable, which is, informally, just an exhaustive categorization of the different ways that the world could be, right? So classically you think the world could either be such that Jones is a smoker or Jones is not a smoker, right? Or Jones has lung cancer, Jones doesn't have lung cancer. I threw the ball or I didn't throw the ball, the window shattered or it didn't shatter.

Obviously, these kinds of possibilities are what we think about when we think about causation and counterfactuals. Now what that entails is that all of these causal models are going to be sensitive to, and essentially require us to first specify, what these variables actually look like. This is where granularity comes in. The set of values that a variable can take can be more fine-grained or coarse-grained, right? So you can say John's a smoker, John is not a smoker. Or you can say John smokes Marlboros, John smokes Camels, or he's not a smoker. And so on down the list: you could list how many cigarettes John has ever smoked, all kinds of things like that.

Michael: So just to pop in here, I think this comes up in our conversation in the Facebook group with some regularity. Because one of the things that I feel people try to avoid or just completely rethink is this historical dispute between the scientists that assert that a study of history is a necessary precondition for understanding economics, for example, and the scientists that are willing to look at crowd dynamics through a fluid socio-physics model.

And then this is one of those issues of how much detail, how much contingency, you consider relevant. Are you willing to look at somebody as a particle in a fluid in motion, or do you need to know why the person made the decision to attend the protest that day?

David: Right.

Michael: And that these are... If I'm to understand you correctly, these questions have concrete implications for the way that we practice science because it's how much energy do we have, how much time do we have to allocate to a particular line of inquiry? And then also how different allocations ends up giving us, potentially… The interesting thing in this paper for me, among many was that there's an assumption that abstraction and explanatory depth are the same. And you're saying that's not necessarily the case.

David: Right. I mean that's... So, as is often the case with academic papers, you have to situate it in this context of debate and things like that. Especially in philosophy you often have to do this, and I think it's mostly to the good. But yeah, so there've been people that said, "Look..." So just to take a step back for a second. The question I'm trying to answer is this: there's been a lot of hand-wringing among philosophers who think a lot about these causal models, given that the models do require you to first set the values of your variables before you can investigate what the causal relations are. There's been a lot of hand-wringing about what the right level of granularity is for a given causal model of a given phenomenon. Jim Woodward has a really nice paper in which he goes through all the different possibilities and has a slightly pessimistic conclusion: he can't really think of a good objective reason why any given causal model has to use a particular set of variables, or variables with a particular set of values.

There's some really good work on this by people like Michael Strevens, Laura Franklin-Hall, and Brad Weslake, all of whom try their hands at one answer or another. Brad Weslake in particular has this answer that you want as coarse-grained a description of the system as possible that preserves all the causal relationships and preserves all the information between all the variables.

But what I show is that in some cases, especially depending on how you're defining information flow, that can lead you to some counterintuitive results. What I bring in, in that paper, which is really something that Katie Steele helped me out a lot with, but which was also, I guess, new within this discussion, is a sort of pragmatic bent. I basically say, "Okay, we can actually set up, as a decision theory problem, the question of what variable will give us all the information we need and none of the information that we don't for a given decision problem." And then I say, "Look, if you specify some decision problem that you're trying to solve with your causal model, then the decision problem is going to have a utility function over possible outcomes and a probability distribution relating interventions on some causal variable to the values of some variable of interest. And you can mathematically say, if you assume certain things about your utility function and how expected utility works, exactly how coarse a variable you can get away with. And if you assume that coarse-graining is good at some level, because it reduces the computational complexity, or complexity in some other sense, of your causal model, then you ought to coarse-grain as much as possible while preserving all of that pragmatic information.
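As a rough illustration of the gist (this is not the paper's formal result; the variable values, numbers, and the merging rule below are invented for the sketch), one can check that a candidate coarse-graining of an intervention variable never merges values whose expected utilities differ for the decision problem at hand:

```python
# Toy version of the pragmatic coarse-graining idea.
# Decision problem: choose an intervention on the "smoking" variable;
# the variable of interest is "cancer"; utility depends only on the outcome.

# Fine-grained intervention values and P(cancer | do(value)) -- invented numbers:
p_cancer_given_do = {"marlboro": 0.30, "camel": 0.30, "none": 0.05}

utility = {"cancer": -100.0, "no_cancer": 0.0}

def expected_utility(p_cancer):
    return p_cancer * utility["cancer"] + (1 - p_cancer) * utility["no_cancer"]

fine_eu = {v: expected_utility(p) for v, p in p_cancer_given_do.items()}
# -> {'marlboro': -30.0, 'camel': -30.0, 'none': -5.0}

# Candidate coarse-graining: merge values only if they agree in expected utility.
# "marlboro" and "camel" are interchangeable for this decision problem, so the
# two-valued variable {smoker, non_smoker} preserves all the pragmatically
# relevant information while being coarser than the original.
coarse = {"smoker": {"marlboro", "camel"}, "non_smoker": {"none"}}
for label, members in coarse.items():
    eus = {fine_eu[m] for m in members}
    assert len(eus) == 1, f"coarsening {label} would change some decision"
    print(label, eus.pop())

# Merging "smoker" with "non_smoker" would fail this check: it collapses
# interventions with different expected utilities, so it coarse-grains
# too far for this particular decision problem.
```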

So that allows you to coarse-grain more than you would if you were trying to preserve all the information between variables. But it also gives you a very precisely defined limit, again in a given context, for how coarse-grained you can go. I think this is coming back to a theme, which is that throughout my work I often find myself saying that philosophers, especially philosophers of science, or maybe not especially philosophers of science, philosophers in general, are often very unsatisfied with answers that require too much context.

In this debate, for instance, philosophers want to know what it is about the world, or what it is about causality in general, that can tell you the right level of granularity for your different causal variables. Laura Franklin-Hall calls this "the holy grail of philosophy of science" in some settings. What I'm kind of saying is I don't think there is a holy grail in this instance. I think you have to specify the context. In some sense, I think these questions only make sense once you've specified a context, and often that's going to be some pragmatic context. You're going to have some problem that you need to solve with these scientific models. So you're going to answer these questions about what's the right level of granularity based on that context.

Michael: Yeah. So, this is where thinking about evolutionary dynamics seems to play in, where you're actually saying something about the evolution of our own cognitive biases. You give this example about a court hearing: the need for a jury to determine guilt beyond a reasonable doubt, how fuzzy that is, how a reasonable doubt changes from case to case based on the severity of the crime and the severity of the punishment. So, over n iterations in some evolutionary game, it leads to situations like the one in which human beings are likely to assign agency to a whispering in the grass, as a classic example.

Where the cost-benefit of getting it wrong, and there not being a tiger in the grass, is such that we're actually skewed in the way that we think, and it has to do with cost and risk and so on.

David: Yeah.

Michael: So there's this question of how... I mean, I don't know where you want to take that, but it does sort of raise the question for me of how these biases find their way through this kind of evolutionary game into what we believe is going on in scientific theory.

David: Yeah. I mean, I'm not an expert in evolutionary theory and so I'd be cautious about saying anything too definitive on that. I will say that there is a flavor to my work, and I wouldn't characterize it this way in print, but there is a flavor to what I'm doing in that paper in which I'm kind of doing what sometimes is done in moral philosophy in which what's thought to be sort of a kind of deep moral principle is argued to in fact be an artifact of evolution. I don't want to say anything about how successful or unsuccessful those kinds of arguments are in moral philosophy.

I will say that some of my work about levels of description has a similar feel, in that I'm saying, sort of, what makes sense for some agent who might be subject to selection pressure. Although I would not have the expertise to formalize what it would mean for that agent to be subject to selection pressures, but given that the agent is going to be subject to some selection pressures, what's the coarsest, or most adaptively rational, model for them to have of the world?

And I think in some sense because science is done by agents and group agents and all kinds of agents, all of that's going to matter in terms of how we build our scientific models. I think in some sense that's deeply pragmatist, but I'm happy to sort of own that. I will say David Danks has a sort of much more wide reaching research program and has some really great papers with a student of his, Sarah Wellen, I believe is her name. In which they sort of in a much more broad way than I do, think about the rationality of science and the very enterprise of science from a standpoint of adaptive rationality. So evolutionary rationality, more or less. If anyone's sort of intrigued by these kind of half-formed things I'm saying, now you can go read that and get the fully formed things. [Laughs.] So yeah.

Michael: Well done. It seems we can always double back to the Euclidean and Socratic stuff. I'd love to. But it seems this is a natural place to segue into another unpublished paper that you're doing with Liam Kofi Bright, on risk aversion and elite group ignorance. You're critiquing the idea that someone in an elite group who is willfully ignorant of the conditions of people outside of that group is irrational. You're making an argument, from the perspective of rational risk aversion, that this is not the case. This really seemed... I don't know. This is just a really interesting piece. So I'd love for you to expand on it a little bit.

David: Oh, thanks. Yeah, I should give a big shout out to my co-author Liam there. Liam started as a professor at LSE in the final year of my PhD program, and we've had a lot of really great talks about this project. I actually introduced him to falafel for the first time over this project. He'd never had a falafel before. We sat in a cafe not far from the London School of Economics and ate falafels and really talked through the details of this paper.

But yeah, so our starting point here is a really great paper that I would recommend everyone read, by Charles Mills, called "White Ignorance," in which he basically takes to task a lot of really great social epistemology being done by people like Hilary Kornblith, Alvin Goldman, people like that. That's the project of trying to think about belief and knowledge not just as properties of some one individual, but as attitudes that individuals have in virtue of their participation in society and in groups, and that the groups may even have in themselves.

Mills is sort of taking all of that to task and saying that it is ignoring this massive role that ignorance plays in the social world, especially ignorance around inequalities in race, in gender, in social class. Mills focuses on race. But he's clear in that paper that he hopes his analysis could be extended to all kinds of social inequalities and the role that all of these inequalities play in our epistemic lives. Right?

So this is very much in keeping with what's sometimes called standpoint epistemology, critical race theory, things like that. One thing he says in his paper is that he considers the kinds of ignorance in which, bluntly, white people are ignorant of racism, historical racism, the effects of racism on their lives today. He considers that a form of irrationality. What we do in that paper is acknowledge that Mills is entirely correct if one adopts the standard economic notion of rationality, namely expected utility theory. Mills is entirely correct that any kind of ignorance on the part of any agent would be irrational.

But then we show that if you adopt a different way of thinking about decision theory, one built by a philosopher named Lara Buchak, who is just moving now actually from Berkeley to Princeton, as far as I'm aware… You can get a result in which it is actually rational to remain ignorant of some information. We set up situations that speak directly to this kind of ignorance of one's privileged status.

Just to give you a sort of a flavor of how that works, the idea is something like this: Look, if I'm going to borrow my friend's train pass to take the train, let's say that's illegal, right? I'm debating whether or not to do it. I could either just buy a train ticket or take the pass. If I take my friend's pass and I'm caught, I'll get a fine. If I don't take their pass, I'll obviously have to pay for a ticket. So there's a built-in cost function there and a built-in utility function. Because I'd like to not pay any money to take the train.

But I'm a little risk averse. So looking at it from my risk-averse position, I think, "You know what, I could probably get away with taking the pass. Maybe it's 50/50, but it's not worth it. I'm just going to buy a train ticket." Now someone comes in, and they can sell me some information. They can tell me whether I'm a member of a group — in this case, white people — that actually has a very low probability of having their train passes subject to any kind of scrutiny. The conductor's just going to walk by and go, "Yeah, train pass. Good. Go right ahead."

Now what we show is that under certain conditions I'll actually pay to avoid being told whether the group that I'm part of is in fact subject to these privileges. The reason why is because it would license all kinds of risk-taking that by my own lights right now I actually want to avoid. I don't want to become this risk-taker in the future.
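To see how the arithmetic of that can work, here is a toy sketch with invented numbers, using a simple Buchak-style risk-weighted expected utility with risk function r(p) = p². The ticket price, fine, and checking probabilities are all made up, and this is my own illustration rather than the model in the Kinney and Bright paper.

```python
# Toy illustration of rational information avoidance under risk aversion.
# Risk-weighted expected utility (Buchak-style): order outcomes from worst to
# best and weight each improvement by r(probability of doing at least that
# well), with a risk-averse r(p) = p**2.

def reu(lottery, r=lambda p: p ** 2):
    """lottery: list of (probability, utility) pairs."""
    outcomes = sorted(lottery, key=lambda pu: pu[1])          # worst -> best
    total = outcomes[0][1]
    for i in range(1, len(outcomes)):
        p_at_least_this_good = sum(p for p, _ in outcomes[i:])
        total += r(p_at_least_this_good) * (outcomes[i][1] - outcomes[i - 1][1])
    return total

TICKET, FINE = -5.0, -50.0     # cost of buying a ticket vs. fine if caught with the pass

# Without the information: getting caught feels 50/50, and the risk-averse
# evaluation of taking the pass is much worse than just buying the ticket.
print(reu([(0.5, FINE), (0.5, 0.0)]))      # pass: -37.5  <  ticket: -5

# The offered information: whether the agent's passes are rarely checked
# (caught with probability 0.05) or often checked (0.95), each equally likely
# given current beliefs. If told "rarely checked", taking the pass becomes the
# REU-best act; if told "often checked", buying the ticket stays best.
learn_then_act = [(0.5 * 0.95, 0.0),       # rarely checked, takes pass, not caught
                  (0.5 * 0.05, FINE),      # rarely checked, takes pass, caught
                  (0.5, TICKET)]           # often checked, buys ticket
print(reu(learn_then_act))                 # about -6.09
print(reu([(1.0, TICKET)]))                # staying ignorant and buying: -5.0

# From the agent's current risk-averse standpoint, learning is worse than
# remaining ignorant (-6.09 < -5), so they would even pay a little to avoid
# the information, even though plain expected utility ranks learning higher
# (EU of learning = -3.75 > -5).
```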

Now this is controversial, right? Whether that's a rational attitude. Buchak thinks it is. I think it can be, in at least some cases. I think it's fair to challenge us on that question: is it really rational to say, "Don't give me this information, because it'll make me a risk-taker, and from where I'm standing now, I don't want to be a risk-taker"?

Another example, just to sort of drive the message home a little bit, one that's not related to these sort of socially loaded contexts would be someone who, maybe they're an extreme skier and they've looked at the avalanche report and they think, “Okay, there's maybe a 5% chance of an avalanche. I'm not going to go. I'm not going to go out today. Too risky.” And then someone comes along and says, “Well actually I can tell you whether it's a 10% chance of an avalanche or a 1% chance of an avalanche.” And then the skier says, "Don't tell me, because if you tell me 1% I'm going to go. But from where I'm standing now, I don't want to get involved in an avalanche."

So that's the less ethically loaded case, but one that should pump the same intuition. And then we use that in the paper to talk about this broader way of thinking about these issues: thinking about ways of trying to alleviate these kinds of social hierarchies, right? If we think that these social hierarchies between races, between genders, are in themselves deeply problematic features of society, then we say, "Look, it's not going to be enough simply to try and educate people, right? Because here's a model in which people will rationally avoid the information that you're trying to get to them."

So you really need to work on these hierarchies more directly. I should say this is very much a how-possibly model, as in the standard economics tradition of saying, "Look, here's some math that kind of shows you how things could go." We acknowledge in that paper that there needs to be a lot more work in cognitive science before this could go from a how-possibly model to a how-actually model.

But this is work in progress. It might change. It's sort of running the gauntlet of the journals right now. But one thing I'm really proud of in that paper is the way in which we bring mathematical philosophy to bear on questions that it has traditionally avoided. Right? I think it's partly sociological, anthropological, but there's been a divide until very recently between philosophers that use a lot of math in their work and philosophers that think deeply about social inequalities and other real-world issues.

I think that's to the detriment of both of those kinds of philosophers to some extent. I don't want to say everyone has to do mathematical philosophy or everyone has to do politically loaded philosophy because they don't. But I'm quite proud that I have something that hopefully will someday see the light of day in which we're bridging that divide a little bit.

Michael: For me, one of the most interesting things about this paper you squeeze in right at the very end, where you're talking about what precipitates out of this: there's a prima facie plausible line of argument that the correct thing to do is just to devote our efforts to trying to inform people who are risk-averse and rationally avoiding information. And I hate to even bring this up, but I see a lot of people throwing themselves against the wall in sort of #okboomer-style conversations, where a group that feels it is economically disadvantaged is trying to make the case that things are actually worse out there than you realize. What you're saying is that this suggests that, at least under a certain set of constraints on this situation, you may encounter so much resistance that the energy could be better spent on direct interventions with the disadvantaged than on trying to appeal to the privileged group.

David: Yeah. Direct interventions on society, right? Making society such that there is not a deep material disadvantage associated with being a particular race or a particular gender as a result of deeply unfair and historically ingrained practices. Yeah, the kind of “Education is not enough” idea. I think this is something that's treated much more interestingly in the Mills paper itself. This is the Charles Mills paper, “White Ignorance.” Because in that paper he goes through all of these historical examples to illustrate the extent to which privileged people, a group in which I entirely count myself, privileged people are really motivated not to have to acknowledge their own privilege.

There are all kinds of reasons why you wouldn't want to do it, which is why it's interesting, in a way, that the thing we critique Mills for is calling this a kind of irrationality. He calls it motivated irrationality: there's deep motivation to be ignorant of these things, so the ignorance is inherently irrational, but there's still a motivation behind it. We come in and say, no, let's try and fit this to a model of rationality. But Mills is very right to point out the lengths people will go to to avoid information that is uncomfortable to them, and where acting upon that information would require them to make changes in their life that would make the world more fair but might make their lives a little bit less comfortable in a number of ways.

Yeah, I think this is really going against a traditional notion in both epistemology and in economics that people will always seek information, right? Because that would be the rational thing to do on some model. But we show that if you have a broader definition of rationality, one that includes other ways of mathematizing how one evaluates a choice or an action, you can see that in fact it is entirely in keeping with rationality for people to maintain a kind of ignorance that we might nevertheless find morally, deeply objectionable.

Michael: So to again draw a totally slipshod and potentially faulty analogy here to evolutionary thinking, it seems like what we're saying is that in our pursuit of the quote unquote truth we reach these local optima, and there may be a higher peak over there, but to get there from here requires you to cross the valley, which rationally nobody is going to do, right? I mean, that's in keeping with the Kuhnian idea of scientific paradigm shifts that you elsewhere critique.

David: I guess I would say, in the context of the particular paper we've just been talking about, I think we would not want to claim we're at any kind of local or global social optimum. We're basically showing that one way not to get to something closer to a social optimum is just through education, because there are these rational impulses towards avoiding information, at least from the self-interested rationality of individual agents.

More to the point, I think that things would just be, at least in the short term... But maybe this is where your point comes in. I see what you're saying now. Yeah. There might be a long-term benefit to receiving information that you, by your own risk-averse lights, judged to be information that you ought to avoid. And this gets into really good work on dynamic choice, including in a risk-averse context. People like Johanna Thoma and Jonathan Weisberg work on dynamic choice in risk-averse cases, where you're not just looking at one decision problem but potentially hundreds or thousands of decision problems, and how risk aversion factors into that kind of thing. And there, I do think there's a case to be made that information that might be harmful to you in the short term is nevertheless beneficial to you in the long term. Getting people to make those kinds of decisions with a longer time horizon could in itself be a very worthwhile project.

Michael: Well, you're not going to have to fight me to agree on that.

David: Okay. Good.

Michael: In fact, that section reminds me of the work that SFI external professor Ole Peters is doing on ergodicity economics, and disabusing ourselves of the notion that people actually make decisions across the space of possible outcomes. They're actually looking at a situation iterated over a given time horizon, like you said.

David: Yeah. Without knowing too much about Ole's work, there is a direct relevance here in that both of us are looking at the consequences in a broad way of jettisoning this one picture of rationality that involves expected utility theory where expectations are a weighted linear average over possibilities. He's jettisoning that in his own way. We're doing it a slightly different way here. But both of us are sort of exploring notions of a broader context of rationality. Yeah.

Michael: I want to make sure you get out reasonably close to your fair time. But before we go, I feel like I've already promised this so many times: I just want to hairpin here and double back on this interesting distinction that you make in the astrobiological work on Socratic versus Euclidean modes of inquiry, and how that kind of thinking constrains the kinds of questions that we feel it's fair to ask in science.

David: Yep. Good. Basically, this comes out of a book review by Clark Glymour, a really famous philosopher of science, someone whose work has been influential on mine in a number of ways, including my work on the problem of old evidence. He came up with the problem of old evidence. I'm glad we haven't talked about the problem of old evidence today because it's very boring, but that's another area that I work on where he's been influential. In this review of a really great book by Jim Woodward, as well as just in some interviews that he's done, he has made this distinction between Socratic and Euclidean modes of inquiry.

The way the Socratic mode of inquiry works is you try to define all your concepts, give the necessary and sufficient conditions for the thing that you're trying to study, right? So the Socratic approach to morality would be to say, okay, what are the necessary and sufficient conditions for an action to be good, right? The Socratic approach to epistemology — this has generated a ton of ink — is the question of what are the necessary and sufficient conditions to know something, right? These are Socratic analyses of concepts. Most philosophy has gone this way, right?

And then there's this Euclidean mode, in which you don't try and do that. Rather, what you do is state some axioms that you take to be warranted, right? And this is where you get back to people like Wittgenstein and Reichenbach, in terms of this idea that assumptions can be warranted even if they're not known or even if you don't have reasons to believe them necessarily. Or sorry, you might have reasons to believe them, but they might not be empirically grounded reasons, or things like that. But you put forward these assumptions, and then you investigate the consequences of those assumptions. And given that the assumptions are meant to encode something about how you think the world is, you can get results out of that that constitute new knowledge. You can have a fruitful inquiry, right?

And it's called Euclidean because, the idea goes, Euclid has this whole geometry in which he has a set of axioms, and from there he's able to derive all these incredibly rich geometric concepts. But he never defines some of the primitive terms in these axioms. The example Glymour gives is the point, right? Euclid doesn't spend any time talking about what is necessary and sufficient for something to be a point. He just uses this notion of a point in these axioms, and it's in investigating those axioms that you start to really get a sense of what a point is, because you understand the role it plays in some system.

I think if you look at the history of Socratic projects, they're usually carried out in philosophy. Philosophy is a discipline where there's an active debate over whether it makes progress. I think philosophy does make progress in a lot of ways, but certainly for the sciences, which tend to take a more Euclidean approach, I think there can be no doubt that they have made progress. Although Kuhn would disagree with me there. But I certainly think there can be no doubt that the sciences have made progress. This does speak to this Euclidean way of doing things. I mean, you can spend all your life trying to find the necessary and sufficient conditions of one thing, or you can just make an assumption and see if it leads to fruitful further consequences.

And just to tie things full circle, this is exactly what the causal modeling approach based on the Causal Markov Condition does with this notion of cause. At no point in this literature do you ever get something that says, “ X causes Y, if and only if,” and some long list of things. The closest that you come really is Jim Woodward's book on causal explanation in which he uses this causal modeling formalism to try and derive conditions like that. But cause in this framework plays the same role that point does in Euclidean geometry. And then just by setting up these axioms you get this really fruitful, rich understanding of what a cause is. You could derive all sorts of theorems and ask all kinds of interesting questions as I'm trying in my own feeble way to do in a lot of my work. Ask all these kinds of interesting questions about how causality works using these axioms. It's there that you're able to make progress, I think.

As far as my meta-philosophy goes, I'm a big fan of Euclidean approaches in philosophy. I think there are coming to be more and more of them. And I think working with scientists like I do at SFI is a good way of continuing that project because scientists, to me anyway, seem to be in the business of launching Euclidean projects. And so yeah, just seeing, watching people do that all the time really gives me inspiration and really helps direct my inquiry throughout all my work.

Michael: Just to tie a hypothetical bow on this, it sounds to me like what you're saying, or what I see rather, through all of your work, is an appreciation for, a submission to, the humility that what we know is within a given time horizon, given the available energy, based on the level of aversion to risk that we have inherited, within a particular margin of error, given reasonably defined bounds of probability. I think a lot of people think about not just philosophy but also science as seeking much more grandiose and final statements about the world. And it seems as though really the message elaborated on in your work so much is, again like you said earlier, much more about the conditional and the contextual in all of this thinking.

David: I think that's really well put. I think I maybe paradoxically take it as a great compliment if you think there's a sense of humility that comes through in my work. You mentioned how people think about philosophy and how people think about science, and I think it is part of a PR problem that both of those fields... Maybe not a PR problem, maybe it's just a PR reality. If you go to the website of a philosophy department in the anglophone world, certainly you'll be told to take philosophy classes because it'll help you understand the big questions.

Similarly, the sciences sell themselves on this idea that if you study the sciences you will gain more understanding. And in a sense, both of those things are true, but I think there's a tipping point at which, in both the sciences and philosophy, you realize exactly, as you said, how much of our knowledge is scaffolded, just how much of our knowledge is context dependent. And yeah, I think that's one of the really profound things you get out of this. I don't know if there's a way to sell that to people. I don't know if you could say study biology so you can know how little we know about life. Study philosophy so you can know how little we know about everything.

Michael: There are rational reasons to remain ignorant.

David: There might be rational reasons to remain ignorant there, at least if you're trying to get anyone to take your classes. And just one more point on that: I've mentioned a lot of names of mostly contemporary working academic philosophers. It's not to give an impression of being well-networked, because I'm not really. I don't know most of these people that I'm mentioning. And it's also not to give a sense of how well read I am, because I'm not well read.

It's more to give a sense of we're all just doing these little research projects that together form a really nice mosaic. And I think if there's one thing I could get across about how philosophy is versus how it's seen, it's often seen as this discipline where you have big names. Kant, Hume, Mill, Aristotle. But today I think it's practiced much more communally and in a much more piecemeal way, closer to the way the natural sciences have been for a long time. And I just hope that collaborative and piecemeal trend continues.

Michael: Awesome. David, thanks for sticking around a few extra minutes and rapping with us.

David: Alright. No problem, Michael. Thank you very much.