COMPLEXITY: Physics of Life

Rethinking Our Assumptions During the COVID-19 Crisis with David Krakauer (Transmission Series Ep. 4)

Episode Notes

COVID-19 has delivered an extraordinary shock to our assumptions, whether in how we practice education, business, research, or governance. When we base forecasts on bad data, even solid logic gives us unreliable results. Centralized authority is good for organized, coherent action but isn’t agile or fine-grained enough to deal with local variance and rapidly evolving novel challenges. Surveillance can save lives but also threatens the privacy upon which a diverse society depends. A longer memory might cost more to maintain, but it can also save more by preventing even larger economic burdens down the road.

How we adapt to this pandemic will depend on where we find new balance points between established and efficient universal standards and agile, messy flexibility. In this episode, we build on the themes of earlier installments to study five new articles where rigorous uncertainty, complex time, and the creative opportunities of crisis intersect…

Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute, the world’s foremost complex systems science research center. I’m your host, Michael Garfield, and each week we’ll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

In Transmission, SFI’s new essay series on COVID-19, our community of scientists shares a myriad of complex systems insights on this unprecedented situation. This special supplementary mini-series with SFI President David Krakauer finds the links between these articles—on everything from evolutionary theory to economics, epistemology to epidemiology—to trace the patterns of a deeper order that, until this year, was largely hidden in plain sight.

Support our research and communication efforts at santafe.edu/give.

If you find the information in this program useful, please consider leaving a review at Apple Podcasts. Thank you for listening!

Further Reading:

Anthony Eagan on Federalism in a time of pandemic

Carrie Cowan on the future of education

Stephanie Forrest on privacy concerns that arise with a pandemic

Sidney Redner on quantitative ways to consider the economic impact of COVID-19

David Wolpert on statistical tools for making pandemic predictions

Visit our website for more information or to support our science and communication efforts.

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast Theme Music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

Episode Transcription

Michael

Well, David, we are in the mix for week four of the Transmission series, and this week's is kind of a grab bag. But with the diversity of entries for this one, I think we're starting to see something that taps into themes from the last three weeks' episodes. I'm looking forward to exploring this with you. Where should we begin?

 

David

Well, yeah, I think there is a thread in this labyrinth having to do with authority and power and their best use and control. The logical place to begin, as we have in the past, is at the micro and work up to the macro, and I think David Wolpert’s article on Bayesian statistics is where we should start.
 

Michael

Yeah, David actually does a really good job in this piece of introducing the outline of Bayes’ theorem. Without assuming a prior on who in our audience knows what this is, why don't you lay it out for us?

 

David

Yes. So David starts by making the point that our best decisions are based on appropriate data, and typically on estimates from datasets which are incomplete. We've talked a lot in this series about R0, the reproductive number of an epidemic. These are things that we have to estimate from very noisy data. And the point that David makes, which we all know of course, is that our prognostications and projections and predictions are only as good as those estimates. And those estimates are, of course, changing dynamically, so we're constantly having to go back to the data to make better ones. And he introduces us to what is really a very foundational idea. It’s more than 250 years old, and it was derived by an English minister, Thomas Bayes. And it's the appropriate way to think about conditional probabilities.

So most of us growing up learned about probabilities in terms of fractions: if 50 out of 100 people are female, then the probability of being female in that population is a half. And that's how most of us think about probabilities or odds. But there is a more complicated idea, which is not a straight probability but what's called a conditional probability. So for example, what is the probability of being British given that your favorite food is a Scotch egg? I think it would probably be quite high, right? Because no one in their right mind who wasn't would ever go near the thing. And this is not a simple fraction. To calculate a conditional probability—the probability of A given B—you need to use an equation, and that equation is known as Bayes’ Rule. And why does that matter? David gives a very nice example: because if you use fractional intuition for probabilities, you make terrible mistakes.
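
For reference, the rule itself can be written out as follows, for two events A and B:

$$
P(A \mid B) \;=\; \frac{P(B \mid A)\,P(A)}{P(B)},
\qquad
P(B) \;=\; P(B \mid A)\,P(A) \;+\; P(B \mid \neg A)\,P(\neg A).
$$

The expansion of P(B) into the two ways a positive result can arise is the step the worked example below relies on.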

He gives the example of a test. Let's say it's a test to determine whether or not you possess the COVID-19 virus, and let's say it's been established that the test is 90% accurate in its control group. So we have a conditional probability here: what is the probability that you have the disease if you test positive? Now, most people, thinking “fractionally” if you like, would say, “Well look, you've told me that the test is 90% accurate. If I test positive, there's a probability of 0.9, or a 90% chance, that I have the disease.” And that would be absolutely wrong, because Bayes’ Rule—the equation—says it's not enough to know the accuracy of the test. You have to know the incidence of the disease in the population, what is sometimes called the prior. And David says, let's imagine that we know that only about 1% of the population is infected.

So now we have this test that we've established, in our control group, is 90% accurate. You plug it into the equation and it tells you the odds of being sick if you test positive. And it's about 0.1, or a 10% chance. So not 90% but 10%, and that is because the equation says that the incidence in the population determines the true population-wide false positive rate. So it's really important, because people are going to be taking tests, and those tests will be accurate. But if that prior, that background statistic—in other words, how many people are truly infected in the population—is inaccurate, then the reliability of the test could really titrate between almost zero and 100%. And so that is the really important first point David makes.
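
A minimal sketch of that calculation, assuming for illustration that "90% accurate" means both a 90% true-positive rate (sensitivity) and a 90% true-negative rate (specificity)—an interpretation not spelled out in the conversation:

```python
def posterior_infected(prior, sensitivity=0.9, specificity=0.9):
    """P(infected | positive test) via Bayes' Rule."""
    true_positives = sensitivity * prior                # infected and testing positive
    false_positives = (1 - specificity) * (1 - prior)   # healthy but testing positive
    return true_positives / (true_positives + false_positives)

# A 90%-accurate test with a 1% prior infection rate:
print(posterior_infected(prior=0.01))   # ≈ 0.083, the "about 10%" figure above
```

With a 50% prior, the same test would give a posterior of exactly 90%, which is why the prevalence matters so much.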

Now the second point—the conundrum, if you like—is how do you arrive at an accurate measure of what that prior is? What is the rate of infection in the population at large? Well, to know that you have to use Bayes’ Theorem again: you have to test, but we've already established that to interpret a test you have to have a prior. And this gives rise to this whole field of what's called Bayesian updating, or Bayesian particle filters, which is that you have to do it over and over again. So you plug in a prior, which is approximate, you get what's called the posterior, and then you plug that back in as the prior, and you do this over and over again and you hope you converge on the accurate statistics. And so the big philosophical insight here is that your estimates of your chances of being infected are only as good as data that you don't really possess. Even if the mathematics is absolutely watertight and correct, it's garbage in, garbage out, and it's extremely difficult to make those estimates.
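
A toy illustration of that updating loop, again assuming a 90%-sensitive, 90%-specific test and entirely made-up batches of test results; it estimates prevalence on a grid of candidate values, feeding each posterior back in as the next prior:

```python
import math

def binom_pmf(k, n, p):
    """Probability of k positive results in n tests, each positive with probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def update_prevalence(prior, grid, k, n, sens=0.9, spec=0.9):
    """One round of Bayesian updating of beliefs about prevalence."""
    posterior = []
    for theta, weight in zip(grid, prior):
        p_positive = sens * theta + (1 - spec) * (1 - theta)  # chance a random test comes back positive
        posterior.append(weight * binom_pmf(k, n, p_positive))
    total = sum(posterior)
    return [w / total for w in posterior]

grid = [i / 200 for i in range(201)]        # candidate prevalence values from 0 to 1
belief = [1 / len(grid)] * len(grid)        # start from a flat (uninformative) prior

# Hypothetical batches of (positives, tests); yesterday's posterior becomes today's prior.
for k, n in [(12, 100), (9, 100), (15, 100)]:
    belief = update_prevalence(belief, grid, k, n)

mean_prevalence = sum(theta * w for theta, w in zip(grid, belief))
print(f"posterior mean prevalence ≈ {mean_prevalence:.3f}")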

 

Michael

We’re seeing this in practice now with the Stanford study that came out this week, which dramatically recalculated the death rate. I feel like this links in two distinct ways with Mirta Galesic and Henrik Olsson’s piece and their work on social science—how all of us suffer local bias by trying to infer global truths from the local sampling of our intimate social networks. Right? Because that's exactly what we're doing: we're assuming a sort of marginal probability rather than a conditional probability, the condition being that we're only sampling, like, the five people who all want to see the same movie, and so on. So that's a piece of it, which is having a little bit of forgiveness for the true and terrible complexity of making an accurate estimate on this issue.
 

David

I think, again, your mathematical formulae can be absolutely right. In other words, the logic can be correct, but if you feed that formula the wrong data, it will spit out the wrong answer. Of course it's obvious, but it's not always obvious in debate, because people will say, “your theory is wrong” or “the formula is wrong.” No, it's not. What is wrong is that we don't have the empirical grounds on which to conclude, and I think that's something that's worth bearing in mind.

 

Michael

It's sort of like a point that Caroline Buckee brought up in a recent episode when she was talking about the difference between weather forecasting and epidemiological modeling, where she said, if you predict that it's going to rain tomorrow and everyone goes out with an umbrella, it doesn't change the fact that it's going to rain. But if you say there's going to be a pandemic and everyone isolates and then there isn't a pandemic, people have a habit of blaming the model or blaming the scientist. And so I think that this is a tricky condition. This is the other point with Mirta and Henrik's piece: the opportunity to reestablish trust in science and scientific communication right now is sort of contingent on people understanding this. I saw a kind of tragic cartoon making the point that if we manage to do this well—if we manage to contain this and stave off the worst, if we flatten the curve—then people are going to complain that the economic precautions we instituted weren't necessary. And so we face the possibility of a backlash against the very people who provided the models that were used to protect us.

 

David

You know, that's always been the problem with successful medicine—it's a victim of its own success. If a vaccination policy is very successful, then you see very few infected, and that leads some people to claim that there's no longer any risk. And that's an almost insurmountable problem given the limits of human judgment. The best you can do is show people, in some sense in simulation, what the world would be like minus the vaccine. But that’s a hard lesson to learn.

 

Michael

So there's another quantitative piece, Sid Redner’s, and he's looking at this in a completely different way. I took this to be more like the back-of-the-envelope math we were talking about in the first episode—the importance of not using models that are more complex than the data that we have. This is a really good instance where you can ballpark an estimate of the economic impact of COVID-19. He looks at Italy specifically. Why don't you walk us through this?
 

David

Yeah, this is very nice. It’s very simple arithmetic, starting with a quote from Keynes from 1938 where Keynes asks, in effect, what is a model doing? What a model is trying to do is segregate what you know—what he calls the semi-permanent—from what you don't. And it's a very important point about the philosophy of models. In the first episodes, as you just pointed out, we talked about overfitting noise and the virtues of simplicity in the face of uncertainty. This is another point, which is that a good model minimizes novelty and restricts it to the hypothesis at hand. Meaning, if I write down a mathematical model for an epidemic, you don't want to check my math, you don't want to determine whether the calculus is right or wrong. You want to determine whether my premise is right or wrong.

So in any effort to theorize, you're always trying to take a framework where almost everything is known and can be verified, in order to reduce the ambiguity around the one insight you're making, which is hard to verify. And this is a very important philosophical point about model building, and not everyone gets it. A lot of people try to do everything new, and of course in the end you're not sure what you're really evaluating: the mathematics, the logic, or the hypothesis. So what Sid does is he says, “I'm going to use the simplest thing in the world—arithmetic—so there's no question there. And I'm going to take data which we more or less know to be true.” And he says, “So here's the deal. What do we know about Italy prior to COVID? We know that life expectancy was on the order of 80 years.”

And we know that in a population of about 60 million people, just over 2,000 people die each day from a variety of causes. So that's the Keynesian background, right? That's the thing we know. If you look at COVID in late March, we can say right away that it's responsible for roughly a 35% increase in deaths per day. So this gets to the remark about rates we were making in relation to Melanie's piece on analogies. That's very significant. So now you can try to say, what's the cost of that, and what do we know? Well, we know how much it costs to be in an ICU, but we don't know the Italian ICU data—we know the American data. In the U.S., about 17% of daily fatalities pass through ICUs. And so if we go back to the Italian population, given the numbers we have in hand, we get about 350 expected ICU admissions per day.

But what do we see under COVID? About 800 per day—more than double the baseline. So now we can ask, okay, what's the cost of an ICU bed? It's about $1,600 per day. Well, under those assumptions, we get about 1% of Italy's GDP being accounted for by the medical costs associated with treating COVID-infected individuals. 1% of GDP. And he makes the comparison to something like the National Science Foundation, where he says, if you were to think about that in American terms, that 1% of GDP per month would pay the total annual budget of the NSF twice every month. So these are very big numbers, and it's simple linear arithmetic that he's using. And here's why I think it's important—it's something he doesn't say explicitly, but I think it's a natural implication of his logic. If you look at the airline industry, it has lost on the order of tens of billions of dollars, and the same goes for, say, the oil industry. The national economy is facing a loss of trillions of dollars. So these are very large numbers. How much would a virus surveillance operation have cost preemptively? Probably millions. So there's this interesting fact that if industry—petrochemical, oil, airline, travel, service—had invested some sum of money in surveillance programs, they could have saved billions of dollars. And I think that’s the radical implication of these kinds of numbers: that some preemptive expenditure with these kinds of risks in mind could have saved national economies trillions of dollars.
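
A sketch of that back-of-the-envelope arithmetic, using only the rough figures quoted in the conversation (not the exact numbers from Redner's essay) and leaving the final GDP accounting, which depends on further cost assumptions, to the essay itself:

```python
# Back-of-the-envelope estimate in the spirit of the discussion above.
# All inputs are approximate figures quoted in the conversation.
population = 60_000_000            # Italy, roughly
life_expectancy_days = 80 * 365    # life expectancy on the order of 80 years

baseline_deaths_per_day = population / life_expectancy_days
print(f"baseline deaths per day ≈ {baseline_deaths_per_day:,.0f}")   # just over 2,000

covid_deaths_per_day = 700         # late-March order of magnitude
excess = covid_deaths_per_day / baseline_deaths_per_day
print(f"excess mortality ≈ {excess:.1%}")                            # roughly the 35% quoted above

icu_fraction = 0.17                # US share of daily fatalities passing through an ICU
expected_icu_per_day = icu_fraction * baseline_deaths_per_day        # ≈ 350
observed_icu_per_day = 800         # reported under COVID
print(f"ICU load vs. baseline ≈ {observed_icu_per_day / expected_icu_per_day:.1f}x")  # more than double
```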
 

Michael

Now, of course, bending back to our last episode on weathering a mass extinction, this is exactly what we were talking about with bet hedging: that in times of relative stability, this kind of thing should be obvious but is not, because economic competition—a sort of blind evolutionary process—doesn't tend to regard this as a valuable expenditure, because it's not thinking ahead on the kind of timescale that would integrate these costs.

 

David

Yes, that's absolutely true. But you will note, we've been hearing very divergent opinions on this in the media. Most of us are very aware that in the last decade, and we've mentioned this before, we've had comparable viral outbreaks, whether it’s Ebola or SARS or MERS, and we were very fortunate that those viruses had lower transmissibility. If they had had higher transmissibility, it could have been worse than this situation. So in just a decade we've seen multiple instances, which should lead us to understand that this is not a black swan event. And people have been citing Bill Gates, who pointed out, as many had, that something like this was inevitable. So in the same way that actuaries consulting with an insurance company calculate plausible risks, this is a totally plausible risk. What was unanticipated, I think, was the shutting down of aggregate demand. But now, with this experience in hand, I think there's no excuse for industry and government not to take preventative measures and worry a little less about expenditures in the millions that might offset losses in the trillions.

 

Michael

This is a possible link between… When we were talking about natural selection and epigenetics and the two sorts of internal and external fitness landscapes… This is the kind of fine-graining we see when we look at the epigenetic impact of something like a famine on up to something like 14 generations of offspring; environmental information is encoded so that the descendants are, in effect, remembering the condition of food scarcity. This is an instance where, luckily, our society will probably be prepared for another pandemic for as long as the various memory systems—the various ways that our civilization encodes the trauma of an event like this—actually last.

 

David

Yeah. This is something that John Geanakoplos has been saying in relation to the rapidity with which the government responded with stimulus to COVID: a large part of that comes from the very recent memory of the 2008–2009 financial crisis. Most people in positions of authority experienced that event, and it was only a decade ago. If it had been several decades ago, there might have been enormous disagreement as to whether or not stimulus was a good idea, which would of course have been disastrous. So I think you're absolutely right, and we have to find mechanisms to allow for the persistence of memory. And that gets to exactly the point you were making earlier about vaccines, right? When a vaccine is too successful, people don't want to take it because they don't see the disease. We have to create in culture some means of maintaining an awareness of relatively infrequent events.
 

Michael

So this links us into Stephanie Forrest’s piece, because something that I've been seeing a lot of people talk about lately, with respect to disease surveillance and the necessity of contact tracing and so on, is the concern—still very much alive in most of us—about the way that privacy and surveillance were handled after 9/11. I think if 9/11 had not happened, then we in the United States would probably, as a whole, be much more willing to indulge in what we now consider rather draconian surveillance measures; but we're aware of how difficult these opportunities are to roll back. And so she's looking at how we balance the need for surveillance with the need for privacy, and she's got some interesting insights from cybersecurity and immunology. You want to talk about this?
 

David

Yeah. So I think that's really interesting. But you know, that was novel—we didn't realize the extent to which that surveillance was possible. We now do, and in consequence have been immunized, if you like, against that exploitation. And what Stephanie is talking about is issues related to anonymity and privacy while we still make use of these technologies to good effect. Stephanie, for those who don't know, has spent many years looking at parallels between biological evolution and immunology on the one hand and technological evolution on the other, looking at computer viruses in particular. One of the things that was discovered several decades ago about the immune system, which might surprise everyone, is the following: how does the immune system mount a successful response to a pathogen it has never seen before? It does this by generating random variation in its immune response—in its antibodies and T cell receptors in particular—through processes such as receptor gene recombination and somatic hypermutation.

Now, if it generates random responses, what stops it from attacking itself? That's called immunopathology, and of course some of those random responses do attack self. So it has developed a method of negative selection, or clonal deletion, whereby any element of the immune system that attacks self is tagged and deleted, and what you're left with is the negative complement of the self—which is everything that might infect you, right? In other words, you generate these random responses, you try to cover as much of the space as possible, and you take out anything that attacks you. In that space, what is left is hopefully an adaptive response to a novel infection. And why is that important? Well, it's important partly because you're anonymizing yourself. You're not signaling what you are, because the self is not present in that set—only the infection. And this turns out to be really important for dealing with computer viruses, because you don't know in advance what bit of code the computer virus is going to use or which part of your operating system it's going to attack.
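
A minimal sketch of how that negative-selection idea looks in code, loosely in the spirit of the classic algorithm (random detectors over binary strings with a contiguous-match rule); the strings and parameters here are made up for illustration, not drawn from Stephanie's actual systems:

```python
import random

L, R = 8, 4   # string length and contiguous-match threshold

def matches(a, b, r=R):
    """A detector matches a string if they agree on at least r consecutive positions."""
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

def random_string(length=L):
    return "".join(random.choice("01") for _ in range(length))

def censor(self_set, n_candidates=2000):
    """Negative selection: generate random detectors, then delete any that match 'self'."""
    return [d for d in (random_string() for _ in range(n_candidates))
            if not any(matches(d, s) for s in self_set)]

def is_nonself(sample, detectors):
    """Anything a surviving detector matches cannot be self, by construction."""
    return any(matches(d, sample) for d in detectors)

self_set = {random_string() for _ in range(10)}   # the 'normal' repertoire we must never attack
detectors = censor(self_set)

print(any(is_nonself(s, detectors) for s in self_set))   # False: self is never flagged
print(is_nonself(random_string(), detectors))            # usually True for a novel string
```

Note the anonymizing property David describes: the surviving detectors never encode the self set directly; what gets stored is only its complement.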

So exactly the same principle can be applied. You generate a whole random set of possible interventions and you eliminate any that attack your own code, leaving only the responses which attack foreign code. And so this is a very interesting idea that in some sense anonymizes the self—removes the self from the picture in order to remove non-self. So that really is useful for dealing with things like malware or spam. But what about this issue of contact tracing? That requires a very different idea, and this is based on the notion of a cryptographic set intersection, and it works like this. Let's imagine you and me, Michael—we get together in a brighter world where we can sit together at a bar or over a coffee, and we want to compare our address books. We want to know which friends we have in common; we're looking for what's called the intersection of the sets. What we want to do is come up with a method whereby we don't share everything with each other. We only share the friends we have in common, so everything else is encrypted. The only thing that's not encrypted is the intersection, what we share.

And that is exactly the idea that Stephanie is calling for in relation to contact tracing. I want to know all of the people with whom an infected person has come into contact. That means I'm going to have to take a huge survey of the entire population, but I'm only interested in the intersection—the intersection of that person's contacts with the population, not everybody. And that’s critical, because as Apple and Google and Facebook collect data, they're going to have to collect it all to calculate the intersection. But what Stephanie is saying is: everything that isn't the intersection gets encrypted, so it can't be used in the future for purposes that we never approved of.
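
A toy illustration of the set-intersection idea. This is not a real private-set-intersection protocol: the names, the shared key, and the HMAC-based blinding are stand-ins for the proper cryptography Stephanie has in mind, which avoids a shared key and resists guessing attacks. It just makes concrete the pattern of comparing fingerprints and learning only the overlap:

```python
import hashlib
import hmac
import secrets

def blind(contacts, key):
    """Replace each contact with a keyed fingerprint so raw identifiers are never exchanged."""
    return {hmac.new(key, c.encode(), hashlib.sha256).hexdigest(): c for c in contacts}

# A shared secret the two parties agree on out of band (a stand-in for real key exchange).
key = secrets.token_bytes(32)

michael_contacts = {"ana", "bob", "carla", "deepak"}   # hypothetical address books
david_contacts   = {"bob", "deepak", "erin"}

michael_blinded = blind(michael_contacts, key)
david_blinded   = blind(david_contacts, key)

# Each side only ever sees the other's fingerprints; equal fingerprints reveal
# exactly the overlap and nothing else about the rest of either address book.
shared = michael_blinded.keys() & david_blinded.keys()
print({michael_blinded[tag] for tag in shared})   # {'bob', 'deepak'}
```

In a contact-tracing setting the same pattern would compute the intersection of an infected person's contacts with the population, keeping everything outside that intersection encrypted.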

 

Michael

I feel like we've moved into the domain where we're talking about the balance between a sort of central overview of the situation and a distributed one. So this dovetails into the last two pieces, from Carrie Cowan and Anthony Eagan, on the way that the system we've inherited as we move into this crisis has structured governance, and has structured education, as a way of addressing the conditions of a world that, for all intents and purposes, in many ways seems no longer to exist.
 

David

Right. So now we're moving back into this question of authority versus citizen. And the first experience most of us have in our lives of both authority and very strong anti-authoritarian feelings is education. Right? Speaking for myself. And Carrie points out that we're now experiencing one of the greatest large-scale experiments in the history of education: in particular, the experiment around digital distributed education and the use of digital classrooms and massively open online courses (or MOOCs)—because we have to, because we're isolated. These are fairly new technologies at scale. The MOOC really started getting popular in the 2010s, and it led to the whole notion of the flipped classroom—the classroom where essentially you do your prep in advance, you learn the facts, and you use the social context to ask questions…as opposed to just listening to someone lecture, which you can do online anyway. And I think the first point Carrie makes is that this moment in our history is accelerating the adoption of platforms that we had already grown weary of in the last ten years, but are now being forced to adopt because we have no choice. And she makes this nice analogy to the flu of 1918, which was a kind of pivotal moment not only in society and in demographics but in education. Prior to 1918, only about 10% of the population had graduated high school, but by the 1920s it was more than double that, and education became an aspiration that many people in society had.

She makes another point which I think is important, which is that the economic burden of education is incredibly high. From 2008 to 2018, tuition increased by just under 40% and the net cost of engaging in education by over 20%. These are huge economic burdens, and they're greater than the burden that many of us face from housing, for example. But there's also another burden, which is not economic but intellectual, which is that education has become so specialized and narrow. And I think if the circumstance of this pandemic is telling us anything, it’s that we now have to understand the entanglements in these complex systems, and the educational system simply has to catch up with the reality of our circumstances. We need to emphasize connections, commonalities—what Carrie calls the mutualisms across the disciplines. Because if we don't, we're not going to really understand how to deal with the current circumstance.

So now, here's something really interesting that she says: when in mid-March we all realized that our lives had been upturned and that we were going to be living these more solitary existences, there was a huge increase in the demand for MOOC platforms like Coursera and edX. That is, online educational resources. But their increased popularity paled in comparison to another trend. And that was the popularity of the game Minecraft. So the game Minecraft, which is an online construction game, is extraordinarily popular. I mean it's the highest selling video game of all time. About 100 million people play it each month. And the point that she makes is that Minecraft is this deeply experiential, curiosity-driven, super versatile environment to apply your mind to a whole multitude of projects that you and others have invented. So it's in some sense the ideal classroom.

This is a world that I've been very involved in for quite a while. My colleagues in Madison when I was there, Kurt Squire and Constance Steinkuehler, investigated this world at length: What do people learn when they play World of Warcraft that they don't learn in a classroom? How are they different? How can we transfer the insights from those games to the workplace? And as you know, at the Interplanetary Festival we've been talking to people like Tarn Adams, who wrote Dwarf Fortress, and Jonathan Blow, who wrote The Witness, about precisely those kinds of ideas. I very much agree with Carrie's question, which is, “If there are these very powerful, community-based, very free, playful environments in which people can apply their minds, how can they help us rethink education, in particular in this distributed sense?” So instead of recreating a classroom, which is what a MOOC basically does, what if you were to completely rethink it? And I agree that there's a huge amount to be learned from the developers of those games.

 

Michael

Yeah. So to link this back again to content that we've discussed in earlier episodes, this is very much linked to the discussion that we had about the way that a virus mutates so rapidly that it sort of occupies a cloud rather than a point. We're seeing society move into what you called last week “a microbial mode,” where the educational process, the curricula, and the areas of specialization are much more bespoke and individually tailored to the local. She mentions that the educational system we have now by and large emerged out of an early 20th-century industrial context, which was very standardized. It was in many ways like the adoption of modern time zones in order to coordinate train travel: we were trying to make something universally consistent. But in the last several decades, the surface area of society and of the information that's relevant has grown so much… You know Adam Curtis's documentary, The Century of the Self, where he talks about lifestyle marketing and companies like Nike making it so you can design your own shoes on their website rather than just getting whatever they happen to sell you in the store.

This is also about an adaptation to the small and fast…to link this to Miguel Fuentes talking about where we see the signal of an impending social crisis in the breakdown of a consensus narrative and in polarization. But it's not just polarization. It seems like people are starting to split up and look for local solutions to the problem as it emerges locally and regionally.

 

David

I understand where you're going. I feel as if when I look at something like Minecraft and insights from gaming for education, I have a sort of a list in some sense of what I think it's doing. One of them is freedom, which is moving away from the authoritarian top down approach to education where there are received ideas that have to be imparted at very high fidelity and that's sort of the drill, and that works for a very tiny fraction of people but not for the majority.

And that takes us to the second point: diversity. We have a far better understanding now of the diversity of peoples, in particular cognitive diversity, than we had, let's say, 20 years ago. We know that not everyone learns the same way, and we have to provide mechanisms for, as you say, different timescales of learning and completely different ways of learning.

The third is collaboration. When I went to school, collaboration was not even a little bit on the table in the classroom. It was outside when we played, but not inside. It was you at your desk listening to someone bore you to death. Things have gotten a lot better than when I went to school, but the idea that the classroom itself could be deeply collaborative—which happens in primary school, and much less at university, though it does happen—is something that we could massively scale up.

And the final one is construction: that you are being asked to contribute your world, your vision of the world, for others to scrutinize and inhabit. That's exactly what you do with a theory. If I told you my theory, Michael, of thermodynamic evolution, you'd say, “That’s an interesting place to spend some time,” or not, and you would critique it and give me feedback.

And Minecraft does all of this. It has freedom, it respects diversity, it enables collaboration, and it allows you to construct a world. And I think in a way that's what the best science is. We have been doing that historically, but I don't think the educational system for various reasons, many of them I think understandable, has really had the tools to introduce those principles in a very effective way. And I think the game environment is one way in which to think about that in a very principled way.
 

Michael

Yeah. So to call on earlier work that you've done, and the conversation we had last week regarding the error threshold and the emergence of a syntax in response to this crisis… I love that she says, “Real world problems and the jobs that exist to solve them demand more than a knowledge of political science or physics or art history. They require a new style of thought that emphasizes connection and commonalities, mutualism across disciplines. The world is not getting simpler.” I take this as a sort of hopeful note—that what we're seeing here is that infrastructures such as Minecraft are going to enable a kind of cognitive complex organism, like a multi-cellular educational process.
 

David

Yeah. I want to add, I do feel very strongly that great classrooms, great research institutes and universities, have been doing this. In other words, these principles that we just discussed are not new. The problem has been that they've been restricted to a rather privileged few. They haven't been easy to scale up so that everyone can have access to that kind of free environment. And I think Minecraft in the domain of entertainment, very creative entertainment, has given us a keyhole to look through and see the possibility of a world where everyone, or many people, can have access to these principles. But there's a huge amount of work to be done. It's not as if we're just going to do science with Minecraft. Far from it. But we do need to learn from the joyousness and the freedom and the communitarian impulse that you see in those games to transform our educational system towards, as Carrie says, a more complex perspective.

 

Michael

Okay. So this takes us really neatly into Anthony Eagan's piece, because it seems that what we're talking about here—the modularity of the educational process through universal standards—brings us to the balance of power that he's talking about with federalism, and the way that we need to both lean on, but also question, the assumed priors of the United States Constitution and its articulation of this balance between bottom-up and top-down governance.
 

David

Right. So I think this piece touches on many of the previous pieces. In some sense the way I think about it, because I've worked on constitutions, is in terms of the rule systems that in some sense run societies—I sometimes call it the law OS, the legal operating system… We have been watching the tensions arising recently between the president and state governors in relation to opening up states to allow economic activities to take place, and how we trade off minimizing the risks of infection against maximizing productivity. And this tension was the tension at the very founding of this country, of course, in relation to national sovereignty and the monarchy, in Europe and Britain in particular.

Tony introduces this dichotomy that most people in this country will be familiar with: between, on the one hand, the Federalists, who advocated for a rather top-down approach to robust governance, and the Anti-Federalists, who advocated for a more distributed, local, state-based sovereignty. Many of the principles that found their way into the Constitution in 1789 were articulated and explored in The Federalist Papers of 1788, which were written by Hamilton, Madison, and Jay under their shared pseudonym, Publius. And they really tried to explain what this thing is—how do you compromise between localized and highly centralized needs? Tony cites the 39th letter, where it's written that the proposed government cannot be deemed a national one, since its jurisdiction extends to certain enumerated objects only. So it has limited power and allows states to make certain residuary decisions of their own.

The deep point here is that that's all very well under normal circumstances. But what happens in a crisis? So there is a temporal dimension to the Constitution and the rules that it encodes in relation to individual liberties. Under crisis, people are willing to forgo freedoms—in order to benefit, for example, from centralized information—but in periods of peace and greater security, they want more autonomy. And there really isn't very good thinking about a dynamical Constitution that would execute different rule systems according to the circumstances of the society.

I want to bring this to complexity science, because there's a great deal that can be learned from the way that, for example, organisms are regulated. You could think, if you like, of the genome as being a sort of constitution of a body, with the various cell types having greater or lesser freedoms in terms of pursuing their own metabolic or physiological functions. And in 2010, Mark Gerstein's group studied the structure of regulatory networks in five different species—bacteria, worms, flies, mice, and others. He found two very different modes of phenotypic governance: what he referred to as a more democratic structure, which is in some sense this sort of Anti-Federalist position, where everything connects more or less freely to everything else with very little evidence of hierarchy, and an autocratic structure, where the connections are much sparser and much more top-down.

The most important observation that the Gerstein lab made was that the more complex the phenotype—the more parts it had, the more cell types, the larger it was—the more democratic it was in its regulatory governance.

This is really important, and it gets to another point now; I'll try to wrap them up together. In 2017 I wrote a paper with Dan Rockmore, Tom Ginsburg, Chen Fang, and Nick Foti on the mathematical analysis of the history of constitutions. We looked at lots of different ideas, but we thought of the constitution, if you like, as the rule system of a society—the playbook of a society—and how it changed through time. I want to make one point that relates to the Gerstein result and to Tony's insights, and that is that the U.S. Constitution is quite remarkable. In addition to being the first written national constitution, it's also one of the most parsimonious. It's the simplest model—the back-of-the-envelope physics model of how a society should run. And that gives this country a huge amount of freedom in interpreting the Constitution. If you look at the Indian Constitution, it's huge. Just to put this in perspective, the American Constitution is about 8,000 words long; the Indian is about 150,000 words long; the next longest is Nigeria's, at about 60,000 words; and Monaco's is the shortest. It's very intriguing that the shorter the constitution, in some sense, the freer you are in the interpretation of its principles, and the longer it is, the more it specifies how you should act in any given context. So there's this nice complexity, regulatory spin on this question of centralized and decentralized governance that's quite universal—it spans biological phenomena as well as cultural phenomena.

I think we can learn a great deal from that. I would actually argue that there might even be a case to be made for a dynamical constitution whose essential size varies according to the challenges that a society faces.

 

Michael

Yeah. It sounds like at least in the case of the United States, that we have a good deal of epigenetic flexibility with our constitutional code.

 

David

Yes. It’s sort of like saying—to use another metaphor, if you like—it's the difference between an abstract painting, say a Rothko, and a very realist painting, say one from the Hudson River School. One of them doesn't give you much space to interpret. It's beautiful, but you know precisely what it's about: you know that's a tree and that's a human figure and that's a deer and so on. But when you go to an abstract painting, you play a bigger role in the interpretation. It's in some sense engaging your brain in a more creative way. Not to say that realist art isn't creative.

There's something very deep there. I think these more parsimonious models of reality—whether they're mathematical models, abstract or realist paintings, or legal documents—titrate between specificity and generality, constraint and freedom. And I don't think society has learned enough from the way that regulation works in the biological world—which, of course, wasn't understood well enough until very recently to be applied to the legal world and the sociopolitical world.
 

Michael

To think of this in terms of society as a type of organism reminds me of what we were discussing last week in terms of evolvability, and how the more narrowly specified the trophic niche an organism occupies, the more difficulty it's going to have navigating a period of extraordinary turbulence like this one. In some ways, in the United States, our Constitution affords us more of an insectivore's, or a raccoon's, approach.

 

David

I don't know—I think, as you brought up, the right analogy is microbial. It's smaller and it's more of a generalist, and that's why it's enduringly influential.

 

Michael

So to bring us back around to the notion of authority and assumed priors, the question here really is, “At what timescale is the problem that we're facing operating?” In what ways is this a question for the aggregation of signals from throughout the social body and the intentional, conscious decision to implement a coordinated central plan? And in what ways is it a matter of reflex? You don't want to have to sit there and think about lifting your hand off a hot stove. Obviously this is a problem that demands some balance of both. And where is that balance?

 

David

Yeah, I think it gets to our research on the nature of complex time, if you like. The technologies that have allowed knowledge to be transmitted have been by and large static objects: books, textual documents. It's only recently that the possibility of dynamical objects has come into existence—like Minecraft, for example. There's no doubt in my mind that in the future, constitution-like objects will be algorithmic and dynamical, because we can now do that. We couldn't in the past, and we don't yet have the design principles, the engineering principles, to build such things. But we shouldn't assume that the constraints of history dictate the form of these objects into the future.

 

Michael

Certainly. Well David, this is such a deep and rich topic and I'm sure we'll have more time to discuss it in connection to next week's pieces…which are what by the way?

 

David

Oh yeah. So this is another really interesting group—one that I imagine people are thinking about—which is the neurological implications of immobility, because now a lot of people are sedentary and that's not a good idea. What does neuroscience have to tell us about being sedentary for prolonged periods of time? And, going all the way back, issues in relation to what ancient societies did in times of plague and famine—we'll be looking at that. We'll be looking at further mathematical and technical paradoxes associated with testing: not so much the Bayesian question, but the challenges of not doing random testing and how that can lead you astray. And issues of radical uncertainty and the limitations of some of these concepts, like R0, and when they fail.

 

Michael

Excellent. A reminder to everyone listening that Interplanetary Fest is putting out weekly course materials surrounding this information every Wednesday. You can go to interplanetaryfest.org and sign up for that, and then you get a really lovely curated, summarized briefing of these essays, and you can work through that with your family, and take quizzes. It's a great way to step this down and make this part of your own bespoke, modular education in a time of crisis.