If the economy is better understood as an evolving system, an out-of-equilibrium ecology composed of agents that adapt to one another’s strategies, how does this change the way we think about our future? By drawing new analogies between technology and life, and studying how tools evolve by building on and recombining what has come before, what does this tell us about economics as a sub-process of our self-organizing biosphere? Over the last forty years, previously siloed scientific disciplines have come together with new data-driven methods to trace the outlines of a unifying economic theory, and allow us to design new human systems that anticipate the planet-wide disruptions of our rapidly accelerating age. New stories need to be articulated, ones that start earlier than human history, and in which societies work better when engineered in service to the laws of physics and biology they ultimately follow…
This week’s guest is W. Brian Arthur, External Professor at the Santa Fe Institute, Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford, and Visiting Researcher at Xerox PARC. In this second part of our two-episode conversation, we discuss technology as seen through the lens of evolutionary biology, and how he foresees the future of the economy as our labor market and financial systems are increasingly devoured by artificial intelligence.
If you enjoy this podcast, please help us reach a wider audience by leaving a review at Apple Podcasts, or by sharing the show on social media. Thank you for listening!
Visit our website for more information or to support our science and communication efforts.
Join our Facebook discussion group to meet like minds and talk about each episode.
Podcast Theme Music by Mitch Mignano.
Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn
“Where is technology taking the economy?” by W. Brian Arthur, McKinsey Quarterly, 2017.
The Nature of Technology: What It Is and How It Evolves by W. Brian Arthur.
“Punctuated equilibria: the tempo and mode of evolution reconsidered” by Stephen Jay Gould & Niles Eldredge.
“A natural bias for simplicity” by Mark Buchanan, Nature Physics.
“Economic Possibilities for our Grandchildren” by John Maynard Keynes.
Brian: The strategy was simple. I didn't start with the theory of technology. I started with vague ideas, maybe that combination was important, and I read and read and read, a bit like Darwin running around the Galapagos collecting beetles or looking at iguanas. And I kept reading until I started to see common patterns, and I began to see that every technology had come into being for some human purpose and as a combination of what had gone before and what was used before, and then it joined the Lego set and things could go from there.
The interesting thing is that means the economy is open-ended. There's no finish to technology. There's more and more of it and I do think it's moving rather faster because we have more means devoted to that purpose. Doesn't mean people are thinking faster, and doesn't just mean there's more to combine with. It means there's more resources going into that. Not just DARPA or government, not just Silicon Valley, but all over the world. Huge amount of work going into it.
Michael: It seems like a strong parallel in the way that you felt you needed to expand upon and elaborate a sort of Darwinian thing with the way that this was actually going on at the same time in evolutionary biology, in the exposition of the importance of horizontal gene transfer, plasmid transfer, sexual recombination, endosymbiosis, and that the more we elaborate a post-Darwinian sort of extended evolutionary synthesis, the closer the dynamics of biological evolution get to the dynamics that you've elaborated for technological evolution.
Brian: Yes, that's correct. I'm not a biological expert here. I'd put it maybe in simpler terms. What I would say is that for a great deal of evolution, Darwin's mechanism does a very good job: variation and selection, and then eventually that speciates, so we get new species. Fair enough. But every so often in biological evolution, combination plays a role. It does with the Archaea, it does at a very fundamental level. There's horizontal gene transfer. I understand it does to some degree in Bacteria, and it does with the rise of the eukaryotic cell. That's combination, somehow: something invaded a simple cell. And I'm glad to say that I met and knew Lynn Margulis, and I knew John Maynard Smith, and I'm good friends with Eörs Szathmáry. These are all... I'm name dropping, but it's not so much for the sake of name dropping. I find it very important in science to know the people who've had the ideas, for a very simple reason: you realize they're human beings.
I met John Maynard Smith in the Arctic, way in the North of Sweden. I realized that he had been an aeronautical engineer. These are things that help in the... Somehow they help in the realization that they're just normal human beings and maybe... And you're a normal human being and maybe there's hope for you to do something.
Brian: I'll tell you a story about John Maynard Smith if I may.
Michael: Please, yeah.
Brian: Use it or not. But it's kind of fun. So I think the year is 1995. We're sitting in a weather station, a couple of hundred miles north of the Arctic Circle, a little place called Abisko. And it's a conference on evolution. And I knew John Maynard Smith was part of the small group of 20 or so of us. He hadn't appeared, and then the door opens and in walks this older man with longish gray hair, and I could tell he was English because he wore National Health glasses. And I made an awful lot of inferences, Sherlock Holmes style. Probably he'd been very left wing. Probably he went to Oxford, probably a socialist, because it's a type, and I recognized him immediately. I liked him. Lovely person. He started to lecture on the major transitions of biology and then, being quite English, he says, "Now we come to rather a puzzle." He says, "We have to ask, why is there sex?”
And so I'm sitting there, this is not something I know much about in biology. "We have to ask why is there sex? After all," he says, "Sex might be good for the species as a whole, but not necessarily good for the individual." I'm from Ireland. I stuck my hand up and I said, "Isn't that very much an English view of sex?" He looked at me for the first time and he said, "Well," he said, "I can tell you're Irish. And at least in England we do have sex."
Michael: Oh, I mean that gets at the whole issue of at what level are you coarse-graining the model, right? Because he was growing up in a time when the notion of multi-scale selection was very hotly contested. And so now, to view an evolutionary process as something that's occurring at multiple scales simultaneously, this question of the origin of sex is very, very similar to the question that David Krakauer and Martin Nowak were writing about that brought me into this study of the evolution of intelligence more generally: Where does syntax come from? The idea that the sentence is an adaptation to the complexity of the environment reaching a point where it's no longer effectively described by single-word utterances. That it's a way to route around an error catastrophe.
And so, this notion of, as you talk about in The Nature of Technology, that each technology creates new opportunity niches. And so this is an example where it's like there's a top down effect generated by a domain or a technology on all of its sub-components. And sex and language and the recombinant strategy…
Brian: Multi-cellular organisms.
Michael: Yeah, that all of these things are seemingly a response to the increasing complexity of the environment that is endogenously generated by this expansion. You mentioned Stu Kauffman talking about the adjacent possible, that it's this “non-ergodic above the level of atoms,” in his language. This is the question of whether it's possible, as it seems from where I'm sitting anyway, that the reason the evolution of technology looks different to us is simply because it is the dominant evolutionary strategy at work on the planet now, now that we have accelerated this process as much as we have; that we went through these periods where, say, the regulatory structure of a complex organism inhibits that recombinant innovation from within the organism, but then you get sex and then you get social learning and so on.
Brian: What was the question again, then?
Michael: I don't know. That's just sort of a fanboy, like, "What do you think of this?"
Brian: But I think overall, and pulling back a little bit from the particulars here, I realized... Let me say something that's not particularly modest here. After all, this is the Santa Fe Institute. I realized that there might be, or there were, two mechanisms in evolution that were quite different. One was standard variation and selection: certainly there'd be mutations or recombinations, some sort of genome might shuffle a little bit, and there'd be variations, as Darwin told us. The other mechanism I decided to call combinatorial evolution. Again, a label I arbitrarily stuck on this area in my book. And that would be that individual pieces or parts can combine, whether this is in technology or indeed in parts of mathematics even. Think of things like theorems: new theorems are constructed by combining old ones with syntax and mathematical logic and grammar, to give new statements that might be proofs or conjectures even. Or in biology, as you were talking about a moment ago, at all these different levels.
And so there was a second mechanism. And the second mechanism is that existing parts can somehow or other combine. And then some of those combinations, one or two, turn out to be successful. And so that combination is then encapsulated and becomes a new element. For example, gene sequencing: in the time of Sanger and people like that, 50 years ago or more, to sequence the blueprint for a protein like insulin would take days, and it would take multiple labs and all kinds of equipment. Now gene sequencing has become an element that is used in other technologies, such as forensic DNA policing and things like that, and now it's encapsulated in fairly small pieces of apparatus that maybe you could purchase for a high school lab or something like that. So I began to see this again and again.
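To make the mechanism concrete, here is a minimal toy sketch in Python of combinatorial evolution as described above: primitive elements combine at random, the rare combination that passes a usefulness test is encapsulated as a new element, and the toolbox grows by building on its own products. The primitive names and the five-percent success test are invented placeholders for illustration, not anything taken from The Nature of Technology.

```python
import random

# Toy sketch of combinatorial evolution: existing elements combine at random;
# the rare "successful" combination is encapsulated and joins the toolbox as a
# new element that later combinations can build on. The success test and the
# primitive names are arbitrary placeholders, purely for illustration.

def successful(combo, rng):
    """Placeholder criterion: only about 5% of combinations prove useful."""
    return rng.random() < 0.05

def evolve(primitives, generations=50, trials_per_generation=200, seed=0):
    rng = random.Random(seed)
    toolbox = [(name,) for name in primitives]   # each element is a tuple of parts
    for _ in range(generations):
        for _ in range(trials_per_generation):
            a, b = rng.sample(toolbox, 2)        # combine two existing elements
            combo = a + b
            if successful(combo, rng):
                toolbox.append(combo)            # encapsulate: the combination becomes a new element
    return toolbox

if __name__ == "__main__":
    final = evolve(["lens", "coil", "gear", "switch"])
    print(f"toolbox grew from 4 primitives to {len(final)} elements")
    print("deepest element is built from", max(len(e) for e in final), "parts")
```

Because new elements feed back into the pool of things that can be combined, the toolbox keeps growing the longer the process runs, which is one way to picture the open-endedness Arthur describes.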
The bottom line here is that I wrote all this in The Nature of Technology. Oddly enough... people were interested in that book because it was a different way to think about technology. But oddly enough, there's been a bit of a silence around combinatorial evolution. Some people picked up on it and ran with that ball. Others didn't. The region of the world that's most intensely interested in technology and in that book, The Nature of Technology, is China. It's sold three or four times as many copies there as it has here in English. It's now into its second edition. And I think the reason is that China sees its path forward as being technological, and it's hungry to learn how Westerners think about technology.
My comment is that technology has always been a little bit of an orphan or ugly sister, or I should say maybe a Cinderella, in the sciences. We take it for granted. We use it to get us to the Moon. We use it in cell phones or in scientific instruments. But we don't wonder too much where it comes from, how it operates, or how it evolves over time. And that was what fascinated me. My book has certainly stirred an awful lot of interest. But given that technology is extraordinarily important for the way we live, that we certainly couldn't live without it, be it medical or housing or anything else, I'm surprised that there hasn't been more interest in the whole subject of technology.
Michael: Yeah, certainly. Let me just offer myself as a counterexample: intensely, lifelong, passionately interested in this, and in large part because the question of combinatorial evolution, this area, seems to suggest to me that there is a generalizing framework within which the evolution of technology and the evolution of biology can both be understood within a general theory of intelligence. And David Krakauer talks about this, he talks about the isometry of, the congruence I should say, of the equations for evolution and for inferential learning. And how you think about this in terms of, as you discuss, all technologies are made from parts that are already lying around, and that they're a response to a need, which... We had Jennifer Dunne on the show, and listening to her talk about how food web relationships, the structure of trophic networks, are conserved from the Cambrian explosion to today. It's the same. The parts change, but the metabolism seems to be the same.
And so, this question of when you talk about a technology as something that is created to serve a purpose, to address a need, it raises this question of like, "Well, okay, wait a minute, back up. What is a purpose here?" Because we exist within a situation where we have... Like you said, you can't invent the same things a hundred years ago that you can invent today. You place the same person in both periods and you get a different result. And in some sense, the purpose and need seem, to me anyway, to be properties of the network that we exist in, rather than understood as properties of the individual. You make a good case for this in this book. You use the term originator rather than inventor because the origin stories are just painful oversimplifications of how these things come out of these distributed processes. And once again, I'm just sort of lobbing this one at you, but do you see these as ultimately commensurable from the perspective that intelligence may be usefully re-understood as a distributed phenomenon?
Brian: Yeah. There are two different things I picked up here. One is where do human purposes come from, or where does the need for technology come from? If you're an economist, you'd say, "I can understand how technologies got supplied, but where's the demand for them?" Turns out that nearly all technologies, well over 90% is my guess, come into being for the sake of other technologies. So things like, oh, I don't know, a radar duplexer: it's a circuit that switches off a radar machine instantaneously, for maybe a few thousandths of a second, so that there isn't a blast of radio waves and you can detect a faint echo. That was needed for radar. Et cetera, et cetera; many things are needed for jet engines or for computers. So technologies demand further technologies, and occasionally we've our own demands as humans. If you discover something like Ebola, then that automatically sets up a demand for something like a vaccine to deal with Ebola. So occasionally it's direct human need, but most of the time technologies are brought into being to handle or manage or control or improve other technologies.
The other question you brought up was understanding, or rather, intelligence. I preface it by saying that I don't know. I've read an awful lot of the psychology but I wouldn't claim to be an expert here. So let me just point out one thing: if you are educated in mathematics, and definitely I was for many, many years, up to master's degree level in my case, what struck me was that what you're actually being taught is concepts that are defined in terms of other concepts. So, you might have elements that might be digits or something, you can put them in rows and columns and you have a concept called a matrix. And then with that concept you can say, "Oh, we could do addition," which is a concept, or subtraction or multiplication. And you're shown how to do that with matrices. But all the time you're building larger and larger structures out of simpler ones.
And I began to realize, and I haven't thought about this much, and this might be a good thing for you or David, who understand a lot more about this, to think about, I began to realize that much of understanding works this way. You're building novel concepts out of previous concepts. You do it in a wordy way, so a concept such as Munich, which means “government sell-out for purposes of appeasing some unsavory character,” that's a concept. You can encapsulate that in one word, then you can use that in other concepts too. And so it seemed to me, I don't want to be too much empire-building here, but it seemed to me that this idea of evolution by combination applies quite broadly when it comes to building up concepts from previous ones, or mathematics from simpler mathematics, and looking at very different structures this way.
Also, I'm fascinated that once you have such a toolbox, you can use it instantaneously in many ways. So if you have words, you can combine those endlessly and infinitely to formulate different sentences. Some of the sentences may have been uttered before, "I love you" or something, which we could probably reduce to, say, number 23, and save ourselves a lot of emotion. In fact, it was von Humboldt who said that language was the infinite use of finite resources, so that you're combining and recombining things out of this toolbox. But the toolbox itself grows by using simpler objects to create more complicated ones, encapsulating those, and you keep going. It's something that I think has been glimpsed here and there before, but I don't think it's been seriously written about. Not yet.
Michael: So, in your example that you just gave about Munich, the way that language trends towards shorthands. Or in mathematics, these complicated conceptual structures get condensed into simpler ones. To back out a little bit into the philosophy of science, Mark Buchanan wrote a really great article for Nature Physics a while back about the natural bias for simplicity, which Jessica Flack shared on Twitter. The Occam's razor thing: as simple as necessary, but no simpler, right? So we see this in intellectual systems, technological systems, biological systems, that there is a fusing of regulatory elements, a trending towards greater efficiency, but efficiency within the context of adequate modeling of the environment, of actually serving the purpose for which it was created.
I think about that, to make a transition into your McKinsey piece, when we ask this question about what constitutes a need. I had Melanie Moses on the show, and a lot of her work was done on scaling in biology; she and I talked about vascularization. And that seems to be the case here: when you say 90% of new technology is in the components, that's really like how 90% of your vasculature is in the capillaries, and so on.
Brian: That’s right.
Michael: And so that would again suggest that what we see as the necessity, the necessity that an invention fills, is driven by this global process of maximal entropy production, and that we get to a point where, much like we saw in biology, the organisms get so big and so dimensional that they're no longer efficient at the production of biomass. And then we have to come up with new structures to distribute the nutrition and the oxygen and so on through the body. This was glaring at me this morning when I was reading your piece in McKinsey about the future of the economy, and this movement from production to distribution. And so I'd love to hear you unpack it: why is it you think that this transition is happening right now? What is the big economic shift that we're living through, and what does it mean for us?
Brian: Yeah, okay. I think the article I wrote in 2017 in McKinsey Quarterly was called “Where Is Technology Taking The Economy?” And when I look at what I meant by technology there, I certainly don't mean social media or cell phones; I was really talking about AI or machine learning, this collection of digital technologies, and where the economy is going with that. I want to give two answers, or two conclusions I came to. One is that practically all of these advances in AI, all the ones I can think of, are really nothing much more than associations. So if I have a camera and I am in an airport and get my picture taken, the algorithms, whether they're neural nets using deep learning or some other thing, those algorithms are making an association between the elementary pixels, and they process and process them until they can associate them with one person based on a lot of measurable criteria.
And so language translation by machine is an association. I give you a sentence in Mandarin, you feed it into your computer. The computer looks through all sorts of similar sentences and associates it with one of those. It doesn't use very much grammar or syntax or anything. It's a real clunker, but it spits out the sentence it's learned to associate with something very close to the original. So these are all associations. We can recognize faces, we can translate speeches, we can recognize voice and so on. That's as far as we've got.
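As a caricature of the "association" view of machine translation described above, here is a minimal Python sketch that simply retrieves the closest remembered sentence and returns its stored translation. The tiny English-to-French corpus and the word-overlap similarity are invented for illustration; real systems learn these associations statistically rather than by lookup, but the spirit, association rather than understanding, is the same.

```python
# Caricature of translation-as-association: given a source sentence, find the
# closest sentence seen before and return the translation stored with it.
# The tiny English-to-French corpus and the word-overlap similarity below are
# invented for illustration; real systems learn associations statistically.

CORPUS = [
    ("the cat sits on the mat", "le chat est assis sur le tapis"),
    ("the dog sleeps on the mat", "le chien dort sur le tapis"),
    ("the bird sings in the tree", "l'oiseau chante dans l'arbre"),
]

def similarity(a: str, b: str) -> float:
    """Jaccard (word-overlap) similarity between two sentences."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def translate_by_association(sentence: str) -> str:
    # Associate the input with the most similar remembered sentence,
    # then emit that sentence's stored translation.
    _, best_translation = max(CORPUS, key=lambda pair: similarity(sentence, pair[0]))
    return best_translation

if __name__ == "__main__":
    print(translate_by_association("the cat sits on the rug"))
    # prints "le chat est assis sur le tapis": the closest remembered sentence
    # wins, with no grammar and no understanding involved
```

Nothing in the sketch represents meaning; it only measures closeness to things seen before, which is the limitation the next passage turns to.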
What I want to say is what we haven't got yet: as far as I know, we haven't got a machine that can read, say, The Wind in the Willows or some classic book, and understand it. We haven't quite figured out what it means to understand that. But my guess is that understanding what we call reading is not just associating a sentence with some picture. A phrase from the poet Keats is “moss'd cottage-trees," and I think he's describing paths through the woods in autumn or something. But immediately anybody who speaks English has a mental picture of that. But you don't get full understanding unless you know a huge amount of context. You'd have to have been in some woods, maybe in England. You'd have to know what a cottage looked like, and not just that you can associate cottage with some other word in German or in French or Cantonese. In other words, machines don't yet understand in some way that we would be happy to call something a bit like human understanding.
Anyway, so the first part of my answer is that we've developed a lot of technologies that are super good at recognition, and they can do practical things. For example, a technology might be a forklift truck with a three-dimensional camera on it, connected to a computer that has sophisticated pattern recognition algorithms. The forklift truck goes automatically into some little pick-up warehouse place, sees all these cardboard boxes. It can read the labels somehow, and it knows to pick up those boxes and load them into that truck, and other boxes and load them into a different truck. And it sees some orange crates and recognizes that those are not cardboard boxes, et cetera, et cetera.
So we've become very good at this. And in turn, what I'm realizing, and this is not a very original thought, this next one, is that economists have realized that machines, computers, algorithms in particular, some digital machinery, are getting really good at figuring out things that we thought only humans could do. This is a huge surprise. We thought, as recently maybe as 15 or 20 years ago, that computers would be very good at logic and good at arithmetic and accounting and figuring out engineering solutions, but would never be good at recognizing cardboard boxes strewn on the floor, or people's faces. Now they can do things that humans can do, and that's causing jobs, I think, to be replaced. And that's not controversial in economics. Other economists, I'm thinking of Andy McAfee and Erik Brynjolfsson at MIT, and many others have noted this and written about it and thought about it.
Where is technology taking us? We're entering an era where these digital technologies are bringing into industrial use things that previously only humans could do. We can sort things by label. We can sit down and talk to a client in a bank to figure out whether they'd qualify for a mortgage. Now that can be done algorithmically and almost instantaneously. And so economists, and I'm on pretty firm ground here, largely agree that many, many jobs are getting replaced, and will be replaced almost inevitably, by digital means, digital algorithms, digital machinery. And so the big question in economics is: will there be new jobs? And that's really where the controversy is. It's not that jobs are disappearing; it's what's going to replace these jobs. Nobody's quite sure.
Meanwhile, almost a hundred years ago, 91 years ago, John Maynard Keynes wrote a famous essay, I think it was called "Economic Possibilities for Our Grandchildren." And it was looking roughly a hundred years ahead, which is close enough to where we are, and he thought, and I think quite accurately, that there would be enough means, enough production, to go around. He was talking about the UK, but he thought that what would have been called GDP or GNP at the time would be about eight times larger. Not too bad an estimate; roughly, he's in the right ballpark, I'd say. So we've plenty of goods and services, and he thought there might be something he called technological unemployment. A few years before that, the concept of robots had been bandied about by a Czechoslovakian playwright. And Keynes wondered whether there might be robots by then, and there certainly are. They're not standing-up ones, but there are machines like forklift trucks that are autonomous. And so there would be technological unemployment, and what would that do to the economy? So my conjecture, thinking about Keynes, thinking about artificial intelligence and automation and autonomous everything and the virtual economy, the digital economy, putting all that together: I would make the claim that we're moving from an economy where the main issue, the main problem, is producing more goods and services into an economy where the main issue is access to the goods and services. The trough is large enough; how do the piggies share in it? Or will they be able to share in it?
I don't have an answer to this. I think the answers will be complicated. But my observation is that we're now moving into a distributive era in the economy. We're not even that conscious of that yet; I don't think it's been said that much in the press. But that brings us into other issues. It's reflected in politics all over the place, not just in America but in Europe and, to some degree, in China and Asia, where there's a feeling of unease about jobs, our ticket to the goods and services. We're uneasy about that in general, especially if we're blue-collar workers or white-collar workers, and we might think we're not happy that machines are replacing us. We used to have an awful lot of offshoring or globalization, jobs going to China. Now jobs are going to the virtual economy, and that's just like sending them to another country, and that means paychecks are going to that economy. They may be going to Amazon or Google, not to human beings.
So there's a lot of unease, I think it's semiconscious, over these questions: "How am I going to get by? How are my children going to get by?" We haven't had the decades of wage growth we used to have. We've had some wage growth, but not reflecting economic growth. And so we're in a different ball game; this is not Kansas anymore. We're in a completely different game. Turns out that, in that game, economic efficiency is not important. We've tons of stuff to go around. So we will hear a lot less about sending jobs abroad, we'll hear a lot less about growing the economy, and we're going to hear a lot more about jobs being created here and there, even if it's a trivial number of jobs. We'll hear a lot less about trade deals, where it'll be efficient to make shoes in Mexico but not make them here. We're going to hear an awful lot more of "Why should we send jobs to Mexico when we can keep them here?" And so questions of distribution are coming to the fore. That's already changed the game in politics, because all over the world there's a movement towards populism.
"I'll make sure you guys are okay. I'll make sure you're well supported. I make sure you have jobs if you just harken to what I'm saying. And I'll keep those awful foreigners out." And this is true in Hungary, it's true in Poland. It's the force, I believe, behind Brexit. We can't have all these farmers coming in. It's like what they used to say about the Irish in 1850. So we can't have all these people coming in. It's not racism, it's not prejudice, it's not even completely tribalism. It's more a semiconscious fear that we know there's enough to go around, but we're not quite sure we're going to elbow our way easily to get at this grand buffet that's possible. So the game has changed.
Michael: That same kind of fear, to bring up a topic that caused great amusement at the symposium this weekend, that same kind of fear's on display in, like, "Okay boomer" versus "Boomer pride." You know Jim Rutt's #boomerpride hashtag?
Brian: We're the guys who screwed it all up.
Michael: I think that the racism, the populism, nationalism, does seem to come in at the point where fear starts to erode our ability to actually devote the necessary cognitive resources to understanding the complexity of the issue. And I really appreciate that in this article you talk about how the situation makes unbalanced free-market philosophy increasingly indefensible, because what we're asking is really to view the economy as an organism or as an ecosystem. Then it does require an understanding of the trophic network. Ultimately, it seems like we're asking, “Are we human beings who have stumbled upon these…" I talked about this with Olivia Judson a few episodes ago; she's looking at major evolutionary transitions as being catalyzed by the creation of new energy sources. And so the industrial revolution, in some sense, looks like we had all of the necessary pre-cancerous mutations just waiting for that blood sugar influx, that suddenly we can set things on fire and suddenly we've got internal combustion.
But we're preloaded with all of our sugar obsession and this unilateral drive toward growth, and then what happens? I feel like the question that we're wrestling with now as a species is: are we a tumor where only the outer layer is alive, all of the billionaires are fine, but they're sitting on a pile of skulls? Or are we an organism where there is vasculature that gets into the core of the thing, not just the growing edge? And then you get into these Buckminster Fuller questions about whether job creation is even the right focus, or whether the job itself is specious. And really what we're asking is just how do we provide for people. It's like a value shift.
Brian: I like that way of looking at things. What I would say is that the issue is not jobs directly, it's really access to the economy. And jobs have been the primary means, but only for two or three hundred years. In the Middle Ages you might be apprenticed to some guild, or you might be working for your family on a farm. There were quite a few jobs then, but now jobs are dominant. Maybe I'm the steel worker, or maybe I've got a job interviewing people or figuring out what's going on with the economy. These are all jobs.
What I would like to suggest, in fact what you just said, is that we should look at the problem not of jobs but of access. I think it's almost inevitable that there will be a basic income, which was suggested years ago by Milton Friedman, and it will start in Scandinavia or Germany, somewhat more progressive polities. The US doesn't like the whole notion of giving something away, but Social Security started here 70 or more years ago. I think there will be solutions like that. There may be other solutions too; there will be a lot of jobs created, I think: human beings doing something empathic with other human beings, advising or just sitting with older people or teaching kindergarten, maybe being part of a police force or whatever, where you have to really bring human skills.
I'd say there'd be a lot more of such jobs and they can be compensated. But when there's enough to go around, made by machines that are largely autonomous, machines that need a little bit of care and attention but are largely autonomous, it's not clear to me why we shouldn't have some sort of access to them.
Let me put it this way. If there's a huge concert going on in Central Park in New York, whatever, Simon and Garfunkel, and it's repeating every day, sure, you can give out tickets and so on, but in the outer regions, why shouldn't people be able to take part in that or listen to it? It doesn't cost very much to the rest of the people who are making money and having jobs. So I think it's pretty well inevitable that we will have to redefine how distribution works in the economy. The good news is that a lot of the services being developed or produced cost next to nothing. So you can have a mobile phone, a cell phone, and it costs very little; sending messages costs very little; and emails cost next to nothing except a little bit of input of time and so on.
So a large part of what we do in the economy, in the future, will have very low marginal costs. There might be an awful lot of public ownership or civic ownership of cars. A car shows up at your door and takes you to the airport, and the marginal cost of that, because it's operating all day and night, would be very low indeed. Et cetera. So I'm not hopeless, and I don't think this is freeloading. I think it's going to be a major regeneration of the economy, and I think it's going to take somewhere between 20 and a hundred years, probably 30 to 50 years, for us to work out some new system. That's what it took in the Industrial Revolution, from the 1850s to the 1900s, to make work bearable. I think that's what it's taken with other systems: mass production comes along around 1900, 1910, but it's at least 1950 or 1960 before it's taken for granted that you can own a car with very little effort and go wherever you please.
So these things take decades. It's not a matter of Congress getting it. It's a matter of, really, experimentation and exploration and trying out different things and keeping an open mind. It's stupid to call this socialism. It's actually just, "How are we going to organize ourselves as human beings?" I think the very good news is that with all the technological changes we've had in the past, we've managed to adjust to their consequences, and I'm optimistic we will, by the way, with respect to climate change. This is just a conjecture, but I think that in the US we're getting close to a tipping point among the public, from thinking climate change is not human-induced and not inevitable and not something you can do anything about, to thinking that it probably is human-induced and it can be mitigated. The technologies are largely there already, and with the kind of will that went, in the Second World War, into defeating the Germans and defeating the Japanese, anything like that amount of will would certainly be very effective in the US or Europe or China as well.
Michael: My last question for you involves this time horizon that you're talking about. Because these previous adjustments to major industrial changes happened in a world with a much smaller population, a much less well-connected population, than it is now. And the rate at which an idea can spread is, obviously, greatly accelerated from the way it was. But there are other effects; you get into increasing returns and lock-in, as in your other work, and how suboptimal solutions can dominate a network like this. And I'm wondering, because I don't want it to take 50 years, you know? I just had a kid; I'd like to make sure that she's not growing up in a world of the same anxieties that I'm growing up in. Do you see it as limited by people's ability to accept a new way of thinking? And if so, what do you consider to be the most potent or empowering tools for spreading a new way of seeing that emphasizes the importance of the political dimension and the distribution of resources?
Brian: Well, I tend to think that the limitation on all of these huge changes, say the realization that tobacco wasn't very good for our health, or, the century before that, the end of slavery and so on… I think that public attitudes are absolutely the key to making these large changes.
Public attitudes are actually the key limitation. And with respect to climate, I think that's starting to change all over the world, certainly from what I've seen in China and certainly from what I've seen in Sweden. I worked at the Swedish Academy of Sciences, at their Beijer Institute.
So the question really comes down to how could public opinion be changed? I don't know. I don't think government can do that much. Government tends to follow public opinion rather than lead it. I'm not trying to be cynical here. So I think what happens, in general in the world, is that some problem is perceived and then usually one or two events happen. So there's a problem perceived, say in the United States in 1936 that things are getting a bit out of control in Europe and that the Japanese Empire is expanding into the Pacific, maybe we should do something about it. But it's usually one or two triggering events, in that case Pearl Harbor. Immediately public opinion changed and immediately the government followed. The government didn't quite change public opinion, it was sort of both things instantaneously. And immediately then, within 18 months, the US economy had completely changed to take care of what was going to be a lengthy war and the resources were wheeled up.
It's not any number of Al Gores or Greta Thunbergs showing up; these are all necessary people and I salute them, but it's actually slowly, slowly, slowly that people realize we've had more hurricanes than we want. California, in the fall, seems to be perpetually on fire. Where I work, they give out gas masks and so on. Do we really need this? And finally there's, I think, a tipping point, but usually it takes one or two major incidents. And I don't wish climate horrors on anybody, but I think unfortunately that's the way it's going to be. There'll be some horrible stuff that happens, and suddenly public opinion will change.
I was in this country a long time ago, in 1967, when the Vietnam War was raging. I was newly in the country, and everybody I talked to supported the Vietnam War. One year later, it was hard to find anyone who supported the Vietnam War. The Tet Offensive had happened. Robert McNamara and others had publicly come out against the war, so intellectual leadership suddenly was against the war, and the country flipped on a dime. After that, the war was still there, it had to be mopped up, but the will to fight it disappeared and the will to do something about it got very strong. So I have faith in America. I'm not commenting on whether this was good or bad for Vietnam or America. I'm just saying that once America decides it's had enough of something, more quickly than other countries, it flips.
Similarly with the end of communism. I was in a hotel in Irkutsk in 1989, and my roommate in the hotel was a mathematician from East Germany. And I said to him, this was in the Soviet Union, "Hungary and Czechoslovakia and Poland have all gone over to a different system. When do you think that will happen in East Germany? Will it happen?" He said, "Oh yeah, it will happen." This was September 1989. "That will happen," he said, "but not in my lifetime. But it will happen." And I think it was two or three months later that the Berlin Wall came down.
So these things appear to be impossible and somehow there's a complete flip. It's not because we're all sheep, it's really because it's not safe to express a different opinion. So I think one day, Mitch McConnell's... I'm an immigrant so I should shut up. One day, I think Mitch McConnell or somebody's going to come out and say, "Hey look, we need to clean up this mess," and completely unembarrassed by having had the other opinion, because suddenly it's safe to say so.
Michael: So I guess when it comes to technological unemployment and basic income, the key is to just not be under the avalanche when the sand pile comes down, right? Maybe, looking forward just a few years, now's a good time to not be a truck driver. And then you might be able to…
Brian: Yeah. I think that we're going to have to socially invent solutions. That's not generally done by governments. Sometimes they have a hand in it, but usually it's proposed by private citizens or politicians or doctors, in the case of the 1850s by suffragists and other people, saying, "We need to treat shift workers much better. We need to look after their health. We need to mitigate the dangers in the workplace." These are proposed, they make their way into the polity, and they're discussed. It doesn't usually come from government. It could start in some small place. Many experiments are done, and then it appears totally obvious. And we've been doing that all along: "In California we've done that for a hundred years." It may be true, but you know an idea has arrived when other people are claiming that they've done it for decades themselves.
Michael: Yeah. I'm sure the Santa Fe Institute will be out there in the streets claiming priority when the day comes. Brian, it's been such a pleasure. This has been a fabulous conversation. I'm very grateful that I've had two whole hours to sit down with you.
Brian: Great. Superb questions.
Michael: Thank you so much.
Brian: And I'm really glad it wasn't just that you understood what I was saying. I was trying to understand what you were saying in biology.
Michael: Aw, thank you.