Hard Sci-Fi Worldbuilding, Robotics, Society, & Purpose with Gary Bengier

Episode Notes

As a careful study of the world, science is reflective and reactive — it constrains our flights of fancy, anchors us in hard-won fact. By contrast, science fiction is a speculative world-building exercise that guides imagination and foresight by marrying the known with the unknown. The field is vast; some sci-fi writers pay less tribute to the line between the possible and the impossible. Others, though, adopt a far more sober tactic and write “hard” sci-fi that does its best to stay within the limits of our current paradigm while rooting visions of the future that can grow beyond and beckon us into a bigger, more adventurous reality.

The question we might ask, though, is: which one is which? Our bounded rationality, our sense for what is plausible, is totally dependent on our personal life histories, cultural conditioning, information diet, and social network biases. One person’s linear projections seem too conservative; another person’s exponential change seems like a fantasy. If we can say one thing about our complex world, it might be that it always has, and always will, defy our expectations…

Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I’m your host, Michael Garfield, and every other week we’ll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

This week on Complexity, we join up with Caitlin McShea and the InterPlanetary Project’s Alien Crash Site podcast for a wild discussion with SFI Trustee, technologist, and philosopher Gary Bengier about his science fiction novel Unfettered Journey. This book takes readers forward more than a century into a highly automated, highly stratified, post-climate-change world in which our protagonist defies the rigid norms of his society to follow fundamental questions about mind, life, purpose, meaning, consciousness, and truth. It is a perfect backdrop to our conversation on the role of complex systems science in our understanding of both present-day society and the futures that may, or may never, come to pass…

If you value our research and communication efforts, please subscribe to Complexity Podcast wherever you prefer to listen, rate and review us at Apple Podcasts, and/or consider making a donation at

Thank you for listening!

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast theme music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

Go Deeper With These Related Media

Paul Smaldino: The evolution of covert signaling in diverse societies
Geoffrey West: Scale
Bob May: Will a Large Complex System be Stable?
Melanie Mitchell: The Collapse of Artificial Intelligence
Melanie Mitchell: On Crashing The Barrier of Meaning in AI
Elisa Heinrich Mora et al.: Scaling of Urban Income Inequality in the United States
SFI ACtioN Climate Change Seminar: Complexity of Sustainability
Raissa D’Souza: The Collapse of Networks
David Krakauer: Preventative Citizen-Based Medicine
Simon DeDeo & Elizabeth Hobson: From equality to hierarchy
Peter Turchin: The Double Helix of Inequality and Well-Being

Speculative Fiction:
2019 IPFest World Building Panel Discussion with Rebecca Roanhorse, James S.A. Corey, and Cris Moore
Robin Hanson: Age of Em
Ayn Rand: Atlas Shrugged
Peter Watts: Blindsight
Isaac Asimov: Foundation
The Strugatsky Brothers: Roadside Picnic

Podcast Episodes:
Complexity 10: Melanie Moses on Metabolic Scaling in Biology & Computation
Complexity 14: W. Brian Arthur (Part 2) on The Future of The Economy
Complexity 19: David Kinney on the Philosophy of Science
Complexity 21: Melanie Mitchell on Artificial Intelligence: What We Still Don't Know
Complexity 22: Nicole Creanza on Cultural Evolution in Humans & Songbirds
Complexity 36: Geoffrey West on Scaling, Open-Ended Growth, and Accelerating Crisis/Innovation Cycles: Transcendence or Collapse? (Part 2)
Complexity 51: Cris Moore on Algorithmic Justice & The Physics of Inference
The Jim Rutt Show 152: Gary Bengier on Hard Sci-Fi Futures

Episode Transcription

Transcript provided by machine and edited by SFI's Aaron Leventman.


Gary Bengier (0s): What I think happens is writers tend to pick some particular idea and then take it to the absurd extreme, and that's where the conversation has fallen: into those extremes. I take a different tack. I think that we are better served to use a hard-science framework for how this works. We can take the engineering of today and what we know, run that forward, and come up with highly likely scenarios. And I think we should focus on those, because otherwise you're just having a conversation that is not very helpful. Running those scenarios forward, we have some real problems to solve, and we should focus on those problems.

Michael Garfield (1m 2s): As a careful study of the world, science is reflective and reactive; it constrains our flights of fancy, anchors us in hard-won fact. By contrast, science fiction is a speculative world-building exercise that guides imagination and foresight by marrying the known with the unknown. The field is vast. Some sci-fi writers pay less tribute to the line between the possible and the impossible. Others, though, adopt a far more sober tactic and write hard sci-fi that does its best to stay within the limits of our current paradigm while rooting visions of the future that can grow beyond and beckon us into a bigger, more adventurous reality.

The question we might ask, though, is: which one is which? Our bounded rationality, our sense for what is plausible, is totally dependent on our personal life histories, cultural conditioning, information diet, and social network biases. One person's linear projections seem too conservative; another person's exponential change seems like a fantasy. If we can say one thing about our complex world, it might be that it always has, and always will, defy our expectations.

Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I'm your host, Michael Garfield, and every other week we'll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe. This week, we join up with Caitlin McShea and the InterPlanetary Project's Alien Crash Site podcast for a wild discussion with SFI Trustee, technologist, and philosopher Gary Bengier about his science fiction novel Unfettered Journey.

This book takes readers forward more than a century into a highly automated, highly stratified, post-climate-change world in which our protagonist defies the rigid norms of his society to follow fundamental questions about mind, life, purpose, meaning, consciousness, and truth. It is a perfect backdrop to our conversation on the role of complex systems science in our understanding of both present-day society and the futures that may, or may never, come to pass. If you value our research and communication efforts, please subscribe to Complexity Podcast wherever you prefer to listen, rate and review us at Apple Podcasts, and/or consider making a donation at

I also highly recommend that you check out the show notes for this episode for an extensive list of follow-up resources to explore. Thank you for listening.

Gary, it's a pleasure to have you on Complexity and Alien Crash Site.


Gary Bengier (3m 46s): Well, Michael and Caitlin, I'm delighted to be here. Thanks for inviting me.

Caitlin McShea (3m 50s): Thank you for joining us.

Michael Garfield (3m 52s): Caitlin, do you want to kick this off? 

Caitlin McShea (3m 54s): Well, it depends on which direction you want to go. Do we want to link it from SFI and go outward? Do we want to start with Gary and come back?

Michael Garfield (3m 60s): Yeah, I think let's start with the autobiography piece. The question would then be: Gary, who are you? What is your relationship to SFI and what brought you here? What is the road that took you into your relationship with complexity, which some might argue is a property, but certainly we won't go there yet.

Gary Bengier (4m 23s): Well, I've been in technology for over 30 years. I did a whole series of startups in biosciences, in computer chip design, the internet, high-tech windmills: lots and lots of technologies I had the fortune to participate in. I took eBay public; I was chief financial officer 20 years ago, and we grew that company to over a hundred billion dollars in stuff sold and thousands of employees. And then I moved on to other things, like philanthropy, and now being a writer.

And as part of eBay, I came in touch with the Santa Fe Institute: Pierre Omidyar, who's been an emeritus member of the board for a long time, encouraged me to join. I've been associated with the Institute all those years and had a delightful time just getting deep into the science. I think complexity science is fabulous. So that's my relationship there. As I said, I turned to philanthropy and then I went back to school. I backfilled an astrophysics degree. I got interested in philosophy.

I backfilled a philosophy degree; I got a master's in philosophy focused on theory of mind. And then I was interested in getting some of those ideas about human consciousness out to a bigger audience. What is consciousness? What is that “I” at the center of you, Caitlin? What is that, really? Those were the kinds of questions I focused on thinking about for over a decade, and to make those ideas more accessible, I wrote this book. The book is Unfettered Journey, and it's been out now about a year; it has won six awards, and I'm very pleased with the reception it's been getting.

So the summary is: I'm sort of a technologist, a would-be philosopher, and a new writer.

Caitlin McShea (6m 20s): So may I ask, because I am very fascinated by this history. My background is also in philosophy, and so even though I can't really get my head around the foundational limits of any of the models that our researchers are presenting in seminars about intelligence or life, I don't ever feel like I'm completely excluded from the conversation. And so I wonder if maybe there was a particular incident. How does one shift (and it's a big shift) from technological development and the intersection of finance and technology to a philosophical exploration of consciousness, and then eventually speculative fiction?

Gary Bengier (6m 54s): I'm not sure it's a shift; I've actually been thinking about some of these ideas for 30 years. Once I decided that I'd done the technology route for long enough (I like to say I had lots of at-bats), I had the chance to go back and explore some things that I've been fascinated by my entire life. So no, it wasn't really a shift, and I had the good fortune to be able to go deep on those ideas.

Michael Garfield (7m 21s): So there are kind of two chunks that I really want to discuss here, because the copy of the book that I'm holding has not only your science fiction in it but also a rather comprehensive philosophical appendix in which you explore this stuff in a bit more of a formal way. And I'd like to spend some time on both of these.

Gary Bengier (7m 44s): I think we should start with the novel, because I don't want any of our audience to fall asleep too quickly. Philosophy is one of the oldest fields, and the conversations can get very convoluted; the appendices are addressed more toward that audience. The appendices are entitled "Philosophical Explorations on Time, Ontology, and the Nature of Mind," and so there are three papers that cover those three topics.

And those ideas, quite honestly, are what I hope to get to a larger audience, and the novel is a way to make some of them accessible. Quite honestly, I'm a believer in the scientific method, hands down. And I think that philosophers do not talk enough with scientists. I mean, we've had Dan Dennett come to the Santa Fe Institute, and we were delighted to have that; he spent some time there, several months. But in general, philosophers don't talk to mathematicians or physicists.

And I think it's because they can't do the math. If you can't do the profound math, the very difficult math, then you go into philosophy; to be at the front end of theoretical physics, you have to do the math. So much of that conversation, I think, is predicated on the fact that mathematics has proven, because of its elegance, to explain the world, and no one knows why that's true. Wigner, in a very short paper, talks about the unreasonable effectiveness of mathematics in the natural sciences.

And why is that? Why is it that you find an elegant equation and then find that it relates to the real world? Why is it that if you follow the trail of the mathematics, you tend to find the right empirical tests, the right answers that open up how nature works? That is just astonishing. We have no idea why. So at the front end of theoretical physics there is string theory, which we've been looking at for 30 years, and there are lots of crazy hypotheses out there, but none of them are really testable yet.

Yet we spend an enormous amount of our theoretical physics effort working on those ideas that we can't test, and that's because we continue to follow Wigner's intuition, but that doesn't help us. So, to summarize: philosophers and physicists don't talk to each other, and there's a gulf there. And I think there is some place where you can have a conversation. And in that interest…

Michael Garfield (10m 29s): Definitely. I mean, reading this piece, for me, hooked back pretty cleanly to the conversation that we had with David Kinney on this show, David being a formal epistemologist at SFI, a rare beast indeed, but for that reason especially valuable, I think. And your pieces on ontological relations and on the nature of mind and time really illuminated for me some of the stuff that we were talking about in his own work on causal networks.

But before we get there, I think you're right that we should start with the work of science fiction, and without ruining the plot for people, I would like to engage with you a little bit on the nature of world-building. Caitlin, through the InterPlanetary Festival, curated this awesome panel in 2019 on world building, where we had a bunch of science fiction authors together on stage, talking about it in a way that, reflected through the SFI lens, makes the work of writing fiction look like the design of parameters in an agent-based model.

So to set something forward in time, in 2161, you're coming in with a set of, basically, Bayesian priors, assumptions about the world as it is, and then creating a verbal model and letting it run. That's kind of how I understood your process here.

Gary Bengier (11m 59s): So you mentioned the panel, and I hope I'm not going to diss those participants, because I might have a slightly different idea on this, but let's talk about the modeling thing. As we know, agent-based models typically have agents run by a simple set of rules; you put those rules in motion and you see what happens. But we know that if you pick crazy rules, or wrong rules, the models quickly grind to a halt; they give you no information. So it's important to pick those rules carefully.
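Gary's point about rule choice can be made concrete with a toy simulation. This is purely an illustrative sketch of my own (not a model from the episode or the novel): a reasonable rule yields an informative, settled outcome, while an absurd rule just churns and tells you nothing.

```python
import random

def run_abm(flip_prob, steps=200, n=50, seed=1):
    """Toy agent-based model on a ring of n agents with binary states.
    Rule: an agent adopts state 1 only when both of its neighbors hold 1,
    except that with probability `flip_prob` it does the opposite.
    Returns the final fraction of agents in state 1."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        nxt = []
        for i in range(n):
            consensus = state[i - 1] & state[(i + 1) % n]
            nxt.append(1 - consensus if rng.random() < flip_prob else consensus)
        state = nxt
    return sum(state) / n

# A sensible rule (no contrarians) erodes isolated opinions and settles
# into a clear consensus; an always-contrarian rule never settles at all.
print(run_abm(flip_prob=0.0))   # -> 0.0, full consensus
print(run_abm(flip_prob=1.0))   # keeps churning; final value is uninformative
```

The "crazy rule" run is exactly the failure mode Gary describes: the dynamics are well defined, but the output carries no information about any question you might have asked.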

And I think if you look across genre science fiction these days, there's so much dystopian sci-fi out there. It's so genre. What I think happens is writers tend to pick some particular idea and then take it to the absurd extreme, and that's where the conversation has fallen: into those extremes. I take a different tack: I think that we are better served to use a hard-science framework for how this works. We can take the engineering of today and what we know, run that forward, and come up with highly likely scenarios.

And I think we should focus on those, because otherwise you're just having a conversation that is not very helpful. Running those scenarios forward, we have some real problems to solve, and we should focus on those problems. So that's my take on how one should do it. Let's take the utopian-versus-dystopian future: humans have evolved with certain kinds of characteristics, like altruism, like hope. These are complex systems, of course, and as humankind moves forward, I think those are going to mitigate some of our worst characteristics. And yes, people tend to be competitive.

We have some deep, dark aspects; our fellow writer Cormac McCarthy, on our Santa Fe board, explores those very, very deeply, and disturbingly in many cases. But I think the reality of what our future looks like is somewhere in between, and so that's what I explored. I have a hard-science view. So maybe let me give you a couple of examples, and we can explore my world building and how that agent-based model moves forward. Let me take two or three.
I think that for this next century, the two largest technologies that will drive everything about humankind are bioscience and AI and robotics. But I will say that though bioscience will have a tremendous impact on human life, in a hundred years, in many ways, we won't notice it. If you went back to the 1950s, they still had polio. We don't have polio now, and we live 10 or 20 years longer than we did before, with greater health.

But we don't notice it; we expect that to be normal. And in a hundred, 150 years, will we cure cancer? I think we'll fix a lot of these things. Geoffrey West wrote his book on scale, and he explores the limits of that parameter. I think he suggests that if we cure all cancer, we will add about six years to the human lifespan. If we cure all of the heart-disease-related elements, we'll add on the order of three or four years. And then you take the next tier down, and you pick one or two.

So, on that order, we'll add a decade to the lifespan by fixing all those things. And this will take a long time. Do I think that humans will live forever? No. Will we live a lot longer? Yes, I think so. So that's what I think bioscience will do. Let's move to the second one: AI and robotics. I have a slightly more conservative view than lots of the forecasts. We see Boston Dynamics; we see the robots dancing with Mick Jagger.

We see them shooting free throws from the center line of the court during the Olympics and making perfect baskets. And we think, wow, this is just around the corner. I think this is going to take a lot longer. It's more akin to the automobile, which basically took a century to get to the cars that we know today. Henry Ford was around a century ago, but the cars that we have today, with all the electronics, with the road systems, the infrastructure that was needed, with the legal systems and insurance issues and the social issues of interfacing with these automobiles that kill us.

It took a long time. And so I think that's true about AI and robotics. But I think it's highly likely that in 140 years we'll have robots walking among us. And why is that? Because we've got trillions of dollars of infrastructure that is human-sized, and there's an enormous number of economic reasons why that will continue to be developed. So that's a highly likely thing to happen. Are you disagreeing that in 140 years we'll have robots walking around?

Caitlin McShea (17m 4s): Can I just make a remark about that claim? Quite often when we think about the robotic future in science fiction, it's human-sized, roughly human-shaped robots walking around, and I always took that to be a sort of anthropomorphic lack of imagination. So this infrastructure posit that you make is really interesting to me, because of course we wouldn't want to re-engineer our societies to accommodate whatever the AI is; it seems we'll still have roads, we'll still have doorways, we'll still have elevators. And so that suggestion as to why human-shaped things will exist among us is the most persuasive I've heard so far.

Michael Garfield (17m 40s): As far as pulling in the complex-systems principles of path dependency, or canalization, or entrenchment: the niche defines what fills the niche. But you know, I'm also thinking about Robin Hanson, who wrote this book, Age of Em, in which he's starting from a kind of similar place. He's looking at it in a very Geoffrey West kind of way, and arguing that, following a Moore's-law kind of arms race toward the fast, robotics are going to get smaller and smaller and faster and faster, and then they will have leverage over things operating on the human scale, in the same way that humans basically eradicated megafauna.

Gary Bengier (18m 26s): I've read Age of Em, and I put that at the absurd end of the spectrum, quite honestly. In fact, in terms of my scenarios, there are a couple of devices that we have in 140 years, and I think when people first read them, they might think, oh, that's a little weird. One magazine wrote that this future feels eerily realistic, and I think that's fair. So, as an example: you carry your iPhone around. You might use Siri. We connect to the internet through the cloud.

So in 140 years, can you imagine this: you have a chip in your head, and you have a corneal implant that can act like a little screen, and you have something called a NEST, a neural-to-external systems transmitter, which is basically on the chip, and it connects you to the net, the cloud of the time. And so you can talk to it. You could just say, where's the closest pizza shop, and what it does is it downloads and paints, using an ARMO, an augmented-reality map overlay, a little map on your cornea, and you can just follow the little red line to find the pizza shop. Those sorts of things.


So essentially you've got something that we already have, and it sounds weird that you would get a chip in your head, but I think that'll happen. Okay, but what are the limits? Elon Musk has this Neuralink, and he was demonstrating it. I think that it is going to be quite limited, because it's taken a million years of evolution to evolve our vision and our hearing. V1, the visual cortex, takes up a region of our brain about the size of a cat's entire brain. It's huge.

And those are the ways that we'll interface with the world, and they work at chemical speeds, very, very slow. Our chips are already orders of magnitude faster than our brains. We will not make that interface work very well. So I don't think we're going to have realistic sorts of cyborgs. Yes, we'll have artificial limbs, and that will continue, but getting the brain to interface directly is something I just don't think is going to happen. And that's where I think we're off in the crazy land of forecasting.


Michael Garfield (20m 36s): Well, we already have high-frequency trading algorithms that exert weird leverage over the economy; these things happen, and then we look back on them 10 years later and we still don't understand them. So I'm curious how else you see nonlinearities and this kind of thing fitting into this world-building, given especially that your world takes place after climate change, and wars that precipitate out of climate change. That has to figure into your timeline for tech development and the evolution of social hierarchy and so on.




Gary Bengier (21m 11s): So the conceit of the timeline is that around the year 2100 we have the climate wars, fought over resources, et cetera. And as a result of that, we need to rebuild certain things, and the rebuilding is accomplished largely by robots taking the rest of the jobs, and robots building robots. And so in 2161, we still have lots of stuff. So let me point to two things economically, how I get there. The first: just before COVID, there was a workshop at the Santa Fe Institute with the title AI and the Barrier of Meaning.

And I attended that. A lot of AI scientists were there, Melanie Mitchell, whom you just had on an earlier podcast, and many others, and they were fairly cynical about how fast this would develop. There was one presentation that I loved. It talked about the disappearance of jobs as we have more automation, and the image was of a topographic landscape, with hills and valleys, and rising water was the analogy for jobs going away. So the question was: what jobs disappear first?

What's at the top of the hills? Well, the top of the hills might be your jobs, you know, podcasting; it's very hard to automate that. And I argue that one of those jobs is roofer, because the guy that climbs up on the roof with a bunch of shingles and tacks them in place, that's really hard to automate. But eventually, because the roofers will be making $400,000 a year, that too will be automated. And that's the robots walking around: when we have a general-purpose robot on a standard chassis, et cetera, eventually that will happen.

And when that happens, it's all over, in the sense that the vast majority of what we think of as jobs today will be gone. Hopefully many will be replaced, but I think, again, that's going to happen; that is a highly likely outcome. Where we are today, we have increasing automation, and we're going to get to a place where we have lots of robots, robots making robots, and very few jobs. And we have to figure out how, in this century, we cross that economic chasm and keep a society, have a society, that is operational, that works.

And that's one of our big, hard science challenges in the world. 

Caitlin McShea (23m 34s): Could I ask about that? You made the very complimentary example of the podcast interviewer being an exception to this sort of loss of jobs, but it seems like that example draws a stark division between something like labor and something like thought. And I think a lot of this book, and a lot of your interests, and that very symposium about the barrier of meaning and AI, still protects, maybe, the jobs for those who think. But I don't know if you think that maybe in 250 or 300 years we'll GPT-3 our way out of thought. I wonder what kinds of positions, or what kinds of approaches to existence in the world, are safe from automation?

Gary Bengier (24m 12s): I think that's hard. And if you look at the book, there are some what I'll call really cool jobs. There's a job running an orbital base circling the moon, whose mission is to lead humankind's efforts to explore exoplanets. Those are cool jobs; we would love to have those jobs. I think it's interesting: if you think about this future, it's a future where maybe having a job is a privilege. And irrespective of the creative pursuits that we were talking about, there'll be too much output, and it will be hard, just as today we have a hard time absorbing all of the content. It's harder to convince people to read a book today because they're constantly being interrupted, their attention taken by all kinds of things. And so the key question then is: how do we find purpose as individuals when you may make a lot of stuff, but no one's reading your poetry? The fun, cool jobs of actually doing something in the real world are limited, and there's tremendous competition for those. Here's an interesting economic fact.

I modeled U.S. GDP and world GDP going forward to the year 2161, using the current growth rates averaged over the last 20 or 30 years. And it turns out that by that year, we'll have 10 to 20 times as much stuff per person as we do now. So we'll have a lot of stuff, and we'll have robots making robots. So now there's, I think, an enormous social question, and this is the question of crossing that chasm: how do we get there?
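Gary's 10-to-20x figure is straightforward compounding. A quick check, assuming per-capita growth rates of roughly 1.7 to 2.2 percent per year (my stand-in for "the average of the last 20 or 30 years"; the actual inputs to his model aren't given in the episode):

```python
def stuff_multiple(annual_growth, years=140):
    """How many times more output per person after compounding
    `annual_growth` for `years` years (roughly 2021 -> 2161)."""
    return (1 + annual_growth) ** years

for rate in (0.017, 0.020, 0.022):
    print(f"{rate:.1%}/yr over 140 yr -> {stuff_multiple(rate):.1f}x")
```

Rates in that band land almost exactly on the quoted range: about 11x at 1.7 percent and about 21x at 2.2 percent.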

Who owns the robot factories? This is the first time in human history that we will not be faced with those questions at the bottom of Maslow's hierarchy: food, shelter, clothing. There'll be lots of stuff. And so here I am, a bona fide capitalist; that's my background. And yet I'm a strong proponent of a guaranteed income. And I think that it is not sustainable, it is not an equilibrium point economically, for the robot factories to be owned by anyone except all of society.

And so what I'm suggesting from a world-building standpoint is that I really think this is highly likely. And I think that we will be replacing our current economic system at some point with something else.

Caitlin McShea (26m 44s): And in terms of thinking about creating this sort of authentic world, as you described it: is that thought a consequence of imagining a post-climate-change resource war? It seems like there has to be a sort of redistribution.

Gary Bengier (26m 56s): No, actually not. We have an enormous amount of concern about climate change, but I would look at it slightly differently: this is a really long-term problem; this is millennia. There was a Zoom workshop for the Santa Fe Institute talking about climate change about eight months ago, and the most telling comment, from a Harvard geologist, was that geologists are starting to conclude that Greenland is lost. In other words, the world is like a black body, and that's just fundamental physics.

If the world did not have any oceans, the amount of carbon in the atmosphere would already have caused runaway heating. The fact that three quarters of the earth is ocean gives us a heat sink, but that heat sink means we've only put off a problem that's already baked in. Even if we do lots of really good things, we're still going to have this fundamental problem, and it will go on for many, many centuries. So what I've done in the book is make the very optimistic assumption that in 140 years we'll finally get religion on this topic.

We will figure out how to get to net-negative carbon. We'll do carbon sequestration; we'll do all those things we need to do. We'll probably need fusion, I think, and I'm hopeful that we'll have fission and fusion together; practical fusion is probably 50 years away if we're lucky. But with fission and fusion we can get rid of all carbon use, and maybe we'll turn the corner on that. And in my book I assume that Venice is lost, probably New Orleans is lost; Jakarta, they're already moving the capital to Solo; there's Mumbai. So there'll be lots of dislocations, and I think a lot of that is going to happen, but I'm optimistic that we get past this because of human ingenuity. It's not easy. So the thesis of my book is somewhat on the border of utopian: we actually solve this existential problem. There are lots of dystopian books that assume a climate disaster that we can't ever save ourselves from, and then we're just down into absolute calamity.

Michael Garfield (29m 9s): One thing I found fairly believable, although I'd like to pick at it a little if you don't mind, is the emergent caste system, the Levels Acts that stratify American society: this is your legally preordained lot. On one level that's very believable, because we've got recent research, kind of sobering, disappointing research, led by Elisa Heinrich Mora (Geoffrey West, Vicky Yang, Chris Kempes, and a couple of other folks outside SFI worked on this piece), showing that even though average per capita income grows faster than population in cities, inequality grows even faster.

And so actually, more than wealth, what cities are generating is poverty. So, you know, can I just put a couple of pieces together here?
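For readers unfamiliar with the urban-scaling result Michael references: aggregate urban quantities like total income scale superlinearly with population, roughly Y = y0 * N**beta with beta > 1, so average per-capita income rises with city size; the Heinrich Mora et al. finding is that the dispersion around that average grows faster still. A sketch of the superlinear part only (beta and y0 here are illustrative values of mine, not numbers from the paper):

```python
def per_capita_income(population, beta=1.12, y0=10_000):
    """Superlinear urban scaling: if total income Y = y0 * N**beta with
    beta > 1, then per-capita income Y/N = y0 * N**(beta - 1),
    which increases with city population N."""
    return y0 * population ** (beta - 1)

# Bigger cities are richer per person under superlinear scaling.
for n in (10_000, 100_000, 1_000_000):
    print(f"city of {n:>9,}: per-capita ~ {per_capita_income(n):,.0f}")
```

The exponent beta - 1 being small but positive is the whole story: per-capita output creeps up with size, while (per the paper) inequality outpaces it.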

Gary Bengier (29m 59s): I totally agree with you on that one. Basically, I think that's not going to happen; that was probably my one exception, but one needs conflict to write a novel. The conceit in the novel is that the U.S. has more of a focus on property rights as this question of who owns the robot factories is put to the society. Whereas other countries, as Mike the economist says, have found more egalitarian answers, in the U.S. the oligarchs who own those factories demanded a quid pro quo for giving them to society.

And that was the set of laws that instituted something called the Levels Acts, where everyone's assigned a level, from one at the top to 99 at the bottom. And supposedly it was merit driven, and you could move up and down the levels and all that sort of thing. But in reality there's a question of whether there were legacies. And so here's the question that I was hoping to pose with that, because one of the things that science fiction and speculative fiction frequently does is look at our own society.

So do we have levels today? 

Michael Garfield (31m 11s): Well, yes. 

Caitlin McShea (31m 15s): The question is whether or not they're explicitly assigned.


Michael Garfield (31m 21s): This really gets to the question that I wanted to ask you, Gary. A lot of people at SFI have scrutinized meritocracy, and scrutinized it in a way kind of similar to the way that you scrutinize certain notions about there being a universal time, a single past, present, and future that's consistent and non-relativistic for all observers. There's this question about economic value, and the idea of a caste system such as the levels in your book, I think, presupposes that we know what constitutes merit. How do we quantify this? And so another SFI-adjacent person, the historian Peter Turchin, has this beautiful blog entry on what he calls the double helix of inequality and social stability, where he's saying basically that the larger the gap between the rich and the poor, the more likely this whole thing is to implode. And you look at research in complex systems dating back to Robert May's 1972 piece, "Will a Large Complex System be Stable?", where he's saying the more edges in a network, the more opportunities there are for something to go wrong.

Raissa D'Souza gave a great talk on this at our 2019 symposium, which we'll link to in the show notes. So there's this question that has to do with how we intentionally apply brakes so as to keep the stratification that you're describing here from growing so large that it undermines its own ability to encode merit. Because right now we're seeing all the tokenization and fractionalization of everything in web3, and it just seems like what actually constitutes money or value is completely up for grabs now in a way it wasn't 10 years ago.

I'm curious about all your thoughts on that. 
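May's criterion, as Michael summarizes it, can be checked numerically: a random network of N nodes with connectance C and interaction-strength spread sigma tends to be stable only while sigma times the square root of N*C stays below 1. This is a minimal sketch with illustrative parameters, not a reproduction of May's original figures:

```python
# Numerical sketch of Robert May's 1972 stability result. Build a random
# "community matrix" with self-regulation (-1) on the diagonal and random
# off-diagonal interactions present with probability C. The system is
# locally stable when every eigenvalue has negative real part, which holds
# with high probability only while sigma * sqrt(N * C) < 1.
import numpy as np

def max_real_eigenvalue(n: int, connectance: float, sigma: float, rng) -> float:
    """Largest real part among eigenvalues of a random community matrix."""
    strengths = rng.normal(0.0, sigma, size=(n, n))
    mask = rng.random((n, n)) < connectance      # which links exist
    m = strengths * mask
    np.fill_diagonal(m, -1.0)                    # self-regulation
    return np.linalg.eigvals(m).real.max()

rng = np.random.default_rng(0)
n, c = 200, 0.25
weak = max_real_eigenvalue(n, c, sigma=0.05, rng=rng)    # sigma*sqrt(nc) ~ 0.35
strong = max_real_eigenvalue(n, c, sigma=0.25, rng=rng)  # sigma*sqrt(nc) ~ 1.77
assert weak < 0 < strong   # below the threshold stable, above it unstable
```

More edges (higher C) or stronger interactions (higher sigma) push the system past the threshold, which is the "more edges, more opportunities for something to go wrong" point in the conversation.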


Gary Bengier (33m 11s): So I deal with that topic, throwing it out there for conversation, maybe in the example of Dagny Taggart, who's the character who is the leader of the orbital base I described earlier. She's got this fabulous job; she's this tremendous business kind of leader. You may notice that her last name resembles some character in a book. Does anyone know Dagny Taggart, from Atlas Shrugged by Ayn Rand?



So Rand's main character is Dagny Taggart. And this is where my study of philosophy comes in. I think Ayn Rand created a book that encapsulates her ideas, but from a moral perspective she goes off the rails, because those characters lack any sense of understanding and compassion for an average person. They hold that the makers are supreme and can decide everything, as opposed to everyone else, who are the takers in that scenario. And Dagny Taggart, in a conversation with Joe, describes how, no, you can't think of it that way, because Joe says something that, actually, I'll give credit to David Krakauer; I stole the line from him once.

And Joe says, wait a minute, doesn't one Einstein make a university? The argument for the power of greatness: he's actually stating that. He actually says, isn't it a story of giants calling to their brothers? That's a line from Nietzsche, again this overman kind of theory. And Dagny says, no, that's not right. We stand on the shoulders of the giants before us; even Tesla had people in his lab. And the big moral failure is hubris.

And so human society moves ahead because we work together in community, and that's how we move ahead as a species. So that whole conversation I'm trying to raise there is a conversation about how we as humans can get across this chasm. And I think it's based upon that need for community, notwithstanding the benefits of the Einsteins.

Michael Garfield (35m 29s): Well, to the point of research communities: at this point I feel like I want to toss the ball to Caitlin, because this is where I feel the Alien Crash Site themes really start to take over. But you've got these two spaces in the geography of your story that seem very familiar to those of us working at SFI, one being Lone Mountain College, this location where your protagonist takes a sabbatical to cool his heels and think about artificial consciousness.

And then there's the zone, which is a kind of neither-here-nor-there prison area, a low technological lacuna in this advanced society. And of course zones are a preoccupation of Alien Crash Site, but also of SFI, inasmuch as it considers itself to inhabit a kind of theoretical zone in which things are not completely worked out.

Gary Bengier (36m 30s): So just before we go to the alien crash site, let me talk about the zone issue in particular. This was in part an exploration of the concept of how we use our technology, because this is a futuristic book, and I mentioned some of those technologies, and that sounds a little crazy and not very human. But what is fundamental to humanity? That's where the characters end up, in this place where they have to start anew. I don't know if you noticed that this book has many layers. Some famous character said something:

It's like an onion that has many layers. I think that was Shrek. But one of the layers, I don't know if you've noticed this, is essentially an allegory of the Adam and Eve story.

Caitlin McShea (37m 11s): I think there's no question as to why, when Joe finds himself suddenly rethinking his perspective on the world, it's out of a love for a woman named Evie, and everything, including her sharing food with him.

Gary Bengier (37m 23s): You have to start civilization over in some sense here, in isolation. Exactly. I think those are fine spoilers, because then where does that lead? Can one do well on one's own? There are a lot of dystopian books today, and they disturb me, because they imagine lots of apocalyptic things happening, and then the result is we all get our guns and we hunker down in a shelter and we protect our family and we're willing to kill other people.

There's this very dystopian feeling underneath that, and it's lacking a morality. Is that our future? I hope not. But I think the answer is that even if you started over, you'd basically build to something like we have today, and you would face the same problems. You would face the same problems of complexity, and you would have to deal with how we as human beings get along and work together. And that's going to let us build this intellectual property, which is the sum of human genius, and then we have to use it to improve everything and ultimately to give us purpose.


So that's kind of where the zone comes in. But Michael, I think you wanted to turn to the alien crash site.



Caitlin McShea (38m 46s): Yeah. And I mean, obviously technology and its relationship to humanity's future weighs so heavily as a central theme of this text, so it's clearly something you've thought about. The difference with your text is that it's a little more plausible than life after an alien visitation, at least these days, with all of our science out seeking the very exoplanets you described. So before we get there, I wonder if there's even a relation, but you had talked about how you have this sort of optimistic thought that humanity will, if they collectively attest to their damages, collectively resolve them, and that technology could help in that endeavor.

Obviously, technology allows us to communicate across very broad distances now. So when you think about a future that is crossing this chasm from individuality into a collective endeavor, do you think technology is a way through that? And if so, how do you imagine that to be the case? And then I'll ask you about aliens.

Gary Bengier (39m 38s): I think it's a way through. In terms of our biggest challenge, climate change, we're going to need to use our technology to figure out answers to that problem, and we're going to have to solve it. And we're going to live with the consequences in any case; the question is just how difficult they make human life in the many centuries ahead of us. But I think lots of speculative fiction overemphasizes technology. We're human, and we've had a million years of evolution, and those things that make us human will remain unchanged.

And so I think in the story you'll realize the characters relate as humans that we can relate to. So it's not a weird world. It just feels like today in many ways.

Caitlin McShea (40m 20s): But it does seem like there's this really lovely separation between the lovers, let's say, the two protagonists, and then the rest of the world, some of their friends who are engaged in some interesting social justice work as well. But there's something about the removal from that technological world that causes a sort of reinvigoration, or I guess a recalibration, of behavior that I think is really kind of at the heart of how we might address climate change too. We'll obviously need to technologically innovate ourselves out of it. But if we can conjoin collectively in a recalibration of the way that we behave in the world that we occupy, that would certainly be helpful too.

So I'm not sure that it's not a coupled solution.

Michael Garfield (40m 55s): Although it's also true that David Krakauer, in his recent reflection on his optimistic assessment of collective behavior in response to the pandemic, is like, well, I've kind of given up hope that we're capable.

Caitlin McShea (41m 9s): Maybe Gary hasn't.

Gary Bengier (41m 12s): Well, I think COVID has done a lot of things for us, in that it's caused a lot of people to reassess how they live and what things they value. Some people don't want to go back to those terrible jobs they had before. Just think of our lives pre-COVID, where we had too many things coming at us. We had too much social media. We had too many things to do. We're on this treadmill; so many of the people on this planet are on the treadmill. And that's our life. Is that inevitable? What will happen in the future?

When we have even more intrusive technology, we'll have less privacy, we'll have less headspace where we can actually think. Or not, because we're going to have to make that choice ourselves. So I think part of the zone, part of the book, was to maybe have one think about that in one's own life: what do we want as human beings? The answer is not just technology.

Caitlin McShea (42m 10s): And the "I" at the center of us is certainly not interested in technology. I mean, one of the first things the protagonist of this book does is separate himself from his Siri.

Gary Bengier (42m 23s): Let's turn to Roadside Picnic, which I thought was a great book. I read the novel recently. I love it because it cuts down some of the standard tropes about what will happen with first contact with some other species. The tropes include The War of the Worlds, where they come and kill us, or Star Trek, where, after the creation of warp drive, the Vulcans come, and they come in logic and peace. This is just a very amusing other take.

So the aliens land, they sort of have a roadside picnic. We must be like mere ants, not even worth trying to communicate with. And then they leave, and there's all this detritus left, which we can't understand because the technology is so advanced. So I think that's great. I think this is a book written by some really good Soviet science fiction writers, and I think it speaks to life in the 1970s in the Soviet Union.

Caitlin McShea (43m 23s): And your text, as you said, is multilayered. I think it's good to have these sorts of analog explorations of the contemporary time that one lives in. So yes, I think that the Strugatsky brothers are really touching upon that sort of Soviet lifestyle in this very terribly condensed economy, which is really all about what they cannot understand this other has done to their space, even though they want to, and even though it may have been friendly. Well, that was a fabulous synopsis. Now I have to ask you the Alien Crash Site question. Gary, at the risk of imprisonment, great personal injury, even death:

What object would you hope to uncover from an alien crash site? 


Gary Bengier (44m 0s): So my object is, I would like to discover the equivalent of a credit card. This is a credit card that's readable, decipherable; you can pull out the code, and we could have the world's computer scientists digging through it, trying to figure out what it actually says. And what we're really looking for is one fact. In our credit cards, it's the default rate, because the default rate for credit cards is about 1%.



And the reason why that's so important is that it tells us something about the species' honesty. Because if that rate is really low, then we can rest with some sigh of relief that, when and if they come back, they're not going to kill us. And if it's really high, then we'll know we can't trust them, and we'd better figure out how to arm ourselves, whatever we can do. Because I think, in any first contact, the trustworthiness of the other species is so paramount to how that relationship will go.

Caitlin McShea (45m 9s): And it seems that if we had that default rate, were we able to decipher it, as you, I think, optimistically suggested we might be, what I think we'd glean is how much these individuals trust each other, and we'd realize that there's some contention there. So if they're not even cohesive, in the same way that we're not cohesive: oh boy, watch out.

Gary Bengier (45m 29s): But isn't that, in some sense, perhaps an evolutionarily determined statistic?

Michael Garfield (45m 36s): Well, now I'm thinking about Paul Smaldino's work on the evolution of covert signaling, the way that telling a lie is kind of an evolutionarily stable strategy: it finds its level in a given society. But it's adjacent to the amount of time that you spend on the philosophical question of the mental, of where we are going to find an interior, a subject. I kind of expected there was a way to twist this, and maybe we can just ask you the same question twice.

There's a way to twist this so that it reflects upon your book and the question of, you know, some first contact stories, like Peter Watts's novel Blindsight, propose that what we're actually going to meet, when we meet an extraterrestrial intelligence, are their robots, basically. And that seems likely given our current trajectory; you talk about that, sending robots out for exploration.

Gary Bengier (46m 33s): I totally agree with that. But hopefully, presumably, the robots will be programmed by those species, and that programming will reflect their own fundamental moral code. And so, for humans, for example, where did altruism come from? There's been a lot of research trying to explain how that might arise through evolution, and it's a little bit of a quirky thing to think about how that happens. It's sort of like the springbok, which is a creature in Africa.

And it turns out that when the springbok are surprised by a lion or something, one of them jumps up and down in place, and all the other ones run like hell, and that one's the most likely to be eaten. So how does that happen? How does that trait get evolutionarily conserved? But in human evolution, somehow things like altruism and all of those traits have come about, so that says something about who we are.
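One standard formalization of how a costly, seemingly self-sacrificial trait like the springbok's display could persist is Hamilton's rule, rB > C. The numbers below are made up for illustration, not measured field data, and biologists also offer non-altruistic readings of springbok stotting, such as honest signaling to the predator:

```python
# Hamilton's rule sketch: an allele for a costly helping behavior can spread
# when r * B > C, where r is genetic relatedness between actor and recipient,
# B the fitness benefit to recipients, and C the fitness cost to the actor.
# All parameter values here are illustrative.

def altruism_favored(relatedness: float, benefit: float, cost: float) -> bool:
    """True when inclusive-fitness accounting favors the altruistic act."""
    return relatedness * benefit > cost

# A costly alarm display that mostly benefits close kin can still be favored:
assert altruism_favored(relatedness=0.5, benefit=3.0, cost=1.0)
# The same act directed only at unrelated strangers is not:
assert not altruism_favored(relatedness=0.0, benefit=3.0, cost=1.0)
```

The rule is one of several proposed routes (alongside reciprocity and group-level selection) by which evolution can conserve traits that look individually irrational.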

Caitlin McShea (47m 32s): And it seems to me that you think a lot about these sorts of anomalous, emergent evolutionary traits against competition. And so, I don't want to say that maybe a high alien default rate would suggest that actually these aliens are fine, that they trust because they know that there's some sort of a contract in place. But either way, it's like: buy your house, buy your spaceship, go visit Earth, even though it's totally boring. I don't know what I mean. You give the springbok as an example, but you seem to suggest that there's a lot more of it in humans. And I wonder why that would be. What is the distinction that might separate the human, who has that a little more and demonstrates it more often, perhaps like these aliens, from the animal kingdom from which it emerged?

Gary Bengier (48m 13s): I don't know. As I mentioned earlier, I tend to be more of an optimist. Think about the bubonic plague, the Black Death of the 14th century: we lost huge percentages of the population of Western Europe, and yet we continued. We persevere against all kinds of things, and this climate change is one more example, and I think we'll figure it out. So I'm hopeful that we'll come to that idea, that community is important, that the only way we solve these problems is to work together.

It's going to be complex. It's going to be a nonlinear century, in how we get from where we are across that chasm of lots of robotics and very few jobs. How do you find purpose? How do we make sure that people don't sink into using synthetic drugs and anesthetize themselves against the hopelessness of their lives? All those things, I think, are real issues. But I have some hope that we'll get there, because when I think about that world, we'll have lots of stuff. We'll have met all of the needs from Maslow's hierarchy. The world will be more difficult to live in, but, you know, there is a way to solve this.

This is not hopeless. Actually, there's a lot of hope for it being a lot better.

Michael Garfield (49m 25s): If I can, Caitlin, I think we have time for one more quickie. Elsewhere, in other interviews and in the piece that you wrote about your book for the Good Men Project, you bring up the fact that most people writing science fiction before the iPhone didn't include the iPhone, that this was a kind of singularity in its own right, that linear projections tend to fail us. But in the same piece, you also said perhaps we should focus on the much more highly likely march of existing technological curves.

So, as we would expect from someone affiliated with SFI, you appear to contradict yourself: you're both arguing for and against surprise, basically. And I'm curious, just maybe as a kind of parting volley: what would surprise you the most? What do you think cuts against your own expectations of probability about the future, be it the future that you write in this book or otherwise?

Gary Bengier (50m 27s): There is some dissonance in those answers. I think I'm arguing against what I see in a lot of science fiction today, which is, as I said at the beginning, to take everything to the absurd. And so much of the conversation is focused on those absurd things: uploading brains, all this kind of crazy stuff about robots. And that's just nonsense. That's not going to happen anytime soon, I would argue, because of evolution; as I said about the way we've evolved, it's probably not going to ever happen. Neuralink is going to be a tiny bit of technology to help people with various kinds of impairments, but it's not going to be mainstream.

We won't be doing that. But let's focus on where the technology will likely take us within that parameter. As we know from the Santa Fe Institute, it's so complex; there are lots of nonlinearities. So what kinds of nonlinearities? Well, Isaac Asimov, in the Foundation trilogy, talks about a world that covers 10,000 years. He, too, was writing about his own time, WWII and Hitler, and there is this crazy anomalous guy who disrupts everything.

And so the psychohistorians, with all their wonderful predictions, couldn't predict him, because of the nonlinearity. That is so true. We know that because of what we've learned at the Santa Fe Institute in understanding the theory. So for this century, I'm pointing you to what I think are the major issues that will breed a chasm, and we have to cross those. Can we as humankind do that successfully? I don't know, but I think that we have a bigger possibility of being successful if we focus all of these bright minds on the right problems and don't get distracted.

Caitlin McShea (52m 13s): What an endorsement.

Michael Garfield (52m 16s): Indeed. Yeah. Thank you, Gary, both for your support and for writing an interesting book. I feel like we really only just danced across the surface of this text, and I had a whole page of questions we didn't have time for.

Gary Bengier (52m 32s): We missed some other things. You know, as I said, Unfettered Journey: you can find it wherever you find books. It's won six awards, including best spiritual book of 2020. There's an entire part of the book that talks about how we find purpose in this rapidly changing world. So I would love to have more readers, to continue this discussion. And thank you, Michael and Caitlin.

Caitlin McShea (52m 55s): Thank you so much for coming, and for encouraging individuals to seek out whatever that purpose is, because I think the encouragement is really serious and necessary in a time when people are becoming sort of specialists in something that isn't necessarily their own fulfillment. It's very inspiring. And thank you for your alien crash site object. I think we'd never find evidence of any dark credits, but that's okay. We have a credit card.

Michael Garfield (53m 17s): I'm actually now afraid of a first contact scenario in which we find the credit card of an alien race with a 75% default rate. Like, why do you have credit? What is this?

Gary Bengier (53m 27s): This is wonderful. Thank you so much. Okay, great. Thanks a lot. This was really great. And Michael, yeah, we did cover some other topics that we haven't explored before. And I think we've also added a few more comments on the Santa Fe Institute than we normally get to. So that's great. All right.


Michael Garfield (53m 53s): Thank you for listening. Complexity is produced by the Santa Fe Institute, a nonprofit hub for complex systems science located in the high desert of New Mexico. For more information, including transcripts, research links, and educational resources, or to support our science and communication efforts, visit santafe.edu.