Can you write a novel using only nouns? Well, maybe…but it won’t be very good, nor easy, nor will it tell a story. Verbs link events, allow for narrative, communicate becoming. So why, in telling stories of our economic lives, have people settled into using algebraic theory ill-suited to the task of capturing the fundamentally uncertain, open and evolving processes of innovation and exchange?
Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I’m your host, Michael Garfield, and every other week we’ll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.
This week on Complexity, we bring our two-part conversation with SFI External Professor W. Brian Arthur to a climax — a visionary exploration of multiple scientific methodologies that takes us from the I Ching to AlphaGo, Henri Bergson to Claude Shannon, artificial life to a forgotten mathematics with the power to (just maybe) save the future from inadequate and totalizing axioms…
We pick up by revisiting the end of Part 1 in Episode 68 — if you’re just tuning in, you’ll want to double back for vital context.
If you value our research and communication efforts, please subscribe to Complexity Podcast wherever you prefer to listen, rate and review us at Apple Podcasts, and/or consider making a donation at santafe.edu/give. You can find numerous other ways to engage with us — including job openings for both SFI staff and postdoctoral researchers, as well as open online courses — at santafe.edu/engage.
Thank you for listening!
Join our Facebook discussion group to meet like minds and talk about each episode.
Podcast theme music by Mitch Mignano.
Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn
Related Reading & Listening:
W. Brian Arthur on Complexity episodes 13, 14, & 68.
“Economics in Nouns and Verbs” by W. Brian Arthur (+ @sfiscience Twitter thread excerpting the essay)
“Mathematical languages shape our understanding of time in physics” by Nicolas Gisin for Nature Physics
“Quantum mechanical complementarity probed in a closed-loop Aharonov–Bohm interferometer” by Chang et al. in Nature Physics
“Quantum interference experiments, modular variables and weak measurements” by Tollaksen et al. in New Journal of Physics
W. Brian Arthur (0s): Around the first quarter of the 20th century, people were hoping that there was a kind of axiomatic system from which we could derive all of mathematics, that this was timeless. That any number you could think of, such as pi or e, was given. And then there was a natural reaction against that that said, oh, how do you know? And people were saying, well, you can't really rely on things being given in a platonic universe.
What you could rely on is, is there a way to construct these? Is there some algorithmic way whereby you can see how the digits of pi may follow each other? Although, if that's the case, you'll never know what lies at the end of pi. At any time, you might know a finite number, but you don't know what lies indefinitely on the other side. There's an openness in what those digits are.
So people began to see mathematics and possibly physics as not being given, but as something where you had to ask what processes generate numbers, and the best you could say is we can have a well-defined method to derive those digits of pi, but we don't know exactly what they're going to be.
Michael Garfield (1m 53s): Can you write a novel using only nouns? Well, maybe, but it won't be very good, nor easy, nor will it tell a story. Verbs link events, allow for narrative, communicate becoming. So why, in telling stories of our economic lives, have people settled into using algebraic theory ill-suited to the task of capturing the fundamentally uncertain, open and evolving processes of innovation and exchange?
Welcome to Complexity, the official podcast of the Santa Fe Institute. I'm your host, Michael Garfield, and every other week we'll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe. This week on Complexity, we bring our two-part conversation with SFI External Professor W. Brian Arthur to a climax, a visionary exploration of multiple scientific methodologies that takes us from the I Ching to AlphaGo, Henri Bergson to Claude Shannon, artificial life to a forgotten mathematics with the power to (just maybe) save the future from inadequate and totalizing axioms.
We pick up by revisiting the end of Part 1 in Episode 68. If you're just tuning in, you'll want to double back for vital context. And if you value our research and communication efforts, please subscribe wherever you prefer to listen, rate and review us at Apple Podcasts, and/or consider making a donation at santafe.edu/give. You can find numerous other ways to engage with us, including job openings for both SFI staff and postdoctoral researchers, as well as online courses, at santafe.edu/engage.
Thank you for listening. In the history that you present of economic thinking, all of the math is from before the publication of The Origin of Species. And the reason that you keep turning to biology seems to be that biology made this transition in the 19th century between basically a static taxonomy, a version of the world in which all of the species have been created and are timeless and permanent, to a world in which species go extinct and species are created in an ongoing sort of innovation process.
So likewise, algorithmic thinking seems to emerge through a kind of Darwinian paradigm shift: you go from this notion of the world as essentially an emanation of these eternal forms in the mind of God to a world in which everything is in flux. And the resistance that the theory has experienced in its various implications has a lot to do with the visceral discomfort that many people feel with the idea that the self, and various other things that we take for granted as static, as solid ground upon which to base our understanding of the world, are in fact constantly in flux. You make a point that all of this math biases us towards equilibrium thinking.
That makes sense if the world that you're living in isn't changing in unexpected, freaky ways every day, but then suddenly you need a non-equilibrium model because the world that you're living in is clearly not in equilibrium. So that's where we get this new epistemology of the mutable self. You know, the Bucky Fuller "I am a verb," the Alfred North Whitehead process philosophy. And again, this was recently formalized in a paper I was bringing up on the show, with David Krakauer as lead author, "The Information Theory of Individuality," where he showed that the category of the individual is itself something that emerges out of these relational processes: its informational integrity through time, or its informational scaffolding by its environment. So it's just funny how it's not just that the theories disclose new worlds, but that they themselves seem to be the precipitates of our interaction with the world, and that in a shift to an algorithmic economics, one that allows us to explain where new technologies come from, and the way that our various relational processes lead to unexpected phenomena, and the way that this helps us strip ourselves of the bias of equilibrium.
That's just a riff that I'd love to pass back to you.
W. Brian Arthur (6m 34s): Okay. I think you're getting all the important things. Just one footnote to make sure the facts are straight. Darwin published The Origin of Species in 1859, and coincidentally the mathematization of economics really took off in the 1870s, so after Darwin's ideas. In fact, Alfred Marshall, who was part of that process of mathematizing economics, famously said the Mecca of economics is biology.
He had read Darwin. He wanted an evolutionary economics, but he couldn't see how to do that evolutionary economics. So he was very much instrumental in this equilibrium view. And economics doesn't deny change. It just says, okay, we'll have a screenshot of the economy here. Then maybe firms electrified and stopped using steam engines.
So we have another screenshot. We go from this equilibrium to that equilibrium. It's rather like saying that children grow up, and at age two they look like this, and at age 11 they look like something else. It's fine, but it doesn't really explain any process of how things develop. And if you're looking at the development of a system of arteries or of neural systems, it doesn't work in finite snapshots.
And I think Marshall knew this in the 1890s, but there wasn't much he could do about it. And there is a bias in standard mathematics. It's not quite insurmountable; all of this is just awkward. Standard mathematics, as I keep saying, has been wonderful at explaining relationships in economics: how incentives arise, how different forces in the economy bring forth certain behaviors, how behaviors are related, how prices in one sector, say steel, might be affected by prices of oil. Standard economics, standard mathematics, allows us to understand all of this extremely well.
What it doesn't do is show us how things evolve and come about. So just a couple of things. Yes, standard mathematics tends to bias us towards a system that doesn't have new variables coming along. You set up a model with variables X, Y, Z. Maybe some time further on, a new variable might be invented or come along, say P, and then another one, Q, and R, but it's kind of awkward and hard to do that in mathematics.
So I'm not saying processes can't be handled in algebra, but generally speaking, it's awkward. Just to amuse myself, I wondered what it would be like to write a novel using only nouns. Somebody told me that Virginia Woolf had actually experimented with that. And then I started to wonder, and I don't want to be too mean here, but what would it be like to sort of say "they kissed" using only nouns? And the way I would do this, as you know, is to let variable P be the position of his lips in three dimensions, and variable P-prime the position of her lips.
And then you would say something like: as t tends to some time capital T, the absolute value of P minus P-prime goes to zero. So you can do it, but there's a kind of, for God's sake, why didn't you just say they kissed? And I don't want to make fun of mathematics too much, but what I'm emphasizing is the different languages of expression. Algebraic mathematics and other mathematics would give you different views of the world.
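For the record, the noun-only "kiss" Arthur describes can be written out. A sketch, using his P and P-prime for the two positions:

```latex
% "They kissed," in nouns only: P(t), P'(t) are the lip positions in R^3
\lim_{t \to T} \left\lvert P(t) - P'(t) \right\rvert = 0
```

Which is exactly the point: the algebra states a relation between quantities, where the verb states an event.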
You can shoehorn all of these things into geometry, though it's harder to shoehorn four and five dimensions, or n dimensions, into geometry, and that's part of why algebra took over. And it's hard to shoehorn processes with novel structures appearing into standard algebra. Algebra's good at saying this grows and this diminishes, this is up, this is down.
And so on. One thing I want to say as you're talking, and this is probably not in very logical sequence, but you mentioned that there's a growing idea, or set of ideas, in science and in philosophy, that you would know about, I'm sure: a movement towards saying things sort of exist, but hang on, they're changing.
So you can say planets exist. But if you look over a longer time, you can't say, well, there is such a thing as Jupiter through all the eons of the solar system; it's that Jupiter's in process. It's changing. So is the sun, so is the earth. And so there's something you mentioned called process philosophy, dating from Alfred North Whitehead and before. And there are modern exponents of that.
John Dupré is very much in this area. There are some really thoughtful people thinking about the world not so much as having fixed objects, but that everything in the world is changing and in process. For me, I'm thrilled and amused that this brings us back to ancient Chinese philosophy. I just want to mention that I'm a huge fan of Taoist philosophy.
And in that philosophy, they'd be reluctant to say any pattern is there for good. In fact, everything was evolving, changing, new things forming, one thing bringing into being the next thing, and so on. One of the oldest books on that, the I Ching, is basically the book of change. So somebody could say, well, all right, is algebraic mathematics, with all its equilibrium and its fixed variables and its nouns, is that over? Not at all. It may appear that Jupiter isn't changing that much, maybe not in the next million years or so, so it's convenient to look at things as fixed and growing or changing in amounts. However, over a longer term, that's not much good when a new industry arises, computation arises.
What set of variables, for example, if you were looking at computation in the 1970s, would tell you that the cloud becomes important three, four decades later? I don't know what would tell you that telecommunications becomes important. So these things matter. Computation in the 1960s and seventies was about machines, and those machines processed numbers industrially.
So this was called information processing. And then, weirdly, the erbium-doped fiber amplifier gets invented around 1987, and suddenly photonic information transmission becomes available industrially, and on a similar sort of timescale, really good satellites. So suddenly, by the early nineties, the scene has changed. It's like in a dream: before, there's telecommunications possible over phone lines, and they're slow.
And I can even remember those days. And suddenly there's photon transmission, there's fiber optics, and there's satellite transmission. So suddenly you can be an architectural firm in Seattle and have your detailing done in real time by a back office in Budapest. You could be a car manufacturer in Detroit and have parts supplied in China. That changes things; you get offshoring.
You get a completely different economy that leads to its own new problems, but there's no way in 1970 or 1980, even, in my opinion, that you could see any of that coming. So questions of change and formation, in a slightly longer term than just looking at the next three years, are extraordinarily important, because the economy is always exploring and has processes of finding completely new things and restructuring itself.
I want to mention, and this is still on the subject of your questions, I hope: is everything I'm saying just a big downer? Well, we have to be careful. We have to be careful about using standard mathematics in economics. If you think of it as focusing on certain things, you can see quantity relations, how things are related, how things affect each other.
Wonderful, but it leaves out the idea of new structures, and of processes affecting other processes and inhibiting other processes. So is there any mathematics that would take care of that? There are such things as process algebras and so on, but that's not what we use. So my answer is, I began to realize slowly: oh my God, there is, of course. It's computation.
And I don't mean just making simulations and crunching numbers. Let me give my take on computation. I think not so much in terms of computers or cranking through things, but in terms of thinking algorithmically. Algorithms are set up so that you can have processes execute in this way and watch the process executing.
And after a while, if such and such is greater than, say, if the number of iterations is greater than a hundred, then call a sub-process; if it's not, call a different process. So algorithms are set up basically as sequences of events that call or inhibit other sequences of events, and often those processes or events form some new structure.
And this is not just because they're algebraically based; most algorithms are heavily equation-based. It's that they have if-then conditional branching, as it's called. If you have done enough of such and such, and you can track a process, just like it might be an industrial process, you can then say, boom, we've done enough of that, let's switch over to a different process. I don't want to see the entire world that way.
That would be a bit too mechanical. But what I will say is that this gives us an alternative language, one in which you can express verbs. This action happens, that action triggers that action, that then inhibits those actions. And if you have enough of that, then something else happens. So algorithms are stories about how processes set in motion other processes, or stop processes from happening, but they're not deterministic.
You can feed in some form of randomness, and you might find you're going down this path rather than that path. You could in principle do some of this, or quite a bit of it, algebraically, but it'd be like saying we can do geometry pretty well in three dimensions, so let's try to do it in six dimensions. In principle you could, but the language is awkward. So what you can do with computation, or I would prefer to say algorithmic language or algorithmic mathematics, is to track processes and allow those to create new structures.
Those new structures might create new processes, all within the algorithm working itself out, all endogenous. So you get endogenous change. I would go even further. What I noticed about computation is that computation, or algorithmic processes, respond in a magnificent way to context. So you can say, okay, I'm using an algorithm to steer my aircraft.
This is actually what happens. Fly-by-wire, it's called in the aircraft business, the wires being computer connections. So an aircraft may be traveling along at, say, 35,000 feet. Let's say it's an Airbus or whatever. And it's responding to outside buffeting, it's responding to maybe being rerouted, checking where weather is, checking its current position, and automatically, as a process, adjusting.
And a lot of that's done mathematically, and that's what my background is, very much that sort of control. But if something happens, if some condition changes, if there's a nearby aircraft unexpectedly, the context changes. So algorithms have if-then conditions. The if part says: if I'm in steady air, do this. If there is weather ahead in 50 miles, trigger the rerouting algorithm. If something else happens, if I'm held up near an airport, go into a holding pattern.
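The conditional structure being described here can be sketched in a few lines of Python. Everything in this sketch is illustrative: the condition names, thresholds, and dictionary keys are hypothetical, not any real avionics API.

```python
def autopilot_step(state):
    """One illustrative control cycle: the if-part reads the context,
    the then-part hands control to a different process.
    All names and thresholds are hypothetical."""
    if state["nearby_aircraft"]:
        return "evasive_maneuver"      # context changed unexpectedly: new process
    weather = state["weather_ahead_miles"]
    if weather is not None and weather <= 50:
        return "reroute"               # weather ahead in 50 miles: trigger rerouting
    if state["holding_requested"]:
        return "holding_pattern"       # held up near an airport
    return "steady_cruise"             # steady air: keep adjusting mathematically
```

The point of the sketch is only that the algorithm is a process watching its context and calling in other processes when conditions change.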
If the price of tea in China changes, then do such and such. So, what algorithms lose, and I want to be very clear on this, is that you can't easily see the logic working out. Things are more complicated. What mathematicians love about mathematics is you set up the system, just like a sort of Sherlock Holmes cast of characters. You set up the rules for the relations and the equations.
Then you start cranking and start manipulating the equations. If this is true, then that has to be true. Another few lines of deduction, then that has to be true. And you get these moments in mathematics: oh, so that's how that works. Then you get this structure. Whoa. And then that leads to that. Wow. And then that explains why such and such is in relation to that. And you get this tremendous high from doing this sort of algebraic standard mathematics.
You can't get that high with algorithms. They just play out, probably inside a computer or maybe inside your head, but you don't see that. So that's a big disadvantage of computation. You could say, equally, that with algebra you can't see the geometry working out on some graph unless you plot it. So computation, or algorithms, work their way out.
We don't see what's going on, and so on, but they have this wonderful compensation: you can make them arbitrarily detailed. So if you need more detail here, you can put more detail into the algorithm. That's an art, but all mathematics is art. You can track the context changing. So an algorithm may be partly creating its own context.
Imagine an algorithm that's looking at cars in dense traffic. Traffic doesn't exist per se. Cars exist, and cars are responding to cars around them in their neighborhood, slowing down, speeding up, passing other cars. And the local pattern the cars are creating, we can call that traffic. So the cars are creating the traffic, and the traffic is causing the cars to respond. The traffic, if you like, is the context that the cars are creating. This, I think, is at the essence of complexity and why it applies to the economy: individual objects, in this case cars, are creating a fluid context that the individual elements have to react to. That's algorithmic. You could do it, possibly with a bit of stress and strain, algebraically, but with if-then conditions you can do it beautifully.
If some car in front or behind gets too close, then move to another lane, or something like that. It's harder to do that in algebra; it could be done in principle, but we can do this algorithmically. So algorithms are much better at detail, and detail can be arbitrary. We don't have to say all players, or agents, in the economy are identical. We don't have to say they're rational.
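The cars-and-traffic picture is a standard agent-based exercise. Below is a minimal sketch along the lines of the well-known Nagel-Schreckenberg traffic model (my choice of illustration, not anything from the conversation; the rule details and parameters are assumptions). Each car follows purely local rules, and the "traffic" is nothing but the pattern those rules create, which each car then reacts to.

```python
import random

def traffic_step(road, v_max=5, p_slow=0.3):
    """One update of a circular one-lane road.

    road[i] is a car's current speed (an int) or None for an empty cell.
    Each car accelerates, brakes to the gap ahead, sometimes slows down
    at random, then moves. Jams and stop-and-go waves emerge from these
    local rules alone: the cars create the traffic, the traffic is the
    context the cars respond to.
    """
    n = len(road)
    nxt = [None] * n
    for i, v in enumerate(road):
        if v is None:
            continue
        gap = 0                                   # count empty cells ahead
        while gap < v_max and road[(i + gap + 1) % n] is None:
            gap += 1
        v = min(v + 1, v_max, gap)                # accelerate, then brake to gap
        if v > 0 and random.random() < p_slow:
            v -= 1                                # random slowdown (driver noise)
        nxt[(i + v) % n] = v                      # move the car
    return nxt
```

Because each car moves at most as far as the gap ahead of it, cars never collide, and the count of cars is conserved from step to step.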
We have models to say here's how they might explore and make sense of any changes in pattern; they respond to their immediate context. So changes in pattern and new structures can arise endogenously with algorithms. And I think, above all, algorithms can generate new things that we haven't seen before. I don't know what system of equations you could write down to teach some system how to play Go.
And yet with AlphaGo Zero, which is essentially a suite of algorithms, you're basically saying, okay, I'll tell you the rules for the game of Go, but you're going to have to develop some smart and inductive way, maybe over days or weeks of computation, to play a good game, not with human input, but with the actual intelligence of that game arising by playing against some other version of yourself.
And so AlphaGo Zero is based upon that. You can't get that easily with equations. Somebody might say, well, in principle, theoretically, you could, but it would take many, many universes, an almost infinite amount of time, to arrive there. And yet we can do this algorithmically. So to summarize here, I'm saying that algorithms are a natural language for process.
You could look at what I've been calling processes in biology as algorithms. That's not to say algorithms describe everything. They don't describe human emotions and things like that, and in the economy, human behavior, not that easily. But they provide another language in which we can express verbs working together. So my argument is that science itself is shifting. Science shifts.
I believe science shifts when new instruments come along: literally, X-ray crystallography plus the mathematics that goes with it. So suddenly, by the 1950s, we can look at complicated organic molecules; in 1952, 1953, DNA. And suddenly we can use this new instrument to see how they're composed and understand what they do. They're the basic components of genetics and life.
And that arises out of some new instrument of understanding. As instruments of understanding come along, they bring new insights, and different types of mathematics bring different insights. Geometry's wonderful, and we still use it; we graph everything we can, especially in statistics. And algebra is wonderful, but it's limited to quantities and amounts, and it largely rules out verbs and processes.
They're much harder to do. Verbs and processes are natural in what you might call algorithmic mathematics. And I think the person who understands this best connected to Santa Fe is Gregory Chaitin. I've been following his work; I've known him for several decades through SFI. So it's connections with SFI that really count here. So then the proposition is this.
Suppose we changed from geometry to algebra in the 1870s, and on for the next hundred years or more. What did that bring economics? I'd argue quite a lot. What did geometry bring? Quite a lot. But probably we want to include verbs and processes and events, which have been largely ruled out. Then we have to open up to thinking algorithmically and using computation.
There's a lovely quote from Edsger Dijkstra, the Dutch computer scientist, from maybe 50 years ago or so. Really smart guy. If I can get the quote right, he says: computer science is no more about computers than astronomy is about telescopes. And I love that, because basically what we're learning, and I'm influenced here by Stephen Wolfram and others.
What we're learning is that algorithms occupy a whole world of their own. Turing didn't call them algorithms; he called them methods. Shannon called them algorisms. The modern word is algorithm. But they inhabit a world that's infinite, a world of things changing and giving birth to things and calling in other algorithms and stopping other algorithms.
This is very close to how neural processes work. I'm not saying they're perfectly algorithmic; that's a huge debate in neuroscience. But what is true is that science is shifting from an amount of this and an amount of that in colored test tubes, to seeing the world, as you were pointing out, in terms of things forming, and not just things forming, but things that can't be fully nailed down.
John Dupré is very fond of pointing out that there's no such thing as a thing, that everything changes over time. So even the table in your study is a process; it's changing. And I think there's a lot to be said for that point of view. But science itself is discovering a new language. You could call this computation; I don't like that term, it's too much about machines.
We could call it algorithms, but that sort of smacks of Google and Facebook. I like the term algorithmic mathematics, which is: how do you do the logical expression, which is what makes it mathematics, including Boolean logic? And you have a language now to talk about events, processes, things unfolding as the computation works out.
And those go along with conditional branching, if-then conditions. So, systems like this. And I think this is the essence of complexity: complexity is about systems that are interacting with and creating their environment or their context, or part of that context, and then reacting to that and thereby changing. And this may bring up new structures, and this gives a language in economics for looking at structural change and formation.
Michael Garfield (34m 18s): So the last thing I want to bring up with you on this call is about that structural change. Listening to you speak on this stuff, I'm reminded of Marc Andreessen's famous article, "Why Software Is Eating the World," and the notion that our technologies and our epistemic frames are themselves evolutionary products. And, you know, we live in a world that is both informed by and sort of creating, and therefore amplifying, this notion that everything is an evolving process.
I want to, I guess, ironically land this conversation where it seems like this is all taking us, which is fundamental uncertainty. And hopefully by the end of this call, we can agree that what we've done in a discussion of algorithmic mathematics and its economic implications is sort of save the future, or at least save the openness of the future. And so I want to tie a bow on this with a paper that you sent me, a Comment in Nature Physics by Nicolas Gisin on how mathematical language shapes our understanding of time in physics.
And again, just because this conversation is so rich with historical name-dropping, I think it's worth it: he starts this comment by talking about the 1922 debate between Albert Einstein and the philosopher Henri Bergson, as well as, around the same time, the argument between the mathematician David Hilbert and L.E.J. Brouwer. You know, the notion that the physics that we're working with and living in now is akin in some respects to this kind of algebraic economic thinking.
Inasmuch as the winners of those early 20th-century debates were advocating an axiomatic mathematics in which numbers are these sort of timeless platonic objects, whereas now that we live in a world that is computationally enriched, capable of running these enormous simulations and making predictions we can't understand, et cetera, we're coming back around to the potential value of an intuitionist mathematics in which numbers are themselves processes.
It's much like the biological species; it seems as though it contains this eternity. Like Gisin says in here, basically our choice is to put all of the randomness at the singularity at the big bang, and then have all of the laws and characteristics of physics specified in what Terence McKenna called the one free miracle, or we can accept that there is a finite amount of information encoded in real numbers.
And that there's a kind of fundamental randomness. And I'd love for you to just riff on this article that you sent me. He says, basically, real numbers are marvelous tools that should not be abandoned; however, their practical use should not blind the physicist. After all, their use does not force us to believe that real numbers are really real. In other words, one should not confuse the epistemological usefulness of classical mathematics with the ontology, which might well be better described by intuitionist mathematics. And in so doing, he argues, we rescue the idea of indeterminism, of the openness of the future.
And also we defeat the law of the excluded middle; we allow again for propositions about the future that can be somehow neither true nor false. So it seems like what you're pointing toward here is actually part of a larger gesture in thinking about math and science and philosophy toward, first of all, an epistemic humility, and then also a view in which it's not just that cars and traffic demonstrate emergent properties, but the very bedrock of modern thinking is itself mutable and evolutionary, in this way that allows us to save time itself, to restore a notion of an evolving universe to physics.
W. Brian Arthur (38m 36s): Wonderful. And I think that you've answered your own question much better than I could. So you have a very good understanding of this. My comment would be to put it in a larger context. If you take a fairly large view of how science has developed, and I would define science as our attempt to understand and explain real-world phenomena, whether they're physical or human-centered or society-centered or chemical.
It turns out that, I think, Newton in 1687 or whenever it was brought out such a staggering simplification, that planetary orbits could be easily explained using four equations or so, that thinkers and philosophers almost got intoxicated by those ideas. And by the 1730s and 1750s, something called the Enlightenment set in, very heavily based on Newton's breakthrough.
And people started to think that everything in nature was explainable. I think of the stanza in Alexander Pope's poem An Essay on Man, 1733: "All nature is but art," meaning artifice or technicality or technique, "all nature is but art, unknown to thee; all chance, direction, which thou canst not see," and so on.
And basically, if we only understood the basic rules, and we understood the starting point, we could predict everything. So from that rather simple system, and that really brilliant breakthrough of Newton's, we began to see the world as highly ordered: if only we could understand its art or its technicalities, it would be known to us, and everything was explainable and everything was given.
And possibly God had given us that. And so a platonic sort of world was justifiable: the world exists, it's given, it's platonic. And certainly on up to the 20th century, we assumed that if only we knew the starting conditions, à la Laplace, and we knew the mechanisms, then we could predict everything. We got in for a series of shockers in the 20th century.
I'm glad to say. Gödel showed that there was no axiomatic system that could encompass all truths in, say, number theory or arithmetic, and with quantum theory and so on, we began to realize that not everything was predictable. So I think there's been a big shift in the sciences. I'm not a philosopher, so I'm not very articulate in this, but I would say that maybe around 1900, or the first quarter of the 20th century, when David Hilbert, a superb mathematician, was dominating, people were hoping that there was a kind of axiomatic system from which you could derive all of mathematics, that this was timeless. That a number, any number you could think of, such as pi or e, was kind of given. And then there was a natural reaction against that that said, oh, how do you know? And people like Brouwer were saying, well, you can't really rely on things being given in a platonic universe. What you could rely on is, is there a way to construct these? Is there some algorithmic way, some constructive method, whereby you can see how the digits of pi may follow each other?
Although if that's the case, you'll never know what lies at the end of pi. At any time you might know a finite number of digits, say 5 trillion digits of pi, but you don't know what lies indefinitely beyond them. You might say it exists Platonically, but there's an openness in what those digits are. So people began to see mathematics, and possibly physics, as not being given, but as something where you had to ask what processes generate numbers, what processes bring about certain numbers. You don't know in general how those processes will work out; the best you can say is that we have a well-defined algorithm, a well-defined method, to derive those digits of pi, but we don't know exactly what they're going to be.
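The openness described here can be made concrete in code. Below is a minimal sketch of one constructive method for pi, Gibbons' unbounded spigot algorithm; the choice of algorithm is an editorial illustration, not something named in the conversation. The process is entirely well-defined, yet at any moment only finitely many digits have been produced.

```python
from itertools import islice

def pi_digits():
    """Yield decimal digits of pi one at a time, forever.

    Gibbons' unbounded "spigot" algorithm: a well-defined process that
    produces digit after digit, though at any given moment only a finite
    prefix of pi is known.
    """
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit is now pinned down
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

# Take a finite prefix of the infinite stream.
print(list(islice(pi_digits(), 10)))  # -> [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The generator never terminates on its own; the caller decides how many digits to draw, which is exactly the "finite knowledge of an open process" being discussed.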
Now, that's my take on this. We're finding similarly that you can set up, in economics or indeed in physics, the rules of the game, but you can't predict in advance what those rules are going to lead to. This is famous, actually, in the theory of cellular automata, and I think here Wolfram has done us a great service. He shows this even with very simple rules in a cellular automaton; you don't have to get very complicated.
These are systems that compute themselves forward. There are a lot of dots or squares that are black or white, and in the next iteration each square is black or white according to some rule about what its neighbors were before. So that's a cellular automaton. And what we've learned about that is that you can't predict what you're going to see in a billion and one iterations in any shorter way than just allowing it to happen.
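An elementary cellular automaton of the kind described here fits in a few lines. This sketch uses Rule 30, a standard example chosen for illustration (no particular rule is named in the conversation): each cell's next color depends only on itself and its two neighbors, yet the only way to learn the pattern at step one billion is to run all billion steps.

```python
RULE = 30  # the rule number's 8 bits encode the outcome for each
           # of the 8 possible (left, center, right) neighborhoods

def step(cells):
    """Advance one generation; cells is a list of 0/1 values (ring topology)."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        nxt.append((RULE >> neighborhood) & 1)              # look up that bit
    return nxt

# Start from a single black cell and let the rule unfold.
width = 31
row = [0] * width
row[width // 2] = 1
for _ in range(15):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)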
You can regard this as messy and horrible, or, if you've got a different temperament, which you seem to have and I have, you can regard it as a wonder: you can think of certain rules by which the universe or the economy works, but there's no way to see what those rules will lead to. When I started to investigate technology, I realized that new technologies are combinations of previous ones. Think of, say, the laser printer, which is a combination of a computer processor directing a laser to write images on a Xerox drum.
So that combination of previous technologies came along around 1972 or so at PARC, where I'm standing at the moment. That's a well-defined process. But you could ask, okay, in the year 1000, or in the year 1843, when railways were just getting underway, what is all this going to lead to? And again, you'd have to say we can't tell. If new technologies are created out of existing ones, and those become building blocks for further technologies, we can't say very far ahead what sort of new technologies are going to arise.
And when? Again, it's like saying I don't know what the future digits of pi are going to be. No matter how many we know, some Guinness Book of Records number of digits, there's an infinity we don't know and never will, because all we can say is that there's a process for discovering them. So some people say something like this: maybe the future exists somewhere.
Maybe out there, if you want to posit a God or something, the future is known. But actually, I don't think so. I think the future itself is being created. You might even know the processes by which it's being created in the economy, and yet you can't predict it. Personally, I like that. The architect Robert Venturi, in the 1960s, was really put off by the simplicities of the Bauhaus and _________ and others saying everything has to be minimal, functional, highly ordered, highly structured, axiomatic, if you like.
And he detested that. So he wrote a book called Complexity and Contradiction in Architecture. He wasn't talking about complex systems; he was talking about non-simplicity. And he contrasted what he called "prim dreams of pure order" with what he called "messy vitality." I think I'm in the messy vitality camp.
I wouldn't like it if, were I infinitely smart, I could switch on a detective show and predict the ending ten episodes hence. I like the idea that the universe is unfolding. I don't know if that answers your question, but that's roughly the best I can do.
Michael Garfield (48m 58s): I guess, just in closing, it occurs to me that just as you are comfortable with both economics in nouns and economics in verbs having their place, having their function or their utility, and you argue that there isn't one lens to rule them all, it seems important not to give listeners the wrong idea here and just say, well, okay, software's going to eat the world, everything's going to be software, and the algorithms are going to win against the equations.
Comparably, it seems like there may be a way to at least satisfy everyone, or at least mutually enrage everyone: if you follow quantum physicists like Yakir Aharonov or John Wheeler, and the empirical evidence that they've offered, there is some sense in which we are capable of knowing at least certain details of the future, or in which our current observations are capable of influencing things in the past. And here we're well off the party line of SFI.
But I think it's worth seeing that perhaps in research like that there is a way to reconcile fundamental uncertainty and indeterminacy with the notion that at least some details of the future are already encoded, in a process-based, dynamic, relational way, in the now that an intuitionist mathematics restores to physics, while we never get the infinite knowledge that would allow us the God's-eye view.
In whatever sense the future exists, it is collaborating with us on the past.
W. Brian Arthur (50m 43s): I would say so. The economy itself (I'm just talking about economics here) is vast. It could be one country like the U.S., or it could be the whole world system, or a continent. The economy is vast and extraordinarily complicated, and it's completely and utterly dependent on human behavior. And it is detailed.
Fractally, you know: this is happening in this industry, that's happening in that one, these firms in the industry, these departments within the firms, right down to human behavior. So the economy is an enormously complicated system, and we have different ways, or instruments, of understanding it. We can examine history. We can examine statistics. We can do a lot of armchair theorizing and thinking through. We can do some mathematics.
And up until around 1970, economic theory was regarded as all of those, different instruments for looking at this enormously complicated system. It was a bit like saying the brain is enormously complicated and we can look at it through MRI or some other instrument. For the economy we had half a dozen ways that were highly respected, and then this hyper-mathematical approach came along, partly due to Samuelson and others. I can respect it.
But I do remember, I'm old enough to remember, when it was controversial. Now it's taken over, and people seem to assume that the only way to theorize about the economy is to do it in equation form, using what amounts to fairly standard, not terribly advanced, but good mathematics. That's led to a lot of insight and a lot of cleaning up of the logic, but it only shows you certain things, the way X-rays show you certain things about the sun that some other method would not.
So what I'd like to see is respect for many ways of looking at the economy, including armchair insight, historical comparison, statistics, mathematics itself, algebraic analysis, and now something else: a new form of mathematics, algorithmic mathematics, looking in terms of logical, formalized processes.
But none of these can capture an actual economy. It's like saying, you know, you have a friend or a spouse, and you ask, what's their personality? Well, you could put that into a Myers-Briggs system or some other system, Jungian analysis, but you will never know the full complication of that person. And personally, I cheer that on. So the bottom line here is that I would like to see standard mathematics treated as one very good way to look at the economy, but it looks in terms of things, things that are interrelated, and objects that grow or don't grow.
Now we have another one: computational economics, or, as I prefer to think of it, algorithmic theory. And that shows you different things. You could say that a couple of centuries ago we went from geometry to algebra; algebra added onto geometry, but it didn't replace it totally. And I think algorithmic thinking, thinking in terms of processes and in terms of interacting events, illuminates things that we couldn't look at earlier, and it adds on. So it's not going to replace the previous methods of doing theory.
I would like to use a word: I'd like to have a catholic approach. I don't mean religious; I mean a wide approach. So yes, we will look statistically. Yes, we can look algebraically. And yes, let's experiment and see how these systems work, systems that create new things endogenously. Let's have a look at how, algorithmically, the economy works.
We'll see different things; we've got a wider view. But I don't think there's ever an end in view, and I hope not, of fully understanding an economy. In fact, science keeps changing, and this is true in physics: as the instruments of understanding change, be they mathematical or mechanistic, the science changes, and then the understanding changes. And I don't think there's an end in sight here, and I hope not.
Michael Garfield (56m 7s): Wonderful. So, maybe this is something we leave out, but it occurs to me that in describing this open-ended, recombinant approach to scientific methods, we're developing a syntax, an integrated, pluralistic approach. And that begs the question: if we're moving from nouns to verbs, what comes next? Adverbs? That almost gets back to more of a Goethean kind of science of the properties of things, you know, weaving the observer back in in a more formal and rigorous way.
W. Brian Arthur (56m 50s): Well, in an unusual bout of humility, I'll simply say: I don't know. But I think it's wonderful to note that with complexity economics, which really started at Santa Fe and has now become a very large approach, we're basically assuming that the economy is not deterministic, highly ordered, and merely probabilistic; there's fundamental uncertainty. But we can model how people proceed under fundamental uncertainty.
Like, I'm about to go for lunch. I don't know what will be there for lunch, but don't worry, I'll figure something out. So we can model that, in no small part thanks to my colleague John Holland, who was very much an influence on me and other people at the Santa Fe Institute. Just as with training some algorithm (AlphaGo started off just knowing the rules, not knowing how to operate), you can learn.
And that's how people do learn. Maybe not exactly in AI or machine-learning terms, but there are well-known, well-studied ways people learn from past events and so on. So what I'm cheering on is the idea that we're backing off from seeing the economy as a highly ordered, predictable system with this and that form of perfection, and moving to seeing the economy as, in the longer term, not very knowable, but to quite a degree understandable. It's more a matter of seeing the economy as a friend rather than as a machine.
The economy is an organism that keeps changing and morphing. I like that, though some people might find it an uncomfortable view. You keep trying to understand more of it. I don't think we will ever fathom the economy, just as we will never fathom what makes music music. We can say, well, it's due to this or that, but it goes on forever. Literature's the same. These are all attempts, if you ask me, by us human beings to illustrate our worlds, to make them real, to tell stories about them. And I think that all of theory is a story. What's the story of planetary orbits? How do they arise? How do they get there? Now we're looking at stories of how things form, not just how things are. And personally, I like that.
And how did planetary systems come about? What happens when a supernova explodes? How did the whole system of molecular biology arise? All of this is complicated. How did brains in mammals arise? These are not fully explained, and probably never will be. But we are shifting from looking at ordered, mechanical systems to looking at systems that are fundamentally unknowable, fundamentally uncertain, but about which we can say a lot of things.
Nevertheless, rather than saying some system like computation is replacing standard mathematics, and I hope it's not, I would prefer to say we have a new form of telescope, a new form of microscope, and that is looking at things algorithmically: how do these events affect those events? What can you say? So it's a wonderful world. I wish I were 40 years younger, but it's fun to be in on the start of this. I think this is very much a Santa Fe Institute view of the world.
I hope so, anyway: meaning exploratory, and not taking any given method for granted.
Michael Garfield (1h 1m 6s): Well, it certainly seems that we've graduated from the reputation that SFI had sort of engendered in the nineties as participating in this grand search for a theory of everything. It seems like we've backed off of that. We're not willing to die on that hill anymore. It's hopeful.
W. Brian Arthur (1h 1m 26s): I remember that era. I was there for all of it, since 1987. I remember the key person who was responsible for a lot of that, too, who'll remain nameless; I'm sure you know who he is. And yeah, there are many ways to define complexity, but I would simply say we're looking at systems that can endogenously create new structures based upon the structures and contexts that they've created themselves.
And that keeps going on. And I think it's alive and it's wonderful, but it's not the last word. There never will be a last word, I hope.
Michael Garfield (1h 2m 14s): Well, I'll give you the last word on that one.
W. Brian Arthur (1h 2m 18s): Thank you.
Michael Garfield (1h 2m 19s): Thank you so much for being back on the show. And folks, if you liked this, check the show notes; we've linked all the papers, and we'll link to the first two episodes with Brian as well, in which we look at how this stuff applies to economic thinking in a bit more detail. But again, thanks so much for taking the time. Thank you for listening to Complexity, produced by the Santa Fe Institute, a nonprofit hub for complex systems science located in the high desert of New Mexico.
For more information, including transcripts, research links, and educational resources, or to support our science and communication efforts, visit santafe.edu/podcast.