COMPLEXITY

Artemy Kolchinsky on "Semantic Information" & The Physics of Meaning

Episode Notes

Matter, energy, and information: the holy trinity of physics. Understanding the relations between these measures of our world is one of the big questions of complex systems science.

The laws of thermodynamics tell us that entropy (loosely but somewhat inaccurately speaking, “disorder”) increases in any closed material system. But at the same time living systems constantly pump out entropy, thereby keeping themselves alive by harnessing flows of energy and information. We know that physical systems gain or lose energy as heat — what is the difference between exchanging heat and exchanging signals with information relevant to a system’s survival?

In other words, when is information meaningful? When do goals and meaning come into play, and how do a system’s constraints and embodiment figure in? Understanding how to formalize the interactions of our jostling cosmos and reveal the engine of emergent order is the quest of all quests…

Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I’m your host, Michael Garfield, and every two weeks we’ll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

This week we speak with SFI Program Postdoctoral Fellow Artemy Kolchinsky, who studies how information is organized and processed in biological, neural, and physical systems. In recent publications with SFI Professor David Wolpert, Artemy explores fundamental constraints on the energy required to process information, and seeks to define “semantic information,” or information bearing meaningful content. Our conversation takes us on a winding path into a thick, dark wood in which meet trails cut by cybernetics, cognitive science, statistical physics, and astrobiology…

’Tis the season, so if you value our research and communication efforts, please consider making a donation at santafe.edu/podcastgive — and/or rating and reviewing us at Apple Podcasts. You can find numerous other ways to engage with us at santafe.edu/engage.

Avid readers take note that SFI Press’ latest volume, Complexity Economics: Proceedings of the Santa Fe Institute's 2019 Fall Symposium, is now available on Amazon in paperback and Kindle eBook formats.

Thank you for listening!

Follow Artemy on Twitter and read the papers we discuss (and many more) on his website.

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast theme music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

Episode Transcription

Machine-generated transcript provided by podscribe.ai. Human edits by Shirley Bekins & Rayyan Zahid.

-----

ARTEMY KOLCHINSKY:

I'm sitting here, and there are constantly molecules of air bouncing into me from all around, and so for that reason, the molecules in my body are becoming correlated with the molecules in the room, and that's information being transferred from the room to me, but that's not meaningful at all. That's just sort of ubiquitous correlation. On the other hand, you know, if I see a car coming towards me when I'm walking down the street, the way my nervous system and body are set up is that I use that correlation to jump out of the way and maintain myself. And so, by virtue of how I'm structured, certain information acquires meaning for me; it might tell me where there's food or where there's danger, and it becomes very different from just correlations.

And so I think one of the things that I'm interested in is, you know, as I was saying, there's this kind of new subfield in physics, stochastic thermodynamics, and a lot of what it looks at is the energy requirements of acquiring and processing and using information. You know, it really doesn't make a distinction between this kind of meaningful, or you might say functional, information and just an empty correlation.

MICHAEL GARFIELD:

Matter, energy, and information: the holy trinity of physics. Understanding the relations between these measures of our world is one of the big questions of complex systems science. The laws of thermodynamics tell us that entropy (loosely but somewhat inaccurately speaking, disorder) increases in any closed material system. But at the same time, living systems, generally not closed, constantly pump out entropy, thereby keeping themselves alive by harnessing flows of energy and information. We know that physical systems gain or lose energy as heat. But what is the difference between exchanging heat and exchanging signals with information relevant to a system’s continued existence?

 

In other words, when is information meaningful? When do goals and meaning come into play, and how do a system’s constraints and embodiment figure in? Understanding how to formalize the interactions of our jostling cosmos and reveal the engine of emergent order is the quest of all quests. Welcome to Complexity, the official podcast of the Santa Fe Institute. I'm your host, Michael Garfield. And every two weeks, we'll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe. This week, we speak with SFI Program Postdoctoral Fellow Artemy Kolchinsky, who studies how information is organized and processed in biological, neural, and physical systems. In recent publications with SFI Professor David Wolpert, Artemy explores fundamental constraints on the energy required to process information, and seeks to define semantic information, or information bearing meaningful content.

Our conversation takes us on a winding path into a thick, dark wood in which meet trails cut by cybernetics, cognitive science, statistical physics, and astrobiology.

’Tis the season. So if you value our research and communication efforts, please consider making a donation at santafe.edu/podcastgive and/or rating and reviewing us at Apple Podcasts. You can find numerous other ways to engage with the Santa Fe Institute at santafe.edu/engage. Avid readers, take note that SFI Press’ latest volume, Complexity Economics, is now available at Amazon and sfipress.org in paperback and Kindle ebook formats. Thank you for listening.

Artemy Kolchinsky! This is long overdue, multiply rescheduled. I'm just delighted to finally have you on the Complexity podcast.

ARTEMY KOLCHINSKY:

Thank you, Michael. It was fated to be, eventually.

MICHAEL GARFIELD:

You're in a very squeaky chair, but we'll make it work. That's actually kind of perfect, because it's information I'm sensing about the environment, right? Noise. Anyway, let's get into it, first because your work is so dense and labyrinthine and curious and ambitious. And I'm really in awe, frankly, of the kind of questions that you're tilting after, almost a quixotic quest to get at some of the deepest stuff here. But I'd love to know how you got into science in the first place.

What drew you into these inquiries at all? You know, how did you end up at SFI? This is where we typically like to set the launch pad for these conversations.

ARTEMY KOLCHINSKY:

Right, well, thank you, that was a very flattering description, overly so, maybe. But I think my interest in science definitely started because my father is a biologist, a molecular biologist. And so I didn't necessarily grow up thinking I was going to be a scientist at all, but as I was growing up, I remember we would get into these conversations where he would just kind of explain to me what we know about how the cell works and how biology works at the nanoscale. And I remember this intense feeling of wonderment at how complicated everything is, and just bewilderment at where this came from, how it could have been constructed, how it all seems to work together.

And those were very captivating discussions we had. I mean, I think that definitely sparked a certain interest in science. And I was also kind of a computer nerd growing up. I taught myself to program; I loved playing with computers. And when I got to college, I actually didn't know what I was going to study. I was a bit lost in many ways, and I started with no major, in a very general program. And I was kind of bored, and it was not a very good education. And I remember at some point I switched into this program at my college, which was New York University.

It has a school called the Gallatin School of Individualized Study, where basically you can study whatever you want, as long as it's interdisciplinary and all over the place. And quite a few of my friends were in it. And I don't know, it seemed very freeform. And you know, you didn't have to go to these 300-person massive lectures, the core courses and stuff. So I entered that. And then towards the end of my undergraduate studies, randomly browsing through the library stacks, I came across this shelf of books that was all about kind of like SFI stuff.

I remember coming across a book about artificial life, and also a book about artificial consciousness or something, and I was totally fascinated by this. It was just a very random encounter, and I was fascinated by this little, you know, aisle of pop science. Also, at that time, so this is the early 2000s, complexity was really kind of big in the pop science world. Barabási’s book Linked came out, and Steven Johnson's book on emergence came out. It was also being applied in all kinds of domains; Smart Mobs, that book by Howard Rheingold, came out.

And so I started getting really into all this kind of pop science. I read Complexity by Waldrop, and I got really into complexity and this idea of emergence, very much the pop science version of it: this notion that a neuron is not conscious, but a collection of neurons is, and, you know, you get qualitatively new properties when you combine large numbers of things, and so on. I became kind of an SFI fanboy, I guess, and became really interested in this place. And it's actually interesting: I think I applied to the summer school at the end of my undergrad, but I didn't get in. Like many things at the time, I think I wrote my submission essay at five in the morning, the morning before it was due. And a year or two ago, I randomly came across it on my hard drive, and I pulled it up, and it was so bad. Like, oh, wow, I clearly wrote this at five in the morning the day before it was due.

Anyway, so that was kind of how I first got interested in SFI. It's not necessarily how I ended up here. After my undergrad, I went traveling for a couple of years, and again, I got really into kind of academic and scholarly activities and a-life and this kind of stuff. But in my undergrad, I didn't necessarily think it was going to be a career or anything like that.

And it also helped that I knew how to program; that's kind of how I made money for a lot of years, programming. That also helped me get into this field of study, because of course, what SFI does was from the beginning very computational, and it's closely tied to computational methods. And after traveling, I kind of got bored with things and decided to apply for grad school. I went to basically the only grad program that really had a PhD in complex systems, at least to my knowledge, which was Indiana University Bloomington. Not complex systems in the sense of complex materials tied to physics, but networks, and also complexity in biology, and so on.

And during my PhD, I got really into information theory, and information theory as a kind of universal language. A lot of complex systems is about seeing common patterns in very many different systems, and information theory seemed like a very powerful language for doing that, and for studying organization in general, or pattern in general. And so a lot of my PhD, or at least towards the end, was about using techniques from information theory to try to study complex systems, or developing new methods. It was quite methodsy, I would say, my PhD. And I feel like I've been talking for a while, maybe too long. (laughs)

MICHAEL GARFIELD:

You're good. This is good. Keep going.

ARTEMY KOLCHINSKY:

Okay, okay. I'll just keep going.

MICHAEL GARFIELD:

You know, this is a deep, dark wood we're entering here, with a long winding road into it. So it just gets less and less...

ARTEMY KOLCHINSKY:

Ok, ok, I'll just keep talking. I mean, probably everyone is asleep at this point anyway. The other thing I became quite interested in towards the end of my undergrad... I sort of randomly sat in on this graduate course on cognitive science, which seemed interesting. And it seemed closely related to cybernetics, which was something I got really into during my undergrad; it's a kind of forgotten pre-complexity complexity theory. And one of the other things I got really interested in during my PhD studies was the cognitive science program at Indiana University Bloomington, which is particularly known as kind of the origin of a particular approach to cognitive science, sometimes called the dynamical systems approach to cognition.

And to simplify it a bit, it really tries to move away from a model or a metaphor of cognition as being like a computer, or like being a computer program, which, you know, is tied to certain currents in philosophy of mind, and in philosophy about representations and beliefs and ideas. And with this traditional approach to cognitive science, it's hard to understand how to naturalize that, and it's harder to connect it to other domains of natural science. The dynamical systems approach really sees cognition as basically adaptive behavior: the ability of a dynamical system, which is both the body and possibly the nervous system, in combination with certain environments in which the system is situated, to achieve certain goals. And in some ways it really sees cognition as being present in even the simplest organisms. So it's much closer to how people study behavior or functionality or information transfer in all kinds of biological systems, for example, including single cells.

And it lets us talk about, for example, information processing, and even adaptive information processing, without bringing in a lot of these philosophically loaded terms that have a lot of baggage, like, for example, representation, belief, desire, and so on. And so that was a big influence on me, being in contact with those ideas in Indiana, and particularly Randall Beer, who does really amazing work there and influenced me quite a bit. And then, slowly winding my way to SFI: when I finished my PhD, I was unsure how to proceed. I had some qualms about academia, which I think is pretty common after the emotional work of doing a PhD.

And I just kind of hung out for a little while, and I ended up going to the Conference on Complex Systems, which happened that year in Tempe, Arizona. And I ran into David Wolpert, who was giving a talk. I'd never met David before, but his talk was very provocative. I had some thoughts, and I kind of chatted with him about it. And one thing led to another: we had a ton of interests in common, and he invited me to visit SFI. And then I got hired for a short period of time, and then we applied for some grants. And anyway, here I am, four years later or something, at SFI, where I've been working with David.

And the thing that I've been working on with David, really the thing that I started doing since coming to SFI, was using some of my existing knowledge of information theory and some of my other skills, but combining that with this new, rapidly expanding subfield of statistical physics called stochastic thermodynamics. It really looks at the thermodynamics of usually small, fluctuating, non-equilibrium systems. And it's very closely tied to information theory. You could maybe say it's almost like a reformulation of statistical physics almost entirely in terms of information theory, though that might be a slightly controversial way to put it.

And it's very interesting. It lets us rigorously analyze a lot of things that were kind of impossible to analyze before from a physical, dynamical perspective. For me, one of the most interesting things is the physics of systems that acquire information, process information, and use that information. These could be little computer chips. They could be biological organisms. They could even be, you know, some kind of protocells that have a flow of information through them, and maybe use that information to do things like acquire food, acquire energy, and so on. And so these are the topics that I've been working on with David quite a bit. And really, in a sense, they're adding a somewhat physical perspective to a lot of these interests that I've had for a while, in particular trying to understand things like minimally cognitive systems; you know, what are the constraints on those?

What is a good way to model them and to understand them, and even things like autopoiesis, which is this idea that there's a fundamental pattern that characterizes living things and really distinguishes living things. And yeah, I think I'll stop there for now. That's the spiel. So that's kind of how I got to where I am now, and also the kinds of things that I like to think about.

MICHAEL GARFIELD:

Right on. Yeah, a lot of this is very resonant with me. Without knowing the year you graduated, I think you and I were kind of reading the same books and on the same trajectory, and then you just had a much more hospitable environment that knew what to do with you. I mean, around that same time, I remember bringing this type of question to my advisors, and they were just like, "Define complexity for me. I dare you! You know, you're going to have to bury big questions like this until you get tenure. There's no way that someone's going to allow you to charge after these things as a graduate student." You're very fortunate, but at the same time, you know, I was really lucky. I don't know if you know the book Evolution as Entropy, by Edward Wiley at the University of Kansas.

ARTEMY KOLCHINSKY:

I don’t.

MICHAEL GARFIELD: 

He was one of the people I spoke to about this. And it seems like his stuff is really... it's just strange, I don't know; maybe it's a dispositional thing that you kept with it. But one of the questions that I love that you are so eager to explore in your work, and that I've just been wondering about rather fruitlessly for the last 15 years instead, is this question about the relationship between energy, matter, and information. And then, as you alluded to just a moment ago, the way that information suggests a kind of sensation of the environment; you know, the cognitive act is an act of making sense of things.

I guess you might say in a way, I mean, you can, you can check me on any of this, but…

ARTEMY KOLCHINSKY: Makes sense to me. Good one!

MICHAEL GARFIELD: I mean, yeah. You know, I love David Krakauer’s talks, when he shows the math behind evolutionary adaptation and inferential guesswork, inference; it's the same underlying mathematics. And so there's this sense in which evolution is a distributed process of a system navigating its supersystem, or whatever. And so where do you strike into this as someone who is working in stochastic thermodynamics?

One of the things... I guess maybe the right place to start would be with your paper on semantic information that you did with David Wolpert. That piece is really important, I think, because the common intuition for people is that information has content, and that's what makes it information. And that's very much a cybernetics, Gregory Bateson, "difference that makes a difference" kind of thing. It's information because it's about something; it's relevant in some way. And so how are you formalizing this? How are you decomposing this and making useful specifications? And then where do those lead you?

ARTEMY KOLCHINSKY:

Right, sorry for the squeaky chair. Let me just squeak it all out now. So, a couple of thoughts. I mean, one of the things you brought up is that these are pretty heady, big issues: this relationship between matter and information and cognition and life and so on. And I just wanted to make one comment about that. Since I've been at SFI, I've really had a chance to learn more about physics, and it's been a bit of a journey. One of the beautiful things, and one of the reasons for the success of physics-type thinking, is trying to boil things down to the simplest model that really focuses on the core issues, or highlights the core issues, and disregards everything else.

I know it's a bit of a cliche, but I think even in this case, that's sort of how I've been trying to ground things. I've been thinking about it, and I don't think I'm quite there yet, but I do think that my way to proceed without getting vertigo, and it's something I've been thinking about lately, is to try to come up with a simple model. It could be a set of chemical reactions or something like that, something that really captures the basics of maybe a self-sustaining autocatalytic system that is also using information to sustain itself, and that also has exchanges of matter and energy represented. Something that we can really analyze and build intuitions from, and understand how these things relate.

And I think that's also one of the things that I saw happening in Bloomington. I mentioned Randall Beer; something he did was build these minimal models, really these tiny neural networks that were kind of dynamical systems, that could do a lot of interesting behaviors that people thought you needed representations for. So, really using these minimal models to maybe say something relevant even to philosophy, right? He built minimal models that could do things like relational categorization: is one of these things bigger than another one? And you know, these are kind of classic debates in cognitive science: oh no, surely you need full-on symbolic reasoning to do things like that. Well, no, not really. I mean, there's no reason to think so. So that's the way I try to circumscribe these things so it doesn't just spiral out of control. And then the other thing you mentioned, which is really related to what we're talking about, is this notion of semantic information, as David and I called it in a paper, and just this notion of the relationship between information and cognition and making sense of the world, and, you know, perception, sensation, etc. And I think, in some sense, there's a really important distinction to make.

And this is one of the distinctions I hope we pushed, and something I want to keep pushing in the future: in some sense, information is ubiquitous throughout the physical world. Just in physics, when things interact, they tend to become correlated. There's a kind of famous thought experiment; I forget who it's by, it's by Wheeler or somebody like that. He says the gravitational field of a single electron being present or absent on the edge of the galaxy will change whether a collision happens in a box of gas within less than a second, because it's a highly chaotic system. And even that astronomically small gravitational pull of an electron makes a difference.

And so, in a way, that's a difference that makes a difference, I mean, in the literal sense, right? Whether an electron is there or not makes a difference to whether, let's say, an atom collides with another atom or not. But this is spread everywhere; it's ubiquitous; everything is kind of correlated. I think that something much more interesting happens, for example, in living things, where it's not just that things are correlated. This notion of two things being correlated is sometimes called syntactic information, and it completely doesn't depend on their meaning. You know, the fact that an atom missed colliding with another atom in a box of gas, and the fact that that's correlated with the presence of an electron, doesn't necessarily have any kind of meaning.

And in this paper, David and I defined semantic information, which is sometimes taken to be information that has meaning, whatever that means, in a very particular way. We said that a piece of information, so basically a correlation, has meaning for a system if the system uses that correlation to maintain itself in existence. So, for example, I'm sitting here, and there are constantly molecules of air bouncing into me from all around. For that reason, the molecules in my body are becoming correlated with the molecules in the room, and that's information being transferred from the room to me, but that's not meaningful at all. That's just sort of ubiquitous correlation.

On the other hand, you know, if I see a car coming towards me when I'm walking down the street, the way my nervous system and body are set up is that I use that correlation to jump out of the way and maintain myself. And so, by virtue of how I'm structured, certain information acquires meaning for me; it might tell me where there's food or where there's danger, and it becomes very different from just correlation. And so I think one of the things that I'm interested in is, you know, as I was saying, there's this kind of new subfield in physics, stochastic thermodynamics, and a lot of what it looks at is the energy requirements of acquiring and processing and using information. You know, it really doesn't make a distinction between this kind of meaningful, or you might say functional, information and just an empty correlation.
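The distinction Artemy draws here, between mere correlation and correlation a system uses to stay viable, can be sketched numerically. The following toy model is only an illustration under assumed numbers, not the formalism of the Kolchinsky–Wolpert paper, though comparing an agent's survival probability before and after its correlation with the environment is destroyed is in the spirit of their approach. All names, probabilities, and the survival table are hypothetical.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) of a 2D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)   # marginal over environment states
    py = joint.sum(axis=0, keepdims=True)   # marginal over agent actions
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Toy world: environment E (0 = street is safe, 1 = car coming),
# agent action A (0 = keep walking, 1 = jump out of the way).
p_env = np.array([0.9, 0.1])

# Perceptive agent: its action deterministically tracks the environment.
policy = np.array([[1.0, 0.0],    # E = safe -> keep walking
                   [0.0, 1.0]])   # E = car  -> jump
joint_perceptive = p_env[:, None] * policy

# Scrambled agent: same marginal behavior, but the correlation with the
# environment is destroyed (a scrambling intervention).
p_act = joint_perceptive.sum(axis=0)
joint_scrambled = p_env[:, None] * p_act[None, :]

# Survival probability for each (environment, action) pair:
# the agent dies only if a car comes and it keeps walking.
survival = np.array([[1.0, 1.0],
                     [0.0, 1.0]])

viability_perceptive = float((joint_perceptive * survival).sum())
viability_scrambled = float((joint_scrambled * survival).sum())

print(mutual_information(joint_perceptive))   # ~0.47 bits of correlation
print(mutual_information(joint_scrambled))    # 0.0 bits after scrambling
print(viability_perceptive, viability_scrambled)  # 1.0 vs 0.91
```

In this toy setup, the drop in viability when the correlation is scrambled (1.0 down to 0.91) is what marks the perceptive agent's information as functional rather than merely syntactic; a correlation whose destruction changed nothing about survival would carry no meaning in this operational sense.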

MICHAEL GARFIELD:

Okay. Yes. So, yes. Forgive me for uttering the taboo here, but this suggests a kind of endogenous telos, right? This is the thing that I feel like people at the periphery of this conversation get really confused about: this idea that, yes, evolution does not have a direction, okay, or, you know, the universe as a whole is running from order into disorder. But you look at it, and it's actually all of these nested systems. And like you said, you know, they have different structural properties, and depending on how coarsely you grain, at what level you focus, you will see an opposite story: that there is, like you said earlier, a goal orientation.

And so I know that you have been thinking recently about how this relates to the origins of life and, you know, the search for life on other worlds. And it's interesting to ask, in the framing of stochastic thermodynamics, where is the point at which we can say that a goal emerges? And is that the same as saying that this is the point at which meaning emerges, at least from within the system, again, to draw on Varela and Maturana’s autopoiesis, that notion that these are systems making sense of their environments? What do you feel like your insights are into that?

It sounds, from the conversations we've had on this show so far, like it's not useful anymore; it's actually misleading for us to think of there being a moment at which the light comes on and there's life, suddenly, out of non-life. But what are we talking about when we talk about a gradient? Is that the same thing as a gradient of non-meaning to meaning, or non…?

ARTEMY KOLCHINSKY:

Yeah. Yeah, definitely. I mean, these are interesting questions, and to sidestep the philosophical quagmires a little bit, I've started to prefer to take a very operational view of these things. And what I mean by that, and the approach we took in the semantic information paper, is to say, look, let's just treat the system basically as if its goal is self-preservation, as if its goal is to maintain itself out of equilibrium and maintain itself as an organized entity. But, you know, I certainly don't want to say that there's some kind of teleological force additional to mechanistic forces or something like that.

On the other hand, you know, if we observe something like a chemotactic bacterium navigating its environment, it is certainly a very helpful compressed description of its behavior to say that it has certain goals and it acts to further those goals. For me, we can say it's kind of a way of speaking, but you can also make it very operational. We can say: if we knock out this gene, for example, what breaks, and how does that, let's say, hurt the bacterium's ability to maintain itself alive? Well, you know, we can then talk about the function of that gene, which is of course very goal-directed language, right?

The function of something is to do blank. We can say the function of that gene is to allow the bacterium to digest lactose, or whatever. So, you know, in biology it's certainly very helpful to talk as if things have goals. And I think that can even extend beyond biology. We might say something like: a hurricane is a self-maintaining non-equilibrium structure. It's stable; it funnels energy from the warm ocean to the cooler atmosphere, and in doing so it is constantly rebuilding itself. Well, I think we can analyze it as if its goal is self-maintenance, and see maybe how different aspects of its structure, like the eye of the hurricane or whatever, contribute to that.

And I think we can do that without, again, delving into the philosophical quagmires. I think Kant wrote a lot about this; I mean, he was very scientifically literate, and he wrote a lot about how organisms really seem to have goals. He was kind of saying, I'm not claiming that there are teleological forces, you know, final causes, but it certainly seems like they have goals for themselves. And there's this kind of paradox: maybe it's just that they lend themselves so well to that description. But it really is difficult philosophical territory. And what I would say is: let's just treat them as if they have goals, and let's analyze them in that way.

And so, going back to meaning, I think even a hurricane maybe has a tiny little bit of meaning. I don't actually know much about hurricanes, but we could imagine that if a hurricane could preferentially move towards warmer waters, instead of just being shoved around by the winds, then it could maintain itself for longer. So maybe we could say it's using some kind of information about where the warmer stuff is, where there's more energy to feed it, to maintain itself. I mean, I think it's a bit of a stretch, but it's less of a stretch if we start to talk about things like protocells, right, which are little chemical hurricanes that maintain themselves and are thought to have existed before the origin of modern life, but already have things like simple metabolism and simple self-maintenance. And we can say, well, okay, we can easily imagine that one of these protocells could maybe sense what's going on in its environment and respond in different ways to it, and thereby maintain itself for longer. And that might have a little bit more meaning, more useful functional information, than, let's say, a hurricane.

And then you get to things like mammals, or animals in general, which have these incredibly sophisticated nervous systems that are tuned very precisely to pick up a huge amount of meaningful information from the environment, right? Where the food is, where the mates are, where the danger is; they have this huge amount of functional information flowing through them. So I would definitely agree with you: I think it's very much a continuum. And one of the things that we've been thinking about is even: how could you define quantitative measures of meaning?

 

MICHAEL GARFIELD: 

So, in that line of questions: the hurricane has no sensory organs, right? Whereas you and I obviously do. And in fact, even beyond that, there is this sense, again drawing on cybernetics and the work of people like Marshall McLuhan and media theory, talking about the electronic surround as an extension of the human nervous system. There's a sense in which the history of science itself can be understood as the evolution of new instruments. We were talking with Peter Dodds on the last episode of this podcast about the way that we're able to use text modeling and timeline analysis from social media as a way of peering into the collective mentation of the human species.

And that's the latest example of this trajectory of us developing ever more nuanced and sophisticated tools for sensing our environment. So I'd love to hear you talk a little bit more about that, specifically about this whole matter-energy-information relationship and how it plays out in what seem to me, at least, to be evolutionary arms races in the evolution of new sensory equipment.

This is something I was thinking a lot about seven years ago when Google Glass came out, and it very swiftly seemed like one of these haves-and-have-nots kinds of things. Are you going to be technologically augmented? Are you going to be able to look at somebody, pull up their biographical information, and have an edge on them? That feels a lot like what was happening with the evolution of the eye in the Cambrian explosion over 500 million years ago. There's a continuity here: the evolution of intelligence is the evolution of sensory abilities, and there's a co-evolutionary dance going on that's constantly ratcheting these things up, because living in a more intelligent, more sensorily empowered, more sensorimotor-sophisticated environment places a higher demand on a system's ability to navigate it.

How do you, a technologically augmented 21st-century SFI postdoc, make sense of all of this, and where is the underlying uniformity there?

 

ARTEMY KOLCHINSKY:

One of the things I would say is: do we really know a hurricane doesn't have a sensory organ? What I mean by that is that it's not always clear what is and is not a sensory organ. For example, there is a model of protocells in which there's no explicit sensory organ for sensing the direction of food, but because the membrane grows faster on the side the food comes from, you effectively end up moving towards the food. Or in a bacterium, we think of the flagellum as a way to move around, but it's also actually used to sense the environment, because it interacts with the world and can send information back and forth.

So I don't think that's contradicting anything you said, but I want to point out that I'm not totally sure how separated the specialized sensory and effectory organs of a system really are. The hand, for example, is both very sensory and very effectory in people. And even then, having specialized effectors and sensors might be kind of a recent thing, or even a special thing. This is something I want to explore, but I suspect that even simple things like stochastic chemical networks have some minimal, essentially sensory abilities, in the sense that chemical systems have all kinds of correlations running through them.

As I said, when things just interact with each other, they build up correlations. The information is there; you just have to use it to do something functional. So that was my first point. The second point is that I think you bring up a really interesting point about the development of specialized sensory organs: it can actually trigger a whole new entrance into new niches and a new sort of evolutionary landscape.
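To make the point about correlations carrying information concrete, here is a small sketch (my own illustration, not something from the conversation) computing the mutual information between two binary variables, the standard measure of how much correlation two systems have built up:

```python
from math import log2

def mutual_information(joint):
    """Mutual information in bits between two binary variables,
    given their joint distribution joint[x][y]."""
    px = [sum(row) for row in joint]        # marginal of X
    py = [sum(col) for col in zip(*joint)]  # marginal of Y
    mi = 0.0
    for x in range(2):
        for y in range(2):
            p = joint[x][y]
            if p > 0:
                mi += p * log2(p / (px[x] * py[y]))
    return mi

# Two systems that have interacted and become correlated:
correlated = [[0.4, 0.1], [0.1, 0.4]]
# Two systems that have never interacted:
independent = [[0.25, 0.25], [0.25, 0.25]]

print(mutual_information(correlated))   # positive: the information is there
print(mutual_information(independent))  # 0.0: nothing to use
```

Whether that correlation counts as meaningful information is exactly the later question in the conversation: it depends on whether the system can act on it.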

 

Once you develop eyes, you can start to be selected on how well you hunt. So you add dimensions of behavior by being able to sense new things, and you kind of expand.

I think it's a very good example of adding complexity in an almost qualitative way. But I don't know if I have much to say about all that scientifically, at least in terms of what I was talking about, because one of the things about this new field of non-equilibrium statistical physics, and its relationships between matter, information, and energy, is that these relationships tend to be meaningful at the molecular scale, at the level of very small energy fluctuations. Not all of them, but in general. And they set fundamental bounds as determined by physics.

And one of the big questions, and lots of people are skeptical here, is whether they have something to say about the energy trade-offs involved in the mammalian brain, or any brain for that matter, because brains operate very, very far from these limits. It's kind of like trying to apply quantum physics to explain why I went to get lunch. We have to be really careful, because at macroscopic scales these quantum effects really disappear. The world is mostly classical, and it's kind of a mistake to think that it applies.

And at least it seems like many of these really fundamental relationships between information, energy, and matter are mostly meaningful at the scale of molecular systems, at least from my point of view. That's one of the reasons I'm interested in trying to apply them to things like protocells and the origin of life, because protocells are very small. There's a good chance they were very small, maybe thousands, or tens or hundreds of thousands, of atoms. And I actually think that these fundamental relationships might have played a big part in very, very early trade-offs in very early evolution.

I'm not sure non-equilibrium physics has something to say about the complexification of life via the development of new sensory organs in general, at least at the fundamental scale of universal laws, which is basically what I'm talking about. These are universal laws, like the second law of thermodynamics. It's a question. And as I talked about earlier, what I've been thinking about is the minimal model to work out these relationships, and the minimal model I'm thinking of most of the time is something molecular, where these quantities make sense. A good rule of thumb is that a lot of these quantities are expressed in units of kT.

 

And kT is basically the size of an energy fluctuation at temperature T. At room temperature, kT is something like one five-hundredth of the energy released by the burning of a single sugar molecule. So that's the scale where a lot of these things are expressed.
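To put rough numbers on that, here is a back-of-the-envelope sketch of mine. It uses the textbook figure of roughly 2870 kJ/mol for glucose combustion; the exact ratio depends on which sugar and which energy figure you use, so the "one five-hundredth" above should be read as an order-of-magnitude statement:

```python
# Back-of-the-envelope: thermal energy kT vs. burning one sugar molecule.
k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0             # roughly room temperature, K
N_A = 6.02214076e23   # Avogadro's number, 1/mol

kT = k_B * T              # ~4.1e-21 J per thermal fluctuation
glucose = 2.870e6 / N_A   # ~4.8e-18 J per glucose molecule
                          # (assumes ~2870 kJ/mol combustion energy)

print(f"kT = {kT:.2e} J")
print(f"one glucose molecule = {glucose / kT:.0f} kT")
```

With this glucose figure the ratio comes out closer to a thousand kT, which is the same order-of-magnitude point: molecular energy budgets are measured in handfuls to hundreds of kT.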

 

MICHAEL GARFIELD:

I don't know if this is a challenge to that, but to bounce to the complete opposite end, from the microcosm to the macrocosm: this question about the limits imposed on evolutionary processes by thermodynamics raises the question of what your research might say about the work being done elsewhere throughout the SFI research network on scaling laws and the developmental constraints on evolution, and why it is that we only see certain kinds of life forms out of everything we might imagine. We had Melanie Moses on the show, and we were talking about the moment at which the growth of an insect, and then the emergence of a social insect colony like ants or bees, represents a kind of phase transition, where the computation is passed from the system at one level to a new sort of meta-individuality at the next level as a way of overcoming some of these scaling constraints.

It seems like your work really sheds light on that kind of question. So I'm curious what your thoughts are, even though you're talking about the amount of heat generated by molecular interactions. What do you think is possible at the scale of human civilization, or at the scale of the biosphere? What are the invisible limits to growth? We just sort of assume they're not there because we don't see them, but it seems like this would cast some light on that.

ARTEMY KOLCHINSKY:

I will qualify my statement a bit. This is something I've recently been talking about with Chris Kempes, a faculty member at SFI who works a lot on scaling, and it's related to some work Chris did with David earlier. I don't mean to say that these fundamental relationships between energy and information don't matter for modern life. In particular, I think they matter a lot for the ribosome, the little molecular machine that makes proteins in every cell. It's where most of the energy in a cell is used, more than half for bacteria, and it's been incredibly optimized. It's this incredible machine that's basically organizing matter as it runs along.

So it reads off RNA and strings together amino acids to make proteins. It's really building a machine. And it's probably highly constrained by the second law of thermodynamics, because it's been optimized to death. It's maybe the most important machine in life, and it's inside every cell of every organism, with some exceptions. At the same time, as you move up from the micro level and get bigger, other constraints come into play, including other physical constraints.

If you're a bear and you catch a salmon, that's very far from the scale of kT. Of course, ultimately the energy feeds down to your cells, which are working at kT, but for the kinds of pressures and constraints that might characterize how much salmon a bear needs and how much energy it can get out of it, new laws start to enter. Even though we think the whole universe is just one big quantum thing, we don't expect the universal laws of quantum physics to explain everything, even though they are universal. That's kind of one of the central ideas of complexity and emergence: you get qualitatively new laws, you get qualitatively new explanations, and you get new vocabulary that you have to use, new types of vocabulary.

And so I guess that's why I was pushing back a little bit. Even though these are universal relationships and they're really interesting, I don't necessarily think they say something about why a group of insects evolves eusociality. I think that could be explained by other ideas: evolutionary pressures, group selection, kin selection, endosymbiosis, all of these evolutionary ideas. And in some sense, I think evolution is a much more multi-scale principle, because as long as there's heritable variation in fitness, you can apply evolutionary thinking in its different forms.

 

MICHAEL GARFIELD:

Let's circle back a little bit, because there's another preprint that you just did with David Wolpert, "Work, entropy production and the thermodynamics of information under protocol constraints." We kind of skipped over this in the conversation, but if we're going to talk about relevance to civilization, this has a lot to do, to my understanding, with how efficient we can make our inquiries of the world. It seems like it sheds light on the bounds of how efficient an organism can be, or a battery, or what you can actually accomplish with computers.

In that respect, it's maybe not a constraint on size or scale or complexity, but a constraint on efficiency, on the retrievability of information, and on the limits of how efficient you can make a system. I'd love to hear you provide an exegesis of this piece.

 

ARTEMY KOLCHINSKY:

Yes, happily. I'm glad you brought it up, because I think it's conceptually very closely related to the stuff we were discussing before about semantic information, meaningful information versus just meaningless correlations. There's a classic paradox in statistical physics called Maxwell's demon, which some of our listeners are probably familiar with. The second law of thermodynamics says that things, if left to themselves, basically go to equilibrium, and that to take them out of equilibrium you have to do some work on them.

Maxwell came up with a kind of toy model where he imagined a box of gas in equilibrium, and a little intelligent being observing the particles flying around in this box. He observed that by opening and closing a little door, this being can sort the particles into hot and cold ones, which should not be an equilibrium state. And it can do this without doing any work, or by doing an arbitrarily small amount of work. So this is a classic paradox in statistical physics that people argued a lot about. And actually, one of the reasons there has been this explosion of interest in the physics of information, and the growth of this field studying information, matter, and energy, is that people feel like it really resolves this paradox in a very deep way. The way it resolves it is by showing that in order to make a measurement, the demon has to write down the information it measured in some kind of physical system, like a little hard drive.

And if it does this over and over, it has to erase what it wrote before and write the new measurement, again and again. There are different ways to explain it, but in this manner of explaining it, every time the demon erases what it wrote before, it has to do a little bit of work. So if you do the calculation, you can see that the total amount of work the demon performs in order to write down these observations, erase them, and write them down again is at least as big as the work it could then extract by sorting the particles into hot and cold ones, putting some kind of piston or engine between them, and using it to lift a weight or do something useful.

So basically, the resolution is that if you also think of this intelligent being, this demon, as part of the physical world, also obeying the laws of physics, also obeying the second law of thermodynamics, then the paradox is, in some sense, completely resolved. It's a really interesting observation. And one of the things it says is that if the demon is optimally efficient, then for each bit of information it has about the system, each yes-or-no question it can answer correctly about the system, it can extract a little bit of work. And this little bit of work is proportional to kT: the conversion factor is kT log 2. You can think of kT as just the unit.
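That conversion factor fits in one line. Here is a sketch of mine (not from the papers discussed) of the kT log 2 bound, which is both the maximum work an optimal demon extracts per bit and, read the other way, the Landauer cost of erasing a bit:

```python
from math import log

k_B = 1.380649e-23  # Boltzmann constant, J/K

def work_per_bit(T):
    """Maximum work (in joules) an optimal demon can extract per bit of
    information at temperature T; equivalently, the minimum work needed
    to erase one bit (Landauer's bound)."""
    return k_B * T * log(2)  # kT ln 2

# At room temperature, each bit is worth roughly 2.9e-21 J.
print(f"{work_per_bit(300.0):.2e} J per bit")
```

The tiny magnitude is the point made earlier in the conversation: these bounds bite at the molecular scale, not at the scale of brains or bears.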

And then this paper you mentioned, about entropy production and work under constraints: maybe it sounds very practical, but I would still say it's a theoretical, conceptual argument. The argument is related to these semantic information ideas, like how much work the demon can actually extract from the system, which is, in some sense, the value of the information for the demon.

So if the demon is a little organism, and it's using the work it extracts to maintain itself, fix itself, make more proteins or whatever, then work is really valuable, and so the information it has about the system is really valuable. But one of the things we pointed out is that this information is valuable only if the demon can manipulate the system in a way that takes advantage of it. What do I mean by that? I mentioned earlier the electron at the edge of the universe, which can mess up whether or not one atom collides with another. Let's say I know for sure whether that electron, at the edge of the universe or the edge of the galaxy or whatever, is present or not. There's no way I can use that information. Even though I know it, I can't put a little piston in place and have that electron push against it, because it's too far away. I'm a limited being.

So there's a kind of alignment between what I know and what I can take advantage of, and a lot of this paper is working out the implications of that, working out how to formalize it properly. We use one example over and over, closely related to Maxwell's demon, which comes from that literature: you have a box with a single particle flying around in it. You make a measurement, so you acquire a bit of information: is the particle on the left or the right side of the box? Depending on the answer, you put in a little vertical partition to separate the two halves. And if you know which side of the partition the particle is on, you can slowly move the partition and extract some work. You can turn that knowledge, that information, into something useful, which is energy.

And what we point out is that the whole setup depends on the alignment between the fact that you measure whether the particle is on the left or the right side of the box, and the fact that you have a vertical partition that can split the box into a left and a right half. If you only had a horizontal partition, one that could split the box into a top and a bottom half, it would be completely useless to know whether the particle is on the left or the right. You couldn't actually take advantage of that.
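A toy version of that alignment argument (my sketch; the paper's formalism is more general): the work extractable from the single-particle box scales with the portion of the measured information that the available partition can actually act on. A left/right measurement with a vertical partition gives one usable bit; the same measurement with only a horizontal partition gives zero.

```python
from math import log

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # roughly room temperature, K

def extractable_work(usable_bits):
    """Work (J) extractable from the single-particle box, given how many
    bits of the measurement the available partition can exploit."""
    return usable_bits * k_B * T * log(2)

# Vertical partition + left/right measurement: 1 aligned bit.
print(extractable_work(1.0))  # roughly 2.9e-21 J

# Horizontal partition + left/right measurement: the measurement says
# nothing about top vs. bottom, so no work can be extracted.
print(extractable_work(0.0))  # 0.0
```

The measurement carries the same number of bits in both cases; only the aligned bits, the ones the partition can act on, are worth anything.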

This connects to the idea of semantic information: what information is meaningful to you depends on how you can interact with the world. And certainly, organisms are very limited in how they can interact with the world. For example, we're very local: we can only go to one place and eat from one place at a time. We can't use magnetic fields to couple two distant locations at once. And we have lots of other constraints on how we can interact with the world.

And I think, at a very high level, what we've evolved to measure is certainly the kind of information that we can take advantage of. But one could also use these results to look at a system we knew nothing about, where we didn't know whether it had evolved or not, and ask how efficient it is in this sense: is it measuring things that it can then use, or not? So that's the breakdown of the paper.

And again, at some high level, it's influenced by this idea from embodied cognitive science: part of what gives meaning to things is the body you have, which also constrains you, sometimes in very useful ways. We're not just disembodied brains in vats processing bits.

 

MICHAEL GARFIELD:

Because I'm in this body, in this place and time, which is responsible for distilling this stuff for Twitter, I'm always watching for the pithy aphorism, for how to encapsulate this in a way that can make it through the electronic membrane. And what this boils down to for me, correct me if I'm wrong, is that you're providing the mathematical formulations for why time sometimes is, but isn't always, money; why knowledge sometimes is, but isn't always, power.

And then, when we had Mirta Galesic on the show, she was talking about politically motivated cognition and how people have these biases where they project their inferences, based on local information, out into the world. They err, all of us err, by assuming that the features of our little corner or little patch of the world are representative of the world as a whole. And therefore you get into these weird situations where we don't actually care about what is true so much as about what allows us to synchronize and collaborate effectively with the people upon whom we depend, or the systems upon which we depend. It seems like your work is shedding light on that.

And on why it is that we favor certain kinds of knowledge over other kinds of knowledge, and yet we tend to paint it all with a single brush and call it all the truth, or the facts, when in fact it's much more variable and heterogeneous.

 

ARTEMY KOLCHINSKY:

Certainly, knowing the lottery numbers for yesterday's drawing is different from knowing them for tomorrow's. Similar types of information can have very different implications. I think you painted this constrained nature as sort of a drawback or a limitation. But I also think, and this is not really something we address in the paper, but something I'm interested in exploring, that it is precisely because we as organisms are so constrained that it makes sense to acquire information, that it makes sense to have intelligence and very sophisticated, complex behaviors.

What I mean by that is, if we could be in every place at once, if we could couple with distant locations, we wouldn't need eyesight, we wouldn't need visual processing, and we wouldn't need all these amazing locomotion and body-movement behaviors that organisms have. Maybe it's kind of a truism, but the really amazing cognitive, behavioral, and sensory things that we do, individually and collectively, are ways to overcome our limitations.

Just to give you a simple example, go back to Maxwell's demon, which is sometimes seen as a minimal model of an information-using organism, and which I think is actually a bad model for various reasons, although a provocative one. As I said, Maxwell's demon, if it's operating completely optimally, can at best break even, meaning the best it can do is the same as doing nothing at all. One reason for that is that it's not constrained in any way, so there's no advantage to acquiring the right kind of information, or processing it in the right way, and so on.

 

MICHAEL GARFIELD:

It's a fascinating thing. I feel like we've rounded the bases here. If people's heads aren't spinning now, then all I can do, I guess, is invite them to look you up on Twitter.

ARTEMY KOLCHINSKY:

Play crazy sound effects like they do on Mexican radio stations.

MICHAEL GARFIELD:

Read these papers and see if that does it for you. Artemy Kolchinsky, thank you so much for being on the show. It's such an inspiring thing to talk to you, every time I have the opportunity.

 

ARTEMY KOLCHINSKY:

Thank you, Michael. Thank you for doing the show. You're running an amazing one-man content-producing-and-disseminating operation. I think you do an amazing job with the show and the social media stuff in general. Very impressive.

MICHAEL GARFIELD:

Like everything at SFI, it's a team project. This show is supported with a lot of help from people like Jenna Marshall, our communications manager, the SFI Press, Laura and Sienna and Katie, and everybody working on InterPlanetary Fest. It's a beautiful thing to be part of. I don't know what you would call it, a Markov blanket of dorks, but I really hope that wherever this research leads you, this conversation leads people to you.

And that it leads people into these questions, because I find them to be some of the juiciest and most nutritious questions we can ask about this world. Really, it's just recursive, it's just endless. It gets you all the way there, and then you realize you're never going to get there, because the horizon keeps on rolling. So thanks a lot, man.

ARTEMY KOLCHINSKY:

Yeah. It's good to think about one small part of it that encapsulates the bigger issues, and I think it can be done. I think maybe that's one way forward, actually. Anyway, thanks a lot. Let's hang out when the lockdown is over, maybe?

MICHAEL GARFIELD:

Sounds good.

CLOSING

Thank you for listening. Complexity is produced by the Santa Fe Institute, a nonprofit hub for complex systems science located in the high desert of New Mexico. For more information, including transcripts, research links, and educational resources, or to support our science and communication efforts, visit santafe.edu/podcast.