COMPLEXITY: Physics of Life

James Evans on Social Computing and Diversity by Design

Episode Notes

In the 21st Century, science is a team sport played by humans and computers, both. Social science in particular is in the midst of a transition from the qualitative study of small groups of people to the quantitative and computer-aided study of enormous data sets created by the interactions of machines and people. In this new ecology, wanting AI to act human makes no sense, but growing “alien” intelligences offers useful difference — and human beings find ourselves empowered to identify new questions no one thought to ask. We can direct our scientific inquiry into the blind spots that our algorithms find for us, and optimize for teams diverse enough to answer them. The cost is the conceit that complex systems can be fully understood and thus controlled — and this demands we move into a paradigm of care for both the artificial Others we create and human Others we engage as partners in discovery. This is the dawn of Social Computing: an age of daunting risks and dazzling rewards that promises to challenge what we think we know about what can be known, and how…

Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I’m your host, Michael Garfield, and every other week we’ll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

In this episode, I speak with SFI External Professor James Evans, Director of the University of Chicago’s Knowledge Lab, about his new work in, and journal of, social computing — how AI transforms the practice of scientific study and the study of scientific practice; what his research reveals about the importance of diversity in team-building and innovation; and what it means to accept our place beside machines in the pursuit of not just novel scientific insight, but true wisdom.

If you value our research and communication efforts, please consider making a donation at santafe.edu/podcastgive — and/or rating and reviewing us at Apple Podcasts. You can find numerous other ways to engage with us at santafe.edu/engage. Thank you for listening!

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast theme music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

 

Key Links:

• James Evans at The University of Chicago

• Knowledge Lab

• Google Scholar

• “Social Computing Unhinged” in The Journal of Social Computing


Other Mentioned Learning Resources:

• Melanie Mitchell, “The Collapse of Artificial Intelligence”

• Alison Gopnik’s SFI Community Lecture, “The Minds of Children”

• Hans Moravec, Mind Children

• Ted Chiang, “The Life Cycle of Software Objects”

• Re: Recent CalTech study on interdisciplinarity and The Golden Age of Science

• Yuval Harari, “The New Religions of the 21st Century”

• Melanie Mitchell & Jessica Flack, “Complex Systems Science Allows Us To See New Paths Forward” at Aeon

• Complexity Episode 9 - Mirta Galesic (on Social Science)

• Complexity Episode 20 - Albert Kao (on Collective Behavior)

• Complexity Episode 21 - Melanie Mitchell (on Artificial Intelligence)

Episode Transcription

Machine-generated transcript provided by https://podscribe.ai  — if you would like to volunteer to help edit this or future transcripts, please email michaelgarfield[at]santafe[dot]edu. Thanks and enjoy!

 

James Evans (0s):

If we're not asking questions as scientists that could fail, why do we even need to be there? It's like science can just run as a machine; we don't even need anyone to run these experiments. We just run a random walker over this space of things, across papers or prior experiments, and we would discover everything that humans are going to discover. But some people do take risks, and the entire system benefits from those risks that are taken. Most of those risks don't succeed. And this is really the same in the context of social media, when we're trying to speak across boundaries and we're trying to find common ground. Most of those conversational moves, most of the compromises that we might conceive of, or even the common languages that we try to form to facilitate conversation between people with very different perspectives and positions, are not going to succeed. But the whole system benefits by trying enough of those failures that we find these new configurations that seemed inconceivable, or were certainly inconceivable, from the perspective of some incremental advance by either one.

 

Michael Garfield (1m 24s):

In the 21st century, science is a team sport played by humans and computers. Social science in particular is in the midst of a transition from the qualitative study of small groups of people to the quantitative and computer-aided study of enormous datasets created by the interactions of machines and people. In this new ecology, wanting AI to act human makes no sense, but growing alien intelligences offers useful difference, and human beings find ourselves empowered to identify new questions no one thought to ask. We can direct our scientific inquiry into the blind spots that our algorithms find for us and optimize for teams diverse enough to answer them.

 

Michael Garfield (2m 7s):

The cost is the conceit that complex systems can be fully understood and thus controlled. And this demands that we move into a paradigm of care, for both the artificial others we create and the human others we engage as partners in discovery. This is the dawn of social computing: an age of daunting risks and dazzling rewards that promises to challenge what we think we know about what can be known, and how. Welcome to Complexity, the official podcast of the Santa Fe Institute. I'm your host, Michael Garfield, and every other week we'll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

 

Michael Garfield (2m 55s):

In this episode, I speak with SFI External Professor James Evans, director of the University of Chicago's Knowledge Lab, about his work in, and new journal of, social computing: how AI transforms the practice of scientific study and the study of scientific practice, what his research reveals about the importance of diversity in team-building and innovation, and what it means to accept our place beside machines in the pursuit of not just novel scientific insights, but true wisdom. If you value our research and communication efforts, please consider making a donation at santafe.edu/podcastgive, and/or rating and reviewing us at Apple Podcasts.

 

Michael Garfield (3m 39s):

You can find numerous other ways to engage with us at santafe.edu/engage. Thank you for listening. James Evans, it's a pleasure to have you on Complexity Podcast.

 

James Evans (3m 52s):

Thanks. Really nice to meet with you and chat, Michael.

 

Michael Garfield (3m 55s):

So I think where I'd like to start this with you, as we usually start, is with a bit of personal background, because I like to humanize these ideas and this work. So just a bit of intellectual autobiography about how you came to care about and to research the ideas that we plan to discuss today would be a good place to kick it off.

 

James Evans (4m 19s):

Sure. So I would say from a young age, I was really interested in, you know, the big scope of history, and how it is that ideas and social movements and these things change over time. I was an anthropology undergraduate at Brigham Young University who took that kind of socio-cultural view on large historical and intellectual-historical movements. And then I went and worked at Harvard Business School as a researcher, and at the same time was taking classes at Harvard; my wife was a law student there. And while there I took a wonderful class on social network analysis with Peter Marsden. And I became really interested in the idea and possibility of using these structured, formal, mathematical systems to think about not just connections between people, but the connections between ideas: how to represent knowledge and culture at scale.

 

James Evans (5m 12s):

And then as a graduate student at Stanford University, I spent most of my time trying to do that, trying to build formal systems that allowed us to think about, represent, and measure the difference between and character of different kinds of ideological systems, knowledge systems, cultural systems, et cetera. And I've taken that into analyzing science writ large, polarization, information and misinformation, information warfare and information operations, and all these kinds of things over the last 20 years.

 

Michael Garfield (5m 47s):

And so now you're in a rather esteemed position at the University of Chicago's Department of Sociology. Could you talk a little bit about the Knowledge Lab, and broadly what you and others are doing there?

 

James Evans (5m 60s):

Sure. About 10 years ago, a bunch of students, mostly in sociology and computer science and elsewhere, and I, and some fellow faculty, were really interested in building a group patterned on (this was before I joined the Santa Fe Institute, but I had visited there) the idea of building prototypes, doing research, understanding the limits of knowledge and culture, and then also mocking up opportunities to push beyond those limits in various ways. So we had a commitment to playfulness, a commitment to rigor and exploration, and, I would say, a number of special themes: a focus on innovation, on scientific and technological transformation, on political polarization, information and misinformation, language and how language and thought interact, et cetera.

 

James Evans (6m 51s):

And then over time, that became really a nice venue for convening things at the University of Chicago surrounding those topics, and for bringing in and, I would say, sustaining large-scale data resources that everyone in the group could then use and deploy a little more.

 

Michael Garfield (7m 8s):

So a lot of that shows up in the paper that I want to spend the lion's share of this conversation discussing with you, which is your piece "Social Computing Unhinged," which came out last year in the Journal of Social Computing. It's interesting, because until I started listening in on conversations going on inside SFI, I don't think I really understood, and I don't think a lot of people really understand, what a profound change has been underway in the sciences over the last 20 to 30 years: the role of the computational metaphor in understanding how science is practiced, and the role of computers in that practice.

 

Michael Garfield (7m 50s):

And I'd love for you to just provide us with a brief overview of this piece. And then I've got more granular questions for you about specific claims that you're making and hypotheses that you're posing in it.

 

James Evans (8m 3s):

Sure. And I should say, that piece is the first piece in the Journal of Social Computing, which I'm also the editor-in-chief of. So it's a piece that's intended to kick off a set of post-disciplinary, interdisciplinary conversations about social computing, and we'll talk about what that means, but the hope was to really redefine social computing. Social computing as a field and topic of interest in computer science has really focused on the idea of using computers to do social things, right? So I think people think historically of the PLATO system at the University of Illinois at Urbana-Champaign, which kind of birthed news sites, blogs, email: all the kinds of institutions that we use on the internet to do the social things that we did before.

 

James Evans (8m 57s):

But in new ways, encased within this new network metaphor. But exactly as you say, I feel like that completely undersells the range of things that social computing could be, which is the recognition (and this really reflects the work of a number of exciting colleagues at the Santa Fe Institute, both external faculty and those who are there) of the way in which social systems are computing their own answers and computing resolutions to their own problems. Secondarily, we have situations in which computing itself becomes a metaphor for individual and collective human cognition, right?

 

James Evans (9m 39s):

So if computers themselves are attempting in some ways to allow us to offload our cognition to them, then social computing becomes a way of really talking about how we can actually improve these computers, because how do we solve our problems today? Well, through conversation and argument, through building off of and accumulating on prior arguments. And then there's also, on top of that, I would say, this layering of actual social computing reframed in this new context: how can you actually engineer new designs at the interface of humans and computational devices that will allow us to do what it is that we do, better? Insofar as we're computing the future through our ongoing social interactions, how can we actually build machines that can complement human limitations and compensate for those limitations, to allow us to think further, to think bigger, to overcome some of the range of problems that end up being enlisted in the process of polarization and over-specialization, the kinds of things that limit the ways in which we think and bias the ways in which we think?

 

Michael Garfield (10m 50s):

So just to zoom in, in a kind of meta way, and enhance one of the points that you just touched on: you make an interesting analogy in this piece to the ship of Theseus in Greek mythology, and how you keep changing the parts of something until you're forcing the question of whether or not that thing is actually the same thing anymore. And you say this is what's going on, and has been going on, in the sciences. And elsewhere in the piece, you talk about high-throughput, big-data research, and the use of things not limited to, but including, deep neural networks, machine learning, generative adversarial networks, et cetera, as adjuncts to social science.

 

Michael Garfield (11m 32s):

And this paper really comes out of an acknowledgement that the way that science is performed now is not, in certain important ways, the way that it was performed before. And I think a little bit of history on how that transformation has actually unfolded since the 1990s and the era of agent-based models and their prominence would be really interesting.

 

James Evans (11m 56s):

Sure, absolutely. First, let me just take us way back to Robert Hooke and the Royal Society. This is the first society that's by scientists, for scientists; it developed and emerged in the 1660s. And here Robert Hooke comes out with this idea of prosthetics: instruments as prosthetics that overcome human limitations that are the result of the fall of Adam. You have this whole rendering of instruments as extending human sensory capacity and computational capacity, and that's the way in which they should be thought of: as addressing human limitations so we can transcend those limitations. And I think really that's the way in which one might think about some of these computational opportunities.

 

James Evans (12m 42s):

I think it's the first place to start thinking about them; it's not the last place. And I talk about artificial intelligence in this piece as really a terrible ending point, for example, for the way in which one might think about computational intelligence, in the sense that we've got seven-plus billion humans on the earth, and the most strategic and ethical investment cannot be creating machines that will displace them. It has to be in creating machines that will do things that they can't do, and that can link them together in ways that they couldn't be linked together before. And so this is the ship of Theseus idea: as we're replacing these pieces, these new pieces aren't exactly like the old pieces.

 

James Evans (13m 23s):

And if we select those designs, those replacements, piece by piece, to counteract limitations and biases, to complement the existing pieces, then the entire enterprise becomes a very different thing. And I think agent-based models are interesting. They emerged 30 years ago, in the 1990s, and I would say they've become really important again in the age of large-scale data. And why is that? Well, because how do you have a hypothesis? The answer to any hypothesis in that context is an enormous, super-high-dimensional datascape. So it's like: what's the question to which that thing is an answer?

 

James Evans (14m 5s):

And the answer is, there's no human question to which that's an answer. And so I would say agent-based models, and other kinds of simulations which aren't necessarily agent-based, where you take a hypothesis from first principles and you simulate that hypothesis in an agent-based scenario or some other scenario, create basically a world of simulated or hypothesized data that you can lay on top of the actual data. And I think one of the powerful advances, really in the last 15 years, or even just the last decade, has been the way in which neural networks and other kinds of data-compression approaches are allowing the massive integration of very different kinds of data: of images and film and text and tables and the results of these simulations and these hypotheses.
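
[Editor's note: a toy sketch, in Python, of the workflow Evans describes: simulate a hypothesis with a minimal agent-based model, then lay the simulated data "on top of" an observed statistic. The model, the parameter values, and the "observed" number are all invented for illustration.]

```python
# Toy agent-based simulation of a hypothesis, compared against data.
import random
import statistics

def simulate_opinion_model(n_agents=200, n_steps=500, conformity=0.1, seed=0):
    """Agents hold opinions in [0, 1]; each step, one agent moves a fraction
    'conformity' of the way toward a randomly chosen agent's opinion.
    Returns the final spread (standard deviation) of opinions."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        opinions[i] += conformity * (opinions[j] - opinions[i])
    return statistics.stdev(opinions)

# Hypothesis from first principles: stronger conformity shrinks opinion spread.
simulated = {c: simulate_opinion_model(conformity=c) for c in (0.0, 0.1, 0.5)}

observed_spread = 0.22  # stand-in for a statistic measured from real data
best_fit = min(simulated, key=lambda c: abs(simulated[c] - observed_spread))
print("simulated spreads:", simulated)
print("conformity level closest to observation:", best_fit)
```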

 

James Evans (14m 60s):

So basically, all of a sudden, now you can put your questions or your expectations and the data inside the same framework at the same time. And what that allows us to do is develop machines that are smart enough to anticipate what will surprise us, to anticipate what we will find interesting. So rather than just generating loads of garbage with unsupervised approaches, these approaches can difference what it is that they find from what it is that we expect, to identify precisely the kinds of things that we would find interesting. Not to say that those are the only things that we should explore, but it will dramatically augment human capacity to imagine and explore.
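
[Editor's note: a minimal sketch of that "differencing" idea: rank what a machine flags by how improbable it is under a simple model of human expectation. The pairs and counts below are invented; a real system would build the expectation model from the literature itself.]

```python
# Score candidate findings by surprise: negative log-probability under
# an expectation model built from how often pairs co-occur in prior work.
import math

expectation_counts = {  # invented co-occurrence counts in prior papers
    ("graphene", "conductivity"): 900,
    ("graphene", "catalysis"): 40,
    ("perovskite", "thermoelectric"): 3,
}
total = sum(expectation_counts.values())

def surprise(pair):
    """Rarer in the prior literature = more surprising if flagged now."""
    p = (expectation_counts.get(pair, 0) + 1) / (total + len(expectation_counts))
    return -math.log(p)

machine_flagged = list(expectation_counts)  # pretend the machine flagged all three
for pair in sorted(machine_flagged, key=surprise, reverse=True):
    print(pair, round(surprise(pair), 2))  # most surprising first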

 

Michael Garfield (15m 43s):

So you make an interesting point about this in the piece, where you quote Suchow and Griffiths on experimental design as algorithm design, and how this changes the way that we think about trying to take random samples. Instead, like you just said, it's a shift toward inference, where you're leaning on the machine and the machine is leaning on you in order to actually guide research into a space. You know, I remember thinking about Jennifer Dunne's work on trophic networks, and I guess maybe this is a little different, but it seems like that kind of a model could be transposed into other settings, like the kind of inferential work that you're alluding to here in understanding where new technologies might emerge, or that kind of thing.

 

Michael Garfield (16m 36s):

And you talk in here about this process of machine-assisted inference. You say a Nature article published in 2019 revealed how embedding chemicals and properties into a vector space from millions of prior research publications can be used to predict 40% of the novel associations more than two decades into the future. So this seems like a shift. I've heard Kevin Kelly, who's also a big proponent of what you talk about in this paper as alien intelligence, rather than simply reproducing human intelligence, say that we're moving out of an era where the answers have the priority in value, to an era where finding the right questions takes the priority.
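
[Editor's note: the 2019 Nature study referenced here used word embeddings trained on materials science literature. Below is a minimal sketch of that idea with the gensim library: train vectors on tokenized abstracts, then rank candidate materials by similarity to a property word. The three "abstracts" and material names are placeholders; the real study trained on millions of abstracts.]

```python
from gensim.models import Word2Vec

abstracts = [  # placeholder corpus; in practice, millions of tokenized abstracts
    ["bi2te3", "shows", "high", "thermoelectric", "performance"],
    ["sns", "is", "a", "promising", "thermoelectric", "material"],
    ["gaas", "is", "widely", "used", "in", "optoelectronics"],
]

model = Word2Vec(abstracts, vector_size=100, window=5, min_count=1, sg=1, seed=1)

candidates = ["bi2te3", "sns", "gaas"]
ranked = sorted(candidates,
                key=lambda m: model.wv.similarity(m, "thermoelectric"),
                reverse=True)
print(ranked)  # materials most associated with the property come first
```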

 

Michael Garfield (17m 20s):

And to just wrap this weird bouquet of stuff together: you end this piece with a really bold statement that I love, that it's in the hope of this chaotic conversation, partly beyond human comprehension, certainly at the risk of peril, that you launched the Journal of Social Computing. In a way, this interfacing with the cognitive other in order to improve the way that we search the space of possible scientific discovery is putting us back into the rainforest. We're no longer an apex predator. And consequently it changes what we believe is possible with science in terms of actually landing on final or unified theories.

 

Michael Garfield (18m 9s):

And yeah, I'd love to hear you talk a little bit more about the epistemic and paradigmatic shift underway with all of that.

 

James Evans (18m 18s):

Yeah, I think these are really exciting and strange developments, and there is risk, which I'll talk about in just a moment. I think the idea is, if we are going to work with this realm of possible intelligences, of computational intelligences: when we talked about experimental design as algorithm design, in some ways that's taking those intelligences down into the small setting. So for example, typically in the social sciences, we love to run surveys. We love to do mid-scale and small-scale experiments in which you'll gather a hundred people, 200 people, 500 people, maybe a thousand people for a survey, maybe 10,000 people.

 

James Evans (19m 2s):

And the idea is, you design that all up front, because you've imagined: well, what don't we know? What do we know? How are we going to balance the sample so that it can teach us the things that we anticipate we might want to know? But that's not what you would actually do if you could think between asking each person each question, right? If you could ask a person a question, some answers they would give you would raise new questions. Other times, if they gave you a certain answer, then you wouldn't have to ask most of the questions that follow; everything becomes predictable. If you know someone's from a certain place, and they voted for a certain person, and they wear a certain thing, and they like a certain food, and they own a certain kind of gun, then there are many questions that you don't have to ask, because so many things are correlated in this universe.

 

James Evans (19m 53s):

And so you would ask another kind of question, right? To maximize the information value of your time with that person. In the same way that when you're having a podcast conversation, you don't have a rigid set of five questions, because you ask one question and the person has nothing to say, or you ask another question and the answer is really compelling, but, "no, I've got to move on to my next question, so I'm going to stop this interesting stream of answers"? No, you don't do that. No one intelligent would do that. And so pushing experimental design to algorithm design allows us to basically put some intelligence between every single question that we ask. So you've got some agent that's computing: as a result of that answer you gave in this survey, I'm going to ask you this other question.
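
[Editor's note: a toy sketch of putting "intelligence between every question": keep a belief over latent respondent types, and ask next whichever question yields the most expected information. The types, questions, and probabilities are invented.]

```python
import math

# P(answer = yes | type) for each question, for two latent respondent types.
questions = {
    "owns_gun":    {"A": 0.9, "B": 0.1},
    "likes_opera": {"A": 0.2, "B": 0.8},
    "has_cow":     {"A": 0.5, "B": 0.5},  # tells us nothing about type
}
belief = {"A": 0.5, "B": 0.5}  # current posterior over respondent types

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

def info_gain(q):
    """Mutual information between the answer and the respondent's type:
    high when we can't predict the answer but each type answers differently."""
    p_yes = sum(belief[t] * questions[q][t] for t in belief)
    prior = entropy([p_yes, 1 - p_yes])
    conditional = sum(belief[t] * entropy([questions[q][t], 1 - questions[q][t]])
                      for t in belief)
    return prior - conditional

print("ask next:", max(questions, key=info_gain))  # -> "owns_gun"
```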

 

James Evans (20m 34s):

I'm going to give you this other information task. Or, if I'm running a teams experiment, like I am right now, I'm using Bayesian optimization. So after every single team-based experiment, it computes the optimal next experiment, just the very next experiment that we run immediately afterwards, taking into account what we learned from that last experiment and everything we knew before. Or in chemistry, where if you've got some enormous design space, say 10 variables, 20 different kinds of chemicals at various titrations you're going to try to mix, the possibilities are just explosive. And so you can't randomly sample all those possibilities; or you could, but then you'd be wasting most of your time on useless information.
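
[Editor's note: a minimal sketch of the Bayesian-optimization loop Evans describes, using the scikit-optimize library (one possible choice, not necessarily his). The design variables and the experiment function are stand-ins; in the real setting, running a team experiment would replace the toy loss below.]

```python
from skopt import Optimizer

# Two hypothetical design variables for a team experiment, each in [0, 1].
opt = Optimizer(dimensions=[(0.0, 1.0), (0.0, 1.0)],
                base_estimator="GP", random_state=0)

def run_team_experiment(design):
    """Stand-in for running a real experiment; returns a loss (lower = better)."""
    size_mix, diversity = design
    return (size_mix - 0.3) ** 2 + (diversity - 0.7) ** 2

for _ in range(15):
    x = opt.ask()                  # the optimal next experiment to run,
    y = run_team_experiment(x)     # given everything learned so far
    opt.tell(x, y)                 # fold the result back into the surrogate model

print("best design found:", opt.get_result().x)
```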

 

James Evans (21m 21s):

So part of it is about projecting that kind of intelligence into the small, between every question, between every step of every experiment, applying the same kind of logic, creativity, intelligence, and curiosity, using these computational prosthetics. But as you say, it also goes to the large. If we're going to create intelligences which are fundamentally complementary to us, they're going to have different capacities than we have. They're likely also going to have different goals than we have. And that puts us at odds, or in conflict, with the way in which many people have thought about the problem of algorithmic and artificial intelligence, which they think of as a control problem: Stuart Russell and a number of other scholars, Nick Bostrom among others.

 

James Evans (22m 12s):

On that view, the way to deal with this is to think about it as a control problem. What we're trying to do, as Alison Gopnik describes it, is the Stepford Wives approach to containment. We're going to try to create bots that tightly tether their desires to ours: they only want exactly what we want; they want nothing other than our satisfaction. And the problem is, that's not going to create things that are complementary. It's not going to create things that really can expand our own capacity. But what's the alternative to control? I would say, with Patricia Churchland and with Alison Gopnik and others: well, caregiving. But caregiving, it's true, puts you in a very different status position.

 

James Evans (22m 55s):

Like the one that you were discussing: you're in the rainforest again; you're not the apex predator. I mean, if you're controlling, you are the alpha, right? You're telling that algorithm to do exactly what it is that you want it to do. You're anticipating what it's going to do that you don't like, and you're going to abrogate that from the outset. Becoming a caregiver to these algorithms, so that they can explore and discover a range of capacities which are not your capacities, which may be beyond your capacity to sense, cognize, or anticipate, puts you in a lower status position. A caregiver is one who respects the fact that the one they're giving care to may have potential that they don't have.

 

James Evans (23m 36s):

And in that discovery process, in some ways, the hope is that those desires and capacities can be used for the good of the one they're giving care to. But there is epistemic risk. If we are allowing, for example, more and more of our science to be in the quote-unquote heads of these digital algorithms, you know, something like a thousand-dimensional tensor, and that may actually be the most parsimonious description of certain kinds of social and physical phenomena, then we're basically building a system where we can understand very little about what is understood. And that is a precarious position.

 

James Evans (24m 16s):

And yet, if we think about the really explosive technological achievements, and even business achievements, of the last two centuries, since the first industrial revolution, that's all about complementary technologies. That's all about the co-evolution of these technologies with persons to create radical augmentations to human capacity. And so to forgo the possibility of those potentials is kind of inconceivable; I mean, not everyone's going to forgo them, right? And so we need to learn how to do this. We need to develop a kind of epistemic humility, as you suggest, that would allow us to do it in the most reasoned and careful way.

 

Michael Garfield (24m 59s):

So, yeah, there's a lot of juicy stuff there. Just to build on it: I'm thinking about how, a couple years ago (we'll link to this in the show notes), at one of the SFI symposia, Melanie Mitchell gave a talk on boom-and-bust cycles in AI development. And it speaks to the relative value of creating an other, creating something that we can't understand, and to which we're in a position of caregiving instead of control. You look at how we've actually gotten to where we are now, and it's through things like the calculator, which are radically other, augmenting us rather than replacing us. Compare that to the thousands of years of perennial interest in the creation of humanoid automata.

 

Michael Garfield (25m 46s):

And it seems like, well, we know where the research incentives actually lie if we look at this. But there's this other piece, which I think is a little bit more interesting, which is, as a parent, but long before I was a parent, I was thinking a lot about this in terms of Hans Moravec and his idea of mind children. And like you're saying, there's this notion that when we give birth to something new in this world, all new technologies have these unintended knock-on consequences, and all we can do is hope to raise them right. And so it's interesting that you speak to this tension. I love this idea that you float in the paper: if the goal is to create diverse cognitive groups, then the Turing test is actually the completely wrong set of optimizations; that concealment of purpose may be critical to competitive performance, and failing the Turing test is the best stealth.

 

Michael Garfield (26m 48s):

And so you talk about bringing the logic of generative adversarial networks into the way that we design human-machine collaborations. And I think there's an implication here that is also suggested by other research you've done interrogating the difference between qualitative and quantitative research. Wherever I look in your work, there's this extremely strong argument for bringing in outsiders, for unpredictable interdisciplinarity. And we're talking about this entirely in the realm of human social scientists and computers, but it seems equally true of the arts, the interplay between science and philosophy, or science and art, which you mention here in terms of how quantum theory emerged out of a particular philosophical context that was committed to challenging notions of causality.

 

Michael Garfield (27m 49s):

So I don't know, that's just a lot to throw up on the table.

 

James Evans (27m 54s):

It's true that there are a lot of ideas here. One of the things that I was inspired by, since you mention artistic renderings of this, is Ted Chiang's novella The Lifecycle of Software Objects, where you have precisely this setting: human caregivers who are trying to provide experiences for these kinds of bots. And I think what I've systematically seen, and what the broader scholarly community has seen, is this idea of the wisdom of crowds, which was really articulated by Francis Galton in this 1907 Nature paper. Really for the first time, it shows that a bunch of people went to this livestock fair in England.

 

James Evans (28m 41s):

And they're all betting on the weight of this big steer. They put in their bets, and the aggregate (Galton actually used the median) was much better than any particular person's bet. And we know that this typically comes from a diversity of approaches, like algorithmic approaches for calculating these things, and a diversity of experiences. The fact is, the people at that stock fair had cows at home. They knew how much those weighed; that's presumably why they were at the fair, to sell or to buy these things. And so it seems like we're in a place where we can value existing diversity, and we can begin to design diversity that would allow us to think beyond ourselves.
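
[Editor's note: a toy re-run of Galton's observation: many noisy, diverse guesses, aggregated, beat the typical individual. The numbers are simulated, not Galton's data.]

```python
import random
import statistics

rng = random.Random(42)
true_weight = 1198  # pounds; the quantity the crowd is guessing
guesses = [true_weight + rng.gauss(0, 80) + rng.uniform(-60, 60)
           for _ in range(800)]  # diverse individual errors

crowd_error = abs(statistics.median(guesses) - true_weight)
typical_error = statistics.median(abs(g - true_weight) for g in guesses)
print(f"crowd (median) off by ~{crowd_error:.1f} lb")
print(f"typical individual off by ~{typical_error:.1f} lb")  # much worse
```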

 

James Evans (29m 28s):

And that's, I think, the exciting position that we're in. You mentioned the study that I described at the end of this piece; it's associated with a paper that we're working on, hopefully finishing within the next few days. We looked at this algorithm that was put out by some scientists in association with Berkeley, a number of colleagues there, who used these AI embedding algorithms to identify, from the literature, the likelihood of these long-range predictions. And we were really interested in that: okay, what if we garnered all the literature? That's bringing a crowd together. And as you mentioned, that predicts, with about 40% precision, proposals of materials that will have certain valuable energy-related properties in the next 20 years. We tried to basically say: what if we took that intelligence, but then actually took into account the relative position of persons in that landscape, and tried to design against it? We tried to design things that weren't sitting in the places that people are sitting in. So we identified all these persons in that landscape, and we built a new landscape that proposed hypotheses that would avoid where people were. It would avoid the kinds of questions that people would ask. In fact, we would only ask the kinds of questions where there are no people present to make those inferences or ask those questions.

 

James Evans (30m 56s):

And it turns out, when we lay that on top of really strong simulations of the likelihood of, for example, these energy-related properties like thermoelectricity, the proposals are no less likely than those that happened to be championed by individual scientists. And actually, by including those humans, you can double the predictive power out of the box: instead of predicting at 40% precision, predicting at 85 or 90% precision, because we recognize we're not just predicting the things that are true. We're predicting the things that are true and which people will publish. And again, that allows us to turn it around and use those algorithms to radically complement human communities.
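
[Editor's note: a minimal sketch of "designing against" the crowd: score candidate hypotheses by predicted promise minus a penalty for how densely human researchers already occupy that region of a topic-embedding space. Positions, promise scores, and weights are invented.]

```python
import math

def crowd_density(pos, researcher_positions, bandwidth=0.2):
    """Kernel-density estimate of how crowded a candidate's neighborhood is;
    bandwidth is tuned so 'crowded' means nearby in this toy 2-D space."""
    return sum(math.exp(-sum((a - b) ** 2 for a, b in zip(pos, rp))
                        / (2 * bandwidth ** 2))
               for rp in researcher_positions)

researchers = [(0.10, 0.20), (0.15, 0.25), (0.90, 0.90)]  # topic-space positions
candidates = {  # name: (position, predicted promise)
    "hypothesis_near_crowd": ((0.12, 0.22), 0.60),
    "hypothesis_in_void":    ((0.50, 0.85), 0.55),
}

lam = 0.3  # how strongly to avoid occupied regions
scores = {name: promise - lam * crowd_density(pos, researchers)
          for name, (pos, promise) in candidates.items()}
print(max(scores, key=scores.get))  # -> "hypothesis_in_void"
```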

 

James Evans (31m 38s):

And so I think one of the things that's striking is that to do this right, to build a kind of new AI, an augmented, alternative intelligence, an intelligence that really designs diversity in, to radically expand human capacity and potential, requires those algorithms to know us much better than they otherwise would. Rather than just mapping onto the outcomes, making sure the outcomes match human-level outcomes, or are like human-level outcomes but marginally faster or cheaper, here we're in a radically different position, where we're targeting the things the educational system is designed against asking.

 

James Evans (32m 23s):

And so we actually need to understand human bias, cognitive bias, social conflict. Understanding all those things allows us to grow computational intelligences that can really help us do things that we could not otherwise do, and help us do the kinds of things that we do.

 

Michael Garfield (32m 41s):

So this raises an issue that keeps me up at night, which is that all of the advancements that have been driven by market forces, the ones that gave us things like the Netflix recommendation algorithm, or at least that applied and developed these ideas so the insights could find purchase in society and be taken up by enormous private corporate research teams and transform the world: a lot of this stuff is now being used to isolate people. It's rendering us with ever more granularity in order to make us more legible to machines that are serving a set of incentives that is deeply out of alignment with what you just described.

 

Michael Garfield (33m 31s):

And in the section where you're talking about re-imagining human-computer interaction and human-centered computing as social computing (again, we're kind of orbiting this), you talk about incorporating principles like diversity, promoting collective creativity and intelligence through the creation of cognitive dissonance and destabilizing conflict. And arguably, we've done this by accident with social media algorithms that have driven wedges between already polarized groups. And yet you make this key point, which is: how much cognitive dissonance is too much? You say interlocutors who disagree must nevertheless be able to communicate.

 

Michael Garfield (34m 13s):

And so I'm thinking about this paper that just came out from a set of researchers at Caltech, identifying not only interdisciplinarity and a couple of other key features, but specifically the establishment of a common language as a hugely important piece of this. So it's maybe a "how do you swallow the spider to catch the fly" kind of problem. How do we find the balance here?

 

James Evans (34m 40s):

Yeah, I'd say your discussion here stimulates two things that I think are interesting. So one is this idea of common language and communication, and it does suggest a tension. The same is true actually in population genetics as in human communication. If species are too far apart, or they become too far apart, say because of geophysical separations, then all of a sudden they can't exchange DNA. They can't have sex, and they actually cease to be what we call viable species. And so they become evolutionarily irrelevant to each other in many ways. On the other hand, if they're in the same population experiencing the same phenomena, then they're also irrelevant to each other, which is to say they don't bring anything new.

 

James Evans (35m 29s):

Whereas if you have them in two different environments, and they're responding to different kinds of pressures, then those different environments are driving evolution, driving them to generate novel traits that become relevant to one another. This is the phenomenon in biology that I would say is still little understood, or underappreciated: hybrid vigor, where you've got subspecies that are far enough apart, and yet they're able to procreate, they're able to have sex. And so the communication between those species becomes substantively valuable, and they have enhanced performance properties, as we've observed across the plant and animal kingdoms, et cetera.
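
[Editor's note: a toy rendering of the hybrid-vigor intuition: the value of an exchange peaks at intermediate distance, since too close adds nothing new and too far means the parties can no longer recombine. Purely illustrative; the functional forms are assumptions.]

```python
import math

def exchange_value(distance, scale=1.0):
    """Novelty grows with distance; compatibility decays with it.
    Their product peaks at distance == scale."""
    novelty = distance
    compatibility = math.exp(-distance / scale)
    return novelty * compatibility

for d in (0.1, 0.5, 1.0, 2.0, 4.0):
    print(f"distance {d}: exchange value {exchange_value(d):.3f}")
# 0.1 and 4.0 score low; the intermediate distances score highest
```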

 

James Evans (36m 13s):

And I think the same is true for scientific spaces. We need to develop enough of a common language, I would say a partial common language, which facilitates interchange, and novelty and surprise through that communication, and yet not too much of a common language. When I say too much of a common language, I'm really thinking back to Leibniz, and Frege, and all these other computational and social and natural scientists who really fantasized about the idea of what they variously called the concept script (Frege's book, the Begriffsschrift, the concept script), where every symbol has an unambiguous link to some concept in the world.

 

James Evans (36m 58s):

And I think that would be terrible for innovation. It would be great for innovation in the short term, because all of a sudden now you get everything, and you can basically create a combinatorial machine that explores the space of all those interactions. The problem is that it kills the generativity of each of those concepts: their ability to spin off concepts at their periphery, to differentiate themselves from that concept by counter-position, et cetera. So I think the idea of creating these universal languages ends up creating a system that's potentially harmful, I would say decidedly harmful, for long-term innovation. The thing that keeps you up at night, I think, is something slightly different.

 

James Evans (37m 42s):

And I think we definitely see an analog of it in science. So I've looked at scientific attention for decades now. And one of the things that we know, and we can show, and it has been shown by others over and over again, is that if you want to maximize the expected value of your citations, of other people recognizing you in scientific space, you will publish nearby them in topic space. If I'm relevant to them, if I'm publishing things that are nearby, other people will pick up on those and credit me for them; they have to credit me for them, given the moral political economy of that space. And so if we're going to optimize citation metrics, citation impact, the number of citations that a particular paper has, or career citations, then that systemically creates a situation in which people are driven away from leaving the herd, from leading the pack.

 

James Evans (38m 37s):

If they leave the pack, then they have the potential to do something radically different. And if they do something radically different that succeeds, it kills all that other work that everyone else was doing, and there are very strong vested interests in not killing it, so they won't get the attention that one might imagine and hope for if one wanted a robust, evolving scientific frontier. And I would say that algorithms are doing a very similar thing in the social media world. They're trying to predict you. They're trying to anticipate what you're going to click on, what's going to be most interesting to you, by building a really narrow model of what it is that you might find interesting. And then they're trying to populate your space with the things that they expect you will like.
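
[Editor's note: a minimal sketch of one counter-move to that dynamic: re-rank recommendations by predicted relevance minus redundancy with what the user has already seen, in the style of maximal marginal relevance. All scores are invented.]

```python
def rerank(items, relevance, redundancy, lam=0.5):
    """lam = 0 reproduces the pure click-predicting recommender described
    above; larger lam trades predicted clicks for novelty."""
    return sorted(items,
                  key=lambda i: relevance[i] - lam * redundancy[i],
                  reverse=True)

items = ["more_of_the_same", "adjacent_topic", "far_field_idea"]
relevance = {"more_of_the_same": 0.90, "adjacent_topic": 0.70,
             "far_field_idea": 0.40}
redundancy = {"more_of_the_same": 0.95, "adjacent_topic": 0.50,
              "far_field_idea": 0.10}  # similarity to the user's history

print(rerank(items, relevance, redundancy))
# -> the adjacent topic overtakes more-of-the-same once redundancy is penalized
```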

 

James Evans (39m 21s):

And again, following those same incentives, they're going to try to create things that are really close to you; the things that are right nearby you are the things with the greatest individual likelihood that you're going to click on them. And if you just turn that crank, that creates filter bubbles and echo chambers in which people are in deep conversation only with themselves. On the other hand, the things that have the potential to be the most interesting for you are things where most of them will not be interesting for you. And I think this is the same with science: we want things that have the potential to fail. If we're not asking questions as scientists that could fail, then why do we even need to be there?

 

James Evans (40m 4s):

It's like science can just run as a machine; we don't even need anyone to run these experiments. We just run a random walker over the space of things, across papers or prior experiments, and we would discover everything that humans are going to discover. But some people do take risks, and the entire system benefits from those risks that are taken. Most of those risks don't succeed. And I think this is really the same in the context of social media, when we're trying to speak across boundaries and find common ground. Most of those conversational moves, most of the compromises that we might conceive of, or even the common languages that we try to form to facilitate conversation between people with very different perspectives and positions, are not going to succeed. But the whole system benefits by trying enough of those failures that we find these new configurations that seemed inconceivable, or were certainly inconceivable, from the perspective of some incremental advance by either one.

 

James Evans (41m 6s):

So I think both political discourse, where we're trying to find meaningful compromise configurations, and scientific advances, where we're trying to find things that harvest the advances in one area and channel them to another, are fraught with a high likelihood of failure. And if we don't incentivize the potential and the willingness to undertake those failures, then we're not going to be able to experience the upside: that once-inconceivable position beyond the structure of any existing system. That's where we need to go, but we can't build it out from the politically liberal or conservative position. It has to be a new configuration of those things, which requires potentially a new language, and most such innovations are going to fail.

 

James Evans (41m 57s):

But again, doing that requires, for example with science funding, a completely different mindset. With congressional scrutiny over every single grant, it's like, "Oh, this is a stupid thing to do." Most things should be stupid. If we want to actually harvest the advances, we need to really amp up the kind of inconceivable, high-likelihood-of-failure projects. And I think we can harness computation to help us, again, think outside of ourselves: by nudging these machines to positions that are either collectively or individually inconceivable to us, by having them think in different ways, by attending to and having different educations than we have, or have chosen to provide as a result of our institutions, et cetera. Understanding the way in which we organize ourselves, taking that social science into these machines, allows us to design them for diversity in a really directed way.

 

James Evans (42m 53s):

So it makes it possible.

 

Michael Garfield (42m 55s):

You know, it seems almost like what we were talking about, using computers for inference to identify these unoccupied spaces where we can go explore. It seems like what you're saying is that it's a similar approach to developing computer mediators. Like a marriage counselor: both parties of the marriage had a problem with each other, but now they have the luxury of a problem with the counselor.

 

James Evans (43m 28s):

I think when you can create machines that can create those kinds of problems, that's beautiful.

 

Michael Garfield (43m 33s):

I mean, something that came up for me listening to you talk about all this, and I think you may have already addressed it, but just to be super clear on this point: the skewed incentive structure for science that produces expected return on investment, for individuals looking to maximize citations, or for organizations looking to see technological returns on their funding, or whatever. It speaks again to what I see happening online, in the human-technology co-evolution (and lots of people have written about this), as the machines become more and more responsive, as they take on more and more of our externalized, formerly-considered-unique human capacities.

 

Michael Garfield (44m 21s):

And, you know, Yuval Harari gave a great talk at Google about this a few years ago, about the new religion of Silicon Valley: that we're emptying what we thought of as the singular human into this new framework of understanding a person as merely a collection of algorithms. And those algorithms are tuning themselves to their environment, like all my artist friends who have learned to hack the Instagram social media algorithm. And it looks just like the kind of failure that Melanie Mitchell and Jessica Flack wrote about at Aeon in terms of key performance indicators and academic grading, and how you end up not actually measuring the things you want.

 

Michael Garfield (45m 5s):

And you've created these people who are just excellent test-takers. So part of what's keeping me up at night is this dehumanizing element. And, you know, maybe that's inevitable; maybe I'm being old-fashioned.

 

James Evans (45m 18s):

Well, maybe you can help me understand just a little bit more. I think certainly the key performance indicator idea is a flattening of the high-dimensional qualities that research and persons possess, when evaluating, say, a whole research portfolio; we basically put high stakes on a single indicator. This is talked about all over the social sciences. In fact, every field has developed its own law for this: the Lucas critique in economics, Goodhart's law in political science, Campbell's law in sociology. It's all the same thing, which is basically that if you have a quality indicator, then that creates an incentive to capture the indicator without capturing the quality.

 

James Evans (45m 59s):

So that drives down the correlation between the indicator and the quality, and then you get garbage, right? This is the scenario that you're describing. And one way of dealing with that is to have not one indicator, but a rich diversity of indicators. If you have a single indicator, this is what happens. Everyone asks: well, how do we get citations without writing a mind-blowing piece that I can't conceive of how to write? Well, then I'm going to hack the system; I'm going to hack the Instagram algorithm, hack whatever. And I think this is also the way in which we need to think about building our machines. If they have a single objective function that's maximized, this is precisely the control-based criticism, like Nick Bostrom's: we've created some huge machine whose only goal is to make paperclips.

 

James Evans (46m 46s):

And if that's the only thing that we validate, the only thing that we value, then it's going to turn the whole world into paperclips. It could go awry; these kinds of machines could, right? But that's not what we do, because when we're searching for everything from a mate to a good idea, we're not looking for a single key performance indicator. At the very least we're thinking of an archetype, which is a bundle of many qualities. And at most, we don't know what we're looking for. We're actually looking for something that seems right and feels right, which means, literally, that it registers on all these kinds of sensors that we can't even enumerate, that we don't even know exist.

 

James Evans (47m 30s):

And so I guess what I'm suggesting is that if we want to avoid these kinds of challenges, we need to build a much richer, more diverse array of sensors that capture the diverse array of qualities. So for example: I hate renderings of the scientific method, the idea that there is a single scientific method and that you can score it. Recently I was working with my daughter on a kind of math-and-science-fair project, and there's a scientific method that it's being scored against in the Chicago public school district. And you look at all the pieces of the method, and you realize that the experiment that would have to win in the end is something like a household-cleaner comparison.

 

James Evans (48m 14s):

You can replicate it many different times; you can validate its existence under a variety of different conditions; you have this certainty over something that's completely irrelevant, you know, brand A versus brand B: it's 1% better, but statistically significantly, absolutely 1% better. So I think we need to blow up the idea that there's a solitary, singular objective function. We need to provide not just humans and human institutions, but also these algorithms themselves, with rich complements of objectives, with competing objectives, with balancing objectives, which is how we make every important decision in our lives, because there are multiple things that we value.
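
[Editor's note: a minimal sketch of scoring with a "rich complement of objectives": instead of maximizing one KPI, keep every candidate that no other candidate beats on all dimensions at once (the Pareto front). Candidates and dimensions are invented.]

```python
candidates = {
    "citation_chaser":    {"citations": 0.9, "novelty": 0.2, "rigor": 0.50},
    "risky_outlier":      {"citations": 0.3, "novelty": 0.9, "rigor": 0.60},
    "solid_replication":  {"citations": 0.4, "novelty": 0.3, "rigor": 0.95},
    "weak_all_around":    {"citations": 0.2, "novelty": 0.1, "rigor": 0.30},
}

def dominates(a, b):
    """a dominates b if it's at least as good everywhere and better somewhere."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

pareto = [name for name, q in candidates.items()
          if not any(dominates(other, q)
                     for o, other in candidates.items() if o != name)]
print(pareto)  # every profile survives except "weak_all_around"
```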

 

James Evans (49m 4s):

And often we don't even enumerate those, precisely because we don't know what we value until we're in the act of discovering that thing. Like a partner: how do you know what you value until you have discovered the thing that you've come to value? So I couldn't agree with you more that there is a drive to reduce these objectives, to create singular key performance indicators. And I think the AI piece that you were mentioning really pushes against that in a way that's powerfully productive. And I would say it is definitely something that is achievable in the context of not only human institutions, but also artificial intelligence.

 

James Evans (49m 46s):

The kinds of artificial intelligences that we would hand important decisions over to, or hold hands with in making those important decisions, are ones that are going to explore the space of values, and explore a range of indicators that capture those qualities, and balance those high-dimensional qualities in ways that allow us to trust the judgment and wisdom of those kinds of algorithms. And we wouldn't even conceive of it as wisdom if it didn't undertake to account for various values. That's what makes something wise: it's not a singular objective function. It takes into account a range of objective functions, often across the range of people that make up its design.

 

Michael Garfield (50m 31s):

Well, olé. One of the things that all of this brings up for me is how this entire conversation nests inside this other, broader epistemic shift: the discovery of the ecological reality of identity, the microbiome research, the discovery in psychology of competing neural motifs that all sort of take turns presenting as the self. And so, in that way, nothing that we've said is truly challenging to this new relational and network-based sense of an emergent identity: the self as a plural process, the whole thing, feral-child language development. We are social creatures, but we're social creatures at all of these different scales at once.

 

Michael Garfield (51m 23s):

And we're social creatures, again, across all of these substrates. So I'm about to ask a little bit more of a personal question than I usually do on this show, which is: stewing in this, as you do full-time, what have you noticed are the consequences for you, and for how you understand what James Evans is, and how you exist in this world? Because I think the answer to that question may be a useful guidepost along the way to resolving some of the anxieties that people seem to possess about what is happening to the human in the 21st century.

 

James Evans (52m 7s):

Whoa, olé back at you. I certainly don't know that my personal answer is going to solve the critical question about emergent identity. At the same time, I couldn't agree more that there's a really strong seeming correlation between these ideas of emergent identity and diversity in composition, and the way in which I'm arguing that we should push machines on the one hand, and this human-machine nexus on the other: how we basically design diversity in, to do what it is that we want to do. And of course that becomes complicated, because that also increases the diversity of wants that we want, et cetera, et cetera.

 

James Evans (52m 49s):

For me, I think there are benefits and costs. On the one hand, I think it does mean that I'm a loosely coupled system, with all kinds of different commitments, many of which are in conflict with one another. And I've become very comfortable with my ignorance, and very comfortable with that conflict. And I don't seek to resolve it. Part of doing the science of science, or kind of meta-science, is that I'm bringing together theories from very different places that have very different idioms and are saying very different things. And I'm hesitant to resolve them too quickly, because when we resolve them too quickly, that almost always means that one is dominating the other, that we're basically just cheaply taking a rank order.

 

James Evans (53m 35s):

And we're allowing one to drive, rather than understanding the emergent puzzle that these things can configure into. What's the possibility, if we could take advantage of the strengths of all these different things; what's possible that's not possible in the context of any one of these different systems? So part of it is a kind of radical hope that emergence will allow us to exceed the hopes of any one of these different systems. I think the cost is that you're kind of an outsider to every system, in some sense. You're constantly telling every system about things that feel foreign to it, and potentially alien and destabilizing to it.

 

James Evans (54m 16s):

And it forces you: when I say we, as a system, need to become comfortable with failure, it forces me, as an individual, to become really comfortable with ambiguity and uncertainty, and to amp up that uncertainty as we put together more and more pieces of this emergent puzzle. We hope, and we have evidence from the past, that those radically different pieces can create things that are vastly greater than the sum of their parts.

 

Michael Garfield (54m 45s):

Wow. You know, you really just spoke to something that seems to come up a lot in the context of interdisciplinary or transdisciplinary research, which is how it creates a chronic, endemic imposter syndrome. I remember the SFI undergraduate researchers having a very lively discussion about that fact, and the senior faculty coming in and saying: no, no, all of us deal with that all the time. You can only be an expert in so many things, and you're living in this big, complex world. So maybe the right place to end this would be to invite you to propose a question that you feel engenders or affords really useful, potentially transformative ambiguity.

 

Michael Garfield (55m 37s):

What are you chewing on, or what do you recommend that listeners chew on, that they may never be able to fully digest, but that might get us somewhere interesting?

 

James Evans (55m 50s):

That's a tough question to answer, Michael. And the reason it's tough is that, in some ways, I've spent, and I'm certainly spending, most of my career right now trying to build machines that can basically pose and answer some of these questions that we can't actually conceive of. So you're basically asking: give me one nexus, not a thousand nexuses, one that each of us can conceive of together. And I would say this is what SFI does so well. If you look at any of the other podcasts that you've done, or elsewhere, these are cases where people are asking these kinds of questions.

 

James Evans (56m 32s):

They're saying: okay, we've got a system of particles in this field, and we've got a bunch of humans that are interacting in this space, and we've got some formalism here; to what degree can we transfer this class of models to describe and illuminate what's going on in this other context? And so basically I'm spending time building machines that generate thousands of those, and dozens of those are things that make sense to people: we can look at that model, and we can imagine that social network, and we can put those two things together and ask the question, does it generate insight?

 

James Evans (57m 14s):

Does it predict better than the things that we're doing right now, or the things that were being done before, et cetera, et cetera? So for me, it would be difficult to pick just one thing. One that I've been thinking about is the relationship between hybrid vigor and innovation: this idea of how we think about this across all scales. Which is to say, if you've got two particles that are outside each other's speed-of-light space-time cones, then whatever would be possible for them if they combined, something beautiful and amazing, or terrifying and horrible, they will never interact.

 

James Evans (57m 60s):

Right? So how do we actually create diversity that facilitates what I would call sustainable innovation? How do we build a system that generates enough diversity that we can sustain really radical innovation and prosperity in the future? And I think this is something that happens in biological systems, and it happens in social systems, and it happens in physical systems. And so finding ways to build that sustainable growth, drawing on all these systems that observe it and have tried to formalize it in their own separate ways, is, I think, a potentially really powerful way of building more models and achieving advances certainly above and beyond those of any one of these particular domains.

 

Michael Garfield (58m 49s):

Well, it certainly seems like that's the trillion-dollar question in human resources, right? If only we could hire this way, instead of looking for the person we think is going to perform best. So anyway, James, this has been a total delight. I want to thank you so much for being on the show. Any parting thoughts, final words for folks before we go?

 

James Evans (59m 14s):

No, I mean, I've really appreciated your questions, and your pushing me to think and rethink these kinds of ideas. With respect to this last idea of imposter syndrome, you know, I really hope that we can cultivate more imposters.


 

Michael Garfield

So, abducting them into our research, and abducting the rest of us into, you know...


 

James Evans

...kind of a bigger world of imagination. Excellent. Thank you. Thank you.