COMPLEXITY

Nature of Intelligence, Ep. 2: The relationship between language and thought

Episode Summary

Complex language is unique to the human species. It’s part of how we evolved, the backbone of our societies, and one of the primary ways we judge others’ intellect. Is it our intelligence that leads to our language abilities, or conversely, does our capacity for language enhance our intelligence, or both? How do language and thinking interact? And can one exist without the other? Guests: Evelina Fedorenko, Steve Piantadosi, and Gary Lupyan.

Episode Notes

Guests: Evelina Fedorenko, Steve Piantadosi, and Gary Lupyan

Hosts: Abha Eli Phoboo & Melanie Mitchell

Producer: Katherine Moncure

Podcast theme music by: Mitch Mignano

Follow us on:
Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky

More info:


Books: 


Talks: 

Papers & Articles:

Episode Transcription

Complexity Season: Nature of Intelligence

Episode 2

Title: The relationship between language and thought

Spoken and written language is completely unique to the human species, and it’s part of how we evolved. It’s the backbone of our societies and one of the primary ways we judge others’ intellect. So, are humans intelligent because we have language, or do we have language because we’re intelligent? How do language and thinking interact? And can one exist without the other? Guests: Ev Fedorenko, Steve Piantadosi, and Gary Lupyan

Ev Fedorenko: It is absolutely the case that not having access to language has devastating effects, right? But it doesn't seem to be the case that you fundamentally cannot learn certain kinds of complex things.

[THEME MUSIC]

Abha Eli Phoboo: From the Santa Fe Institute, this is Complexity.

Melanie Mitchell: I’m Melanie Mitchell.

Abha: And I’m Abha Eli Phoboo.

[THEME MUSIC FADES OUT]

Melanie: Think about this podcast that you’re listening to right now. You’re, hopefully, learning by just listening to us talk to you. And the fact that you can take in new information this way, through what basically comes down to sophisticated vocal sounds, is pretty astonishing. In our last episode, we talked about how one of the major ways humans learn is by being in the world and interacting with it. But we also use language to share information and ideas with each other without needing firsthand experience. Language is the backbone of human culture.

Abha: It’s hard to imagine where we’d be without it. If you’ve ever visited a country where you don’t speak the language, you know how disorienting it is to be cut off from basic communication. So in today’s episode, we’re going to look at the role language plays in intelligence. And the voices you’ll hear were recorded remotely across different countries, cities and work spaces.

Melanie: Are humans intelligent because we have language, or do we have language because we’re intelligent? How do language and thinking interact? And can one exist without the other?

Melanie: Part One: Why do humans have language?

Melanie: Across the animal kingdom, there are no other species that communicate with anything like human language. 

Abha: This isn’t to say that animals aren’t communicating in sophisticated ways, and a lot of that sophistication goes unnoticed. 

Melanie: But the way humans talk — with our long conversations and complex syntax — is completely unique. And it’s part of how we evolved.

Abha: For several decades, a dominant theory of human language was something called generative linguistics, or generative grammar. 

Melanie: The linguist Noam Chomsky made this idea popular, and it basically goes like this: there’s an inherent, underlying structure of rules that all languages follow. And from birth, we have a hard-wired bias toward language as opposed to other forms of communication — we’re biologically predisposed to language and these syntactic rules. This is why human language is, according to Chomsky, unique to our species and universal across different cultures.

Abha: This theory has been incredibly influential. But it turns out, it doesn’t seem to be right. 

Gary Lupyan: So I've never been a fan of generative linguistics, Chomsky's kind of core arguments about universal grammar or the need for innate grammatical knowledge.

Abha: This is Gary Lupyan.

Gary: I am Gary Lupyan, professor of psychology at the University of Wisconsin-Madison. I'm a cognitive scientist. I study the evolution of language, the effects of language on cognition, on perception, and over the last few years trying to make sense of large language models like lots of other people.

Melanie: In recent years, the development of large language models has bolstered Gary’s dislike of generative grammar. The old thinking was that in order to use language well, you needed to be biologically wired to know these language rules from the start. But LLMs aren’t programmed with any grammatical rules baked into them. And yet, they spit out incredibly coherent writing. 

Gary: And so even before these large language models, there were plenty of arguments against that view. I think these are the last nails in the coffin. So I think producing correct, grammatically sophisticated, even, you know, I'd argue, semantically coherent language: these models can do all that even without, you know, by modern standards, huge amounts of training. It shows that in principle, one does not need any of this type of innate grammatical knowledge.

Abha: So, what’s going on here? Steve Piantadosi is a psychology and neuroscience professor at UC Berkeley, studying how children learn language and math. He says that language does have rules, but those rules are emergent. They’re not there from the start.

Steve Piantadosi: So I think that the key difference is that Chomsky and maybe mainstream linguistics tends to state its theories already at the high level of abstraction. So they say, here are the rules that I think this system is following. Whereas in a large language model, when you go to build one, you don't tell it the high level rules about how language works, right? You tell it the low level rules about how to learn and how to construct its own internal configurations. And you tell it that it should do that in a way that predicts language well. And when you do that, it kind of configures itself in some way.

Melanie: What's an example of a high level rule?

Steve: Yeah, so for example, a high level rule in English is if you have a sentence, you can put it inside of another sentence with the word that. So I could say, you know, could have a sentence like, I drank coffee today. That's a whole sentence. And I could say, John believed that I drank coffee today, right? And because that rule is about how to make a sentence out of another sentence, you can actually do it again, right? So I can say, you know, Mary doubted that John believed that I drank coffee today. And so if you were going to sit down and write a grammar of English, if you're going to try to describe what the grammatical and ungrammatical sentences of English were, you'd have to have some kind of rule that said that, right? Because any English speaker you ask is going to tell you that, yeah, you know, John said that I drank coffee today is an acceptable English sentence. And also I drank coffee today is an acceptable English sentence. I think what's, well, so large language models, when they're built, they don't know anything like that rule, right? They're just a mess of kind of parameters and weights and connections, and they have to be exposed to enough English in order to figure out that rule. And I'm pretty sure ChatGPT knows that rule, right? Because it can form sentences like that that have an embedded sentence in that way. So when you make ChatGPT, you don't tell it that rule from the start, it has to construct it and discover it. And I think what's kind of interesting, right, is that building a system like ChatGPT that can discover that rule doesn't negate the existence of that rule in English speakers' minds, right? So like, internally in ChatGPT somewhere, there has to be some kind of realization of that rule or something like it. 
And so the hope for these other theories, I think, or at least these other kind of basic observations about language is that they will be realized in some way inside the internal configurations that these models arrive at. I think it's not quite that simple because the large language models are much better than our theories. So we don't have any kind of rule-based account of anything that comes close to what they can do. But they have to have something like that because they exhibit that behavior. 

Abha: And we should say, these rules we’re talking about are not the same as the quote-unquote “rules” you learn in school, like when your teacher tells you how to use prepositions or, “don’t split an infinitive.”

Steve: Yeah, sorry, let me just clarify one part, which I guess would be good to just generally clarify that in linguistics or in cognitive science, when people talk about rules like this, they don't mean the rules like don't split infinitives. So there's things, like basically anything you heard from an English teacher, you should just completely ignore in cognitive science and linguistics. It's just made up. I mean, it's literally made up, often just to reinforce class distinctions and things. The kinds of rules that linguistics and cognitive science are interested in are ones which are descriptive, right, that talk about how people actually do speak. People do split infinitives, right, and they do end sentences with prepositions and, you know, pretty much like any rule you've ever heard from an English teacher, they had to tell you because it's going against how you naturally speak. So that's just some weird class thing, I think, that's going on. And what we're interested in are the kind of descriptive rules of how the system is kind of actually functioning in nature. And in that case, most people are just not even aware of the rules. 

Melanie: Apologies to all the English teachers out there. 

Abha: But to recap, language does have rules, like the “that” rule that Steve described, but we’re not born with these rules already hardwired into our brains. And the rules that linguists have documented so far aren’t as complete and precise as the actual rules that exist — the statistical patterns that ChatGPT has probably figured out and encoded at some point during its training period.

Melanie: Yet, none of this explains why we humans are using complex language, but other animals aren’t. I asked Gary what he thought about this.

Melanie: So there's a lot of debate about the role language plays in intelligence. Is language a cause of or a result of humans' superiority over other animals in certain kinds of cognitive capacities? 

Gary: I think language is one of the major reasons why human intelligence is what it is. So more the cause than the result. There is something, obviously, in our lineage that makes us predisposed to language. I happen to think that what that is has much more to do with the kind of drive to share information, to socialize, than anything language specific or grammar specific. And you see that in infants, infants want to engage. They want to share information, not just use language in an instrumental way. So it gives us access to information that we otherwise wouldn't have access to. And then it's a hugely powerful tool for collaboration. So you can make plans, you can ask one another to help. You can divide tasks in much more effective ways. And so without language, even if you take a very social, collaborative species like humans, you take away the major tool for creating culture and for transmitting culture.

Melanie: Just to follow up, chimps and bonobos are very social species and have a lot of communication within their groups. Why didn't they develop this drive you're talking about for language? Why did we develop it and not them?

Gary: It's only useful to a particular kind of species, a particular type of niche. So it has a really big startup cost. So kids have to learn this stuff. Their language is kind of useless to them before they put in the years that it takes to learn it. It's also, and many have written on this, language is also very easy to lie with. So it's an unreliable system. Words are cheap. And so, reliance on language sort of only makes sense in a society that already has a kind of base level of trust. And so, I think the key to understanding the emergence of language is understanding the emergence of that type of prosociality that language then feeds back on and helps accelerate, but it needs to be there. And so if you look at other primate societies, there is cooperation within kin groups. There is not broad scale cooperation. There is often aggression. There’s not sharing. So language just doesn't make sense.

Abha: As Gary mentioned, there’s a huge startup cost for learning language. Humans have much longer childhoods than other species.

Ev: Ever since we're born, we start paying attention to all sorts of regularities in the inputs we get, including in linguistic inputs. 

Abha: This is Ev Fedorenko. Ev’s a neuroscientist at MIT, and she’s been studying language for the past two decades. As she mentioned, we start learning language from day one. That learning includes internalizing the structure and patterns that linguists used to assume were innate.

Ev: We start by paying attention to how sounds may go together to form kind of regular patterns like in, you know, syllables and various transitions that are maybe more or less common. Pay attention to that. Then once we figure out that some parts of that input correspond to meanings, right? Like, you know, the example I often say is like every time mama says cat, there's this fuzzy thing around, maybe it's not random, right? And you kind of start linking parts of the linguistic input to parts of the world. And then of course you learn what are the rules for how you put words together to express more complex ideas. So all of that knowledge seems to be stored in this, what I call the language system. And those representations are accessed both when I understand what somebody else is saying to me, because I have to map, I have to use this form to meaning mapping system to decode your messages, and when I have some abstract thing in my mind, an idea, and I'm trying to express it for someone else using this shared code, which in this case is English, right?

Abha: And often, we learn this shared code by interacting with our surroundings. Like, as Ev described, learning about a cat if there’s a cat in the room with you.

Melanie: But, you could also learn about cats without being able to interact with one. Someone could tell you about a cat, and you could start to create an idea for this thing called, “cat,” which you’ve never seen, but you know that it has pointy ears, it’s furry, and it makes a low rumbling sound when it’s content. That’s the power of language. Here’s Gary again.

Gary: So much of what we learn, and it's very difficult to quantify, to put a number on, like what percent of what we know we've learned from talking to others, from reading. Most of formal education takes that role, right? Like it would not be possible in the absence of, certainly not without language, but even without written language. If you have enough language training, you can just kind of map onto the visual world. And we've done, my lab, some work connecting it to previously collected data from people who are born congenitally blind, and the various things that they surprisingly learn about the visual world that one would think is only learnable through direct experience, showing that, well, normally sighted people might be learning it through direct experience, but a lot of that information is embedded in the structure of language.

Abha: And when we learn through language, we’re not just learning about physical objects. Language gives us the ability to name abstract concepts and categories, too. For instance, if you think about what the word “shoe” means, it refers to a type of object, but not one specific thing.

Steve: We wrote a paper about this and gave the example of shoes that were made out of eggplant skins. Okay. And like, you could imagine doing that, like drying out an eggplant skin and kind of sewing up the sides and adding laces and kind of fitting it around your feet and whatever. And you've probably never encountered shoes made out of eggplants before, but we all just agreed that that could happen, right? That you could find them. And so that tells you that it's not the physical object exactly that's defining what the concept means, right? Because I just gave you a new physical object. It has to be something more abstract, more about the relationships and the use of it that defines what the thing is. I don't think it's so crazy to think that, you know, language is special in some way. There's certainly lots of things that we acquire through language. Right, this is, I think, especially salient if you talk to a kid and they're asking why questions and, you know, you explain things that are abstract and that you can't show them just in language, and they can come to pretty good understandings of systems that they've never encountered before, you know, if they ask how clouds form or, you know, what the moon is doing or whatever, right? All of those are things that we learn about through a linguistic system. So the right picture might be one where, you know, there's a small kind of continuous or quantitative change in memory capacity that enables language, but then once you have language, that opens up this kind of huge learning potential for cultural transmission of ideas and learning complicated kinds of things from your parents and from other people in your community.

Melanie: So Abha, we asked at the beginning of the episode why humans have language. And what we've heard from Gary, Steve, and Ev so far is that language probably emerged as a result of humans' drive to socialize and to collaborate. And there's a feedback effect between these social drives and language itself. So language is an incredible tool for collaboration, and collaboration drives our intelligence. Gary, for example, thinks that language is a major cause of human intelligence being what it is. 

Abha: Right, right. It was interesting how Steve also pointed out that language enables a whole new way of learning and of cultural evolution. Language allows us to quickly learn new things, you know, from the people around us, say our parents, our friends, and other people we interact with. It also lets us learn without having to experience something ourselves. Say, for example, when we were walking with a parent as little kids and they said, you know, “Don't jump out in front of the car.” We tend to trust them and don't have to experience it ourselves. And this is enabled by language, right? 

Melanie: Yeah, we should definitely appreciate our parents more. But on the downside, Gary also pointed out that language makes it easy to lie and to trick people. So relying on language only makes sense when society has a basic level of trust. 

Abha: That is so true. I mean, if we don't trust each other, it's hard to function as a society, but trust comes at such a high cost too. And the other downside of language, you know, is that it requires a long learning period, because we can't learn a language overnight. We're not born speaking a language. Our childhood is so prolonged, and that's another high cost. 

Melanie: Yeah. So the advantages of language must have outweighed those downsides in evolution. 

Abha: Yes. Another interesting point that just came up is that today's large language models have shown that certain linguistic theories are just wrong. Steve claims that LLMs have disproven Noam Chomsky's notion of an innate universal grammar in the brain, right? 

Melanie: Yeah, people have really changed their thinking about how language works in the brain. In part two, we'll look at what brain imaging can tell us about language and what happens when people lose their language abilities. 

Abha: Part Two: Are language and thought separate in the brain?

Abha: One of Ev’s signature methods is using fMRI brain scans to examine which systems in the brain light up when we use language. She and her collaborators have developed experiments to investigate the relationship between language and other forms of cognition. 

Ev: It's very simple. I mean, the logic of the experiments where we've looked at the relationship between language and thought is all pretty much the same, just using different kinds of thought. But the idea is you take individuals, put them in an fMRI scanner, and you have them do a task that you know reliably engages your language regions.

Abha: This could be, for example, reading or listening to coherent sentences while your brain is being scanned. Then, that map would be compared to the regions that light up when you hear sequences of random words and sounds that sound speech-like, but are completely nonsensical. 

Ev: And if you guys visit MIT, I can scan you and print you a map of your language system. It takes about five minutes to find. Very reliable. And again, if I scan you today or 10 years later, I've done this on some people 10 years apart, it's in exactly the same place. It's very, very reliable within people. It's very, very robust. So we find those language regions. And then we basically ask, okay, let's have you engage in some form of thinking. Let's maybe have you solve some math problems or do something like some kind of pattern recognition test. And we basically ask, do circuits that light up when you process language overlap with the circuits that are active when, for example, you engage in mathematical reasoning, like doing addition problems or whatnot. And we basically very consistently find, across many domains of thought, pretty much everything we've looked at so far, that the language regions are not really active, hardly at all, and some other system non-overlapping with the language regions is working really hard. So it's not the case that we engage the language mechanisms to solve these other problems.

Melanie: I know there's been some controversy about, you know, how easy it is to interpret the results of fMRI. What can you tell us about, like, is that a hard thing to do? Is it an easy thing to do?

Ev: I don't think there are any particular challenges in interpreting fMRI data compared to any other data. If you want to do robust and rigorous research, you want to make sure, before you make a strong claim based on whatever findings, that your findings tell you what you think they do. But that's kind of a challenge for any research. I don't think it's related to the particular measurements you're taking. I mean, there are certainly limitations of fMRI. Like I mentioned, one of them is that we can't look at fast time scales of information processing. We just don't have access to what's happening on a millisecond or tens of milliseconds or even hundreds of milliseconds time scale, which for some questions doesn't matter. But for some questions, it really does. And so that makes fMRI not well suited for those questions where it matters. But in general, good robust findings from fMRI are very robustly replicable.

Steve: I've been actually very convinced by Ev's arguments in particular. 

Abha: That’s Steve Piantadosi again.

Steve: You can find people who are experts in some domain, like mathematics experts or chess grandmasters or whatever, who have lost linguistic abilities. And that is a very nice type of natural experiment that shows you that the linguistic abilities aren't the kind of substrate for reasoning in those domains, because you can lose the linguistic abilities and still have the reasoning abilities. There might still be a learning story. Like, it would probably be very hard to learn chess, right, or learn mathematics without having language. But I think that once you learn it or learn it well enough to become an expert, it seems like there's some other kind of system or some other kind of processing that happens non-linguistically. What it shows you is that you can be really good at language without having the ability to do the kind of sequential, multi-step reasoning that seems to characterize human thinking. And that I think is surprising, right? It didn't have to be like that. It could have been that language was the substrate that we used for everything or that language was such a difficult problem that if you solved language, you would necessarily have to have all of the underlying kind of reasoning machinery that people have. But it seems that that's not right, right? That you can do quite a bit in language without having much reasoning.

Abha: And on the flipside, you can do a lot of reasoning without language. As Ev mentioned before, she and her collaborators have identified language systems in the brain that show up very reliably in fMRI scans. These language systems are mostly in the left hemisphere. So, what happens if someone loses these systems completely?

Ev: And then this fMRI approach is very nicely complemented by investigations of patients with severe language problems, right? So another approach, one we've had around for much longer than fMRI, is to take individuals who have sustained severe damage to the language system, and sometimes left hemisphere strokes are large and they pretty much wipe out that whole system. So these are so-called individuals with global aphasia. They can't, like if you give them a sent– they just cannot infer any meaning from this. So it seems like the linguistic representations that they've spent their lifetime learning are lost, really destroyed. And then you can ask about the cognitive capacities in these individuals. Can they still think complex thoughts?

And how do you test this? Well, you give them behavioral tasks. And for some of them, of course, you have to be a very clever experimentalist because you can no longer explain things verbally. But people come up with ways to get instructions across. They understand kind of thumbs up, thumbs down judgments. So you give them well-formed or ill-formed mathematical expressions or musical patterns or something like that. And what you find is, there are some individuals who are severely linguistically impaired. Like the language system is gone, as best as we can test it with whatever tools we have. And yet, they're okay cognitively. They just lost that code to take the sophistication of their inner minds and translate it into this shared representational format. And a lot of these individuals are severely depressed because they're taken to be mentally challenged, right? Because that's how we often judge people, by the way they talk. That's why foreigners often suffer in this way too, right? Judgments are made about their intellectual capacities and otherwise and so on. Anyway, but yeah, a lot of these individuals seem to have the ability to think quite preserved, which suggests that at least in the adult brain, you can take that language system out once you've acquired that set of knowledge bits, right? You can take it out and it doesn't seem to affect any of the thinking capacities that we've tested so far. 

Melanie: So here's an extremely naive question. So if language and thought are dissociated, at least in adults, why does it feel like when I'm thinking that I'm actually thinking in words and in language?

Ev: I mean, so it's a question that comes up quite often, not naive at all. It's a question about the inner voice, right? A lot of people have this percept that there is a voice in their heads talking. It's a good question to which I don't think we as a field have very clear answers yet about what it does, what mechanisms it relies on. What we do know is that it's not a universal phenomenon, which already kind of tells you that it cannot be a critical ingredient of complex thought, because certainly a lot of people say that they don't have an inner voice. Some of them are like MIT professors and they're like, “What are you talking about? You have a voice in your head? That's not good. Have you seen a doctor?” And it's a very active area of research right now. A lot of people got interested in this. You may have heard, like about 10 years ago, there was a similar splash about aphantasia, this inability of some people to visually image things. So similar to how some people don't know what you mean when you say you have an inner voice, some people cannot form mental images. Like, you say “Imagine the house you lived in when you were a child,” and they're like, “Got nothing there.” You know, it's like blank. Like, I can describe it, know facts about it, but I can't form that mental image. So there are these kinds of things, like inner voice and mental imagery. Those are very hard things to study with the methods that we currently have available. 

Abha: Yeah, I think I was talking to someone who actually told me they don't have an inner voice and they actually are left with a feeling, but they can't necessarily describe the feeling. And so they don't know how to put it into language when they have a thought.

Ev: That's interesting, yeah, well, and that's a very good point because my husband who doesn't have an inner voice, often uses this as an argument. He's like, “If we were thinking in language, why is it sometimes so hard to explain what you think? Like, you know you have this idea very clearly for yourself and you just have trouble formulating it.” And yeah, that's a good point.

Melanie: But, Gary sees the relationship between language and thought a bit differently. He doesn’t think they can be separated so neatly.

Gary: I think Ev and her lab are doing fabulous work and we agree on many things. This is one thing we don't agree on.

Melanie: So in Ev’s example, patients who had had strokes lost their language systems in the brain, but they could still do complex cognitive tasks. They didn’t lose their ability to think. 

Gary: So it's possible to find individuals with aphasia that have typical behavior. And so that shows that at least in some cases, one can find cases where language is not necessary. So there are two complications with this. One is that people tend to have aphasia due to a stroke that tends to happen in older age. And so they've had a lifetime of experience with language. Where just because a task doesn't light up the language network doesn't mean the task does not rely on language. It doesn't mean that language has not played a role in basically setting up the brain that you have as an adult. Right. Such that you don't need that. You don't need language in the moment, but you've needed exposure to language to enable you to do the task in the first place.

Abha: We asked Ev what she made of this argument, that even if language isn’t necessary in the moment, it still plays a big role in developing your adult brain. But she doesn’t think it’s as important as Gary does. She refers to another population of people, which are individuals who are born deaf and aren’t taught sign language.

Ev: Unless there are other signers in the community, or unless they're moved into an environment where they can interact with signers, they often grow up not having input to language, especially if they're in an isolated community growing up. They figure out some system called home sign, which is a very, very basic system. And so you can ask whether these individuals are able to develop certain thinking capacities. And it is absolutely the case that having… not having access to language has devastating effects, right? You can't build relationships in the same way. You can't learn as easily, of course, through language. I can just tell you all sorts of things about the world. Most of the things you probably know, you learned through language. But it doesn't seem to mean, it doesn't seem to be the case that you fundamentally cannot learn certain kinds of complex things. So there are examples of individuals like that who have been able to learn math. Okay, some take longer, right? If you don't have somebody to tell you how to do differential equations, you figure it out in whatever ways you can. So it's certainly the case that language is an incredibly useful tool. And presumably, the accumulation of knowledge over generations has allowed us to build the world we live in today. But it doesn't undermine the separability of those language and thinking systems. 

Abha: In a lot of areas, it seems that Gary, Steve, and Ev are on the same page: language has helped humans achieve incredible things, and it’s a very, very useful tool.

Melanie: But where they seem to differ is on just how much language and thought influence each other, and in which direction the causal arrow is pointing: Does language make us intelligent, or is language the result of our intelligence? Ev’s work shows that many types of tasks can be done without lighting up the language systems in the brain. When combined with examples from stroke patients and other research, she has reason to believe that language and cognition are largely separate things.

Abha: Gary, on the other hand, isn’t ready to dismiss the role of language so easily — it could still be crucial for developing adult cognition, and, generally speaking, some people might rely on it more than others.

Melanie: And Steve offers one more example of how language can make our learning more efficient, regardless of whether or not it’s strictly necessary. 

Steve: So, you know, if you're an expert in any domain, you know a ton of words and vocabulary about that specific domain that non-experts don't, right? That's true in scientific domains if you're a physicist versus a biologist, but it's also true in non-scientific domains, right? Like people who sew know tons of sewing words and people who are coal miners know tons of coal mining words, and I think that those words are, like we were discussing, real technologies, right? They're real cultural innovations that are very useful. Like that's why people use those words, is because they need to convey a specific meaning in a specific situation. And by having those words, we're probably able to communicate more efficiently and more effectively about those specific domains. So I think that this kind of ability to create and then learn domain-specific vocabularies is probably very important and probably allows us to think all kinds of thoughts that otherwise would be really, really complicated, right? Like you could imagine being in a situation where you don't have the domain-specific vocabulary and you have to just describe everything, right? And it becomes very clunky and hard to talk about. And so that's why in sciences, especially, we come up with terms, so it really enables us to do things that would be really hard otherwise.

Melanie: Steve isn’t saying that it’s impossible to learn specific skills without language, but from his perspective, it’s more difficult and less likely. 

Abha: But Ev has a slightly different view.

Ev: There are cultures, for example, human cultures that don't have math, don't have exact math, right? So like the Pirahã or the Tsimane, some tribes in the Amazon, they don't have numbers because they don't need numbers. There are people who will make a claim that they don't have numbers because they don't have words for numbers. And I don't understand how the logic goes in this direction. I think they don't have words for numbers because they don't have the need for numbers in their culture. So they don't come up with a way to refer to those concepts. Then of course, you know, I mean, there's different stories for why numbers came about. You know, one common story has to do with farming, right? When you have to keep track of entities that are similar, like 200 cows, and you want to make sure that if you left with, whatever, 15 cows, you came back with 15 cows. And then you figure out some counting system, typically using digits, right? A lot of cultures start with digits. Anyway, and then you come up with words. And once you have words as labels, of course you can then do more things. You can solve tasks that require you to hold onto those. But it's not like not having words prevents you from figuring out a system of thought and representation to keep track of that information. So I think the directionality goes the other way from how some people have put it forward.

Abha: So Melanie, our question for this part of the episode was about whether language and thought are separate in the brain. And Ev seems to have very compelling evidence that they're separate.

Melanie: Yeah, her results with fMRI were really surprising to me. 

Abha: Right? Me too. Both Steve and Ev stress that language makes communication between people very efficient, but point out that when people lose their language abilities, say because of a stroke or some other injury, it’s often the case that their thinking, that is, their non-linguistic cognitive abilities, is largely unaffected.

Melanie: But Abha, Gary pushed back on this. He noted that people who have had strokes tend to be older with cognitive abilities that they've had for a long time. So Gary pointed out that maybe you need language to enable cognition in the first place. And his own research has shown that this is true to some extent. 

Abha: I guess there are really two questions here. First, do language and cognition really need to be entangled in the brain during infancy and childhood when both linguistic and cognitive skills are still being formed? And the second is, are language and cognition separate in adults who have established language and cognitive abilities already? 

Melanie: Exactly. Ev's work addresses the latter question, but not the former. And Ev admits that the neuroscience and psychology of language have been contentious fields for a long time. Here's Ev. 

Ev: Language, as you know, has always been a very controversial field where people have very strong biases and opinions. You know, the best I can do is try to be open minded and just keep training people to do rigorous work and to think hard about even the fundamental assumptions in the field. Those should always be questioned. Everything should always be questioned.

Abha: So here’s another question: what does all of this mean for large language models? In theory, the skills LLMs have exhibited are the same skills that map onto the language systems in the brain. They have the formal competence of patterns and language rules. But, if their foundations are statistical patterns in language, how much thinking can they do now, and in the future? And how much have they learned already?

Murray Shanahan: I mean, people sometimes use the word, you know, an alien intelligence. I prefer the word exotic. It's a kind of exotic mind-like entity.

Melanie: That’s next time, on Complexity. Complexity is the official podcast of the Santa Fe Institute. This episode was produced by Katherine Moncure, and our theme song is by Mitch Mignano. Additional music from Blue Dot Sessions. I’m Melanie, thanks for listening.