Dr. Eddie Chang: The Science of Learning & Speaking Languages | Huberman Lab Podcast #95
- Welcome to the Huberman Lab podcast
where we discuss science and science-based tools
for everyday life.
[upbeat music]
I'm Andrew Huberman,
and I'm a professor of neurobiology and ophthalmology
at Stanford School of Medicine.
Today, my guest is Dr. Eddie Chang.
Dr. Eddie Chang is the chair of the neurosurgery department
at the University of California at San Francisco.
Dr. Chang's clinical group focuses on the treatment
of movement disorders including epilepsy.
He is also a world expert
in the treatment of speech disorders
and relieving paralysis that prevents speech
and other forms of movement and communication.
Indeed, his laboratory is credited with discovering ways
to allow people who have fully locked-in syndrome,
that is, who cannot speak or move,
to communicate through computers and AI devices
in order to be able to speak to others in their world
and understand what others are saying to them.
It is a truly remarkable achievement that we discuss today,
in addition to his discoveries about critical periods,
which are periods of time during one's life
when one can learn things, in particular, languages,
with great ease as opposed to later in life,
and we talk about the basis of things
like bilingualism and trilingualism.
We talk about how the brain controls movement
of the very muscles that allow for speech and language
and how those can be modified over time.
We also talk about stuttering,
and we talk about a number of aspects of speech and language
that give insight into not just how we create
this incredible thing called speech
or how we understand speech and language,
but how the brain works more generally.
Dr. Chang is also one of the world leaders
in bioengineering,
that is, the creation of devices that allow the brain
to function at supraphysiological levels
and that can allow people
with various syndromes and disorders
to overcome their deficits.
So if you are somebody who is interested
in how the brain works normally,
how it breaks down and how it can be repaired,
and if you are interested in speech and language,
reading and comprehension of information of any kind,
today's episode ought to include some information
of deep interest to you.
Dr. Chang is indeed at the top of his field
in terms of understanding these issues
of how the brain encodes speech and language
and creates speech and language
and, as I mentioned, movement disorders and epilepsy.
We even talk about things such as the ketogenic diet,
the future of companies like Neuralink,
which are interested in bioengineering
and augmenting the human brain, and much more.
One thing that I would like to note is that in addition
to being a world-class neuroscience researcher
and world-class clinician, neurosurgeon,
and chair of neurosurgery,
Dr. Eddie Chang has also been a close, personal friend
of mine since we were nine years old.
We attended elementary school together,
and we actually had a science club
when we were nine years old
focused on a very particular topic.
You'll have to listen in to today's episode
to discover what that topic was
and what membership to that club required.
That aside, Dr. Chang is an absolute phenom
with respect to his scientific prowess,
that is, both his research and his clinical abilities,
and he's one of these rare individuals
that, whenever he opens his mouth, we learn.
Before we begin, I'd like to emphasize that this podcast
is separate from my teaching and research roles at Stanford.
It is, however, part of my desire and effort
to bring zero-cost-to-consumer information
about science and science-related tools
to the general public.
In keeping with that theme,
I'd like to thank the sponsors of today's podcast.
Our first sponsor is Levels.
Levels is a program
that helps you see how different foods affect your health
by giving you real-time feedback on your diet
using a continuous glucose monitor.
I started using Levels about one year ago.
The Levels monitor allowed me to see how different foods
change my blood sugar level or my blood glucose level.
This turns out to be immensely important
for being able to predict how, for instance, certain foods
will affect your energy level, your ability to exercise,
your ability to recover from exercise,
and how they will affect other hormones
like testosterone, estrogen, thyroid hormone, and so forth.
The other thing about using a Levels monitor
is that it gave me insight into how food and exercise
and other activities and even how well I was sleeping
or how poorly I might happen to be sleeping
impact my blood glucose levels.
It even taught me that the sauna,
that generating a lot of heat in my body,
was changing my blood glucose levels,
which turned out to inform
how I should shift my eating patterns, foods I should eat,
timing of eating, and so on and so forth.
It really gave me great insight
into how all the important aspects of my health
were interlocking and affecting one another,
not just how food was impacting my blood glucose.
So if you're interested in learning more about Levels
and trying a continuous glucose monitor yourself,
you can go to levels.link/huberman.
That's levels.link/huberman.
Today's episode is also brought to us by Eight Sleep.
Eight Sleep makes smart mattress covers
with cooling, heating, and sleep tracking capacity.
I started sleeping on an Eight Sleep mattress cover
a few months ago, and it is simply incredible.
In fact, I don't even like traveling anymore
because they don't have Eight Sleep mattress covers
in hotels and Airbnbs.
One of the reasons
I love my Eight Sleep mattress cover so much is that,
as you might have heard before on this podcast or elsewhere,
in order to fall and stay deeply asleep,
you need your body temperature to drop
by about one to three degrees,
and I tend to run warm at night,
which makes it hard to sleep
and sometimes wakes me up in the middle of the night.
When you sleep on an Eight Sleep mattress cover,
you can program the temperature of that mattress cover
for specific times in the early, middle,
and late part of your night
so that the mattress stays cool,
and as a consequence, you sleep very, very deeply.
It also tracks your sleep,
so it's paying attention to how many times you're moving,
how deep your sleep is, it gives you a sleep score,
all wonderful data to help you enhance your sleep.
And of course, sleep is the foundation
of mental health, physical health, and performance,
which makes an Eight Sleep a terrific tool
for enhancing not just your sleep
but all aspects of your life really.
If you're interested
in trying the Eight Sleep mattress cover,
you can go to eightsleep.com/huberman
to check out the Pod 3 cover,
and you can save $150 at checkout.
Eight Sleep currently ships to the USA, to Canada, the UK,
and select countries in the EU and Australia.
Again, that's eightsleep.com/huberman
to save $150 at checkout.
Today's episode is also brought to us by InsideTracker.
InsideTracker is a personalized nutrition platform
that analyzes data from your blood and DNA
to help you better understand your body
and help you reach your health goals.
I've long been a believer in getting regular blood work done
for the simple reason that many of the factors
that impact your immediate and long-term health
can only be analyzed with a quality blood test.
One problem with a lot of DNA tests
and blood tests, however,
is you get data back about levels of metabolic factors,
levels of hormones, et cetera,
but you don't know what to do with that information.
InsideTracker makes interpreting your data
and knowing what to do about it exceedingly easy.
They have a personalized platform
where you can go and you can see those levels
of hormones, metabolic factors, lipids, et cetera,
and they point to specific nutritional tools,
behavioral tools, supplement-based tools, et cetera,
that can help you bring those numbers into the ranges
that are optimal for you.
If you'd like to try InsideTracker,
you can go to insidetracker.com/huberman
to get 20% off any of InsideTracker's plans.
Again, that's insidetracker.com/huberman to get 20% off.
The Huberman Lab podcast
is now partnered with Momentous Supplements.
To find the supplements we discuss
on the Huberman Lab podcast,
you can go to livemomentous, spelled O-U-S,
livemomentous.com/huberman.
And I should just mention
that the library of those supplements
is constantly expanding.
Again, that's livemomentous.com/huberman.
And now for my discussion with Dr. Eddie Chang.
Eddie, welcome.
- Hi. Hi, Andrew.
- Great to be here with you.
This has been a long time coming.
Just to come clean,
we've known each other since we were nine years old.
- Yeah.
- But then there was a long gap
in which we didn't talk to one another.
I heard things about you,
and, presumably, you heard a thing or two about me,
for better or for worse. [Eddie laughs]
And then we reconnected years later
when I was a PhD student and you were a medical student.
We literally ran into each other
in the halls of the University of California San Francisco
where you're now the chair of neurosurgery,
so it all comes full circle.
When you were at UCSF, you were working with Mike Merzenich,
and I know that name
might not be familiar to a lot of people,
but he's sort of synonymous with neuroplasticity,
the ability of the brain and nervous system
to change in response to experience.
So for our listeners,
I would just love for you to give a brief overview
of what you were doing at that time
because I find that work so fascinating,
and it really points to some of the things that can promote
and maybe hinder our brain's ability to change.
- Oh, wow. That's fantastic.
So we did bump into each other serendipitously back then,
and, at the time, I was a medical student at UCSF
studying with Mike Merzenich.
In particular, I was studying how the brain organizes
when you have patterns of sound.
And in particular, we were studying the brain of rodents
and trying to understand how different sound patterns
organize the frequency representation
from low to middle to high frequency maps
in the brains of baby rodents.
And one of the things that I was very interested in
was trying to understand
how the patterns of the natural environment,
let's say the vocalizations of the environment
that the rat pups were raised in,
or just the natural sounds that they hear,
how that shapes the structure of the brain.
And one of the things we did was to try an experiment
where we raised some of these rat pups in white noise,
continuous white noise that was essentially masking
all of those environmental sounds.
- And what was the consequence of animals being raised
in a white noise environment?
- Well, one of the things that we didn't expect
but we found, which was quite striking,
is that there's this early period in brain development
where we're very susceptible
to the patterns that we hear or see.
In neuroscience, we call this a critical period
or a sensitive period.
And we have this for our eyes,
but we also have it for our ears.
And one of the most striking examples of this
is that any human can essentially grow up in a culture
where they hear different speech sounds
from one language to another,
and it's like after a couple of years,
you lose sensitivity to sounds
that are not part of your native language
and you have high sensitivity
for the languages of your native culture.
And that's pretty extraordinary
that the human brain has that flexibility
yet, at the same time, has that specialization for language.
And so we were trying to think about how do we model this,
for example, in rodents, who obviously don't speak,
but we're just understanding how sounds
and environmental sounds
modulate and organize the auditory cortex.
And one of the things that we found that was quite striking
was that if you basically mask environmental sounds
from these rat pups,
the critical period,
this sensitive period where it's open to plasticity,
it's open to change, it's open to reorganization,
that window can stay open much, much longer.
And in one way, it sounds like that's a good thing,
but on the other hand, it's also a retardation.
It actually slowed the maturation of the auditory cortex.
It was ready to close when these rat pups were really young,
but by raising them in white noise,
we found out that you could keep it open
for months beyond the time period that it normally closes.
And so I think one of the things it taught me
was that it's not just about the genetic programming
that specifies some of this sensitive period,
but it's also a little bit about the nature
of the sounds that we hear
that help keep that window for the critical period
open and closed.
- That's fascinating.
And I know it's difficult to make a direct leap
from animal research to human research,
but if we could speculate a little bit,
I can imagine that some people grow up in homes
where there's a lot of shouting and a lot of inflection,
maybe people are very verbose.
Maybe others grow up in a home
where it's quieter and more peaceful.
Some people are going to grow up in cities.
I just came back from New York City,
it's like all night long,
there's honking and sirens and it's just nonstop,
and then I return here where it's quite quiet at night.
Can we imagine that the human brain
is going to be shaped differently
depending on whether or not one grows up
in one environment or another?
And would that impact their tendency
to speak in a certain way as well as hear in a certain way?
What do we know about that?
- Well, I think that, from my perspective,
it's really clear that those sounds that we are exposed to
from the very earliest time, even in utero, in the womb,
where hearing the mother or father or friends
while in the womb
actually will influence how these things organize.
And so there's no question that the sounds that we hear
are going to have some influence,
and those sounds are going to structure
the way that those neural networks actually lay down
and will forever influence how you hear sounds,
and speech and language
is probably one of the most profound examples of that.
- I get a lot of questions
about the use of white noise during sleep.
In particular, people want to know
whether or not using a white noise machine
or a machine or a program
that makes the sound of waves, for instance,
if it assists their infant in sleeping,
is it going to be bad for them
because it's flooding the auditory system
with a bunch of essentially white noise
or disorganized noise?
Do we have an answer to that question?
- Not yet.
I think that what you're asking
is a really important question
because parents are using white noise generators
almost universally now, and for good reasons.
You know, it is hard to have kids up at night.
I've got three kids of my own and was very tempted
to think about how to use some of these tools
to just soothe them and get them to bed,
especially when I was, like, so tired and exhausted.
But I think that there is a cost,
you know, to think a little bit about.
You know, we're not exposed
to continuous white noise naturally.
There is a value to having really salient, structured sounds
that are part of our natural environment
to actually have the brain develop normally.
So whether or not that has an impact,
you know, while you're sleeping, it's not clear.
I don't think that those studies have been done.
What was really clear was that if you raise these baby rats
in continuous white noise, not super loud,
but just enough to mask the environmental sounds,
that that was enough to keep, you know, the auditory cortex,
the part of the brain that hears,
in this really delayed state
which could essentially slow down the development
and maturation of the brain.
- And one could probably assume
that slowing the maturation of areas of the brain
that are responsible for hearing
might, I want to underscore,
might impact one's ability to speak, right?
Because isn't it the case that if people can't hear,
they actually have a harder time enunciating
in a particular way, is that right?
If I were to not be able to hear my own voice,
would my speech patterns change?
- Well, I think part of it
is that, over time, we develop sensitivity
to the very specific speech sounds in a given language,
and the sensitivity improves
as we hear more and more and more of it.
And then on the other hand,
we lose sensitivity to other speech sounds at the same time.
But as part of that process,
we also have a selectivity, a gain,
a specialization even for those sounds,
even relative to noise,
noisy backgrounds and things like that.
I tend to think about it
like what is the signal-to-noise ratio?
And so the brain has its own ways
of trying to increase that signal-to-noise ratio
in order to make it more clear.
Part of that is how we hear
and how it lays down a foundation
for that signal-to-noise ratio,
and so you can imagine a child
that's raised continuously in white noise
would be really deprived of those kind of sounds
that are really necessary for it to develop properly.
So I think with regard to those tools for babies,
I think we should study it,
we should try to understand this definitively.
I think what we saw in rodents
would tell us that there is potential,
you know, things that we should be concerned about.
But, again, it's not really clear,
if you're just using it at night, whether it has those effects.
- I guess the critical question
that a number of people are going to be asking
is did you decide to use a white noise machine or not
to help keep any of your three children asleep?
- Well, I think the short answer is no.
I mean, I obviously did a lot of thinking
and work on this and thought about it carefully,
but there are other kinds of noise,
or, I wouldn't even call it noise,
other sounds that you can use
that can be equally soothing to a baby.
It's just that white noise has no structure,
and what it's doing
is essentially masking out all of the natural sounds.
And I think the goal
should really be about how do we replace that
with other more natural sounds that structure the brain
in the way that we want to be more healthy.
- Well, I know that
after you finished your medical training,
you went on to, of course, specialize in neurosurgery.
And last I checked, you spend most of your days
either running your laboratory or in the clinic
or running the department,
and your clinical work and your laboratory work
often involve removing pieces of the skull of humans
and going in and either removing things
or stimulating neurons,
treating various ailments of different kinds,
but your main focus these days, of course,
is the neurobiology of speech and language.
And so for those that aren't familiar,
could you please distinguish for us speech versus language
in terms of whether or not
different brain areas control them?
And I know that there's a lot of interest
in how speech and language and hearing
all relate to one another.
And then we'll talk a bit
about, for instance, emotions
and how facial expressions could play into this,
or hand gestures, et cetera.
But for the uninformed person,
and for me, to be quite direct,
what are the brain areas that control speech and language?
What are they really,
and especially in humans, how are they different?
I mean, we have such sophisticated language
compared to a number of other species.
What does all this landscape look like in there?
- Yeah, well, that's a fascinating question,
and I'm going to just try to connect a couple of the dots here,
which is that in that earlier work during medical school,
I was doing a lot of what we call neurophysiology,
putting electrodes into the auditory cortex
and understanding how the brain responds to sounds,
and that's how we actually mapped out these things
about sensitivity and sensitive periods.
That experience with Mike Merzenich
and thinking about how plasticity is regulated in the brain
and in particular about how sound
is represented by brain activity
was something that, you know, was really formative for me.
And because I was a medical student
and was going back to my medical studies,
it was that in combination
with seeing some awake brain surgeries
that our department is really well known for.
One of my mentors, Mitch Berger,
really pioneered these methods
for taking care of patients with brain tumors
and being able to do these surgeries safely
by keeping patients awake and by mapping out language.
- So they're talking and listening,
and you're essentially in conversation with these patients
while there's a portion of their skull removed,
and you are stimulating
or, in some cases, removing areas of their brain,
is that right?
- That's exactly right, and the only thing off there
is it's not "essentially," it is just that.
The only difference between the conversation
that I might have with my patient
who's undergoing awake brain surgery
is that I can't see their face and they can't see my face.
We actually have a sterile drape
that actually separates the operating field,
and they're looking and interacting
with our neuropsychologist,
but I can talk to them and they can hear my voice
and vice versa.
And it's a really, really important way
of how we can protect some of those areas
that are really critical for language,
at the same time, accomplish a mission
of getting the seizures under control
or getting a brain tumor removed.
- And is that because occasionally
you'll encounter a brain area,
maybe you're stimulating
or considering removing that brain area,
and suddenly a patient will start stuttering
or will have a hard time formulating a sentence?
Is that essentially what you're looking for?
You're looking for regions
in which it is okay or not okay to probe?
- Exactly, so the first thing that we do
is that we use a small electrical stimulator
to probe different parts of the areas
that we think might be related and important for language
or talking or even movements of your arm and leg.
That's what we call brain mapping.
And we use a small electrical current
that's delivered through a probe
that we can just put at each spot.
And the areas that we're really interested in
are, of course, the areas that are right around the part
that is pathological, the part that's injured,
or the part that has a brain tumor that we want to remove,
so we can apply that probe
and transiently, meaning temporarily, activate it.
So if you're stimulating the part of the brain
that controls the hand, the hand will move, it will jerk.
Sometimes a fist will be made, something like that.
Other times, while someone is counting
or just saying the days of the week,
you can stimulate in a different area
that stops their speech altogether.
That's what we call speech arrest.
Or if someone is looking at pictures
and they're describing the pictures
and you stimulate in a particular area,
they stop speaking or the words start coming out slurred
or they can't remember the name of the object
that they're seeing in the picture.
These are all things that we're listening for really carefully
while we apply that focal stimulation.
That's what we call brain mapping.
- What are some of the more surprising,
or maybe even if you want to offer
one of the more outrageous examples of things
that people have suddenly done or failed to be able to do
as a consequence of this brain mapping?
- Well, I think the thing to me
that has been the most striking
is that, you know, some of these areas you stimulate,
and altogether, you can shut down someone's talking.
So a person says, "I wanted to say it,
but I couldn't get the words out."
And even though I've seen this thousands of times now,
it's still exciting every time that I see it because,
it's exciting because you're seeing the brain,
it's a physical organ, it's part of the body,
aside from the veins on top of it,
it doesn't look like a machine.
But when you do something like that
and you focally change the way it works,
and you see that because the person can't talk anymore
and they say, "I know what I want to say,
but I couldn't get the words out,"
you're confronted with this idea
that that organ is the basis of speech and language
and way beyond that, obviously,
you know, for all the other functions
that we have for thinking
and feeling our emotions, everything.
So that, to me, is a constant reminder
of, you know, this really special thing that the brain does
which is compute so many of the things that we do,
and in particular in the area
around speech and language, generating words,
something that is really unique to our species,
is just extraordinary to see.
Again, even though I've seen it thousands of times,
it's just having that connection
because it doesn't look like a machine,
but it is doing something that is quite complicated,
precise, and remarkable.
- Do you ever see emotional responses
from stimulation in particular areas?
And do you ever hear or see emotional responses
that are associated with particular types of speech?
Because, for instance, curse words are known to,
people with Tourette's often will curse,
not always, sometimes they'll have tics or other things.
But what I learned from a colleague of ours
is that curse words have a certain structure to them.
There's usually a heavy
or kind of a sharp consonant up front, right,
that allows people, at least as it was described to me,
to have some sort of emotional release.
It's not a word like murmur,
which has kind of a soft entry here,
I'm not using the technical language,
and you pick your favorite curse word out there, folks.
I'm not going to shout out any now or say any now,
but that certain words have a structure to them
that, because of the motor patterns
that are involved in saying that word,
you could imagine it has an emotional response unto itself.
So when stimulating
or when blocking these different brain areas,
do you ever see people get angry or sad
or happy or more relaxed?
- Oh, well, definitely I've seen cases
where you can invoke anxiety, stress,
and I think that there are also areas that you can stimulate
and you can also evoke the opposite of that,
sort of like a calm state.
- I think that brain area is slightly hyperactive in you,
or at least more than me.
In all the years I've known you,
you've always been, at least externally, a very calm person.
I mean, I always find it amusing
that you work on speech and language
and you have a very calming voice.
And I'm being really serious.
I think that there's a huge variation in that, right,
in terms of how people speak and how they accent words.
- Absolutely, yeah.
So there are areas,
for example, the orbitofrontal cortex that we showed
that if you stimulate there...
The orbitofrontal cortex is a part of the brain
that's above the eyes.
That's why they call it orbitofrontal,
meaning it's above the eye or the orbit
and in the frontal lobe, and it's this area right in here.
It has really complex functions.
It's really important for learning and memory.
But one of the things that we observed
is when you stimulate in there,
people tended to have a reduction in their stress,
and it was very much related to their state of being,
meaning that if someone was already kind of feeling normal
and you stimulate there, it didn't do much.
But if someone was in a very anxious state,
it actually relieved that.
And then we've seen the corollary of that
which is true, too,
which is that there are other areas
like the amygdala or parts of the insula
that if you stimulate,
you can cause an acute temporary anxiety,
a nervous feeling,
or if you stimulate the insula,
people can have an acute feeling of disgust.
So, you know, the brain has different functions
and these different nodes that help process the way we feel.
Certainly, I think that, to some degree,
neuropsychiatric conditions reflect an imbalance
of the electrical activities in these areas.
One of the things that was something I will never forget
was taking care of a young woman with uncontrolled seizures.
We call that epilepsy.
It's a medical condition where someone
has uncontrolled electrical activity in the brain.
Sometimes you can see that as convulsions
where people are shaking and lose consciousness.
There are other kind of seizures that people can have
where they don't lose consciousness,
but they can have experiences that just come out of nowhere
just as a result of electrical activity
coming from the brain.
And about six years ago, I took care of a young woman
who was diagnosed psychiatrically
with anxiety disorder for several years.
It turns out that it wasn't really an anxiety disorder.
It was actually that she had underlying seizures,
an epilepsy activating a part of her brain
that evokes, you know, anxious feelings.
- How was that discovered?
Because I know a lot of people out there have anxiety.
I mean, in the absence of a brain scan,
how or why would one suspect that maybe they have a tumor
or some other condition
that was causing those neurons to become hyperactive?
- Yeah, that's really important
because so many people have anxiety,
and the vast, vast majority are not having that
because they're having seizures in the brain.
I think one of the ways that this was diagnosed
was that these panic attacks
that she was having
were not triggered by anything.
They would just happen spontaneously.
And that's what can happen with seizures sometimes.
They just come out of nowhere.
We don't fully understand what can trigger them,
but they weren't things
that were typically anxiety-provoking.
This is something that just happened all of a sudden.
And because you brought it up,
this is not something that you can see on an MRI.
We could not see, looking at the structure of her brain
with an MRI, that she was having seizures.
The only way that we could actually prove this
was actually putting electrodes into her brain
and proving that these attacks that she was having
were localized to a part called the amygdala,
it's a medial part of the temporal lobe, which is here,
and associating the electrical activity
that we were seeing on those electrodes
with the symptoms that she had,
and she ultimately needed a kind of surgery
where she was awake in order to remove this safely.
- Speaking of epilepsy,
a number of people out there have epilepsy
or know people who do.
Are the drugs for epilepsy satisfactory?
You know, I think about things like Depakote,
you know, and adjusting the excitation
and inhibition of the brain.
I mean, are there good drugs for epilepsy?
We know there are not great drugs
for a lot of other conditions.
And how often does one need neurosurgery
in order to treat epilepsy?
Or can it be treated most often just using pharmacology?
- Yeah, great question.
Well, a lot of people have seizures
that can be completely controlled
by their medications, a lot.
But about a third of people who have epilepsy,
which we define as anyone who's had three or more seizures,
actually don't have control
with all of the modern medications that we have nowadays.
And some of the data suggests
that if you have two or three medications,
it actually doesn't matter necessarily
which of the anti-seizure medications it is,
but there is data suggesting that
if you've just tried two or three,
the fourth, fifth, sixth, and beyond
is not likely to help control it.
So we are in a situation, unfortunately,
where a lot of the medications are great for some people,
but for another subset, they can't control it,
and it comes from a particular part of the brain.
Now, fortunately, in that subset,
there's another part of that group
that can benefit from a surgery
that actually either removes that part of the brain
or, nowadays, uses stimulators
to put electrical stimulation
in that part of the brain to help reduce the seizures.
- And you said a third of people with epilepsy
might need neurosurgery?
- Well, what I mean by that
is, like, they continue to have seizures
that are not controlled by all medications,
and there's going to be another subset of those
that may benefit from a surgery.
It's probably not that whole third, it's a subset of that.
It's just to say that epilepsy
can be really hard to get fixed.
And for people where the seizures come from one spot
or, you know, an area, then surgery can do great.
If it comes from multiple areas
or if it comes from the whole brain,
then we have to think about other methods to control it.
Fortunately, nowadays, there's actually other ways.
Surgery now, to us,
doesn't just mean removing part of the brain.
Half of what we do now is use stimulators
that modulate the state of the brain
that can help reduce the seizures.
- I've heard before that the ketogenic diet
was originally formulated in order to treat epilepsy
and, in particular, in kids.
Is that true, and why would being in a ketogenic state
with low blood glucose reduce seizures?
- That's a great question.
And to be honest, I don't know actually
if it was originally designed to treat seizures,
but I can tell you for sure that for some people,
just like with some medications,
it can be a life-changing thing.
It can completely change the way that the brain works.
And it's not something that's for everybody,
but for some people, there's no question,
and it has some very beneficial effects.
I think it's to be determined still,
like why and how that works.
- I've heard similar things about the ketogenic diet
for people with Alzheimer's dementia,
that there's nothing particularly relevant
about ketosis to Alzheimer's per se,
but because Alzheimer's changes the way
that neurons metabolize energy,
shifting to an alternate fuel source
can sometimes make people feel better,
and so a number of people are now trying it.
But it's not as if blood glucose and having carbohydrates
is causing Alzheimer's.
And people get confused often
that just because something can help
doesn't mean that the opposite is harming somebody.
So I find this really interesting.
Sometime I'll check back with you about what's happening
in terms of ketogenic diets and epilepsy.
But you said that in some cases, it can help.
Has that observation been made
both for children and for adults?
Because I thought that, originally,
the ketogenic diet for epilepsy
was really for pediatric epilepsy.
- Yeah, that's right.
So a lot of its focus has really been on kids with epilepsy,
but certainly it's a safe thing to try,
so a lot of adults, you know, will try it as well.
- Interesting.
I'd like to take a quick break
and acknowledge one of our sponsors, Athletic Greens.
Athletic Greens, now called AG1,
is a vitamin mineral probiotic drink
that covers all of your foundational nutritional needs.
I've been taking Athletic Greens since 2012,
so I'm delighted that they're sponsoring the podcast.
The reason I started taking Athletic Greens,
and the reason I still take Athletic Greens
once or usually twice a day,
is that it gets me the probiotics
that I need for gut health.
Our gut is very important.
It's populated by gut microbiota
that communicate with the brain, the immune system,
and basically all the biological systems of our body
to strongly impact our immediate and long-term health,
and those probiotics in Athletic Greens
are optimal and vital for microbiotic health.
In addition, Athletic Greens contains a number
of adaptogens, vitamins, and minerals that make sure
that all of my foundational nutritional needs are met,
and it tastes great.
If you'd like to try Athletic Greens,
you can go to athleticgreens.com/huberman,
and they'll give you five free travel packs
that make it really easy to mix up Athletic Greens
while you're on the road, in the car,
on the plane, et cetera.
And they'll give you a year's supply of Vitamin D3+K2.
Again, that's athleticgreens.com/huberman
to get the five free travel packs
and the year's supply of Vitamin D3+K2.
I'm curious about epilepsy for another reason.
I was taught that epilepsy is an imbalance
in the excitation and inhibition in the brain.
So you think about these electrical storms
that give people grand mal seizures,
you know, shaking and kind of convulsions.
But years ago, I was reading a book,
a wonderful book actually,
called "Einstein in Love" by Dennis Overbye.
It was about Einstein and I guess his personal life.
People who knew him
claimed that he would sometimes walk along,
and then every once in a while would just stop
and kind of stare off into space
for anywhere from a minute to three to five minutes,
and it was speculated that he had absence seizures.
What is an absence seizure?
And the reason I ask is I occasionally will be walking along
and I'll be thinking about something and I'll stop.
But in my mind, I'm thinking during that time,
but I realize that if I were to see myself from the outside,
it might appear that I was just kind of absent.
What is an absence seizure?
Because it's so strikingly different in its description
from, say, a grand mal convulsive seizure.
- Sure, well, like I mentioned before,
depending on how the seizure activity spreads in the brain
or how it actually propagates,
if it stays in one particular spot
and doesn't spread to the entire brain,
it can have a really different manifestation.
It can present really differently.
So an absence seizure is just one category
of different kinds of seizures
where you can lose consciousness basically,
and what I mean by that is that you're not fully aware
of what's going on in your environment, okay?
So you're sort of taken offline
temporarily from consciousness,
but you could still be, for example, standing,
and to people who are not paying attention,
they may not even be aware that that's happening.
- What are some other types of seizures?
- Well, you know, I think some of the other kinds,
the classic ones are temporal lobe seizures.
So these are ones that come from the medial structures
like the amygdala and hippocampus.
Oftentimes people, when they have seizures coming from that,
they may taste something very unusual like a metallic taste
or smell something like the smell of burning toast,
something like that.
There are some people with temporal lobe seizures
who will have deja vu.
They will have that experience
that you've been somewhere before,
but that's just a precursor to the seizure.
And it just highlights that when people have seizures
coming from these areas,
they sometimes hijack what that part of the brain
is really for.
So the amygdala and hippocampus, for example,
are really important for learning and memory.
It's not surprising that when people have seizures there
that it can evoke a feeling of deja vu
or that it can evoke a feeling of anxiety.
And the areas that are right next to it, for example,
these areas are really important for processing smell.
So these areas are right next to each other
so you can have these kind of complex set of symptoms,
the weird taste, the smell of toast,
and then a feeling of deja vu,
that's classic for a temporal lobe seizure,
and it's because those parts of the brain
that process those functions are right next to each other.
- I'm told that I've had nocturnal seizures,
and I've woken up sometimes from sleep
having felt as if I was having a convulsion,
the sort of sense of buzzing in the back of the head.
This happened to me two or three times in college.
Well, I woke up and my girlfriend was very distraught,
like, "You were having a seizure."
I was having a full convulsion in my sleep.
Is that correct?
Is there such a thing as nocturnal seizures?
What do they reflect?
They eventually stopped happening,
and I couldn't tether them to any kind of life event.
I wasn't doing any kind of combat sport
or anything at the time,
I wasn't drinking alcohol much,
it's never really been my thing.
What are nocturnal seizures about?
- [Eddie] Oh, well-
- And do I need brain surgery?
[Andrew laughs] [Eddie laughs]
- Nocturnal seizures are just another form.
Like, again, epilepsy and seizures
can have so many different forms
and not just, like, where in the brain,
but also when they happen.
And there are some people who, for whatever reason,
it's very timed to the circadian rhythm.
It's actually not just happening at night,
but a certain period at night
when people are in a certain stage of sleep
that the brain is in a state
that it's vulnerable to having a seizure,
and so that's basically just one form of that.
Again, it's not just about where it's coming from,
but also when it's happening and how that's timed
with other things that are happening with the body.
- Interesting.
Well, it eventually stopped happening
so I stopped worrying about it.
I haven't had seizures since.
Returning to speech and language,
when I was getting weaned in neuroscience,
I learned that we have an area of the brain
for producing speech
and we have an area of the brain for comprehending speech.
What's the story there?
Is it still true
that we have a Broca's and a Wernicke's area?
Those are names of neurologists, presumably,
or neurosurgeons that discovered
these different brain areas.
Maybe you could familiarize us
with some of the sort of textbook version
of how speech and language are organized in the brain,
maybe share with us a little bit of the lesion studies
that led to that understanding,
and then I would love to hear a bit
about what your laboratory is discovering
about how things are actually organized,
because from some discussions you and I have had
over the last year or so,
it seems like, well, let's just be blunt,
it seems that much of what we know from the textbooks
could be wrong.
- Well, I love that question
because, for me, it's very central to the research we do,
and it's where the intersection
between what we do in the laboratory in our research
interfaces with what I see in patients.
And one of the things that fascinated me
early on in my medical training
was that, in doing some of these brain mappings,
or watching them with my mentor,
or taking care of patients that had, you know, brain tumors
in a certain part of the brain,
a lot of times, what I was seeing in a patient
did not correlate with what I was taught in medical school.
And, you know, some people will think,
well, this might be an exception,
but after you see it for a couple times
and if you're kind of interested in this problem,
you know, it poses a serious challenge
to what you've learned
and how you think about how these things operate.
And that actually got me really interested
in trying to figure this out
because, earlier, we talked about
just this extraordinary thing that the brain is doing
to create words and sentences,
and that's the process by which I'm getting ideas out
from my mind into yours.
It's an incredible thing, right?
It's the basis of communication,
high information communication between two individuals
that's really unique to humans.
So in historical times,
how this works has been very controversial
from day one of neuroscience.
A long time ago, people thought the bumps on your head
corresponded to the different faculties of the mind.
So for example, if you had a bump here,
it might be corresponding to intelligence
or another one over here, you know, to vision
and these kind of things.
That's what we nowadays call phrenology,
and that was kind of the starting point.
A lot of that has been, of course, debunked,
but when you see those little statues
of different brain partitions on someone's head,
that's essentially how people
were thinking about how the brain worked back then,
a couple hundred years ago.
The beginning of modern neuroscience, actually,
was very much related
to the discovery of language.
So modern neuroscience,
meaning moving beyond this idea that the bumps on the scalp
corresponded to the faculties of the mind,
but there were things
that actually were in the brain themselves,
and they weren't corresponding to things
that you could see superficially,
like on the scalp or externally,
that it was something about the brain itself.
I mean, it seems so obvious now,
but back then, this was the big academic, you know, debate.
And the first observation
that I think was really impactful in the area of language
was an observation by a French neurosurgeon
named Pierre Broca.
And what he observed, in a patient,
not one that he did surgery on,
but one that he had seen and taken care of,
was that the person couldn't talk.
And, in particular, they called this individual Tan
because the only words that he could produce were tan, tan.
For the most part,
he could generally understand the kind of things
that people were asking him about,
but the only thing that he could utter from his mouth
were these words, tan, tan.
And what eventually had happened
was this individual passed away,
and the way that neuroscience was done back then
was basically to wait until that happened
and then to remove the brain
and to see what part of the brain was affected
in this patient that they called Tan.
And what Broca found was that there was a part
in the left frontal lobe,
so the frontal lobe is this area like I described earlier,
which is, you know, up behind our forehead, up here,
and in the back of that frontal lobe,
he claimed that this was the seat of articulation
in the brain.
He literally used something like that in French,
the seat of articulation,
meaning that this is the part of the brain
that is responsible for us to generate words.
About 50 years later, the story becomes more complicated
with a German neurologist named Carl Wernicke.
And what Wernicke described
was a different set of symptoms.
In patients, he observed a different phenomenon
where people could produce words,
and they were fluent in the sense
that, like, the words sound like they could be real words
but from a different language, for example.
And some of us call that, like, word salad or jargon.
They were essentially making up words,
but it was not intentional.
It was just the way that the words came out.
But in addition to that, he observed that these people
also could not understand what was being said to them.
So we could be having a conversation,
and I'd be asking you, "Am I a woman?"
And you might nod your head,
you know, just because you're not processing the question.
And so here are two observations.
One is that the frontal lobe is important
for articulating speech,
creating the words and expressing them fluently.
And then a different part of the brain
called the left temporal lobe,
which is this area right above my ear,
that is an area that I think was claimed
to be really important for understanding.
So the two major functions in language,
to speak and to understand,
were kind of pinned down to that,
and we've had that basic idea in the textbooks
for, you know, over 200 years.
- It's certainly what I was taught.
- [Eddie] Is that right?
- Oh, yeah, and certainly what we still,
we still teach undergraduates, graduate students,
and medical students that.
- Well, that's what I learned, too, in medical school.
And what I saw in reality
when I started taking care of patients
was that it's not so simple.
In fact, part of it is fundamentally wrong.
So just in a nutshell,
nowadays, after, you know, looking at this very carefully
over hundreds of patients,
we've shown that with surgeries,
for example, in the posterior part of the frontal lobe,
a lot of times, people have no problem talking at all
whatsoever after those kinds of surgeries,
and that it's a different part of the brain
that we call the precentral gyrus.
The precentral gyrus is a part of the brain
that is intimately associated with the motor cortex.
The motor cortex is the part of the brain
that has a map of your entire body
so that it has a part that corresponds to your feet,
it has a part that corresponds to your hands.
But then there's another part
that comes out more laterally on the side of the brain
that corresponds to your lips, your jaw, your larynx,
and we have seen that when patients have surgeries
or injuries to that part of the brain,
it actually can really interrupt language,
so it's not as simple
as just moving the muscles of the vocal tract,
but it's also important for formulating
and expressing words.
So for Broca's area,
I think the field now recognizes,
not just because of our work
but because of many other people that have studied this
in stroke and beyond,
that the idea
that Broca's area is the basis of speaking
is fundamentally wrong,
and we have to figure out how to correct the textbooks
so that we understand that
and can continue to make progress.
Now, in terms of the other major area
that we call Wernicke's area in the posterior temporal lobe,
that has held,
I think, quite legitimately for some time.
So that is an area that you have to be super careful
when you do surgery there.
That's an area where,
if you have a mistake there and you cause a stroke
or you remove too much of the tumor there,
you go too far beyond it,
then the person can be really, really hurt.
Like, they'll have a condition that we call aphasia
where they may not be able to understand words,
they may not be able to remember the word
that they're trying to say.
They know what they're trying to say,
but they can't remember the precise word
that goes with the object that they're trying to think of.
They may even produce words
that I described before are like word salad or very jargony.
So, you know, they might say something like tamiranai.
That's not a real word,
but it sounds like it could be, you know?
And that's just because that part of the brain
has some role not just in understanding what we hear,
but also actually has a really important role
in sending the commands to different parts of the brain
to control what we say.
- Not long ago, you and I
and my good friend Rick Rubin
were having a conversation about medicine and science,
and Rick asked the question,
"What percentage of what you learned
in graduate and/or medical school do you think is correct?"
And you had a very interesting answer.
Would you share it with us?
- I don't know. I don't remember the exact.
But I would say
that with regard to the brain in particular,
I would say about 50%
gets it right and accurate and is helpful,
but another 50% is just the approximation
and oversimplification of what's going on.
The example that we talked about,
language is just an example of that.
It's just there are things that make it easier to learn
and easier to teach and easier to even think about,
and that's probably why we continue teaching
in the way that we do.
But I think as time goes on,
the complexity of reality of how the brain works
is, well, first of all,
we're still trying to figure it out,
and second of all, it is complex
and it's still an incomplete story.
- It's early days.
And we'll get into some of the technical advances
that are allowing some correction of the errors
that the field has made.
And, look, no disrespect to the brain explorers
that came before us,
and the ones that come after us will correct us, right?
That's the way the game is played.
But what I'm hearing
is that there are certain truths that people accept,
and then there's about half of the information
that is still open for debate
and maybe even for complete revision.
One thing that I learned about language
and the neural circuits underlying language
is that it's heavily lateralized,
that these structures, Broca's and Wernicke's
and other structures in the brain
responsible for speech and comprehension of speech
sit mainly on one side of the brain,
but they do not have a mirror representation
or another equivalent area
on the opposite side of the brain.
And for those that haven't poked around in a lot of brains,
certainly you, Eddie,
have done far more of that than I have,
but I've done my fair share in nonhuman species
and a little bit in humans,
almost every structure,
almost every structure has a matching structure
on the other side of the brain,
so when we say the hippocampus,
we really mean two hippocampi,
one on each side of the brain.
But language, I was taught, is heavily lateralized,
that is, that there's only one.
So that raises two questions. One, is that true?
And if it is true, then what is the equivalent real estate
on the opposite side of the brain doing
if it's not doing the same function
that the one on, say, the left side is performing?
- Well, that's one of those things
that is, again, like mostly true, not 100%.
And what I mean by that is that it's complicated.
So for people who are right-handed,
99% of the time, the language part of the brain
is on the left side.
- And what is the equivalent brain area
on the right side doing if it's not doing language?
- Well, you know, the thing that's incredible
is if you look at the right side
and you look at it very carefully,
either under an MRI
or you actually look at the brain
under slides in a microscope,
it looks very, very similar.
It's not identical, but it looks very, very similar.
All the gyri, which are the bumps on the brain
that, you know, have the different contours
and the valleys that we call sulci,
those all look basically the same.
Like, there is a mirror anatomy on the left and right side,
and so it's not been so clear
what's so special actually about the left side
to house language.
But what we do know,
and this is what we use all the time in assessing
and figuring out, you know, this before surgery,
is if you're right-handed,
99% of the time, the language is going to be
on the left side of the brain.
- Is handedness genetic in any way?
I mean, when I grew up, - Yes.
- a pen or a pencil or crayon
was placed into my hand presumably, or I started using...
My father was left-handed,
and then where he grew up in South America,
they forced him to force himself to become right-handed.
They actually used to restrict the movement of his left hand
so he was forced to write...
And then you have hook lefties and hook righties.
And I know this is a deep dive
and we probably don't want to go
into every derivation of this,
but so for somebody who's left-handed,
who naturally just starts writing with the left hand,
there's some genetic predisposition to being left-handed?
- Absolutely. No question about it.
Handedness is not entirely but strongly genetic.
So there is something that ties all of this,
and what does handedness, for example, have to do
with the part of your brain that controls language?
Well, it turns out that the parts that control the hand
are very close to the areas
that really are responsible for the vocal tract.
Again, part of the motor cortex
and part of this brain area called the precentral gyrus.
And there are some theories
that, because of their proximity,
that these parts of the brain
might develop together early in utero
and they might have a head start compared to the right side,
and because they have a head start
that things solidify there.
This is one theory of why this happens.
In people who are left-handed,
it still turns out that the vast majority of people
have language on the left side,
but it's not 99%, it's more like 70%.
So if you're left-handed,
it's still more likely that the language part of your brain
is going to be on the left side,
but there's going to be a greater proportion, maybe 20, 30%,
where it's either in both hemispheres or on the right side.
And just to make this a little bit more interesting
is that when people have strokes on the left side,
and if they are lucky enough to recover from those strokes,
sometimes that involves reorganization,
this term that we called plasticity earlier,
where the areas around the stroke
take on that new function
in a way that they didn't have before.
That can certainly happen in the left hemisphere,
but there are also instances where the right hemisphere
can also start to take on the function of language
where it was once on left and then transfers to the right.
So the thing that I think about a lot
is that the machinery
probably exists on both sides,
but we don't use them together all the time.
In fact, we may strongly bias one side or the other.
Just like we use our two hands in very, very different ways,
it's a little bit the same with the brain.
Well, it's because of what we do with the brain
that actually is why we use the hands in different ways.
And the same thing goes for language,
which is that, again, the substrates, the organ,
the language organ, the part of the brain that processes it
probably has very similar machinery
on the left side as the right,
and the right may have the capability to do it,
but in real, everyday use,
the brain specializes one of the sides
in order for us to use it functionally.
That's a theory.
- You're bilingual, correct?
- Yeah.
- [Andrew] You speak English and Chinese?
- Yeah.
- For people that are bilingual and that learn two or more,
well, bilingual is two, obviously,
but learn both languages
or let's say more languages from an early time in life,
do they use the same brain area to generate that language?
Or perhaps they use the left side to speak English
and the right side to speak Chinese?
Do we know anything about bilingualism in the brain?
- Well, I think we know a lot
about bilingualism in the brain.
The final answers are still out there,
and part of the answer is yes, absolutely,
we use some parts of the brain very similarly.
We actually have a study in the lab right now
where we're looking at this
in people who speak one language or another
or are bilingual,
and we're looking at how the brain activity patterns occur
when they're hearing one language versus the other.
And what's striking to see, actually,
is how overlapping they really can be.
Even though the person may have no idea
of the language that they're hearing,
the English part of the brain is still processing that
and maybe trying to interpret it
through an English lens, for example.
So the short answer is that with bilingualism,
there is shared circuitry,
there's this shared machinery in the brain
that allows us to process both, but it's not identical.
It's the same part of the brain,
but what it's doing with the signals
can be very, very different.
And what I mean by that precisely
is not the instantaneous detecting of one sound to the next,
but the memory of the sequences of those particular sounds
that give rise to things like words and meaning,
that can be highly variable from one individual to the next,
and those neurons are very, very sensitive
to the sequences of the sounds,
even though the sounds themselves
might have some overlap between languages.
- Fascinating.
Okay, so we've talked about brain areas
and a little bit about lateralization.
I want to get back to the hands
and some things related to emotion in a little bit.
But maybe now we could go into those brain areas
and start to ask the question,
what exactly is represented or mapped there?
And for people who perhaps aren't familiar
with brain mapping and representation and receptive fields,
perhaps the simplest analogy might be the visual system
where I look at your face, I know you, I recognize you,
and certainly there are brain areas
that are responsible for face recognition.
But the fact that I know that that's your face,
and for those listening, I'm looking into Eddie's face,
the fact that I know that that's your face at all
is because we are well aware that there are cells
that represent edges and that represent dark and light,
and those all combine
in what we call a hierarchical structure.
They sort of build up from basic elements
as simple as little dots,
but then lines and things that move, et cetera,
to give a coherent representation of the face.
When I think about language,
I think about words and just talking.
If I sit down and do a long podcast
or I think about asking you a question,
I don't even think about the words I want to say very much.
I mean, I have to think about them a little bit,
one would hope,
but I don't think about individual syllables
unless I'm trying to, you know, accent something
or it's a word that I have a particular difficulty saying
or I want to change the cadence, et cetera.
So what's represented in the neurons,
the nerve cells in these areas?
Are they representing vowels, consonants?
And how do things like inflection...
Like I occasionally will poke fun at upspeak,
but there's, I think, a healthy, a normal version of upspeak
where somebody's asking a question,
like, for instance, what is that?
That's an appropriate use of upspeak
as opposed to saying something that is not a question
and putting a lilt at the end of the sentence,
then we call that upspeak,
which doesn't fit with what the person is saying.
So what in the world is contained in these brain areas,
what is represented,
to me, is perhaps one of the most interesting questions,
and I know this lands square in your wheelhouse.
- Sure, let's get into this, Andrew,
because this is some of the most exciting stuff
that's happening right now: understanding
how the brain processes these exact questions.
And you asked me earlier,
you know, what is the difference between speech and language?
Speech corresponds to the communication signal.
It corresponds to me moving my mouth and my vocal tract
to generate words,
and you're hearing these as an auditory signal.
Language is something much broader.
So it refers to what you're extracting
from the words that I'm saying,
we call that pragmatics
and sort of are you getting the gist of what I'm saying?
There's another aspect of it that we call semantics.
Do you understand the meaning
of these words and the sentences?
There's another part that we call syntax,
which refers to how the words are assembled
in a grammatical form.
So those are all really critical parts of language,
and speech is just one form of language.
There are many other forms like sign language, reading.
Those are all important modalities for language.
Our research really focuses on this area
that we're calling speech,
again, the production of this audio signal
which you can't see but your microphones are picking up.
There are these vibrations in the air
that are created by my vocal tract
that are picked up by the microphone
in the case of this recording,
but also picked up by the sensors in your ear.
The very tiny vibrations in your ear are picking that up
and translating that into electrical activity.
And what the ear does at the periphery
is translate all sounds into different frequencies.
So its main thing to do is to take a speech signal
or any other kind of sound and decompose it,
meaning separate that sound into different kinds of signals.
And in the case of hearing, what it's doing
is separating it out into low, middle, high frequencies
at a very, very high resolution.
It's doing it very quickly,
and it's doing it in a really fine way
to separate all of those different sounds.
So if you look at the periphery
near the nerve that goes to your ear,
those nerve fibers,
some of them are tuned to low frequencies,
some of them are tuned to high frequencies,
some of them are tuned to the middle frequencies,
and that is what your ear is doing.
It's taking these words
and splitting them up into different frequencies.
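As a rough illustration of the decomposition being described here, the sketch below splits a toy sound into low, mid, and high frequency bands with band-pass filters, loosely analogous to what the cochlea does. It is only a schematic analogy, not analysis code from the lab; the sample rate, band edges, and test signal are invented for the example.

```python
# Illustrative sketch only: splitting a toy sound into low/mid/high bands,
# loosely analogous to the frequency decomposition performed by the ear.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000                                   # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Toy "voice-like" signal: 120 Hz voicing plus some higher-frequency energy.
sound = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 2500 * t)

def band_energy(x, low, high, fs, order=4):
    """RMS energy of x within the band [low, high] Hz."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return np.sqrt(np.mean(sosfiltfilt(sos, x) ** 2))

for name, (lo, hi) in {"low": (80, 500),
                       "mid": (500, 2000),
                       "high": (2000, 7000)}.items():
    print(f"{name:>4} band ({lo}-{hi} Hz): RMS = {band_energy(sound, lo, hi, fs):.3f}")
```

Real cochlear filtering is far finer-grained and nonlinear; the point is only that the signal is separated into frequency channels before anything word-like exists.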
- And for those of you out there
that aren't familiar with thinking about things
in the so-called frequency space,
bass tones would be lower frequencies
and high-pitched tones would be higher frequencies,
just to make sure everyone's on the same page.
So the sound of my voice, the sound of your voice,
or any sound in the environment
is being broken down into these frequencies.
Are they being broken down
into very narrow channels of frequency,
or are they, I want to avoid nomenclature here,
or are they being binned as fairly broad frequencies?
'Cause we know low, medium, and high,
but, for instance, I can detect
whether or not something's approaching me
or moving away from me
depending on whether or not it sweeps louder
[imitates sound approaching]
or [imitates sound receding], right, towards or away.
It's subtle, and of course it's combined
with what I see and my own movement.
But how finely sliced
is our perception of the auditory world?
- Oh, extraordinarily precise.
I mean, we take these millisecond cues,
the millisecond differences
between the sound coming to one ear,
let's say your right ear versus your left,
to understand what direction that sound came from.
Those are only millisecond differences,
and that's how precise this works.
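To make the millisecond-scale point concrete, here is a toy sketch of how a small delay between the two ear signals can be recovered by cross-correlation. It is not a model of the circuitry that actually computes this in the brain, and all the numbers are invented for the example.

```python
# Toy sketch: recovering a sub-millisecond interaural time difference
# by cross-correlating the two ear signals. Numbers are invented.
import numpy as np

fs = 44100                                     # sample rate in Hz (assumed)
rng = np.random.default_rng(0)
source = rng.standard_normal(int(0.05 * fs))   # 50 ms of broadband sound

true_delay_ms = 0.4                  # sound reaches the right ear 0.4 ms later
shift = int(round(true_delay_ms * 1e-3 * fs))
left = source
right = np.concatenate([np.zeros(shift), source[:-shift]])

# Cross-correlate and find the lag where the two signals line up best.
lags = np.arange(-len(left) + 1, len(left))
xcorr = np.correlate(right, left, mode="full")
estimated_ms = lags[np.argmax(xcorr)] / fs * 1e3
print(f"estimated delay: {estimated_ms:.2f} ms (true: {true_delay_ms} ms)")
```

The sketch only shows that a fraction of a millisecond of timing difference carries usable information about direction; the auditory system is thought to extract it with dedicated timing circuits rather than anything like this computation.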
But on the other hand,
it does a lot of computation on this,
a lot of analysis, as you go up,
and a lot of our work is focused on the part of the brain
that we call the cortex.
The cortex is the outermost part of the brain where we believe
that sounds are actually converted into words and language.
So there's this transformation
where, at the ear, words are decomposed
and turned into these elemental frequency channels,
and then, as the signal goes up through the auditory system,
it hits the cortex.
There are some things that happen obviously
before it gets to the cortex,
but when it gets to cortex,
there's something special going on,
which is that that part of the brain
is looking for specific sounds.
And specifically what I mean by that
is the sounds of human language,
so the ones that are the different consonants and vowels
in a given language.
One of the ways that we have studied this
is looking in patients who have epilepsy.
And in a lot of these cases
where the MRI looks completely normal,
we have to put electrodes surgically on a part of the brain.
The temporal lobe is a very, very common place,
so we've done a lot of our work
looking at how the temporal lobe processes speech sounds
because we're looking for where the seizures start,
but then we're also doing brain mapping
for language and speech so we can protect those areas.
We want to identify the areas that we want to remove
to cure someone's seizures,
but we also want to figure out the areas
that are important for speech and language
to protect those so that we can do a surgery
that's effective and safe.
And so in our research,
and why it's become a really important addition
to our knowledge
is that we have electrodes directly recording
from the human brain surface.
A lot of the technology we work with right now
is recording on the order of millimeters,
and they can record millisecond time resolution
of neural activity,
and what we see is extraordinary patterns of activity
when people hear words and sentences.
If you look at that part of the brain
that we call Wernicke's area
in this part of the temporal lobe,
this whole area lights up when you hear words or speech.
And it's not in a way
that is like a general light bulb warming up
and it's generally lit up,
but what you actually see
is something much, much more complicated,
which is a pattern of activity,
and what we've done in the last 10 years
is try to understand where that pattern comes from.
And if we were to look at each individual site
from that part of the brain, what would we see?
What parts of words are being coded by electrical activity
in those parts of the brain?
Remember, the cortex is using electrical activity
to transmit information and do analysis,
and what we're doing is we're eavesdropping
on this part of the brain as it's processing speech
to try to understand what each individual site is doing.
- And what are those sites doing,
or could you give us some examples
of what those sites are doing?
So, for instance, are they sites that are specific for,
or we could say even listening for consonants or for vowels
or for inflection or for emotionality?
What's in there?
- [Eddie] Okay, well-
- What makes these, what makes these cells fire?
- Yeah, what gets them excited?
- Yeah. - What gets them going
is hearing speech.
In particular, there are some of these really focal sites,
again, just on the order of millimeter
or, at some level, single neurons
that are tuned to consonants, some are tuned to vowels,
some are tuned to particular features of consonants.
What I mean by that are different categories of consonants.
There's a class of consonants
that we call plosive consonants.
This is a little bit of linguistic jargon,
but I'm going to make a point here with it,
which is that certain classes of sounds, when you make them,
require you to actually close your mouth temporarily.
- Hmm. Now I'm going to be thinking about this.
So plosive, like plosive,
like saying the word plosive requires that.
- Exactly, so what's cool about that
is that we actually have no idea
what's going on in our mouth when we speak.
We really have no idea.
- Some people definitely have no idea.
[Andrew laughs] [Eddie laughs]
- Well, not just like in terms
of what you're saying sometimes,
but actually like how you're actually moving,
you know, the different parts of vocal tract.
And I have a feeling if we actually required understanding,
we would never be able to speak 'cause it's so complex.
It's such a complex feat.
Some people would say that the most complex motor thing
that we do as a species is just speaking.
Not, you know, the extreme feats of acrobatics
or athleticism but speaking.
- And especially when one observes, you know, opera
or, you know, freestyle rappers.
And of course it's not just the lips. It's the tongue.
And you've mentioned two other structures.
The pharynx and larynx are the main ones.
Can you tell us, just educate us at a superficial level
what the pharynx and larynx do differentially?
'Cause I think most people aren't going to
be familiar with that. - Okay, sure.
So I'll talk primarily
about the larynx here for a second,
which is that if you think about when we're speaking,
really, what we're doing is we're shaping the breath.
So even before you get to the larynx,
you've got to start with the expiration.
So we fill up our lungs and then we push the air out.
That's a normal part of breathing.
And what is really amazing about speech and language
is that we evolved to take advantage
of that normal physiologic thing, add a larynx,
and what the larynx does is that when you're exhaling,
it brings the vocal folds together.
Some people call them vocal cords.
They're not really cords. They're really vocal folds.
They're two pieces of tissue that come together,
and a muscle brings them together.
And then what happens
is when the air comes through the vocal folds
when they're together,
they vibrate at really high frequencies,
like 100 to 200 hertz.
Yours is probably about 100 hertz.
The average- - Whereas yours is 200.
[Andrew laughs]
- No, no. Most male voices are around 100, okay?
And then the average female voice around 200 hertz.
- Well, and as you know, I've always had the same voice.
- Yes, yes, the same- - This was a point of shame
when I was a kid.
Folks, my voice never changed. I always had the same voice.
This is a discussion for another time.
- Yeah, well, it's a great voice,
you know, a great baritone voice,
but I know in your voice, it's a low-frequency voice.
And the reason why men and women
generally have different voice qualities
is it has to do with the size of the larynx
and the shape of it, okay?
So in general, men have a larger voice box or larynx,
and the vibrating frequency, the resonance frequency
of the vocal folds when the air comes through them
is about 100 hertz for men and about 200 for women.
So what happens is,
okay, so you take a breath in,
and then as the air is coming out,
the vocal folds come together and the air goes through.
That creates the sound of the voice that we call voicing,
and that's the energy of your voice.
It's not just your voice characteristic,
it's the energy of your voice.
It's coming from the larynx, it's a noise,
and it's the source of the voice.
And then what happens is that energy,
that sound goes up through the parts of the vocal tract,
like the pharynx into the oral cavity,
which is your mouth and your tongue and your lips.
And what those things are doing
is that they're shaping the air in particular ways
that create consonants and vowels.
So that's what I mean by shaping the breath.
It just starts with this exhalation.
You generate the voice in the larynx,
and then everything above the larynx is moving around,
just like the way my mouth is doing right now,
to shape that air into particular patterns
that you can hear as words.
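A classic way to picture this "shaping the breath" idea is the source-filter model: a roughly 100 Hz pulse train stands in for vocal-fold vibration, and resonant filters stand in for the vocal tract above it. The sketch below is only an illustration of that idea; the pitch, formant frequencies, and bandwidths are placeholder numbers, not measurements.

```python
# Toy source-filter sketch: a ~100 Hz pulse train ("vocal folds") shaped by
# two resonances ("vocal tract") to suggest a vowel-like sound.
# Pitch, formant frequencies, and bandwidths are placeholders, not measurements.
import numpy as np
from scipy.signal import lfilter

fs = 16000
n = int(0.5 * fs)                  # half a second of sound

f0 = 100                           # voicing frequency, roughly a male voice
source = np.zeros(n)
source[:: fs // f0] = 1.0          # impulse train standing in for vocal-fold pulses

def resonator(freq, bandwidth, fs):
    """Coefficients of a second-order filter with a resonance at `freq` Hz."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    return [1.0], [1.0, -2 * r * np.cos(theta), r ** 2]

vowel = source
for freq, bw in [(700, 110), (1200, 120)]:   # two rough "ah"-like formants
    b, a = resonator(freq, bw, fs)
    vowel = lfilter(b, a, vowel)

print("synthesized", len(vowel) / fs, "seconds of a vowel-like waveform")
```

In this simplified picture, changing the filter (the mouth shape) while keeping the same source is what turns one vowel into another.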
- Fascinating, and immediately makes me wonder
about more primitive or non-learned vocalizations
like crying or laughter.
Babies will cry, babies will show laughter.
Are those sorts of vocalizations
produced by the language areas like Wernicke's,
or do they have their own unique neural structures?
- Yeah, interesting question.
So we call those vocalizations.
A vocalization is basically where someone
can create a sound, like a cry or a moan,
that kind of sound,
and it also involves the exhalation of air.
It also involves some phonation at the level of the larynx
where the vocal folds come together
to create that audible sound.
But it turns out that those are actually different areas,
so people who have injuries in the speech and language areas
oftentimes can still moan, they can still vocalize,
and it is a different part of the brain,
I would say an area that even nonhuman primates have
that can be specialized, you know, for vocalization.
It's a different form of communication
than words, for example.
- The intricacy of these circuits in the brain
and their connections to the pharynx and larynx is just,
it's almost overwhelming
in terms of thinking about just how complicated it must be,
and yet some general features and principles
are starting to emerge from your work
and from the work of others.
If we think about that work
and we think about, for instance, Wernicke's area,
if I were to record from neurons in Wernicke's area
at different locations,
would I find that there's any kind of systematic layout?
For instance, in terms of, you talked about sound frequency,
we know that low frequencies
are represented at one end of a structure
and high frequencies at the other.
This is true actually, at least from my earlier training,
within the ear itself, within the cochlea,
the early work of von Bekesy and from cadavers, right?
They actually figured this out from dead people,
which is incredible.
A fascinating literature people should look up.
And in the visual system,
we know that, for instance, you know, visual position,
where things are is mapped systematically.
In other words,
neurons that sit next to each other in the brain
represent portions of visual space
that are next to each other in the real world.
What is the organization of language
in areas like Wernicke's and Broca's?
For instance, I think of the vowels, A, E, I, O, U,
as kind of a coherent unit,
but do I find that the A neurons are next to the E neurons,
and then the I, O, U neurons next to those?
Is that vowel representation also laid out in order,
or is it kind of salt and pepper, is it random?
- That's been one of the, like, most important questions
we've been trying to answer for the past decade.
So there is a part of the brain
that we call the primary auditory cortex,
and the primary auditory cortex
is deep in the temporal lobe.
And if you looked at that part of the brain,
there is a map of different sound frequencies.
So if you look at the front of that primary auditory cortex,
you'll find low-frequency sounds,
and then as you march backwards in that cortex,
it goes from low to medium to high frequencies.
It's organized in this really nice and orderly way.
And it turns out there's not just one.
There's, like, mirrors of that tone frequency map
in the primary auditory cortex.
The areas that are really important for speech
are on the side of that.
And we now think that speech
can go straight to the speech cortex
without having to go through the primary auditory cortex,
that it has its own pathway to get to the part of the brain
that processes speech.
And when we've looked at that question about is there a map,
the short answer is yes, there is a map,
but it is not structured universally across all people
in a way that we can clearly see right now.
It is like a salt and pepper map
of the different features in speech.
So before, we talked about these sounds
that are called plosives.
You make a plosive when the mouth
or something in the oral cavity closes temporarily,
and when it opens,
that creates that fast plosive sound.
So when you say dad
or, you know, like the B in ball,
that kind of thing,
you will notice that your lips actually close,
and then it's the release of that
that creates that particular sound, okay?
So those are the sounds that we call plosive.
Those are like ba, da, ga, pa, ta, ka.
Those are a certain class of consonants
that we call plosive sounds.
There is another class of sounds
that we call fricatives in linguistics.
Fricatives are created by turbulence
in the airstream as it comes out through the mouth,
and the way that we make that turbulence
is getting the mouth and the lips to close
almost until they're completely shut
or putting the tongue near the teeth
to almost get it completely shut
but just have a narrow aperture,
that creates a turbulence in the airflow
that we perceive as a high-frequency sound.
So those are the sounds like sha and tha,
those kind of things.
Those are, if you look at the frequencies,
they're higher frequencies,
and those are created by specific movements
where you constrict the airflow to create turbulence,
and we hear it as sha, sa, tha.
- So if I say that.
- [Eddie] Exactly.
- And as opposed to a plosive where I'd say explosive.
- [Eddie] Right.
- Of course, I'm emphasizing here.
Well, this explains something and solves a mystery,
which is recently I've been fascinated by the work
of a physician scientist back east,
Dr. Shanna Swan, who's done a lot of work
on things that are contained in pesticides and foods
that are changing hormone levels,
and she refers to phthalates, which is spelled...
So it's both a plosive and a tha,
so it's combining the two,
and it's one of the most difficult words
in the English language to pronounce,
second only perhaps
to the correct pronunciation of ophthalmology.
[Eddie laughs] [Andrew laughs]
So it's a combination of a plosive
and one of these tha sounds,
and that's probably why it's difficult.
- That's exactly right.
In fact, we have a term for that.
That's called a consonant cluster.
So sometimes syllables will just have one consonant,
but when we start stacking certain consonants in a sequence,
and there's rules that actually govern which consonants
can be in a particular sequence for a given language,
that makes it more complicated.
And certain languages
have a lot more consonant clusters than others.
- For instance- - So for instance,
Russian, for example, has a lot of consonant clusters.
English has a lot of them.
There are other languages that have very, very few.
For example, Hawaiian.
Hawaiian has an inventory
of about 12 to 14 different phonemes,
14 different consonants and vowels.
English, in contrast,
has about 40 different consonants and vowels.
So languages have different inventories.
They can overlap for sure,
but different languages use different sound elements,
combine and recombine those elements
to give rise to different words and meanings.
- Can we say that there is a most complicated language
out there, or among the most complicated?
Would it be Russian?
- It's definitely high up there.
English is up there, too, actually. Yeah, German as well.
- And in terms of learning multiple languages
during development,
my understanding is that if one
wants to become bilingual or trilingual,
best to learn those languages simultaneously
during development, ideally before age 12,
if one hopes to not have an accent in speaking them later.
Is that correct, or do you want to revise that?
- Well, basically, the earlier is better,
and the more intense it is, the more immersive it is,
and the longer, you know, that you can be exposed to it,
the better. That's really important.
A lot of people can get exposed to it early
and basically lose it.
Even though it's, quote, unquote,
during that sensitive period,
unless it's maintained, it can be very easily lost.
Then I think another aspect of it that's very interesting
is some of the social requirements for it too.
It's pretty clear that you can only go so far
just listening to these sounds from a tape recording
or something like that.
There's something extra about real human interactions
that activates the brain's sensitivity
to different speech sounds
and allows us to become specialized for them
for a given language.
- So returning to what's mapped,
what the representations are in the brain,
I'm starting to get a picture now
based on these plosives and these tha sounds.
And what I find so interesting and logical about that
is it maps to the motor structures
and the actual pronunciation of the sounds,
not necessarily to the meaning of the individual words.
Now, of course, it's related to the meaning
of the individual words,
but it makes good sense to me
why something as complex as language,
both to understand and to generate,
would map to something that is essentially motor in design
because, as you point out, I have to generate these sounds
and I have to hear them generated from others.
However, there's reading and there's writing,
and writing is certainly motor,
reading involves some motor commands
of the eyes, et cetera.
Where do reading and writing come into this picture?
Are they in parallel with, as we would say in neuroscience,
or are they embedded within the same structures?
Are they part of the same series of computations?
- Yeah, so to address the first part
is that we've got this map
of these different parts of consonants and vowels,
and when we look at how they lay out
in this part of the brain that we call Wernicke's area,
we've spent a lot of time really just dissecting this
millimeter by millimeter.
The term that you used is very apropos.
It's salt and pepper. It's not random.
There is this kind of selectivity
to these individual speech sounds.
And one point I want to make about it is this:
in English, for example,
there are about 40 different phonemes.
Phonemes are just consonants or vowels
or individual speech segments.
But these articulatory features that you refer to,
for example, the characteristic sounds
that are generated by specific movements in the mouth,
you can more or less reduce that
to about 12 different features.
Okay, these are specific movements of the tongue,
the jaw, the lips, the larynx.
There are about 12 of these movements,
and just like you said, Andrew,
by themselves, they have no meaning.
They're just movements.
But what's incredible about it
is that you take these 12 movements
and you put them in combinations
and you start putting them in sequence.
We as humans use that set of 12 features
to generate all words.
And because we can generate
nearly an infinite number of words
with that code of just 12 features,
we have something
that generates essentially all possible meaning
because that's what we do as humans, we generate meanings,
I'm trying to communicate one idea to another,
which, to me, is extraordinary.
A parallel would be, for example, DNA.
There are four bases in DNA,
but those four bases in a specific sequence
can generate an entire code for life.
And speech is the same way.
It's like you've got these fundamental elements
that, by themselves, have no meaning,
but when you put them together
give rise to every possible meaning.
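The combinatorial point can be made with a back-of-the-envelope sketch: a small, fixed set of elemental "moves", put into ordered sequences, already yields an enormous number of distinct forms. The feature names below are placeholders for illustration, not Dr. Chang's actual inventory.

```python
# Back-of-the-envelope illustration: a dozen elemental "moves", meaningless on
# their own, generate a huge space of sequences once order matters.
# The feature names are placeholders, not an actual linguistic inventory.
features = ["lips-close", "lips-round", "tongue-tip-up", "tongue-back-up",
            "jaw-open", "jaw-close", "voicing-on", "voicing-off",
            "velum-open", "velum-close", "tongue-front", "tongue-low"]

print(len(features), "elemental moves")
for length in range(1, 6):
    print(f"possible sequences of length {length}: {len(features) ** length:,}")
# Like DNA's four bases, a small alphabet in ordered sequence is enough
# to cover an effectively unbounded space of distinct forms.
```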
So with regard to your second point
about reading and writing, it's a fascinating question.
Speech and language is part of who we are as humans.
That's part of how we evolved,
and it's hardwired
and, you know, molded by experience.
Reading and writing are a human invention.
It's something that was added on
to the architecture of the brain.
And because reading and writing
are fairly recent in human evolution,
it's essentially too quick
for anything to, like, have a dramatic change
in, let's say, a new brain area
or some kind of specialization.
Instead, what happens is that whenever any kind of behavior
becomes ultra specialized in any of us or any organism,
we can sort of take some areas
that are normally involved with vision, for example,
and specialize them for the purpose of reading.
So all of us have a part of our brain
in the back of the temporal lobe
that interfaces with the occipital visual cortex
that we call a visual word form area.
There's actually a part of the brain that is very sensitive
to seeing words, like, either typed or handwritten.
There's a part of the brain that also is sensitive
to seeing things like faces.
So these are things that are all conditioned
on what's important, you know, to survive.
So reading and writing are an invention,
and they are things that have mapped
to functions that the brain already has.
And one of the really important things
about reading and writing
is that when we learn to read and write,
especially with the reading part,
it maps to the part of the brain
that we've been talking about,
which is the part that's processing speech sounds.
So some of us kind of think about these
as two different things.
One is hearing sounds through your ears,
the other is reading
where you're actually seeing things through your eyes
and then getting into the language system.
Well, it turns out that the auditory speech cortex
is the primal and primitive fundamental area
that's really important for speech,
and what happens with the reading
is once it gets through that visual cortex,
it's going to try to map those reading signals
to the part of the brain
that's trying to make sense of sounds,
the sounds of words, what we call phonology.
Now, why is this important?
It has a lot of relevance to how we learn to write.
And in some kids with dyslexia...
Dyslexia is a neurological condition
where a child, in some cases, an adult,
has trouble reading, for example.
And in many of those cases,
it's because that mapping from how we see the words
to the way that the brain processes the sounds
is something different,
a little bit different
than in people who can read really well.
So when you're reading, a lot of times,
you're actually activating the part of the brain
that is processing the words that you hear.
- What is the current treatment for dyslexia?
I've heard that it's a deficit
in some of the motion processing systems
of the visual system.
You know, people, their eyes are jumping
as opposed to more linear reading across,
or I suppose if it were Chinese it would be...
You know, I don't want to presume
people are always reading English.
Or I suppose if it's Hebrew,
they're going from the opposite side of the page.
What can be done for dyslexia?
And do any of the modern treatments for dyslexia
involve changing things from the speech side
as opposed to just the, quote, unquote, reading side,
given that speech and reading are interconnected?
- Yeah, absolutely.
So, again, I think in the beginning,
people might have thought
this was purely a visual abstraction
or something really just about the visual system,
but there's been more recognition
that it could be both or it could be either,
depending on the particular instance.
It's very clear that there are many kids with dyslexia
where the problem is a problem of phonological awareness.
So, you know, it can be very hard to detect
because they may understand the words that you were saying,
but because the brain is so good at pattern recognition,
sometimes even if the individual speech sounds
are not crystal clear, it can compensate for that,
so that you can have an individual who can hear the words
but not be able to essentially hear them
when they're reading those same words.
And so what can happen with that
is that you can have this disconnection
between what they're seeing
and what they need in order to hear it as words
and process it as language.
And so skilled readers
usually need that route first.
They've got to map the vision to the sound
in order to get that sort of like foundation.
But then over time, the reading has its direct connection
to the language parts of the brain,
and we don't necessarily always need to map to sounds.
You know, you can basically develop a parallel route,
and we, as readers, actually use both all the time.
So for example, if it's a new word
that you've never seen before,
sometimes you try to, like, pronounce it in your mind,
you know, and try to hear what that word is.
Even though you're not actually saying it,
you're trying to just generate
what those sounds might be like.
And that's the part where we're kind of relying
on how we learn to read in the first place,
which is mapping those word images
to the sounds that, you know, go along with them.
But in other times, if you're a really proficient reader,
you're just seeing the words
and you can map them directly to meaning
without having to go through that process.
- Yeah, I'm a big fan of listening to audiobooks,
and of course I also listen to podcasts quite a lot,
but I also am a strong believer,
based on the research that I've seen,
that reading books, physical books,
it could be on a Kindle, I suppose,
but reading a physical book is useful
for being able to articulate well and structure sentences
and build what are essentially paragraphs,
which is what I'm required to do
when I do solo episodes of the podcast.
I've noticed over the years
as text messaging has become more popular
and there's essentially an erosion of punctuation
or the need to have complete sentences,
and now that's sort of transferred to email as well.
It's become acceptable
to just say, you know, fragmented sentences in email.
It seems likely that it's starting to impact
the way that people speak as well.
And I don't think this has anything to do with intelligence
or education level,
but are you aware of any evidence
that how we read and what we read
and whether or not we consume information
purely through reading or mainly through auditory sources,
does it change the way that we speak?
Because, after all, Wernicke's and Broca's area
and the other auditory and speech production areas
are heavily intermeshed,
and so it would make perfect sense to me that what we hear
and the patterns of sound that are being communicated to us
would also change the way that we speak.
- Yeah, that's a really fascinating point.
There is this idea
that there's, like, this proper way to speak,
like that there's the right way, for example,
what are the appropriate, you know...
Like, for example, in school, you're oftentimes told, like,
"You should say it like this,
not say it like that," you know?
And every language kind of has that.
It turns out that that's really unnatural.
Languages, and speech in particular, change over time,
it evolves, and it can happen very quickly.
You know, the things that we call dialects, for example,
are just different ways of speaking,
and someone can just be in one environment
and change from one dialect to another,
and for some people, it kind of is really fixed.
And there is this idea that, you know, like in school
that we're told that there's this right way,
but in reality, that's not true.
Like, language change and speech change
is completely normal and happens all the time,
and it can be really dramatic.
Like, certain cultures and communities,
if they are isolated,
they can develop a whole new language,
a whole new set of words, for example,
and new ways of speaking and dialects that are independent,
to the point where it's unintelligible even to others.
And so the basic idea is that sound change
is part of the way it works,
and the brain is very sensitive to those kind of changes.
- Speaking of learning new languages,
I'm assuming it's possible to learn new languages
throughout the lifespan, correct?
- [Eddie] Yeah.
- I've also heard these kind of fantastical stories
of somebody has a stroke
and then suddenly, spontaneously, can speak French fluently,
whereas prior to the stroke, they could not.
Is there any merit to those stories whatsoever?
[Eddie laughs]
I find it very hard to believe
that there was a complete map representation
of a language in somebody's brain
that they were completely unaware of,
and then because of damage to a brain area,
that capacity to speak that language was somehow unveiled.
It just seems too wild,
and I don't want to say too good to be true
because nobody wants a stroke,
but it just seems outrageously implausible.
- Well, there are aspects of that
that certainly are implausible.
So I don't know of any true case that I've ever seen
or experienced myself or even read about
where, for example, there was an injury to the brain
that resulted in loss of,
well, essentially a gain of function,
meaning, like, just all of a sudden
started speaking another language.
So for example, if you had a stroke
and you never spoke French,
and then you had it
and then all of a sudden you're speaking,
that, I've never heard of and never seen.
However, there is a condition that is well acknowledged
and I have seen one case of this
called a foreign accent syndrome,
which is peculiar because there are people
who have an injury to the part of the brain
where it sounds like they're starting
to speak this other language,
but they're not actually speaking the language,
it just sounds like it.
And this goes back to what we were talking about earlier
about these areas that are really important
for speech control of the vocal tract,
this area in the precentral gyrus.
People have documented
where, you know, patients have had strokes there,
and after that, it sounds like they're speaking Spanish
as opposed to English
or it sounds like they have the intonational properties
of French or Russian
as compared to their original native language.
They're not learning all the rest of it,
like the meaning and the grammar, et cetera,
but they're adopting some of the phonology,
and part of that is just because it's not working
the way it normally does.
So there is something
actually called a foreign accent syndrome
that people can have after a stroke.
- Interesting. I'm curious about auditory memory.
When I was a kid, I used to get into bed at night,
and I'd close my eyes and I would replay conversations
that I had heard during the day or people's voices.
I actually can remember calling your house
when we were young kids,
and because I don't speak any Chinese
but I'd have to ask for you,
I'd say, I think it was, Eddie [speaking Chinese].
- Yeah.
- Yeah, and then whoever answered the phone
would go get you, and then I'd say [speaking Chinese],
which I believe means thank you, right?
That's the total of the Chinese that I speak, by the way,
but I will never forget that.
I'll just never forget it, I hope.
I suppose if I have a stroke or something of that sort,
at some point, I'll forget it
and I won't know that I've forgotten it.
But in all seriousness, I remember that to this day.
I couldn't spell that out, I wouldn't know how,
certainly not in Chinese,
but even a transliteration,
I couldn't do using English letters.
Where are memories of sounds stored?
Because within our days and across our lives,
we have an infinite number of auditory experiences,
just like we have an infinite number of visual experiences.
Where are they stored,
and what is the structure of their storage?
What am I calling upon,
besides, of course, the motor commands
that are required to say what I just said in Chinese,
which I won't repeat again,
'cause I somehow managed to get it right the first time,
or at least not terribly wrong,
then I don't want to botch it the second time.
Where is that stored, and how does that work?
And, more importantly,
as I speak my native language, English,
am I pulling from a memory bank?
Because it doesn't feel like it.
I'm just telling you what I want to say.
I'm doing my best to communicate clearly and succinctly.
I'm usually not so good at the succinct part.
But where is the bank of information?
On my keyboard on my computer, I have the letters,
and I have certain elements of punctuation and the space.
What am I pulling from? Am I pulling from those plosives?
But if so, how can I do it so quickly?
Even for people that speak slowly,
it appears more or less fluid.
This, to me, is overwhelmingly impressive
that the brain can do that.
How does it do that?
- Well, first of all,
I am impressed that 35 years later... [laughs]
- Well, I had to get ahold of you. [laughs]
- Yeah, so I am impressed, 35 years later,
that you can still remember that.
- But only that.
- That's fine, but I'm still very impressed.
But it clearly was something important to you.
So the short answer is that memory is very distributed.
So it's almost like the question that you asked me
is ill posed 'cause you asked me where?
Well, it's not one specific area.
It's actually really distributed.
It's not just one particular area.
In fact, I'm fairly certain
that if we were to injure that part of the brain
called Wernicke's area,
you may still even have memories of that.
People can have injuries of Broca's area
or certainly the precentral gyrus
and be able to sing "Happy Birthday," for example,
when it's embedded in melody
or highly rehearsed things like counting
despite not being able to speak, which is incredible, right?
It's like you can see a patient,
for example, who can't really put together a sentence.
You ask them, "How are you feeling today?"
And they can't even utter a word.
But then you ask them to count sometimes,
and they'll get up to any number really.
And so there are some things
that are really built into our motor memory
and it's distributed.
It's not one particular part of the brain,
it's actually multiple areas
where that memory is distributed.
And thank God that's the way it is
because it's very rare
in the kind of surgeries that I do
where you go in, you remove a piece of the brain,
that someone forgets these kind of long-term memories
or these long-term motor skills that they have.
That's very, very rare.
It's the number one question a patient will ask me,
like, "Am I going to be the same?
And am I going to remember, you know, my wife?
Or am I going to remember,
you know, these thoughts of my birthday
when I was 10 years old?"
And I've never really seen that kind of severe amnesia
unless it's a very, very severe injury
that involves almost the entire brain, and thank God.
So a lot of that information
is really distributed across the entire brain.
- Speaking of storage of and ability to speak,
you are doing some amazing work
and have achieved some pretty incredible,
well-deserved recognition for your work
in bringing language out of paralyzed people,
essentially allowing people
who are locked into a paralyzed state
or otherwise unable to articulate speech
using brain-machine interfaces,
essentially translating the neural activity
of areas of the brain that would produce speech
into hardware,
wires and things of that sort,
artificial, non-biological tools
in order to allow paralyzed people to communicate.
We will provide a link
to some of the popular press coverage of that work
and the original papers.
But if you would be so kind
as to tell us what those experiments look like,
who these people are who are locked in
and that you allow to communicate,
and then especially interesting to me,
some of the directions that you're taking this now,
which is beyond just, you know, people being able to think
about what they want to say
and words coming out on a screen or through a microphone,
but actually making the interactions
between these people and the real world
more elaborate and more real.
If that seems mysterious to people,
I'm going to let Eddie tell you what they're doing with this
rather than put any more detail on it.
- Oh, okay. Well, thanks for asking about this.
This has really been some of the exciting recent work
from the lab.
So for the last decade,
we've really been focusing on the basic science,
meaning trying to understand
how the brain extracts and produces speech sounds and words.
We've done a lot of work trying to figure out
how these parts of the brain
control these individual elements
that give rise to all words and meanings.
And so it was about six years ago
where we realized we actually have a pretty good idea
of how this code works.
We had identified all of these different elements
that we could decode in epilepsy patients, for example,
when they had electrodes on the brain
as part of their surgeries,
we could decode all of the different consonants
and vowels of English.
That was about six years ago.
So a natural question was this,
which is if we understand that electrical code,
can we use that to help someone who is paralyzed
and can't get those signals out of the brain
to speak normally?
And that's in the setting of people who are paralyzed.
So there are a series of conditions,
they include things like brain stem stroke.
The brain stem is the part of the brain
that connects the cerebrum, which is the top part
that does our thinking and a lot of the motor control,
speech, language, everything,
to the spinal cord and the nerves
that go out to the face and vocal tract.
So if you have a stroke there,
basically, you could be thinking all the wild, creative,
intelligent thoughts you have in the mind and the cerebrum,
but you can't get them out into words
or you can't get them out to your hand to write them down.
So that's a very severe form of paralysis
called brain stem stroke.
There's another kind of condition
that we call neurodegenerative
where the nerve cells die basically, or atrophy,
in a condition called ALS,
and that's a very severe form of paralysis.
In its extreme form,
people essentially lose all voluntary movement.
- So Stephen Hawking would be a good example
of someone with ALS, Lou Gehrig's disease?
- He's an example of someone who had ALS
but not a great example of the typical course of ALS.
So for reasons not clear,
the progression of his disease largely stabilized
to the point where he could twitch, you know, a cheek muscle
or move his eyes, let's say.
In most people, it's very rapid,
and many people, they die from it, actually,
you know, within a couple of years of diagnosis, so-
- Yeah, he lived a long time in that-
- He lived a long time-
- That slanted-over state
in his wheelchair. - Right, exactly.
But he wasn't breathing,
you know, through a tube in his throat, for example,
because people with severe ALS,
the muscles to their diaphragm and their lungs
essentially give out as well.
They get weakness there and then they can't breathe anymore.
So that's another form of paralysis.
And so in our field,
these are kind of like the most devastating things
that can happen.
I'm not going to really try to compare, like, what's worse,
you know, having a brain tumor or a stroke, it's all bad.
But this condition of what we call being locked in
refers to this idea
that you can have completely intact cognition
and awareness but have no way to express that,
no voluntary movement, no ability to speak.
And that is devastating
because psychologically and socially,
you know, you're completely isolated.
That's what we call locked-in syndrome,
and it's devastating.
I've seen that throughout my career,
and it's really heartbreaking
because you know that the person is there
but you can't see it, they can't communicate.
So we've been studying this patterning
of electrical activity for consonants and vowels,
and essentially, once we figured out a lot of these codes
for the individual phonetic elements,
we took a little bit of a detour,
or at least part of the lab
started to focus on this very specific question.
For people who have these kind of paralysis,
could we intercept those signals
from the brain, the cerebral cortex,
as someone is trying to say those words,
and then have them taken out of the brain through wires
to a computer that is going to interpret those signals
and translate them into words?
So about three years ago,
we started a clinical trial.
It's called the BRAVO trial. It's still underway.
And the first participant in the BRAVO trial
was a man who had been paralyzed for 15 years.
When he was about 20 years old,
he came to the United States,
was actually working in the Sonoma area
and he was in a car accident,
and he actually walked out of the hospital
the day after that car accident,
but the next day, had a complication related to it
where he had a very large stroke in the brain stem,
and that turned out to be devastating.
He didn't wake up from that stroke for about a week.
He was in a coma for about a week.
And when he woke up from that coma,
he realized that he couldn't speak or move his arms or legs.
And as he told me or communicated to us,
that was absolutely devastating.
He wanted really to die at that time.
- Could he blink his eyes or move his mouth in any way?
- He could blink his eyes,
he had some limited mouth movements
but couldn't produce any intelligible speech.
It was, like, completely slurred and incomprehensible.
And he survived this injury.
A lot of people who have that kind of stroke
just don't survive.
But he survived.
And I also realized that he's just an incredible person,
like a force of nature in terms of his optimism,
in terms of his ability to make friends
despite his condition.
The way he actually communicates,
because he has a little bit of residual neck movements,
is that he improvised and had his friends
basically put a stick attached to his baseball cap.
And because he could move his neck,
he would essentially type out letters on a keyboard screen
to get out words.
In fact, this is how he communicated was through a device
that he would essentially peck out letters one by one
by moving his neck
to control this stick attached to his baseball cap.
- How many years did he use that method of communication?
- [Eddie] For about 15 years.
He hadn't really spoken for about 15 years.
- Oh, goodness.
- Yeah. So it was a devastating injury.
But, you know, there's something to be said
about the human spirit,
and if there's anyone who embodies it, it is Pancho,
that's his nickname, the first participant in our trial.
He has that human spirit, he persevered,
and, in fact, you know, could thrive
in his community, basically, and with friends,
being able to communicate
in this very slow and inefficient way.
Maybe part of that spirit is why he volunteered
to be the first person in this trial.
It was a clinical trial, an experiment. It was a study.
This is not an approved therapy by any means.
This was really something that had not been done before,
and we had a lot of ideas about it, but we didn't know.
You know, we had proven a lot of this could be true
in some people who are normally speaking,
but to actually put it into someone who's paralyzed,
number one, where we don't know if the code is the same.
Number two, in someone who's not been speaking for 15 years,
whether those signals are actually still there or not.
So it was part of a clinical trial.
It was, you know, something that our hospital
and also the FDA, you know, had to approve
and looked at very carefully,
but given a lot of the work that we had done,
there was some basis for why this might work.
And so about two and a half years ago,
we did a surgery where we implanted electrodes
onto the parts of the brain that we've been talking about,
these areas that control the vocal tract,
the areas that control the larynx,
the areas that control the lips and tongue and jaw movements
when we normally speak.
These are areas that presumably may be active,
that was our hope, in his brain,
but he just couldn't get those out
to control his mouth in a normal way.
And he underwent a surgery,
a brain surgery where we put an electrode array
and we connected it to a port that was screwed to his skull.
And the port actually goes through his scalp,
and he's lived with this now for the last three years.
There is a risk of infection.
These ports eventually have to become wireless
in the future.
But we've figured out a way to keep that port there
where we can essentially connect him to a computer
through that port.
So he has an electrode array that's implanted
over the part of his brain that's important for speech,
it's connected to a port,
and then we connect a wire to that port that translates
those what we call analog, you know, brainwaves
and converts them into digital signals.
And then a computer takes those digital signals
from those individual sites from the speech cortex
and translates those into words.
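Schematically, a decoder of this general kind has to go from raw recorded voltages to a word: filter the signal into a band that tracks local cortical activity, summarize it per electrode, and hand the resulting features to a trained classifier. The sketch below shows only that schematic shape with invented parameters; the sampling rate, electrode count, band edges, and the classifier object are all assumptions, and the actual system described here is far more sophisticated.

```python
# Schematic only: the general shape of a neural speech decoder
# (filter -> per-electrode features -> classifier -> word).
# Sampling rate, electrode count, band edges, and the classifier are all
# assumptions for illustration; the real system is far more sophisticated.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000            # assumed sampling rate of the recorded signal, in Hz
N_ELECTRODES = 128   # assumed number of recording sites

def extract_features(window, fs=FS):
    """Average high-gamma-band (~70-150 Hz) power per electrode for one
    window of recordings shaped (electrodes, samples)."""
    sos = butter(4, [70, 150], btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, window, axis=1)
    return np.mean(filtered ** 2, axis=1)

def decode_word(window, classifier, vocabulary):
    """Map one window of brain activity to the most likely word in a
    small, fixed vocabulary, using a previously trained classifier."""
    features = extract_features(window).reshape(1, -1)
    probabilities = classifier.predict_proba(features)[0]
    return vocabulary[int(np.argmax(probabilities))], probabilities

# Usage sketch (the classifier would be trained on many labeled attempts):
#   word, probs = decode_word(window, trained_classifier, ["hello", "water", ...])
```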
- Can you describe for us the first time that Pancho spoke
through this engineered device?
What was that experience like for you?
And, at least from what he conveyed to you,
what was that experience like for him?
Because as somebody who was essentially locked-in,
except for this, you know, rather crude pecking device,
although I'm thoroughly impressed by how adaptive
or adaptable Pancho was,
and his friends engineering that device for him
is really nothing short of clever
because otherwise he would be truly locked in, right?
But what was that moment like? I can only imagine.
- That moment was incredible.
It was truly incredible
to be able to see him try to get out a word
that was, for all practical purposes, unintelligible,
but to be able to take the brain activity
and to translate it into text on a screen.
And that's what we did.
We took those brainwaves,
we put them through a machine learning
or artificial intelligence algorithm
that can pick up these very, very subtle patterns,
you can't actually see them with your eye,
in the brain activity and translate those into words.
And I remember seeing this happening for the first time.
You know, it doesn't happen, like, immediately.
This is something that took weeks to train the algorithm
to interpret it correctly.
But what was incredible about it was to see how he reacted.
And he would be prompted to say a given word,
like, you know, outside, for example,
and then he would think about it, try to say it,
and finally those words would appear on the screen.
And what was really amazing about it
was you could really tell
that he, like, got a kick out of that
because he would start to giggle,
you know, his body would shake in a way
and his head would shake in a way
that he would start to giggle, and that was cool to see.
But then I also realized that when he was giggling,
it kind of screwed up the next word's decoding.
[Andrew laughs] [Eddie laughs]
- Is that a bug you've since fixed?
- No, we haven't fixed that.
- Interesting.
- We haven't fixed that,
so it's easier just to tell him to stop giggling.
[Andrew laughs] [Eddie laughs]
- So what was the first word that he said?
- Well, I think one of the first sentences
that he put together
was, you know, can you get my family outside?
- Meaning get them out of the room? [laughs]
- [Eddie] No, no.
- All these years, he wanted to get away from his family?
- No, I think what he meant was, can you get them-
- Bring them in, yeah. - Bring them in.
And so the way this worked
was we trained this computer to recognize 50 words.
We started with a very small vocabulary
that's expanding as we speak.
I think that this is just a matter of time
before these vocabularies become much, much larger.
But we started with a set of 50 words.
We created essentially all the possible sentences
that you could generate from those 50 words.
Why that was important
was you can use all those possible sentences
to create a computational model, a computer model
of all the different word combinations
to give different sentences given those 50 words,
and then you can essentially do what we call autocorrect.
It's the same kind of thing that we do
when you're texting, for example.
You get the wrong letter in there,
but your phone actually knows,
you know, because of its context, what to correct it to.
So because the decoding's not 100% correct all the time,
in fact, it's far from that,
it's really helpful
to have these other features like autocorrect,
the stuff that we use routinely now with texting
that makes it correct and then updates it.
So it's a combination of a lot of things.
It's the AI that is translating
those brain activity patterns,
but it's also things that we've learned
from speech and speech technologies
that, you know, we put all together
and then all of a sudden, it starts to work.
And so we were really excited
because that was the first time that someone was paralyzed
and could create words and sentences
that were just decoded from the brain activity.
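The autocorrect idea can also be sketched in miniature: combine the decoder's per-word probabilities with a simple language model over the allowed vocabulary and keep the best-scoring word sequence. Everything below, the vocabulary, the probabilities, the bigram numbers, is invented for the example; it is a toy version of the idea, not the trial's actual software.

```python
# Toy illustration of language-model "autocorrect" over a small vocabulary:
# combine the decoder's per-word probabilities with how likely each word is
# to follow the previous one, and keep the best-scoring path (a tiny Viterbi).
# Vocabulary, probabilities, and bigram numbers are invented for the example.
import numpy as np

vocab = ["can", "you", "get", "my", "family", "outside"]
# Decoder output: for each time step, a probability over the vocabulary.
decoder_probs = np.array([
    [0.5, 0.1, 0.2, 0.1, 0.05, 0.05],   # probably "can"
    [0.2, 0.4, 0.2, 0.1, 0.05, 0.05],   # probably "you"
    [0.1, 0.2, 0.4, 0.1, 0.1,  0.1 ],   # probably "get"
])

# Bigram language model: P(next word | previous word), invented numbers.
bigram = np.full((len(vocab), len(vocab)), 0.05)
bigram[0, 1] = 0.6   # "can" -> "you"
bigram[1, 2] = 0.6   # "you" -> "get"
bigram /= bigram.sum(axis=1, keepdims=True)

def viterbi(decoder_probs, bigram):
    """Best-scoring word sequence given per-step decoder probabilities
    and a bigram model over the same vocabulary."""
    T, V = decoder_probs.shape
    score = np.log(decoder_probs[0])
    back = np.zeros((T, V), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + np.log(bigram) + np.log(decoder_probs[t])[None, :]
        back[t] = np.argmax(cand, axis=0)
        score = np.max(cand, axis=0)
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [vocab[i] for i in reversed(path)]

print(" ".join(viterbi(decoder_probs, bigram)))   # expected: "can you get"
```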
- Incredible, and I know you're very humble,
but I'm going to embarrass you by saying
I always knew you were destined for great things
since the early age of nine when we first became friends.
But when I read the news coverage of your work with Pancho
and the release of this language
from this locked-in patient,
it literally, you know, it brought tears to my eyes
because, you know, it's an interesting thing
as fellow neuroscientists, right,
we explore the brain and we try and find mechanisms,
and we try and compare those to what other people find
and find truths and principles and build up from those,
but pretty rarely is there a case
where that route of exploration
leads to something of clinical significance
within one's own lifetime.
I mean, that's the reality of science,
and oftentimes it's a very distributed process.
But in this case, it's been a magnificent thing
to see you move along this trajectory
parsing these language and speech areas
and then to also do the clinical work in parallel.
Speaking of which, these days,
we hear a lot about Neuralink, Elon Musk's company.
A neurosurgeon who came up briefly through my lab,
though I can't take any credit for what he knows or does,
Matt MacDougall, is the neurosurgeon at Neuralink.
There are some other excellent neuroscientists
and engineers there.
We hear a lot about Neuralink
because while brain-machine interface
of the sort that you do and that other laboratories do
has been going on for a long time,
there's been some press around Neuralink
about the promise of what brain-machine interface could do.
For instance, early in our discussion,
you talked about how, you know, language is constrained
by these sound waves,
and typically, it's a few people communicating
or one person with many people
through a podcast, for instance, or a speech.
But the idea has been thrown out there
that through the use of stimulating chips
or through other brain-machine devices
that perhaps one could internalize
50 conversations in parallel, right?
50X communication,
or that the memory systems could be augmented
to remember 10 times as much information
or even twice as much information in a given period of time.
My understanding of what they're doing at Neuralink,
which is admittedly crude, from the outside,
and based on a few discussions with people there,
is that they too are going to pursue clinical goals first,
things like trying to generate smooth movement
in a Parkinsonian patient,
trying to adjust movement patterns
in someone with Huntington's disease, for instance,
things of that sort
before they embark on the more sci-fi-like explorations
of 50Xing communication or doubling memory capacity
and these kinds of things.
Although, I don't know,
they may be doing all of those things in parallel.
What are your thoughts about super capabilities of the brain
or, I don't even know what word to use,
you know, supercharging the brain,
you know, giving the brain functions
that we've never observed before
in human history, right?
We have our Einsteins and our Feynmans and our Merzenichs
and, you know, it's unclear
who to put along that line side by side,
but there are some, the Michael Jordans et cetera,
but we've never heard of or seen somebody
who can jump 20 feet in the air.
Or we've heard of people who have photographic memories,
but I don't know that we are aware
of any human being in history
who could memorize the entire Library of Congress
or all the works within the Vatican within an hour.
Anyway, you get the idea.
What are your thoughts about manipulating neural circuitry
to achieve suprahuman or superhuman
or supraphysiological functions?
Are we there, or should we even be thinking about that?
Is it possible given that neurons simply communicate
through electrical activity
and electrical activity can be engineered
outside of the brain?
How do you think about it?
And here, we don't even have to think
about Neuralink in particular.
It's just but one example of companies
and people in laboratories
that are quite understandably considering all this.
- Well, it's a really interesting time right now.
The science has been going on for decades.
The work that we've done
in this field that you call brain-machine interface
has been going on for a while.
And a lot of the early work
was just trying to restore things like arm movement
or having people or monkeys
control a computer cursor, for example, on the screen.
That's been going on for decades.
What's been really new is that industry is now involved
and some of this is now becoming commercialized,
and we're starting to see it cross over into a space
where it's no longer just research,
where we're talking about medical products that are designed
to be, you know, surgically implanted in some cases.
You know, there are people doing this kind of work
non-invasively as well, with approaches that don't require surgery.
The specific question that you are asking about
is an area that we call augmentation.
So can you build a device
that essentially enhances someone's abilities
to supranormal levels: super memory,
communication speeds beyond speech, for example,
or, I guess, superior precision in athletic abilities?
I think that these are very serious kinds of questions
to be asking now
because, as you mentioned, the pathway so far
is really to focus on these medical applications.
I personally don't think that we've thought enough actually
about what these kind of scenarios are going to look like,
and I don't think we've thought through
all the ethical implications of what this means
for augmentation in particular.
There's part of this that is not new at all.
Humans throughout history have been doing things
to augment our function,
coffee, nicotine, all kinds of things,
all kinds of medications that cross over
from medical to consumer.
That is everywhere.
So the pursuit of augmentation or performance or enhancement
is really not a new thing.
The questions really,
as they relate to neurotechnologies,
have to do with their invasive nature:
for example, whether these technologies
require surgery
to do something that is not for a medical application.
Again, there, that is not exactly new territory either.
People do that routinely for cosmetic procedures
for physical appearance, not necessarily cognitive ones.
So I do think that, provided the technology
continues to emerge the way that it does,
that it's going to be around the corner,
and it probably is not going to be in ways
that are super obvious.
I don't think it's going to be like,
can we easily memorize every fact in the world;
it's going to come in forms that are much more incremental
and maybe more subtle.
In many ways, we already have that now.
Like, for example, you don't have to have a neural interface
embedded in your brain to get information,
essentially access to all information in the world.
You just have to have, you know, your iPhone.
Whether you could do it faster through a brain interface,
I definitely wouldn't rule that out.
But think about this,
that the systems that we have already
to speak and to communicate
have evolved over, you know, thousands and millions of years
and they're supported by neural structures
that have bandwidth of millions of neurons.
There is no technology that exists right now,
whether in the commercial forms people are thinking about
or even in research labs,
that comes anywhere close
to what has evolved for those natural purposes.
So I'm essentially saying there are two sides to this.
One is that we're already getting into this now;
this topic of augmentation, both physical and cognitive,
is not new territory,
and in some ways we've already crossed that line.
That's part of what humans do in general.
But we are entering this area of, like, enhanced cognition,
areas where I think the technology
is going to be the rate-limiting step in how far this can go,
and we have not had the full conversations
about, number one, is this what we actually want?
Is this going to be good for society?
Who gets access to this technology?
These are all things
that are going to become real-world problems.
- There's certainly a lot to consider,
and it ties into augmentation
and another theme that I've yet to ask you about
but am extremely curious about,
which is facial expressions.
Before we talk about the relationship
between the musculature of the face and language
and the communication of emotion,
I'd love for you to, if you would, touch on a little bit
of what you're doing with patients like Pancho
to move beyond somebody who's locked in
being able to type out words on a screen
with their thoughts.
There's a rich array of information
contained within the face and facial expression.
And while somebody like Pancho
going from having to, you know, be completely locked in
to being able to peck out letters on a keyboard
to being able to just think of those letters
and having them spelled out,
that's a tremendous set of leaps forward towards normalcy.
It's still far and away different
from Pancho speaking with his own mouth,
which I think, knowing some people who are restricted,
who are quadriplegic,
you know, a lot of what they struggle with in the real world
is actually a height difference sometimes
because they're seated while other people are standing.
We don't often think about this,
but to always have to look up to communicate with people
is a very different interface in the world.
They manage quite well, of course.
But could you tell us what you're doing
in terms of merging the brain-machine interface
with extraction of speech signals
from people who are locked-in like Pancho
with facial expressions?
- Sure, yeah.
Well, like we described before, progress is being made.
The proof of principle is out there
that you can decode speech.
That will continue to optimize,
and I'm very confident that that's going to improve
very, very quickly in the coming years
to the point where it's like,
you know, not just a small vocabulary,
but a large vocabulary and at reasonable rates,
at a level that's going to be really helpful.
I'm very optimistic about that.
I think it's the right time
to start really thinking about a broader vision
of what communication really is.
So for example, I'm here with you in person.
We could have done this virtually, probably;
it's pretty easy to do that.
We could've recorded this separately,
but there is something
about being able to actually see your expressions
and to understand other forms of communication.
So another really important one is nonverbal,
the expressions that you're making.
You know, for example,
if you have a quizzical look on your face
if I'm saying something that's not clear,
that's a sign to me that I need to rephrase it
or to say it in a different way
or to slow down, for example.
Or if there's something that really excites you,
I want to continue to say more about it
and talk more in detail,
you know, essentially about a given thing.
So facial expressions actually are a really important part
of the way we speak, and there are two things.
It's not just the expressions
of, like, how you're feeling and perceiving what I'm saying,
but it's also seeing my mouth move
and your eyes actually seeing my mouth move
and my jaw move in a particular way
that actually allows you to hear those sounds better.
So having both the visual information
but also the sounds go into your brain
is going to improve intelligibility and also make it more natural.
- And memory for what is spoken?
- Perhaps.
- So here's a call for people not just listening to podcasts
but watching them and listening to them on YouTube,
I suppose, if we were to sort of translate this
to the real world. - Exactly, exactly.
And the reason why we're also very interested
in this idea of not just having text on a screen,
but essentially a fully computer-animated face,
like an avatar of the person's speech movements
and their facial expressions,
is that it's going to be a more complete form of expression.
Now, you can imagine right now,
that might just be someone looking at a computer screen
interpreting these signals,
but I think the way things are going,
in the next couple of years,
a lot more of our social interactions, more than even now,
are going to move into this digital virtual space.
And, of course, most people are thinking
about what that means for most consumers,
but it also has really important implications
for people who are disabled, right,
and how they are going to participate in that.
And so we are thinking really about,
for people like Pancho and other people who are paralyzed,
what other forms of BCI can we do
in order to help improve their ability to communicate?
So one is essentially building out more holistic avatars,
you know, things that can decode
their expressions
or the movements associated with their mouth and jaw
when they actually speak, to improve that communication.
- So do you envision a time not too long from now
where instead of tweeting out something in text,
my avatar will,
I'll type it out, but my avatar will just say it,
it'll be an image of my avatar saying whatever it is
I happen to be tweeting at that moment.
- That's what we're working on, yeah.
So I don't think that...
That is going to happen and it's going to happen soon,
and there's a lot of progress in that.
And, again, we're just trying to enrich the field,
you know, of communication expression
to make it more normal.
And we actually think that having that kind of avatar
is a way of getting feedback to people learning how to speak
through a speech neuroprosthetic,
which is what we call the device,
and that that feedback is going to be the way to help people
learn how to do it the quickest.
Not necessarily, like, trying to say words
and having it come on a screen,
but actually have people embody,
feel like it's part of themselves
or that they are directly controlling
that illustration or animation.
- This idea of an avatar
speaking out what we would otherwise write
is fascinating to me.
On Instagram, I post videos, I don't filter them,
but I know there's a lot of discussion nowadays
about people using filters to make their skin look different
or the lighting look different,
a lot of filtering and also the use of captions,
so that essentially what you end up with
is somewhere between an actual raw video of what was spoken
and an avatar version of it.
- Yeah. - I mean, if the mismatch
between what's spoken and what's in the caption
is too dramatic, then it doesn't quite work.
But I watch these carefully when people use captions,
and oftentimes there's a smoothing
of what was said into the captions
so it seems much more succinct and accurate.
Oftentimes, the reverse is also true
where the caption is inaccurate
and then it creates this kind of jarring mismatch.
In any case, I think this aspect in the clinical realm
of using an avatar to allow people like Pancho
to essentially be a face
that communicates through spoken language
from an avatar that looks like them
is fascinating and indeed important,
and I think how avatars emerge in social spaces
is going to be really fascinating.
I get a lot of questions about stutter.
I think that for people who have a stutter,
it is itself anxiety-provoking.
Is stutter related to anxiety?
If one has a stutter, what can they do?
Does stutter reflect some underlying neurologic phenomenon
that might distinguish
between one kind of stutter and another?
What can people with stutter do
if they'd like to relieve their stutter?
- Yeah, great question.
Stutter is a condition
where the words can't come out fluently.
So you have all the ideas, you've got the language intact.
You know, remember, we talked about this distinction
between language and speech.
Stuttering is a problem of speech, right?
So the ideas, the meanings, the grammar, it's all there,
and people stutter
but they can't get the words out fluently.
So that's a speech condition,
and, in particular, it's a condition
that affects articulation,
specifically controlling the production of words
through these really coordinated kinds of movements
that have to happen in the vocal tract
to produce fluent speech.
And stuttering is a condition
where people have a predisposition to it,
so there is an aspect of stuttering where
you are a stutterer or you're not a stutterer, right?
But people who stutter don't stutter all the time either,
so you could be a stutterer
who stutters at some times but not others.
And, really, the main link between stuttering and anxiety
is that anxiety can provoke it and make it worse.
That's certainly true,
but it's not necessarily caused by anxiety.
It can essentially trigger it or make it worse,
but it's not the cause of it, per se.
So the cause of it is still really not clear,
but it does have to do with these kind of brain functions
that we've been talking about earlier,
which is that in order to produce normal fluent speech,
we're not even conscious of what is going on
in our mouths, in our larynx.
We're not conscious,
and if we were, we would not be able to speak
because it's too complex, it's too precise.
It's something that we have really developed
the ability to do, and we do it naturally, right?
It's part of our programming
and part of what we learn inherently,
you know, just through exposure.
So stuttering is essentially a breakdown
at certain times
in that machinery being able to work
in a really coordinated way.
You can think about, you know, the operations of these areas
that are controlling the vocal tract.
Let's say speech is like a symphony.
In order for it to come out normally,
you've got to have not just one part, the larynx,
but the lips, the jaw.
They can't be doing their own thing.
They have to be very, very precisely activated
and very, very precisely controlled
in a way to actually create words.
And so in stuttering,
there's a breakdown of that coordination.
- If somebody has a stutter,
is it better to address that early in life
when there's still neuroplasticity that is very robust?
And if so, what's the typical route for treatment?
I have to imagine it's not brain surgery typically.
I'm guessing there are speech therapists
that people can talk to,
and they can help them work out where they're getting stuck
and the relationship to anxiety.
- Yeah, exactly.
I mean, part of it is about that anxiety,
but a lot of it really has to do with therapy
to sort of like work through
and think of tricks basically sometimes to create conditions
where you can actually get the words to come out.
Some forms of stuttering are really initiation problems.
Just getting started itself is very hard.
You want to start with an initial vowel or consonant,
but it just won't come out.
And so a lot of the therapy is really just focusing
on, like, how do you create the conditions,
you know, for that to happen?
There's another aspect to it that I find very interesting,
which is the feedback,
essentially, what we hear ourselves say, for example,
and every time that I say a word,
I'm also hearing what I'm saying,
so that's what we call auditory feedback.
That turns out to be very important,
and sometimes when you change that,
it can actually change the amount someone stutters
for better or for worse.
And it's giving us a clue that the brain
is not just focused on sending the commands out,
but it's also possibly interacting
with the part that is hearing the sounds,
and there's something that might be going on in that connection
that breaks down when stuttering occurs.
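[Editor's note: one well-known example of "changing that feedback" is delayed auditory feedback, where a speaker hears their own voice played back a fraction of a second late, which can alter fluency for some people who stutter. Below is a minimal, hypothetical sketch of such a delay loop using the python-sounddevice library; the delay value and structure are illustrative only, not a clinical tool, and headphones are assumed so the output doesn't feed back into the microphone.]

import collections
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16000      # audio sample rate in Hz
BLOCK_SIZE = 256         # samples handled per audio callback
DELAY_SECONDS = 0.075    # target delay; actual delay rounds to whole blocks

# Pre-fill a FIFO of silent blocks so the output lags the input by the delay.
delay_blocks = max(1, int(DELAY_SECONDS * SAMPLE_RATE / BLOCK_SIZE))
fifo = collections.deque(
    [np.zeros((BLOCK_SIZE, 1), dtype="float32") for _ in range(delay_blocks)]
)

def callback(indata, outdata, frames, time, status):
    # Push the newest microphone block in, play back the oldest one:
    # the speaker hears their own voice, delayed by a fraction of a second.
    if status:
        print(status)
    fifo.append(indata.copy())
    outdata[:] = fifo.popleft()

# Run the delayed-feedback loop for ten seconds.
with sd.Stream(samplerate=SAMPLE_RATE, blocksize=BLOCK_SIZE,
               channels=1, dtype="float32", callback=callback):
    sd.sleep(10_000)

[End of editor's note.]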
So there are individuals that are stutterers
but they don't stutter all the time.
In those instances,
there's something happening in those particular moments
where this very, very precise coordination
needs to happen in the brain
in order to get the words out fluently.
- We've talked a little bit about caffeine
and why you avoid it.
Because your work requires such precision and calm,
and, frankly, to me,
it seems like you're running a lot of operations,
no pun intended, in parallel when you're doing surgery,
not just thinking about where to direct the instruments,
but also thinking like a chess player
several steps down the line
what could happen, what if, if-then type thinking.
What are some of the other practices and tools that you use
to put yourself into state for optimal neurosurgery
or for, you know, thinking about scientific problems
for that matter?
We keep threatening to go running together,
but I know you run, correct?
- Yeah.
- Do you find running to be an essential part
of your state regulation?
- Absolutely, yeah.
So for me, most exercise that I do,
I really don't do for physical reasons.
I do it for mental reasons.
I can tell, for example,
if I don't go on a run or a swim for even a day or two,
and it can translate
into the way I feel in the operating room
or even the way I interact with other people.
So there's no question
that, you know, the mind and body are deeply connected,
and for me personally,
being able to have the opportunity to disconnect for a while,
it turns out to be really, really important.
Now, the operating room, for me, is another space,
kind of like running or swimming,
where I'm disconnected from the rest of the world.
I don't bring my cell phone into the operating room.
I'm disconnected from the external world
for that time that I'm in the surgery,
and all I am doing is just focusing.
Now, that doesn't mean that I'm having complex thoughts
or doing something very complicated.
Sometimes it is like that, but it's not always like that.
There are things that we do in surgery
that are, like, routine and rote
and are from muscle memory.
So, for example, suturing skin
or doing certain kinds of dissection
or drilling part of the bone, for example,
these are all things that become very rote after a time.
So, for me, even being in the operating room
actually can sometimes fulfill that purpose.
So I really look forward to being in the operating room
because that intense focus
allows me to sort of disconnect
from all the other things that I'm worrying about,
you know, that are happening out in the outside world.
You know, we all have those kind of things that happen,
and I'm certainly no exception to that.
But, strangely, the operating room, for me, is a sanctuary.
I love being there
because we have some control over the environment.
I know what is there, I know the anatomy of the brain,
my motions are going through routines.
And so for me, that's not actually very different
than going on a run
and letting my, you know, legs move in specific ways.
It's just the same thing for my hands.
- Do you listen to music or audiobooks when you run,
or are you divorced from technology when you run?
- Well, music helps me, like, just stay motivated
and distracted from being out of breath and other things.
And for me, it's a way to just catch up
with, like, the world.
So sometimes I do, but I do notice
that, like, I don't run as well, for example.
In the operating room, it's a little different.
You know, different surgeons have preferences.
I'm more of the camp
where I don't like any distraction whatsoever.
I like people to be able to hear the words that I'm saying
without having background noise.
I don't really think about relying on music or other things
to try to put me in a state of mind.
You know, I think just being there alone
and just, you know, trying to treat it for what it is,
a sacred moment where someone's life
is really directly under your hands.
That alone kind of focuses me very quickly,
and I like that.
It really detaches me from a lot of the things
that are preoccupying me,
and for those couple of hours that we have a surgery,
we're just focused on one thing only.
- That's fantastic.
Again, I think of, in the range of brain explorers,
the neurosurgeons, those of your profession,
are, to me, like the astronauts of neuroscience
because they're really going
to the farthest reaches possible,
and they're testing and probing
and really at the front edge of discovery in the species
that we arguably care about the most, which is humans.
Eddie, I have to say,
from the first time we became friends 38 years ago...
- Something like that.
- Something like that.
I'm almost reluctant to say,
so I'll only reveal it in part: Eddie and I became friends
because both he and I shared a love of birds,
and we had a club at our school
of which there were only two members, Eddie and I.
- Small club.
- Small club.
There was one honorary member,
and there were certain requirements for being in this club
that we won't reveal.
We took a pact of secrecy,
and we're going to obey that pact of secrecy.
But to be sitting here with you today,
for me, is an absolute thrill,
not just because we've been friends for that long
or that we got reacquainted
through literally the halls of medicine and science,
but because I really do see what you're doing
as really representing that front, absolute cutting edge
of exploration and application.
I mean, Pancho is but one of your many patients
who have derived tremendous benefit from your work.
And now as the chair of a department,
you, of course, work alongside individuals
who are also doing incredible work
in the spinal cord, et cetera.
So on behalf of myself and everyone listening,
I just really want to thank you for joining us today
to share this information.
We will certainly have you back
because there's an entire list of other questions
we didn't have time to get to,
but also just for the work you do, it's truly spectacular.
- Andrew, thanks so much.
You know, I'm very humbled basically by what you just said,
and I feel that it's really an extraordinary honor actually
and privilege, you know, to be here with you
and reconnect and talk about all these ideas.
It's probably not random,
you know, that we ended up in similar spots and interests.
I think when we were kids,
you know, it starts with some deep interests
and kind of nerding out on topics,
and it's probably not a coincidence,
you know, that we have such deep interest in this work now.
I just feel really lucky to be able to do what I do.
It's fun every day, almost every day,
to be able to go to work and take care of folks
and learn at the same time and then just close the loop,
you know, how do we apply the knowledge
that we learn one day to someone who comes in next week?
It's really fun.
And we don't know everything, we're not even close to it,
but the journey to figure this out,
it's really extraordinary.
I mean, it's like you said,
it's exploring new lands.
Literally in the operating room
when I'm looking at the exposed cortex trying to understand
is it safe to walk down this part of the cortical landscape
or this other trail?
You know, which one is going to be the one
that is going to be safe
versus the other that results in paralysis
and inability to talk?
Well, maybe I shouldn't call it fun,
but it's very important, too,
in addition to being really intellectually important
for how we understand how the brain works.
And so, yeah, I feel just really lucky
to have that opportunity.
- And we're lucky to have you
be one of the people doing it, so thank you ever so much.
- Thanks.
- Thank you for joining me today
for my discussion with Dr. Eddie Chang.
If you'd like to learn more about his research
into the neuroscience of speech and language
and bioengineering, his treatment of epilepsy
and other diseases and disorders of the brain,
please check out the links in our show note captions.
We have links to his laboratory website,
his clinical website, and other resources
related to his critical research as well.
If you're learning from
and are enjoying the Huberman Lab podcast,
please subscribe to our YouTube channel.
That's a terrific zero-cost way to support us.
In addition, please subscribe to the Huberman Lab podcast
on Spotify and Apple.
And on both Spotify and Apple, you also have the opportunity
to leave us up to a five-star review.
If you have questions for us
or comments about the information we've covered
or suggestions about future guests,
please put those in the comments section on YouTube.
We do read all the comments.
Please also check out the sponsors
mentioned at the beginning of today's episode.
That's the best way to support the Huberman Lab podcast.
Not so much today,
but in many previous episodes of the Huberman Lab podcast,
we talk about supplements.
While supplements aren't necessary for everybody,
many people derive tremendous benefit from them
for things like enhancing sleep and focus
and hormone optimization.
The Huberman Lab podcast
has partnered with Momentous Supplements.
If you'd like to see the supplements
that the Huberman Lab podcast
has partnered with Momentous on,
you can go to livemomentous, spelled O-U-S,
so livemomentous.com/huberman,
and there, you'll see a number of the supplements
that we talk about regularly on the podcast.
I should just mention that that catalog of supplements
is constantly being updated.
As mentioned at the beginning of today's episode,
the Huberman Lab podcast has now launched a premium channel.
That premium channel will feature monthly AMAs,
or Ask Me Anythings, where I answer your questions in depth,
as well as other premium resources.
If you'd like to subscribe to the premium channel,
you can simply go to hubermanlab.com/premium.
I should mention that the proceeds from the premium channel
go to support the standard Huberman Lab podcast,
which will continue to be released every Monday per usual,
as well as supporting various research projects
done on humans to create the sorts of tools
for mental health, physical health, and performance
that you hear about on the Huberman Lab podcast.
Again, it's hubermanlab.com/premium to subscribe.
It's $10 a month or $100 per year.
If you haven't already subscribed
to our zero-cost newsletter,
we have what is called the Neural Network Newsletter.
You can subscribe by going to hubermanlab.com,
go to the menu and click on Newsletter.
Those newsletters include summaries of podcast episodes
and lists of tools from the Huberman Lab podcast.
And if you'd like to see
previous newsletters we've released,
you can also just go to hubermanlab.com,
click on Newsletter in the menu,
and you'll see various downloadable PDFs.
If you want to sign up for the newsletter,
we just ask for your email,
we do not share your email with anybody,
and, again, it's completely zero-cost.
If you're not already following me on social media,
it's hubermanlab on Twitter, on Facebook, and on Instagram,
and at all three of those places,
I cover topics and subject matter
that are sometimes overlapping
with the information covered on the Huberman Lab podcast
but that's often distinct from information
on the Huberman Lab podcast.
Again, it's hubermanlab on all social media channels.
Thank you once again for joining me today
for the discussion about the neuroscience of speech,
language, epilepsy, and much more with Dr. Eddie Chang.
And as always, thank you for your interest in science.
[upbeat music]