Mark Zuckerberg & Dr. Priscilla Chan: Curing All Human Diseases & the Future of Health & Technology
ANDREW HUBERMAN: Welcome to the Huberman Lab podcast,
where we discuss science and science-based tools
for everyday life.
[MUSIC PLAYING]
I'm Andrew Huberman.
And I'm a professor of neurobiology and ophthalmology
at Stanford School of Medicine.
My guests today are Mark Zuckerberg and Dr. Priscilla
Chan.
Mark Zuckerberg, as everybody knows,
founded the company Facebook.
He is now the CEO of Meta, which includes Facebook, Instagram,
WhatsApp, and other technology platforms.
Dr. Priscilla Chan graduated from Harvard
and went on to do her medical degree at the University
of California San Francisco.
Mark Zuckerberg and Dr. Priscilla Chan
are married and the co-founders of the CZI,
or Chan Zuckerberg Initiative, a philanthropic organization
whose stated goal is to cure all human diseases.
The Chan Zuckerberg Initiative is accomplishing that
by providing critical funding not available elsewhere,
as well as a novel framework for discovery
of the basic functioning of cells,
cataloging all the different human cell
types, as well as providing AI, or artificial intelligence,
platforms to mine all of that data
to discover new pathways and cures for all human diseases.
The first hour of today's discussion
is held with both Dr. Priscilla Chan and Mark Zuckerberg,
during which we discuss the CZI and what it really
means to try and cure all human diseases.
We talk about the motivational backbone for the CZI
that extends well into each of their personal histories.
Indeed, you'll learn quite a lot about Dr. Priscilla Chan, who
has, I must say, an absolutely incredible family story leading
up to her role as a physician and her motivations
for the CZI and beyond.
And you'll learn from Mark how he is bringing an engineering
and AI perspective to the discovery
of new cures for human disease.
The second half of today's discussion
is just between Mark Zuckerberg and me, during which we discuss
various Meta Platforms, including, of course,
social media platforms, and their effects on mental health
in children and adults.
We also discuss VR, virtual reality, as well as
augmented and mixed reality.
And we discuss AI, artificial intelligence,
and how it stands to transform not just our online experiences
with social media and other technologies,
but how it stands to potentially transform
every aspect of everyday life.
Before we begin, I'd like to emphasize
that this podcast is separate from my teaching and research
roles at Stanford.
It is, however, part of my desire and effort
to bring zero-cost-to-consumer information
about science and science-related tools
to the general public.
In keeping with that theme, I'd like
to thank the sponsors of today's podcast.
Our first sponsor is Eight Sleep.
Eight Sleep makes smart mattress covers with cooling, heating,
and sleep-tracking capacity.
I've spoken many times before on this podcast about the fact
that getting a great night's sleep
really is the foundation of mental health, physical health
and performance.
One of the key things to getting a great night's sleep
is to make sure that the temperature of your sleeping
environment is correct.
And that's because in order to fall and stay deeply asleep,
your body temperature actually has
to drop by about 1 to 3 degrees.
And in order to wake up feeling refreshed and energized,
your body temperature actually has
to increase by about 1 to 3 degrees.
With Eight Sleep, you can program the temperature
of your sleeping environment in the beginning, middle,
and end of your night.
It has a number of other features,
like tracking the amount of rapid eye movement
and slow wave sleep that you get,
things that are essential to really dialing
in the perfect night's sleep for you.
I've been sleeping on an Eight Sleep mattress
cover for well over two years now.
And it has greatly improved my sleep.
I fall asleep far more quickly.
I wake up far less often in the middle of the night.
And I wake up feeling far more refreshed
than I ever did prior to using an Eight Sleep mattress cover.
If you'd like to try Eight Sleep,
you can go to eightsleep.com/huberman to save
$150 on their Pod 3 cover.
Eight Sleep currently ships to the USA,
Canada, UK, select countries in the EU, and Australia.
Again, that's eightsleep.com/huberman.
Today's episode is also brought to us by LMNT.
LMNT is an electrolyte drink that has everything you need
and nothing you don't.
That means plenty of electrolytes-- sodium,
magnesium and potassium-- and no sugar.
The electrolytes are absolutely essential for the functioning
of every cell in your body.
And your neurons, your nerve cells,
rely on sodium, magnesium and potassium
in order to communicate with one another electrically and
chemically.
LMNT contains the optimal ratio of electrolytes
for the functioning of neurons and the other cells
of your body.
Every morning, I drink a packet of LMNT dissolved
in about 32 ounces of water.
I do that just for general hydration
and to make sure that I have adequate electrolytes
for any activities that day.
I'll often also have an LMNT packet, or even two packets,
in 32 to 60 ounces of water if I'm exercising very hard
and certainly if I'm sweating a lot, in order
to make sure that I replace those electrolytes.
If you'd like to try LMNT, you can go
to drinklmnt.com/huberman to get a free sample pack with
your purchase.
Again, that's drinklmnt.com/huberman.
I'm pleased to announce that we will
be hosting four live events in Australia, each of which
is entitled The Brain Body Contract, during which I will
share science and science-related tools
for mental health, physical health, and performance.
There will also be a live question and answer session.
We have limited tickets still available
for the event in Melbourne on February 10,
as well as the event in Brisbane on February 24.
Our event in Sydney, at the Sydney Opera House,
sold out very quickly.
So as a consequence, we've now scheduled
a second event in Sydney at the Aware Super Theatre
on February 18.
To access tickets to any of these events,
you can go to hubermanlab.com/events and use
the code Huberman at checkout.
I hope to see you there.
And as always, thank you for your interest in science.
And now, for my discussion with Mark Zuckerberg
and Dr. Priscilla Chan.
Priscilla, Mark, so great to meet you.
And thank you for having me here in your home.
MARK ZUCKERBERG: Oh, thanks for having us on the podcast.
PRISCILLA CHAN: Yeah.
ANDREW HUBERMAN: I'd like to talk about the CZI, the Chan
Zuckerberg Initiative.
I learned about this a few years ago,
when my lab was-- and still is now-- at Stanford,
as a very exciting philanthropic effort
that has a truly big mission.
I can't imagine a bigger mission.
So maybe you could tell us what that big mission is.
And then we can get into some of the mechanics of how
that big mission can become a reality.
PRISCILLA CHAN: So like you're mentioning, in 2015,
we launched the Chan Zuckerberg Initiative.
And what we were hoping to do at CZI
was think about how we build a better future for everyone,
looking for ways where we can contribute
the resources that we have to bring philanthropically
and the experiences that Mark and I have had--
for me as a physician and educator,
for Mark as an engineer-- and then
our ability to bring teams together to build the builders.
Mark has been a builder throughout his career.
And what could we do if we actually
put together a team to build tools, do great science?
And so within our science portfolio,
we've really been focused on what some people think
is either an incredibly audacious goal
or an inevitable goal.
But I think about it as something
that will happen if we continue focusing on it, which
is to be able to cure, prevent, or manage
all disease by the end of the century.
ANDREW HUBERMAN: All disease?
PRISCILLA CHAN: All disease.
So that's important, right?
And so a lot of times, people ask like, which disease?
And the whole point is that there is not one disease.
And it's really about taking a step back to where I always
found the most hope as a physician, which
is new discoveries and new opportunities
and new ways of understanding how to keep people well come
from basic science.
So our strategy at CZI is really to build tools, fund science,
change the way basic scientists can see the world
and how they can move quickly in their discoveries.
And so that's what we launched in 2015.
We do work in three ways.
We fund great scientists.
We build tools-- right now, software tools
to help move science along and make it easier for scientists
to do their work.
And we do science.
You mentioned Stanford being an important pillar
for our science work.
We've built what we call biohubs, institutes where teams
can take on grand challenges to do work that
wouldn't be possible in a single lab
or within a single discipline.
And our first biohub was launched
in San Francisco, a collaboration between Stanford,
UC Berkeley, and UCSF.
ANDREW HUBERMAN: Amazing.
Curing all diseases implies that there will
be a ton of knowledge gleaned from this effort, which
I'm certain there will be-- and there already has been.
We can talk about some of those early successes in a moment.
But it also sort of implies that if we can understand
some basic operations of diseases and cells
that transcend autism, Huntington's, Parkinson's,
cancer and any other disease that perhaps there
are some core principles that would make the big mission
a reality, so to speak.
What I'm basically saying is, how are you attacking this?
My belief is that the cell sits at the center of all discussion
about disease, given that our body is made up of cells
and different types of cells.
So maybe you could just illuminate for us
a little bit of what the cell is, in your mind,
as it relates to disease and how one goes about understanding
disease in the context of cells because, ultimately, that's
what we're made up of.
MARK ZUCKERBERG: Yeah.
Well, let's get to the cell thing in a moment.
But just even taking a step back from that,
we don't think, at CZI, that we're
going to cure, prevent or manage all diseases.
The goal is to basically give the scientific community
and scientists around the world the tools
to accelerate the pace of science.
And we spent a lot of time, when we
were getting started with this, looking
at the history of science and trying to understand the trends
and how they've played out over time.
And if you look over this very long-term arc,
most large-scale discoveries are preceded
by the invention of a new tool or a new way to see something.
And it's not just in biology, right?
It's like having a telescope came
before a lot of discoveries in astronomy and astrophysics.
But similarly, the microscope and just different ways
to observe things or different platforms,
like the ability to do vaccines preceded the ability
to cure a lot of different things.
So this is the engineering part that you were talking about,
about building tools.
We view our goal as trying to bring together
some scientific and engineering knowledge to build tools
that empower the whole field.
And that's the big arc and a lot of the things
that we're focused on, including the work in single cell
and cell understanding, which you can jump in and get
into that if you want.
But yeah, I think we generally
agree with the premise that if you
want to understand this stuff from first principles--
people study organs a lot, right?
You study how things present across the body.
But there's not a very widespread understanding
of how each cell operates.
And this is a big part of some of the initial work
that we tried to do on the Human Cell Atlas and understanding
what are the different cells.
And there's a bunch more work that we want
to do to carry that forward.
But overall, I think, when we think about the next 10 years
here of this long arc to try to empower the community
to be able to cure, prevent or manage all diseases,
we think that the next 10 years should really
be primarily about being able to measure and observe
more things in human biology.
There are a lot of limits to that.
It's like, if you want to look at something through a microscope,
you usually can't see living tissues
because it's hard to see through skin, things like that.
So there are a lot of different techniques
that will help us observe different things.
And this is where the engineering background
comes in a bit because--
I mean, when I think about this is from the perspective of how
you'd write code or something, the idea of trying
to debug or fix a code base, but not be able to step
through the code line by line, it's
not going to happen, right?
And at the beginning of any big project that we do at Meta,
we like to spend a bunch of the time up front just trying
to instrument things and understand
what are we going to look at and how are we
going to measure things so we know we're making progress
and know what to optimize.
And this is such a long-term journey
that we think that it actually makes sense to take the next 10
years to build those kinds of tools for biology
and understanding just how the human body works in action.
And a big part of that is cells.
I don't know.
Do you want to jump in and talk about some of the efforts?
PRISCILLA CHAN: Sure.
ANDREW HUBERMAN: Could I just interrupt briefly and just ask
about the different interventions, so to speak,
that CZI is in a unique position to bring to the quest
to cure all diseases?
So I can think of--
I mean, I know, as a scientist, that money is necessary but not
sufficient, right?
When you have money, you can hire more people.
You can try different things.
So that's critical.
But a lot of philanthropy includes money.
The other component is you want to be able to see things,
as you pointed out.
So you want to know that normal disease process--
like, what is a healthy cell?
What's a diseased cell?
Are cells constantly being bombarded with challenges
and then repairing those?
And then what we call cancer is just
a runaway train of those challenges
not being met by the cell itself or something like that?
So better imaging tools.
And then it sounds like there's not just a hardware component,
but a software component.
This is where AI comes in.
So maybe, at some point, we can break this up
into two, three different avenues.
One is understanding disease processes
and healthy processes.
We'll lump those together.
Then there's hardware-- so microscopes,
lenses, digital deconvolution, ways
of seeing things in bolder relief and more precision.
And then there's how to manage all the data.
And then I love the idea that maybe AI
could do what human brains can't do alone,
like manage understanding of the data
because it's one thing to organize data.
It's another to say, oh-- as you point out
in the analogy with code-- that this particular gene
and that particular gene are potentially interesting,
whereas a human being would never
make that potential connection.
MARK ZUCKERBERG: Yeah.
PRISCILLA CHAN: So the tools that CZI
can bring to the table--
we fund science, like you're talking about.
There's lots of ways to fund science.
And just to be clear, what we fund
is a tiny fraction of what the NIH funds, for instance.
ANDREW HUBERMAN: So you guys have been generous enough
that it definitely holds weight next to NIH's contribution.
PRISCILLA CHAN: Yeah.
But I think every funder has its own role in the ecosystem.
And for us, it's really, how do we
incentivize new points of view?
How do we incentivize collaboration?
How do we incentivize open science?
And so a lot of our grants include inviting people
to look at different fields.
Our first neuroscience RFA was aimed towards incentivizing
people from different backgrounds-- immunologists,
microbiologists-- to come and look
at how our nervous system works and how to keep it healthy.
Or we ask that our grantees participate
in the pre-print movement to accelerate
the rate of sharing knowledge and actually others being
able to build upon science.
So that's the funding that we do.
In terms of building, we build software and hardware,
like you mentioned.
We put together teams that can build
tools that are more durable and scalable than someone
in a single lab might be incentivized to do.
There's a ton of great ideas.
And nowadays, most scientists can tinker and build
something useful for their lab.
But it's really hard for them to be
able to share that tool
beyond their own laptop, let alone with the next lab
over or across the globe.
So we partner with scientists to see what is useful,
what kinds of tools.
In imaging, Napari is a useful image annotation
tool that was born from an open-source community.
And how can we contribute to that?
Or CELLxGENE, which works on single-cell data sets.
And how can we make it a useful tool so that scientists
can share data sets, analyze their own,
and contribute to a larger corpus of information?
So we have software teams that are building, collaborating
with scientists to make sure that we're building
easy-to-use, durable, translatable tools
across the scientific community in the areas that we work in.
We also have institutes-- this is where the imaging work comes
in-- where we are proud owners of an electron microscope
right now.
It's going to be installed at our imaging institute.
And that will really contribute to the way
where we can see work differently.
But more hardware does need to be developed.
We're partnering with the fantastic scientists
in the biohub network to build a mini phase plate
that aligns the electrons through the electron microscope
and increases the resolution,
so we can see in sharper detail.
So there's a lot of innovative work within the network that's
happening.
And these institutes have grand challenges
that they're working on.
Back to your question about cells,
cells are just the smallest unit of life.
And your body, all of our bodies,
have many, many, many cells--
by some estimates, 37 trillion
different cells in your body.
And what are they all doing?
And what do they look like when you're healthy?
What do they look like when you're sick?
And where we're at right now with our understanding of cells
and what happens when you get sick
is basically we've gotten pretty good at, from the Human Genome
Project, looking at how different mutations
in your genetic code lead for you
to be more susceptible to get sick or directly
cause you to get sick.
So we go from a mutation in your DNA to, wow,
you now have Huntington's disease, for instance.
And there's a lot that happens in the middle.
And that's one of the questions that we're going after at CZI,
is what actually happens.
So an analogy that I like to use to share with my friends
is, right now, say we have a recipe for a cake.
We know there's a typo in the recipe.
And then the cake is awful.
That's all we know.
We don't know how the chef interprets the typo.
We don't know what happens in the oven.
And we don't actually know how it's exactly
connected to how the cake didn't turn out
or how you had expected it.
A lot of that is unknown.
But we can actually systematically try
to break this down.
And one segment of that journey that we're looking at
is how that mutation gets translated and acted
upon in your cells.
And all of your cells have what's called mRNA.
mRNAs are the actual instructions that are taken from the DNA.
And our work in single cell is looking
at how every cell in your body is actually interpreting
your DNA slightly differently and what
happens when healthy cells are interpreting the DNA
instructions and when sick cells are
interpreting those directions.
And that is a ton of data.
I just told you, there's 37 trillion cells.
There are large sets of mRNAs in each cell.
But the work that we've been funding is looking at how--
first of all, gathering that information.
We've been incredibly lucky to be
part of a very fast-moving field where we've gone from,
in 2017, funding some methods work to now
having really not complete, but nearly complete
atlases of how the human body works, how flies work, how mice
work at the single-cell level and being
able to then try to piece together
how does that all come together when you're healthy
and when you're sick.
And the neat thing about the inflection point
where we're at in AI is that I can't look at this data
and make sense of it.
There's just too much of it.
And biology is complex.
Human bodies are complex.
We need this much information.
But the use of large language models
can help us actually look at that data
and gain insights, look at what trends
are consistent with health and what trends are unsuspected.
And eventually, our hope, through the use
of these data sets that we've helped curate
and the application of large language models,
is to be able to formulate a virtual cell, a cell that's
completely built off of the data sets of what
we know about the human body, but allows us to manipulate,
and learn faster and try new things to help
move science and then medicine along.
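For a rough sense of that data volume: taking the 37 trillion cells mentioned above, and assuming roughly 20,000 genes measured per cell at one float32 value each (the per-cell gene count and storage format are illustrative assumptions, not CZI figures), the raw expression matrix alone lands on the exabyte scale:

```python
# Back-of-the-envelope data scale. The ~37 trillion cell count comes from
# the conversation; ~20,000 genes per cell and float32 storage are
# illustrative assumptions.
cells = 37e12
genes = 2e4
bytes_per_value = 4                          # one float32 expression value
total_bytes = cells * genes * bytes_per_value
print(f"~{total_bytes / 1e18:.1f} exabytes uncompressed")  # ~3.0 exabytes
```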
ANDREW HUBERMAN: Do you think we've
cataloged the total number of different cell types?
Every week, I look at great journals
like Cell, Nature, and Science.
And for instance, I saw recently that, using single cell
sequencing, they've categorized 18 plus different types
of fat cells.
We always think of like a fat cell versus a muscle cell.
So now, you've got 18 types.
Each one is going to express many, many different genes
and mRNAs.
And perhaps one of them is responsible
for what we see in advanced type 2 diabetes,
or in other forms of obesity, or where people can't lay down
fat cells, which turns out to be just as detrimental
in those extreme cases.
So now, you've got all these lists of genes.
But I always thought of single cell sequencing as necessary,
but not sufficient, right?
You need the information, but it doesn't resolve the problem.
And I think of it more as a hypothesis-generating
experiment.
OK, so you have all these genes.
And you can say, well, this gene is particularly
elevated in the diabetic cell type of, let's say,
one of these fat cells or muscle cells for that matter,
whereas it's not in non-diabetics.
So then of the millions of different cells,
maybe only five of them differ dramatically.
So then you generate a hypothesis.
Oh, it's the ones that differ dramatically
that are important.
But maybe one of those genes, when it's only 50% changed,
has a huge effect because of some network biology effect.
And so I guess what I'm trying to get to here
is how does one meet that challenge.
And can AI help resolve that challenge
by essentially placing those lists of genes
into 10,000 hypotheses?
Because I'll tell you that the graduate students
and postdocs in my lab get a chance to test one
hypothesis at a time.
PRISCILLA CHAN: I know.
ANDREW HUBERMAN: And that's really the challenge,
let alone one lab.
And so for those that are listening
to this-- and hopefully, it's not
getting outside the scope of standard understanding
or the understanding we've generated here.
But what I'm basically saying is,
you have to pick at some point.
More data always sounds great.
But then how do you decide what to test?
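One way to picture the hypothesis-generating step described here is a simple differential-expression ranking: compute a fold change per gene between diseased and healthy samples and sort. This is a minimal sketch with a hypothetical input file and column names, not any lab's actual pipeline; and, as noted above, ranking by magnitude alone can miss genes whose modest changes matter through network effects:

```python
# Minimal sketch: rank genes by log2 fold change between diabetic and
# control fat-cell samples so the starkest differences surface first.
# "fat_cell_expression.csv" and its columns are hypothetical.
import numpy as np
import pandas as pd

expr = pd.read_csv("fat_cell_expression.csv", index_col="gene")
lfc = np.log2((expr["diabetic_mean"] + 1) / (expr["control_mean"] + 1))
ranked = lfc.abs().sort_values(ascending=False)
print(ranked.head(10))  # top candidate genes, i.e., hypotheses to test first
```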
PRISCILLA CHAN: So no, we don't know all the cell types.
I think one thing that was really exciting when we first
launched this work was cystic fibrosis.
Cystic fibrosis is caused by a mutation in CFTR.
That's pretty well known.
It affects a certain channel that makes it hard for mucus
to be cleared.
That's the basics of cystic fibrosis.
When I went to medical school, it was taught as fact.
ANDREW HUBERMAN: So their lungs fill up with fluid.
These are people who are carrying around
sacks of fluid filling up.
PRISCILLA CHAN: Yep.
ANDREW HUBERMAN: I've worked with people like that.
And they have to literally dump the fluid out.
PRISCILLA CHAN: Exactly.
ANDREW HUBERMAN: They can't run or do intense exercise.
Life is shorter.
PRISCILLA CHAN: Life is shorter.
And when we applied single-cell methodologies to the lungs,
they discovered an entirely new cell type
that is actually affected by the CFTR mutation,
the cystic fibrosis mutation, and that
actually changes the paradigm of how
we think about cystic fibrosis.
ANDREW HUBERMAN: Amazing.
PRISCILLA CHAN: So I don't think
we know all the cell types.
I think we'll continue to discover them.
And we'll continue to discover new relationships between cell
and disease, which leads me to the second example I want
to bring up, is this large data set
that the entire scientific community has
built around single cell.
It's starting to allow us to say, this mutation--
where is it expressed?
What types of cells is it expressed in?
And we actually have built a tool
at CZI called CELLxGENE, where you can put in the mutation
that you're interested in.
And it gives you a heat map across cell types
of which cell types are expressing the gene that you're
interested in.
And so then you can start looking at, OK,
if I look at gene X and I know it's related to heart disease--
but if you look at the heat map, it's
also spiking in the pancreas.
That allows you to generate a hypothesis.
Why?
And what happens to the function of your pancreas
when this gene is mutated?
Really exciting way to look and ask questions differently.
And you can also imagine a world where
if you're trying to develop a therapy, a drug, and the goal
is to treat the function in the heart,
but you know that it's also really
active in the pancreas again.
So is there going to be an unexpected side effect
that you should think about as you're bringing
this drug to clinical trials?
So it's an incredibly exciting tool
and one that's only going to get better
as we get more and more sophisticated
ways to analyze the data.
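As a toy illustration of the heat-map view described here (this is not CZI's actual CELLxGENE code; the input file and gene name are hypothetical stand-ins), you can average one gene's expression per cell type and render it as a one-row heat map:

```python
# Toy gene-by-cell-type heat map: mean expression of one gene per cell type.
# Hypothetical input; not the CELLxGENE implementation.
import pandas as pd
import matplotlib.pyplot as plt

cells = pd.read_csv("single_cell_counts.csv")          # one row per cell
by_type = cells.groupby("cell_type")["GENE_X"].mean()  # per-cell-type mean

fig, ax = plt.subplots(figsize=(8, 1.5))
ax.imshow(by_type.to_numpy()[None, :], aspect="auto", cmap="viridis")
ax.set_xticks(range(len(by_type)), by_type.index, rotation=90)
ax.set_yticks([0], ["GENE_X"])
plt.tight_layout()
plt.show()
```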
ANDREW HUBERMAN: I must say, I love
that because if I look at the advances in neuroscience
over the last 15 years, most of them
didn't necessarily come from looking at the nervous system.
They came from the understanding that the immune system
impacts the brain.
Everyone prior to that talked about the brain
as an immune-privileged organ.
What you just said also bridges the divide
between single cells, organs and systems, right?
Because ultimately, cells make up organs.
Organs make up systems.
And they're all talking to one another.
And everyone nowadays is familiar with gut-brain axis
or the microbiome being so important.
But rarely is the communication between organs discussed,
so to speak.
So I think it's wonderful.
So that tool was generated by CZI.
Or CZI funded that tool?
MARK ZUCKERBERG: We built that.
PRISCILLA CHAN: We built it.
ANDREW HUBERMAN: You built it.
So is it built by Meta?
Is this Meta?
MARK ZUCKERBERG: No, no, it has its own engineers.
ANDREW HUBERMAN: Got it.
MARK ZUCKERBERG: Yeah.
They're completely different organizations.
ANDREW HUBERMAN: Incredible.
And so a graduate student or postdoc
who's interested in a particular mutation
could put this mutation into this database.
That graduate student or postdoc might
be in a laboratory known for working on heart,
but suddenly find that they're collaborating
with other scientists that work on the pancreas, which also
is wonderful because it bridges the divide
between these fields.
Fields are so siloed in science--
not just different buildings, but people
rarely talk, unless things like this are happening.
PRISCILLA CHAN: I mean, the graduate student is someone
that we want to empower because, one, they're
the future of science, as you know.
And within CELLxGENE, if you put in the gene
you're interested in and it shows you the heat map,
we also will pull up the most relevant papers to that gene.
And so read these things.
ANDREW HUBERMAN: That's fantastic.
As we all know, quality nutrition
influences, of course, our physical health, but also
our mental health and our cognitive functioning--
our memory, our ability to learn new things and to focus.
And we know that one of the most important features
of high quality nutrition is making sure
that we get enough vitamins and minerals from high quality,
unprocessed, or minimally processed
sources, as well as enough probiotics, and prebiotics
and fiber to support basically all
the cellular functions in our body,
including the gut microbiome.
Now, I, like most everybody, try to get optimal nutrition
from whole foods, ideally mostly from minimally processed
or non-processed foods.
However, one of the challenges that I and so many other people
face is getting enough servings of high quality fruits
and vegetables per day, as well as
fiber and probiotics that often accompany those fruits
and vegetables.
That's why, way back in 2012, long before I ever
had a podcast, I started drinking AG1.
And so I'm delighted that AG1 is sponsoring the Huberman Lab
podcast.
The reason I started taking AG1 and the reason I still
drink AG1 once or twice a day is that it
provides all of my foundational nutritional needs.
That is, it provides insurance that I
get the proper amounts of those vitamins, minerals, probiotics
and fiber to ensure optimal mental health, physical
health and performance.
If you'd like to try AG1, you can go to drinkag1.com/huberman
to claim a special offer.
They're giving away five free travel
packs plus a year's supply of vitamin D3 K2.
Again, that's drinkag1.com/huberman to claim
that special offer.
MARK ZUCKERBERG: I just think, going back to your question
from before, are there going to be more cell types that
get discovered?
I mean, I assume so, right?
I mean, no catalog of this stuff is ever--
it doesn't seem like we're ever done.
We keep on finding more.
But I think that gets to one of the strengths
of modern LLMs: the ability
to imagine different states that things
can be in.
So from all the work that we've done and funded
on the Human Cell Atlas, there is a large corpus of data
that you can now train a kind of large-scale model on.
And one of the things that we're doing at CZI,
which I think is pretty exciting,
is building what we think is one of the largest non-profit life
sciences AI clusters.
It's on the order of 1,000 GPUs.
And it's larger than what most people in academia have access to--
something you can do serious engineering work on.
And by basically training a model
with all of the Human Cell Atlas data
and a bunch of other inputs as well,
we think you'll be able to basically imagine
all of the different types of cells and all
the different states that they can be in, and when they're
healthy and diseased, and how they'll
interact with each other and
with different potential drugs.
But with the state of LLMs, I think
this is where it's helpful to
have a good understanding of and be grounded
in the modern state of AI.
I mean, these things are not foolproof.
I mean, one of the flaws of modern LLMs
is they hallucinate.
So the question is, how do you make it
so that that can be an advantage rather than a disadvantage?
And I think the way that it ends up being an advantage
is when they help you imagine a bunch of states
that someone could be in, but then you, as the scientist
or engineer, go and validate that those are true,
whether they're solutions to how a protein can
be folded or possible states that a cell could
be in when it's interacting with other things.
But we're not yet at the state with AI
that you can just take the outputs of these things
as gospel and run from there.
But they are very good, I think as you said,
hypothesis generators or possible solution generators
that then you can go validate.
So I think that that's a very powerful thing
that we can do--
building on the first five years of science work
around the Human Cell Atlas and all the data that's
been built out, carry that forward into something
that I think is going to be a very novel tool going forward.
And that's the type of thing that I
think we're set up to do well.
I mean, you had this exchange a little while back about funding
levels and how CZI is just a drop in the bucket compared
to NIH.
The thing that I think we can do that's different
is funding some of these longer term, bigger projects.
It is hard to galvanize and pull together
the energy to do that.
And most science funding goes to relatively
small projects that are exploring
things over relatively short time horizons.
And one of the things that we try to do
is build these tools over 5, 10, 15-year periods.
They're often projects that require
hundreds of millions of dollars of funding
and world-class engineering teams and infrastructure to do.
And that, I think, is a pretty cool contribution to the field
that I think is--
there aren't as many other folks who
are doing that kind of thing.
But that's one of the reasons why
I'm personally excited about the virtual cell stuff
because it's just this perfect intersection of all the
stuff that we've done in single cell,
the previous collaborations that we've done with the field
and bringing together the industry and AI
expertise around this.
ANDREW HUBERMAN: Yeah, I completely
agree that the model of science that you're putting together
with CZI isn't just distinct from the NIH model--
it's extremely important, because
the independent investigator model is what's
driven the progression of science in this country
and, to some extent, in Northern Europe for the last 100 years.
And it's wonderful, on the one hand,
because it allows for that image we have of a scientist
tinkering away or the people in their lab, and then
the eurekas.
And that hopefully translates to better human health.
But I think, in my opinion, we've moved past that model
as the most effective model or the only model that
should be explored.
MARK ZUCKERBERG: Yeah, I just think it's a balance.
You want that.
But you want to empower those people.
I think that these tools empower those folks.
ANDREW HUBERMAN: Sure.
And there are mechanisms to do that, like NIH.
But it's hard to do collaborative science.
It's interesting that we're sitting here not far--
because I grew up right near here as well.
I'm not far from the garage model of tech, right?
The Hewlett-Packard model, not far from here at all.
And the idea was the tinkerer in the garage, the inventor.
And then people often forget that to implement
all the technologies they discovered
took enormous factories and warehouses.
So there's a similarity there to Facebook, Meta, et cetera.
But I think, in science, we imagine
the scientist alone in their laboratory
and those eureka moments.
But I think, nowadays, the big questions really require
extensive collaboration and certainly tool development.
And one of the tools that you keep coming back to
is these LLMs, these large language models.
And maybe you could just elaborate,
for those that aren't familiar.
What is a large language model?
For the uninformed, what is it?
And what does it allow us to do that different, other types
of AI don't allow?
Or more importantly, perhaps what
does it allow us to do that a bunch of really smart people,
highly informed in a given area of science,
staring at the data--
what can it do that they can't do?
MARK ZUCKERBERG: Sure.
So I think a lot of the progression of machine learning
has been about building systems, neural networks or otherwise,
that can basically make sense and find patterns in larger
and larger amounts of data.
And there was a breakthrough a number of years
back that some folks at Google actually made
called this transformer model architecture.
And it was this huge breakthrough
because before then there was somewhat of a cap
where if you fed more data into a neural network
past some point, it didn't really
glean more insights from it, whereas transformers
just-- we haven't seen the end of how big that
can scale to yet.
I mean, I think that there's a chance
that we run into some ceiling.
ANDREW HUBERMAN: So it never asymptotes?
MARK ZUCKERBERG: We haven't observed it yet.
But we just haven't built big enough systems yet.
So I would guess that--
I don't know.
I think that this is actually one
of the big questions in the AI field today,
is basically, are transformers and are the current model
architectures sufficient?
If you just build larger and larger clusters,
do you eventually get something that's
like human intelligence or super intelligence?
Or is there some kind of fundamental limit
to this architecture that we just haven't reached yet?
And once we get a little bit further in building them out,
then we'll reach that.
And then we'll need a few more leaps
before we get to the level of AI that I
think will unlock a ton of really
futuristic and amazing things.
But there's no doubt that even just being
able to process the amount of data
that we can now with this model architecture
has unlocked a lot of new use cases.
And the reason why they're called large language models is
because one of the first uses of them is people basically
feed in all of the language from, basically, the world
wide web.
And you can think about them as basically prediction machines.
You put in a prompt.
And it can basically predict a version
of what should come next.
So you type in a headline for a news story.
And it can predict what it thinks the story should be.
Or you could train it so that it could
be a chatbot where, OK, if you're
prompted with this question, you can get this response.
But one of the interesting things
is it turns out that there's actually nothing specific
to using human language in it.
So if instead of feeding it human language, if you
use that model architecture for a network and instead
you feed it all of the Human Cell Atlas data,
then if you prompt it with a state of a cell,
it can spit out different versions
of how that cell can interact or different states
that the cell could be in next when it interacts
with different things.
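To make the "prediction machine" idea concrete, here is a minimal causal-transformer sketch in PyTorch: given a sequence of tokens, it scores what comes next, and nothing in it cares whether the tokens encode words or, in principle, discretized cell states. Sizes are deliberately tiny and the model is untrained; this is illustrative, not Meta's or CZI's actual architecture:

```python
# Tiny "prediction machine": a causal transformer that scores the next token.
import torch
import torch.nn as nn

VOCAB, DIM, CTX = 1000, 64, 32             # toy vocabulary, width, context

class TinyPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)  # scores over possible next tokens

    def forward(self, tokens):
        # causal mask: each position may only attend to earlier positions
        n = tokens.size(1)
        mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
        h = self.encoder(self.embed(tokens), mask=mask)
        return self.head(h)                # (batch, seq, vocab)

model = TinyPredictor()
prompt = torch.randint(0, VOCAB, (1, CTX))  # stand-in for a prompt or cell state
next_token = model(prompt)[0, -1].argmax().item()
print(next_token)                           # the model's predicted continuation
```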
ANDREW HUBERMAN: Does it have to take a genetics class?
So for instance, if you give it a bunch of genetics data,
do you have to say, hey, by the way, and then
you give it a genetics class so it understands that you've got
DNA, RNA, mRNA, and proteins?
MARK ZUCKERBERG: No, I think that the basic nature of all
these machine learning techniques is they're
basically pattern recognition systems.
So there are these very deep statistical machines
that are very efficient at finding patterns.
So it's not actually--
you don't need to teach a language model that's
trying to speak a language a lot of specific things
about that language either.
You just feed it in a bunch of examples.
And then let's say you teach it about something in English,
but then you also give it a bunch of examples
of people speaking Italian.
It'll actually be able to explain the thing that it
learned in English in Italian.
So the crossover and just the pattern recognition
is the thing that is pretty profound and powerful
about this.
But it really does apply to a lot of different things.
Another example in the scientific community
has been the work that AlphaFold,
basically the folks at DeepMind, have done on protein folding.
It's just basically a lot of the same model architecture.
But instead of language, there they
fed in all of this protein data.
And you can give it a state.
And it can spit out solutions to how those proteins get folded.
So it's very powerful.
I don't think we know yet, as an industry, what
the natural limits of it are.
I think that that's one of the things that's
pretty exciting about the current state.
But it certainly allows you to solve problems
that just weren't solved with the generation of machine
learning that came before it.
ANDREW HUBERMAN: It sounds like CZI
is moving a lot of work that was just done in vitro, in dishes,
and in vivo, in living organisms--
model organisms or humans-- to in silico, as we say.
So do you foresee a future where a lot of biomedical research,
certainly the work of CZI included, is done by machines?
I mean, obviously, it's much lower cost.
And you can run millions of experiments, which,
of course, is not to say that humans are not
going to be involved.
But I love the idea that we can run experiments in silico
en masse.
PRISCILLA CHAN: I think in silico experiments are
going to be incredibly helpful to test things quickly,
cheaply and just unleash a lot of creativity.
I do think you need to be very careful about making
sure it still translates and matches the humans.
One thing that's funny in basic science
is we've basically cured every single disease in mice.
We know what's going on when they have a number of diseases
because they're used as a model organism.
But they are not humans.
And a lot of times, that research
is relevant, but not directly one-to-one
translatable to humans.
So you just have to be really careful about making sure
that it actually works for humans.
ANDREW HUBERMAN: Sounds like what CZI is doing
is actually creating a new field.
As I'm hearing all of this, I'm thinking, OK,
this transcends the immunology department, cardiothoracic
surgery, even neuroscience.
I mean, the idea of a new field, where you certainly embrace
the realities of universities and laboratories
because that's where most of the work that you're funding
is done.
Is that right?
MARK ZUCKERBERG: Mm-hmm.
ANDREW HUBERMAN: So maybe we need
to think about what it means to do science differently.
And I think that's one of the things that's most exciting.
Along those lines, it seems that bringing together
a lot of different types of people
at different major institutions is going
to be especially important.
So I know that the initial CZI Biohub, gratefully,
included Stanford.
We'll put that first in the list,
but also UCSF, forgive me.
I have many friends at UCSF and also Berkeley.
But there are now some additional institutions
involved.
So maybe you could talk about that,
and what motivated the decision to branch outside the Bay Area
and why you selected those particular additional
institutions to be included.
MARK ZUCKERBERG: Well, I'll just say it.
A big part of why we wanted to create additional biohubs
is we were just so impressed by the work
that the folks who were running the first biohub did.
PRISCILLA CHAN: Yeah.
And you should walk through the work
of the Chicago Biohub and the New York Biohub
that we just announced.
But I think it's actually an interesting set of examples
that balance the limits of what you want
to do with physical material engineering
and where things are purely biological
because the Chicago team is really building more
sensors to be able to understand what's going on in your body.
But that's more of a physical kind of engineering challenge,
whereas the New York team-- we basically
talk about this as like a cellular endoscope: being
able to have an immune cell or something that
can go and understand what's
going on in your body.
But it's not a physical piece of hardware.
It's a cell that you can basically have just go report
out on different things that are happening inside the body.
ANDREW HUBERMAN: Oh, so making the cell the microscope.
PRISCILLA CHAN: Totally.
MARK ZUCKERBERG: And then eventually actually
being able to act on it.
But I mean, you should go into more detail on all this.
PRISCILLA CHAN: So a core principle
of how we think about biohubs is that it has to be--
when we invited proposals, it has
to be at least three institutions,
so really breaking down the barrier of a single university,
oftentimes asking for the people designing the research
aim to come from all different backgrounds and to explain why
the problem that they want to solve
requires interdisciplinary, inter-university, inter-institution
collaboration to actually make happen.
We just put that request for proposal
out there with our San Francisco Biohub
as an example, where they've done
incredible work in single cell biology and infectious disease.
And we got--
I want to say-- like 57 proposals
from over 150 institutions.
A lot of ideas came together.
And we were so, so excited that we've
been able to launch Chicago and New York.
Chicago is a collaboration between UIUC,
the University of Illinois Urbana-Champaign,
the University of Chicago, and Northwestern.
Obviously, these universities are multifaceted.
But if I were to describe them by their stereotypical
strength, Northwestern has an incredible medical system
and hospital system.
University of Chicago brings to the table
incredible basic science strengths.
University of Illinois is a computing powerhouse.
And so they came together and proposed
that they were going to start thinking
about cells in tissue, so one of the layers
that you just alluded to.
So how do the cells that we know behave and act differently when
they come together as a tissue?
And one of the first tissues that they're starting with
is skin.
So they've already been able to, as a collaboration
under the leadership of Shana Kelley, design engineered
skin tissue.
The architecture looks the same as what's in you and me.
And what they've done is built these super, super thin
sensors.
And they embed these sensors throughout the layers
of this engineered tissue.
And they read out the data.
They want to see what these cells are secreting,
how these cells talk to each other
and what happens when these cells get inflamed.
Inflammation is an incredibly important process
that drives 50% of all deaths.
And so this is another disease-agnostic approach.
We want to understand inflammation.
And they're going to get a ton of information
out from these sensors that tell you what happens when something
goes awry because right now we can say,
when you have an allergic reaction,
your skin gets red and puffy.
But what is the earliest signal of that?
And these sensors can look at the behaviors
of these cells over time.
And then you can apply a large language model
to look at the earliest statistically significant
changes that can allow you to intervene as early as possible.
So that's what Chicago's doing.
They're starting in the skin cells.
They're also looking at the neuromuscular junction, which
is the connection between where a neuron attaches to a muscle
and tells the muscle how to behave--
super important in things like ALS, but also in aging.
The slowed transmission of information
across that neuromuscular junction
is what causes older people to fall.
Their brain cannot trigger their muscles to react fast enough.
And so we want to be able to embed
these sensors to understand how these different, interconnected
systems within our bodies work together.
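As a hedged sketch of what "earliest statistically significant change" detection could look like on a single sensor trace (a rolling z-score toy on synthetic data; the biohub's actual analysis is not spelled out here and would be far more sophisticated):

```python
# Flag the first reading whose z-score against a healthy baseline window
# crosses a threshold. Synthetic data; illustrative only.
import numpy as np

def first_significant_change(trace, baseline_n=50, z_thresh=4.0):
    base = trace[:baseline_n]                  # assume the start is healthy
    mu, sigma = base.mean(), base.std() + 1e-9
    z = np.abs((trace[baseline_n:] - mu) / sigma)
    hits = np.flatnonzero(z > z_thresh)
    return baseline_n + hits[0] if hits.size else None

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 200)            # baseline secretion signal
inflamed = rng.normal(4.0, 1.0, 50)            # simulated inflammation onset
print(first_significant_change(np.concatenate([healthy, inflamed])))  # ~200
```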
In New York, they're doing a related, but equally exciting
project where they're engineering individual cells
to be able to go in and identify changes in a human body.
So what they'll do is--
they're calling it--
ANDREW HUBERMAN: It's wild.
I mean, I love that.
I mean, this is--
I don't want to go on a tangent.
But for those that want to look it up, it's called adaptive optics.
There's a lot of distortion and interference
when you try and look at something really
small or really far away.
And really smart physicists figured out,
well, use the interference as part of the microscope.
Make those actually lenses of the microscope.
MARK ZUCKERBERG: We should talk about imaging
separately after you talk about the New York Biohub.
ANDREW HUBERMAN: It's extremely clever, along those lines.
It's not immediately intuitive.
But then when you hear it, it makes so much sense.
Make the cells that already can navigate to tissues
or embed themselves in tissues be the microscope
within that tissue.
I love it.
PRISCILLA CHAN: Totally.
The way that I explain this to my friends
and my family is this is Fantastic Voyage,
but real life.
We are going into the human body.
And we're using the immune cells, which are privileged
and already working to keep your body healthy,
and being able to target them to examine certain things.
So you can engineer an immune cell to go in your body
and look inside your coronary arteries and say,
are these arteries healthy?
Or are there plaques?
Because plaques lead to blockage,
which lead to heart attacks.
And the cell can then record that information
and report it back out.
That's the first half of what the New York
Biohub is going to do.
ANDREW HUBERMAN: Fantastic.
PRISCILLA CHAN: The second half is can you
then engineer the cells to go do something about it.
Can I then tell a different cell, an immune
cell that is able to travel through your body,
to go in and clean that up in a targeted way?
And so it's incredibly exciting.
They're going to study things that
are immune privileged, that your immune system normally
doesn't have access to--
things like ovarian and pancreatic cancer.
They'll also look at a number of neurodegenerative diseases,
since the immune system doesn't presently have a ton of access
into the nervous system.
But it's both mind-blowing and it feels like sci-fi.
But science is actually in a place
where, if you really push a group of incredibly
qualified scientists and say, could you do this
if given the chance, the answer is, probably.
Give us enough time, the right team, and resources.
It's doable.
MARK ZUCKERBERG: Yeah.
I mean, it's a 10 to 15-year project.
But it's awesome, engineered cells, yeah.
ANDREW HUBERMAN: I love the optimism.
And the moment you said make the cell the microscope,
so to speak, I was like yes, yes and yes.
It just makes so much sense.
What motivated the decision to do the work of CZI
in the context of existing universities as opposed to--
there's still some real estate up in Redwood City
where there's a bunch of space to put biotech companies
and just hiring people from all backgrounds
and saying, hey, have at it and doing this stuff from scratch?
I mean, it's a very interesting decision
to do this in the context of an existing
framework of graduate students that need to do their thesis
and get a first author paper because there's
a whole set of structures within academia
that I think both facilitate, but also limit
the progression of science.
That independent investigator model
that we talked about a little bit earlier,
it's so core to the way science has been done.
This is very different and frankly sounds
far more efficient, if I'm to be completely honest.
And we'll see if I renew my NIH funding after saying that.
But I think we all want the same thing.
As scientists and as humans, we want
to understand the way we work.
And we want healthy people to persist to be healthy.
And we want sick people to get healthy.
I mean, that's really ultimately the goal.
It's not super complicated.
It's just hard to do.
PRISCILLA CHAN: So the teams at the biohub
are actually independent of the universities.
ANDREW HUBERMAN: Got it.
PRISCILLA CHAN: So each biohub will probably
have in total maybe 50 people working on deep efforts.
However, it's an acknowledgment that not all of the best
scientists who can contribute to this area
are actually going to, one, want to leave a university
or want to take on the full-time scope of this project.
So the ability to partner with universities
and to have the faculty at all the universities
be able to contribute to the overall project
is how the biohub is structured.
ANDREW HUBERMAN: Got it.
MARK ZUCKERBERG: But a lot of the way that we're approaching
CZI is this long-term, iterative project
to figure out-- try a bunch of different things,
figure out which things produce the most interesting results,
and then double down on those in the next five-year push.
So we just went through this period
where we wrapped up the first five
years of the science program.
And we tried a lot of different models,
all kinds of different things.
And it's not that the biohub model--
we don't think it's the best or only model.
But we found that it was a really interesting way
to unlock a bunch of collaboration
and bring some technical resources that
allow for this longer term development.
And it's not something that is widely being pursued
across the rest of the field.
So we figured, OK, this is an interesting thing
that we can help push on.
But I mean, yeah, we do believe in the collaboration.
But I also think that we come at this with--
we don't think that the way that we're pursuing this
is the only way to do this or the way
that everyone should do it.
We're pretty aware of what is the rest of the ecosystem
and how we can play a unique role in it.
ANDREW HUBERMAN: It feels very synergistic
with the way science is already done
and also fills an incredibly important niche that,
frankly, wasn't filled before.
Along the lines of implementation--
so let's say your large language models combined with imaging
tools reveal that a particular set of genes acting
in a cluster--
I don't know-- set up an organ crash.
Let's say the pancreas crashes at a particular stage
of pancreatic cancer.
I mean, it's still one of the deadliest of the cancers.
And there are others that you certainly wouldn't want to get.
But that's among the ones you wouldn't want to get the most.
So you discover that.
And then the idea is that, OK,
then AI reveals some potential drug
targets that then bear out in vitro, in a dish
and in a mouse model.
How is the actual implementation to drug discovery?
Or maybe this target is druggable, maybe it's not.
Maybe it requires some other approach--
laser ablation approach or something.
We don't know.
But ultimately, is CZI going to be
involved in the implementation of new therapeutics?
Is that the idea?
MARK ZUCKERBERG: Less so.
PRISCILLA CHAN: Less so.
This is where it's important to work in an ecosystem
and to know your own limitations.
There are groups, and startups and companies
that take that and bring it to translation very effectively.
I would say the place where we have
a small window into that world is actually
our work with rare disease groups.
We have, through our Rare As One portfolio,
funded patient advocates to create rare disease
organizations where patients come together and actually pool
their collective experience.
They build bioregistries, registries
of their natural history.
And they both partner with researchers
to do the research about their disease
and with drug developers to incentivize drug developers
to focus on what they may need for their disease.
And one thing that's important to point out
is that rare diseases aren't rare.
There are over 7,000 rare diseases,
and collectively they impact many, many individuals.
And I think the thing that's, from a basic science
perspective, incredibly fascinating
about rare diseases is that they're actually windows into how
the body normally should work.
And so there are often genes that,
when mutated, cause very specific diseases,
but that tell you how the normal biology works as well.
ANDREW HUBERMAN: Got it.
So you discussed basically the major goals and initiatives
of the CZI for the next, say, 5 to 10 years.
And then beyond that, the targets
will be explored by biotech companies.
They'll grab those targets, and test them and implement them.
MARK ZUCKERBERG: There's also, I think,
been a couple of teams from the initial biohub that
were interested in spinning out ideas into startups.
So even though it's not a thing that we're
going to pursue because we're a philanthropy,
we want to enable the work that gets
done to be able to get turned into companies and things
that other people go take and run
towards building ultimately therapeutics.
So that's another zone.
But that's not a thing that we're going to do.
ANDREW HUBERMAN: Got it.
I gather you're both optimists.
Yeah?
Is that part of what brought you together?
Forgive me for switching to a personal question.
But I love the optimism that seems
to sit at the root of the CZI.
PRISCILLA CHAN: I will say that we
are incredibly hopeful people.
But it manifests in different ways between the two of us.
MARK ZUCKERBERG: Yeah.
PRISCILLA CHAN: How would you describe
your optimism versus mine?
It's not a loaded question.
MARK ZUCKERBERG: I don't know.
Huh.
I mean, I think I'm probably more technologically
optimistic about what can be built.
And I think you, because of your focus as an actual doctor,
have more of a sense of how that's
going to affect actual people in their lives,
whereas, for me, it's like--
I mean, a lot of my work is we touch a lot
of people around the world.
And the scale is immense.
And I think, for you, it's like being
able to improve the lives of individuals,
whether it's students at any of the schools that you've started
or any of the stuff that we've supported through the education
work, which isn't the goal here, or just
being able to improve people's lives in that way is,
I think, the thing that I've seen you be super passionate about.
I don't know.
Do you agree with that characterization?
I'm trying to--
PRISCILLA CHAN: Yeah, I agree with that.
I think that's very fair.
And I'm sort of giggling to myself
because in day-to-day life, as life partners,
our relative optimism comes through
as Mark just is overly optimistic about his time
management and will get engrossed in interesting ideas.
MARK ZUCKERBERG: I'm late.
PRISCILLA CHAN: And he's late.
ANDREW HUBERMAN: Physicians are very punctual, yeah.
PRISCILLA CHAN: And because he's late,
I have to channel "Mark is an optimist" whenever
I'm waiting for him.
MARK ZUCKERBERG: That's such a nice way of--
OK, I'll start using that.
PRISCILLA CHAN: That's what I think
when I'm in the driveway with the kids waiting for you.
I'm like, Mark is an optimist.
And so his optimism translates to some tardiness,
whereas I'm more of a how-is-this-going-to-happen person.
I'm going to open a spreadsheet.
I'm going to start putting together a plan
and pulling together all the pieces,
calling people to bring something to life.
MARK ZUCKERBERG: But one of my favorite quotes
is that optimists tend to be successful
and pessimists tend to be right.
And yeah, I mean, I think it's true in a lot
of different aspects of life.
ANDREW HUBERMAN: Who said that?
Did you say that, Mark Zuckerberg?
MARK ZUCKERBERG: No, I did not.
PRISCILLA CHAN: Absolutely not.
MARK ZUCKERBERG: No, no, no.
I like it.
I did not invent it.
ANDREW HUBERMAN: We'll give it to you.
We'll put it out there.
MARK ZUCKERBERG: No, no, no.
ANDREW HUBERMAN: Just kidding, just kidding.
MARK ZUCKERBERG: But I do think that there's really
something to it, right?
I mean, if you're discussing any idea,
there's all these reasons why it might not work.
And those reasons are probably true.
The people who are stating them probably have some validity
to it.
But the question is, is that the most productive way to view
the world?
Across the board, I think the people
who tend to be the most productive
and get the most done--
you kind of need to be optimistic
because if you don't believe that something can get done,
then why would you go work on it?
ANDREW HUBERMAN: The reason I ask
the question is that these days we hear a lot about how
the future is looking so dark in these various ways.
And you have children.
So you have families.
And you are a family, excuse me.
And you also have families independently
that are now merged.
But I love the optimism behind the CZI
because, behind all this, there's
a set of big statements on the wall.
One, the future can be better than the present,
in terms of treating disease, maybe even, you said,
eliminating diseases, all diseases.
I love that optimism.
And there's a tractable path to do it.
We're going to put literally money, and time, and energy,
and people, and technology and AI behind that.
And so I have to ask, was having children
a significant modifier in terms of your view of the future?
Like wow, you hear all this doom and gloom.
What's the future going to be like for them?
Did you sit back and think, what would it
look like if there was a future with no diseases?
Is that the future we want our children in?
I mean, I'm voting a big yes.
So we're not going to debate that at all.
But was having children an inspiration
for the CZI in some way?
MARK ZUCKERBERG: Yeah.
So--
PRISCILLA CHAN: I think my answer to that--
I would dial backwards for me.
And I'll just tell a very brief story about my family.
I'm the daughter of Chinese-Vietnamese refugees.
My parents and grandparents were boat people,
if you remember-- people who left Vietnam
during the war in these small boats, out into the South China Sea.
And there were stories about how these boats would sink
with whole families on them.
And so my grandparents, both sets
of grandparents who knew each other,
decided that there was a better future out there.
And they were willing to take risks for it.
But they were afraid of losing all of their kids.
My dad is one of six.
My mom is one of 10.
And so they decided that there was something
out there in this bleak time.
And they paired up their kids, one from each family,
and sent them out on these little boats
before the internet, before cell phones, and just said,
we'll see you on the other side.
ANDREW HUBERMAN: Wow.
PRISCILLA CHAN: And the kids were
between the ages of like 10 and 25, so young kids.
My mom was a teenager, early teen when this happened.
And everyone made it.
And I get to sit here and talk to you.
So how could I not believe that better is possible?
And I hope that that's in my epigenetics somewhere
and that I carry it on.
ANDREW HUBERMAN: That is a spectacular story.
PRISCILLA CHAN: Isn't that wild?
ANDREW HUBERMAN: It is spectacular.
PRISCILLA CHAN: How can I be a pessimist with that?
ANDREW HUBERMAN: I love it.
And I so appreciate that you became a physician
because you're now bringing that optimism,
and that epigenetic understanding,
and cognitive understanding and emotional understanding
to the field of medicine.
So I'm grateful to the people that made that decision.
PRISCILLA CHAN: Yeah.
I've always known that story.
But you don't understand how wild that feels
until you have your own child.
And you're like, well, I can't even--
I'll only let her use glass bottles, or something
like that.
And you're like, oh my God, the risk and the willingness
of my grandparents to believe in something bigger and better
is just astounding.
And our own children give it a sense of urgency.
ANDREW HUBERMAN: Again, a spectacular story.
And you're sending knowledge out into the fields of science
and bringing knowledge into the fields of science.
And I love this.
We'll see you on the other side.
I'm confident that it will all come back.
Well, thank you so much for that.
Mark, you have the opportunity to talk about--
did having kids change your worldview?
MARK ZUCKERBERG: It's really tough to beat that story.
ANDREW HUBERMAN: It is tough to beat that story.
And they are also your children.
So in this case, you get two for the price of one, so to speak.
MARK ZUCKERBERG: Having children definitely changes
your time horizon.
So I think that that's one thing.
There are all these things that I think we had talked about,
for as long as we've known each other, that you eventually
want to go do.
But then it's like, oh, we're having kids.
We need to get on this, right?
So I think that there's--
PRISCILLA CHAN: That was actually
one of the items on the baby checklist before our first.
MARK ZUCKERBERG: It was like, the baby's coming.
We have to start CZI.
PRISCILLA CHAN: Truly.
MARK ZUCKERBERG: I'm like sitting in the hospital
delivery room finishing editing the letter that we
were going to publish to announce the work.
PRISCILLA CHAN: Some people think that is an exaggeration.
It was not.
We really were editing the final draft.
ANDREW HUBERMAN: Birthed CZI before you
birthed the human child.
Well, it's an incredible initiative.
I've been following it since its inception.
And it's already been tremendously successful.
And everyone in the field of science--
and I have a lot of communication with those
folks--
feels the same way.
And the future is even brighter for it, it's clear.
And thank you for expanding to the Midwest and New York.
And we're all very excited to see where all of this goes.
I share in your optimism.
And thank you for your time today.
PRISCILLA CHAN: Yeah, thank you.
MARK ZUCKERBERG: Thank you.
A lot more to do.
ANDREW HUBERMAN: I'd like to take a quick break
and thank our sponsor, InsideTracker.
InsideTracker is a personalized nutrition platform
that analyzes data from your blood and DNA
to help you better understand your body
and help you reach your health goals.
I've long been a believer in getting regular blood work done
for the simple reason that many of the factors that impact
your immediate and long-term health
can only be analyzed from a quality blood test.
Now, a major problem with a lot of blood
tests out there, however, is that you get information
back about metabolic factors, lipids, and hormones
and so forth.
But you don't know what to do with that information.
With InsideTracker, they make it very easy
because they have a personalized platform that
allows you to see the levels of all those things--
metabolic factors, lipids, hormones, et cetera.
But it also gives you specific directives
that you can follow that relate to nutrition,
behavioral modification, supplements,
et cetera that can help you bring
those numbers into the ranges that are optimal for you.
If you'd like to try InsideTracker,
you can go to insidetracker.com/huberman
to get 20% off any of InsideTracker's plans.
Again, that's insidetracker.com/huberman.
And now for my discussion with Mark Zuckerberg.
Slight shift of topic here--
you're extremely well-known for your role
in technology development.
But by virtue of your personal interests
and also where Meta technology interfaces
with mental health and physical health,
you're starting to become synonymous with health,
whether you realize it or not.
Part of that is because there are posts, footage
of you rolling jiu jitsu.
You won a jiu jitsu competition recently.
You're doing other forms of martial arts, water sports,
including surfing, and on and on.
So you're doing it yourself.
But maybe we could just start off with technology
and get this issue out of the way first, which
is that I think many people assume that technology,
especially technology that involves a screen, excuse
me, of any kind is going to be detrimental to our health.
But that doesn't necessarily have to be the case.
So could you explain how you see technology
meshing with, inhibiting, or maybe even promoting
physical and mental health?
MARK ZUCKERBERG: Sure.
I mean, I think this is a really important topic.
The research that we've done suggests that it's not
all good or all bad.
I think how you're using the technology has
a big impact on whether it is basically
a positive experience for you.
And even within technology, even within social media,
there's not one type of thing that people do.
I think, at its best, you're forming meaningful connections
with other people.
And there's a lot of research that basically suggests
that it's the relationships that we have
and the friendships that bring the most happiness in our lives
and, at some level, end up even correlating
with living a longer and healthier life
because that grounding that you have in community
ends up being important for that.
So I think that aspect of social media,
which is the ability to connect with people, to understand
what's going on in people's lives,
have empathy for them, communicate what's
going on with your life, express that, that's
generally positive.
There are ways that it can be negative,
in terms of bad interactions, things like bullying,
which we can talk about because there's a lot that we've
done to basically make sure that people can be safe from that
and give people tools, and give kids the ability to have
the right parental controls so their parents can oversee that.
But that's the interacting with people side.
There's another side of all of this,
which I think of as just passive consumption, which,
at its best, is entertainment.
And entertainment is an important human thing, too.
But I don't think that that has quite the same association
with the long-term well-being and health benefits
as being able to help people connect with other people does.
And I think, at its worst, some of the stuff we see online--
I think, these days, a lot of the news
is just so relentlessly negative that it's just
hard to come away from looking at the news for half an hour
and feel better about the world.
So I think that there's a mix on this.
I think the more that social media
is about connecting with people, and the more
that, when you're consuming the media
part of social media, you're learning about things that
enrich you and can provide inspiration or education as
opposed to things that just leave you with a more
toxic feeling-- that's the balance that we try to get
right across our products.
And I think we're pretty aligned with the community
because, at the end of the day, I mean, people
don't want to use a product and come away feeling bad.
There's a lot that people talk about when they
evaluate a lot of these products in terms
of information and utility.
But I think it's as important, when
you're designing a product, to think
about what kind of feeling you're creating
with the people who use it, whether that's
an aesthetic sense when you're designing hardware,
or just what do you make people feel.
And generally, people don't want to feel bad, right?
That doesn't mean that we want to shelter people
from bad things that are happening in the world.
But I don't really think that--
it's not what people want, for us to just
be showing all this super negative stuff all day long.
So we work hard on all these different problems-- making
sure that we're helping connect people as best as possible,
helping make sure that we give people good tools
to block people who might be bullying them,
or harassing them, and, especially for younger folks,
making sure anyone under the age of 16 defaults
into a private experience.
We have all these parental tools.
So that way, parents can understand what their children
are up to in a good balance.
And then on the other side, we try
to give people tools to understand how
they're spending their time.
We try to give people tools so that if you're a teen
and you're stuck in some loop of just looking
at one type of content, we'll nudge you and say, hey,
you've been looking at content of this type for a while.
How about something else?
And here's a bunch of other examples.
So I think that there are things that you
can do to push this in a positive direction.
But I think it just starts with having
a more nuanced view of this isn't all good or all bad.
And the more that you can make it a positive
thing, the better this will be for all the people
who use our products.
ANDREW HUBERMAN: That makes really good sense.
In terms of the negative experience, I agree.
I don't think anyone wants a negative experience
in the moment.
I think where some people get concerned perhaps--
and I think about my own interactions with, say,
Instagram, which I use all the time for getting information
out, but also consuming information.
And I happen to love it.
It's where I essentially launched
the non-podcast segment of my podcast and continue to.
I can think of experiences that are a little bit
like highly processed food, where
it tastes good at the time.
It's highly engrossing.
But it's not necessarily nutritious.
And you don't feel very good afterwards.
So for me, that would be the little collage
of default options to click on in Instagram.
Occasionally, I notice-- and this just
reflects my failure, not Instagram's-- that there
are a lot of street fight things,
like people beating people up on the street.
And I have to say, these have a very strong gravitational pull.
I'm not somebody that enjoys seeing violence, per se.
But you know I find myself--
I'll click on one of these, like what happened?
And I'll see someone get hit.
And there's a little melee on the street or something.
And those seem to be offered to me a lot lately.
And again, this is my fault. It reflects
my prior searching experience.
But I noticed that it has a bit of a gravitational pull, where
I didn't learn anything.
It's not teaching me any useful street self-defense
skills of any kind.
And at the same time, I also really enjoy
some of the cute animal stuff.
And so I get a lot of those also.
So there's this polarized collage
that's offered to me that reflects my prior search
behavior.
You could argue that the cute animal stuff is just
entertainment.
But actually, it fills me with a feeling,
in some cases, that truly delights me.
I delight in animals.
And we're not just talking about kittens.
I mean, animals I've never seen before,
interactions between animals I've never seen
before that truly delight me.
They energize me in a positive way
that when I leave Instagram, I do think I'm better off.
So I'm grateful for the algorithm in that sense.
But I guess, the direct question is, is the algorithm just
reflective of what one has been looking at a lot
prior to that moment where they log on?
Or is it also trying to do exactly what you described,
which is trying to give people a good-feeling experience that
leads to more good feelings?
MARK ZUCKERBERG: Yeah.
I mean, I think we try to do this in a long-term way.
I think one simple example of this
is we had this issue a number of years back
about clickbait news, so articles
that would have basically a headline that grabbed
your attention, that made you feel
like, oh, I need to click on this.
And then you click on it.
And then the article is actually about something that's
somewhat tangential to it.
But people clicked on it.
So the naive version of this stuff, the version from 10 years
ago, was like, oh, people seem to be clicking on this.
Maybe that's good.
But it's actually a pretty straightforward exercise
to instrument the system to realize that, hey, people
click on this, and then they don't really
spend a lot of time reading the news after clicking on it.
And after they do this a few times,
it doesn't really correlate with them saying that they're
having a good experience.
Some of how we measure this is just
by looking at how people use the services.
But I think it's also important to balance
that by having real people come in and tell us,
OK-- we show them, here are the stories that we could have
shown you, which of these are most meaningful to you,
or would make it so that you have the best experience,
and just mapping the algorithm and what
we do to that ground truth of what people say that they want.
So I think that, through a set of things like that,
we really have made large steps to minimize things
like clickbait over time.
It's not like it's gone from the internet.
But I think we've done a good job of minimizing it
on our services.
Within that though, I do think that we
need to be pretty careful about not
being paternalistic about what makes different people feel
good.
So I mean, I don't know that everyone
feels good about cute animals.
I mean, I can't imagine that people
would feel really bad about it.
But maybe they don't have as profound of a positive reaction
to it as you just expressed.
And I don't know.
Maybe people who are more into fighting
would look at the street fighting videos--
assuming that they're within our community standards.
I think that there's a level of violence
that we just don't want to be showing at all.
But that's a separate question.
But if they are, I mean, then it's like--
I mean, I'm pretty into MMA.
I don't get a lot of street fighting videos.
But if I did, maybe I'd feel like I was learning something
from that.
I think at various times in the company's history,
we've been a little bit too paternalistic about saying,
this is good content, this is bad, you should like this,
this is unhealthy for you.
And I think that we want to look at the long-term effects.
You don't want to get stuck in a short-term
loop of like, OK, just because you
did this today doesn't mean it's what you
aspire to for yourself over time.
But I think, as long as you look at the long term of what
people both say they want and what they do, giving people
a fair amount of latitude to like the things that they like,
I just think, feels like the right set of values
to bring to this.
Now, of course, that doesn't go for everything.
There are things that are truly off limits and things that--
like bullying, for example, or things that are really inciting
violence, things like that.
I mean, we have the whole community standards
around this.
But I think, except for those things
which I would hope that most people can agree, OK,
bullying is bad--
I hope that 100% of people agree with that.
Or if not 100%, maybe 99%.
Except for the things that kind of get that very--
that feel pretty extreme and bad like that,
I think you want to give people space
to like what they want to like.
ANDREW HUBERMAN: Yesterday, I had the very good experience
of learning from the Meta team about safety protections that
are in place for kids who are using Meta Platforms.
And frankly, I was really positively surprised
at the huge number of filter-based tools and just
ability to customize the experience so that it can stand
the best chance of enriching-- not just remaining neutral,
but enriching their mental health status.
One thing that came about in that conversation,
however, was I realized there are all these tools.
But do people really know that these tools exist?
And I think about my own experience with Instagram.
I love watching Adam Mosseri's Friday Q&As because he explains
a lot of the tools that I didn't know existed.
And if people haven't seen that, I highly
recommend they watch that.
I think he takes questions on Thursdays
and answers them most every Friday.
So if I'm not aware, without watching that, of the tools that
exist for adults, how does Meta look
at the challenge of making sure that people know that there
are all these tools--
I mean, dozens and dozens of very useful tools?
But I think most of us just know the hashtag, the tag,
the click, stories versus feed.
We now know that--
I also post to Threads.
I mean, so we know the major channels and tools.
But this is like owning a vehicle that
has incredible features that one doesn't
realize can take you off road, can allow your vehicle to fly.
I mean, there's a lot there.
So what do you think could be done
to get that information out?
Maybe this conversation could cue people to [INAUDIBLE].
MARK ZUCKERBERG: I mean, that's part of the reason why I wanted
to talk to you about this.
I mean, I think most of the narrative around social media
is not, OK, all of the different tools
that people have to control their experience.
It's the narrative of is this just negative
for teens or something.
And I think, again, a lot of this
comes down to how is the experience being tuned.
Are people using it to connect in positive ways?
And if so, I think it's really positive.
So yeah, I mean, I think part of this
is we probably just need to get out and talk to people more
about it.
And then there's an in-product aspect,
which is if you're a teen and you sign up,
we take you through a pretty extensive experience that
tries to outline some of this.
But that has limits, too, because when you sign up
for a new thing, if you're bombarded with here's
a list of features, you're like, OK, I just signed up for this.
I don't really understand much about what the service is.
Let me go find some people to follow
who are my friends on here before I learn
about controls to prevent people from harassing me or something.
That's why I think it's really important to also show
a bunch of these tools in context.
So if you're looking at comments,
and if you go to delete a comment,
or you go to edit something, try to give people prompts in line.
It's like, hey, did you know that you can manage things
in these ways?
Or when you're in the inbox and you're filtering something,
remind people in line.
So just because of the number of people
who use the products and the level of nuance
around each of the controls, I think the vast majority
of that education needs to happen in the product.
But I do think that through conversations like this
and others that we need to be doing,
I think we can create a broader awareness that those things
exist, so at least people are primed,
and when those things pop up in the product,
they're like, oh yeah, I knew that there was this control.
And here's how I would use that.
ANDREW HUBERMAN: I find the restrict function
to be very useful, more than the block function in most cases.
I do sometimes have to block people.
But the restrict function is really useful
in that you can filter specific comments.
You might recognize that someone has a tendency
to be a little aggressive.
And I should point out that I actually don't really
mind what people say to me.
But I try and maintain what I call classroom rules
in my comment section, where I don't like people attacking
other people because I would never tolerate that
in the university classroom.
I'm not going to tolerate that in the comments section,
for instance.
MARK ZUCKERBERG: Yeah.
And I think that the example that you just used about
restrict versus block gets to something about product design
that's important, too, which is that block is this very
powerful tool that if someone is giving you a hard time
and you just want them to disappear from the experience,
you can do it.
But the design trade-off with that is that in order to make
it so that the person is just gone from the experience
and that you don't show up to them,
they don't show up to you--
inherent to that is that they will have
a sense that you blocked them.
And that's why I think some stuff like restrict or just
filtering, like I just don't want
to see as much stuff about this topic--
people like using different tools for very subtle reasons.
I mean, maybe you want the content to not show up,
but you don't want the person who's
posting the content to know that you don't want it to show up.
Maybe you don't want to get the messages in your main inbox,
but you don't want to tell the person actually that you're not
friends or something like that.
You actually need to give people different tools that
have different levels of power and nuance
around how the social dynamics around using them
play out in order to really allow
people to tailor the experience in the ways that they want.
ANDREW HUBERMAN: In terms of trying
to limit total amount of time on social media,
I couldn't find really good data on this.
How much time is too much?
I mean, I think it's going to depend
on what one is looking at, the age of the user, et cetera.
MARK ZUCKERBERG: I agree.
ANDREW HUBERMAN: I know that you have
tools that cue the user to how long
they've been on a given platform.
Are there tools to self-regulate--
I'm thinking about the Greek myth of the sirens and people
tying themselves to the mast and covering their eyes
so that they're not drawn in by the sirens.
Is there a function aside from deleting the app temporarily
and then reinstalling it every time you want to use it again?
Is there a true lockout, self-lockout function
where one can lock themselves out of access to the app?
MARK ZUCKERBERG: Well, I think we give people tools
that let them manage this.
And there's the tools that you get to use.
And then there's the tools that the parents
get to use to basically see how usage works.
But yeah, I think that there's different--
I think, for now, we've mostly focused
on helping people understand this,
and then give people reminders and things like that.
It's tough, though, to answer the question that you
were talking about before.
Is there an amount of time which is too much?
Because it does really get to what you're doing.
If you fast forward beyond just the
apps that we have today to an experience that
is like a social experience in the future
of the augmented reality glasses or something
that we're building, a lot of this
is going to be you're interacting with people
in the way that you would physically
as if you were like hanging out with friends
or working with people.
But now, they can show up as holograms.
And you can feel like you're present right there with them,
no matter where they actually are.
And the question is, is there too much
time to spend interacting with people like that?
Well, at the limit, if we can get
that experience to be as rich and giving you
as good of a sense of presence as you would have if you were
physically there with someone, then I
don't see why you would want to restrict the amount that people
use that technology to any less than what
would be the amount of time that you'd be comfortable
interacting with people physically,
which obviously is not going to be 24 hours a day.
You have to do other stuff.
You have work.
You need to sleep.
But I think it really gets to how you're using these things,
whereas if what you're primarily using the services for
is you're getting stuck in loops reading news or something that
is really getting you into a negative mental state,
then I don't know.
I mean, I think that there's probably
a relatively short period of time
that maybe that's a good thing that you want to be doing.
But again, even then it's not zero
because just because news might make you unhappy
doesn't mean that the answer is to be
unaware of negative things that are happening in the world.
I just think that different people
have different tolerances for what they can take on that.
And I think it's generally having
some awareness is probably good, as long as it's not more
than you're constitutionally able to take.
So I don't know.
I try not to be too paternalistic about this as our approach.
But we want to empower people by giving them
the tools, both people and, if you're a teen, your parents
to have tools to understand what you're experiencing
and how you're using these things, and then go from there.
ANDREW HUBERMAN: Yeah.
I think it requires of all of us some degree of self-regulation.
I like this idea of not being too paternalistic.
I mean, it seems like the right way to go.
I find myself occasionally having
to make sure that I'm not just passively scrolling,
that I'm learning.
I like foraging for, organizing, and dispersing information.
That's been my life's career.
So I've learned so much from social media.
I find great papers, great ideas.
I think comments are a great source of feedback.
And I'm not just saying that because you're sitting here.
I mean, Instagram in particular, but other Meta platforms
have been tremendously helpful for me to get science
and health information out.
One of the things that I'm really excited about,
which I only had the chance to try for the first time today,
is your new VR platform, the newest Oculus.
And then we can talk about the glasses, the Ray-Bans.
MARK ZUCKERBERG: Sure.
ANDREW HUBERMAN: Those two experiences
are still kind of blowing my mind, especially
the Ray-Ban glasses.
And I have so many questions about this.
So I'll resist.
But--
MARK ZUCKERBERG: We can get into that.
ANDREW HUBERMAN: OK.
Well, yeah, I have some experience with VR.
My lab has used VR.
Jeremy Bailenson's lab at Stanford
is one of the pioneering labs of VR and mixed reality.
I guess they used to call it augmented reality, but now
mixed reality.
I think what's so striking about the VR
that you guys had me try today is how well it interfaces
with the real room, let's call it, the physical room.
MARK ZUCKERBERG: Physical.
ANDREW HUBERMAN: I could still see people.
I could see where the furniture was.
So I wasn't going to bump into anything.
I could see people's smiles.
I could see my water on the table
while I was doing what felt like a real martial arts
experience, except I wasn't getting hit.
Well, I was getting hit virtually.
But it's extremely engaging.
And yet, on the good side of things,
it really bypasses a lot of the early concerns
that the Bailenson lab--
again, Jeremy's lab-- raised, saying that, oh, there's
a limit to how much VR one can or should use each day,
even for the adult brain, because it can really
disrupt your vestibular system, your sense of balance.
All of that seems to have been dealt
with in this new iteration of VR.
I didn't come out of it feeling dizzy at all.
I didn't feel like I was reentering the room in a way
that was really jarring.
Going into it is obviously, whoa,
this is a different world.
But you can look to your left and say, oh, someone just
came in the door.
Hey, how's it going?
Hold on, I'm playing this game, just
as it was when I was a kid playing Nintendo
and someone would walk in.
It's fully engrossing.
But you'd be like, hold on.
And you see they're there.
So first of all, bravo, incredible.
And then the next question is, what do we even
call this experience?
Because it is truly really mixed.
It's a truly mixed reality experience.
MARK ZUCKERBERG: Yeah.
I mean, mixed reality is the umbrella term
that refers to the combined experience
of virtual and augmented reality.
So augmented reality is what you're eventually
going to get with some future version of the smart glasses,
where you're primarily seeing the world,
but you can put holograms in it.
So we'll have a future where you're
going to walk into a room.
And there are going to be as many holograms
as physical objects.
If you just think about all the paper, the art, physical games,
media, your workstation--
ANDREW HUBERMAN: If we refer to, let's
say, an MMA fight, we could just draw it up on the table right
here and just see it repeat as opposed to us turning
and looking at a screen.
MARK ZUCKERBERG: Yeah.
I mean, pretty much any screen that exists
could be a hologram in the future with smart glasses.
There's nothing that actually physically needs
to be there for that when you have glasses
that can put a hologram there.
And it's an interesting thought experiment
to just go around and think about, OK, what of the things
that are physical in the world need to actually be physical.
Your chair does, right?
Because you're sitting on it.
A hologram isn't going to support you.
But like that art on the wall, I mean,
that doesn't need to physically be there.
So I think that that's the augmented reality experience
that we're moving towards.
And then we've had these headsets that historically we
think about as VR.
And that has been something that is like a fully
immersive experience.
But now, we're getting something that's
a hybrid in between the two and capable
of both, which is a headset that can do both virtual reality
and some of these augmented reality experiences.
And I think that that's really powerful,
both because you're going to get new applications that allow
people to collaborate together.
And maybe the two of us are here physically,
but someone joins us and it's their avatar there.
Or maybe it's some version in the future.
You're having a team meeting.
And you have some people there physically.
And you have some people dialing in.
And they're basically like a hologram, there virtually.
But then you also have some AI personas
that are on your team that are helping
you do different things.
And they can be embodied as avatars and around the table
meeting with you.
ANDREW HUBERMAN: Are people going
to be doing first dates that are physically separated?
I could imagine that some people would do
an is-it-even-worth-leaving-the-house type of date.
And then they find out.
And then they meet for the first time.
MARK ZUCKERBERG: I mean, maybe.
I think dating has physical aspects to it, too.
ANDREW HUBERMAN: Right.
Some people might not be-- they want
to know whether or not it's worth
the effort to head out or not.
They want to bridge the divide, right?
MARK ZUCKERBERG: It is possible.
I mean, I know some of my friends
who are dating basically say that in order
to make sure that they have a safe experience, if they're
going on a first date, they'll schedule
something that's shorter and maybe in the middle of the day.
So maybe it's coffee.
So that way, if they don't like the person,
they can just get out before going and scheduling
a dinner or a real, full date.
So I don't know.
Maybe in the future, people will have
that experience where you can feel like you're
kind of sitting there.
And it's even easier, and lighter weight, and safer.
And if you're not having a good experience,
you can just teleport out of there and be gone.
But yeah, I think that this will be an interesting question
in the future.
There are clearly a lot of things that are only possible
physically that--
or are so much better physically.
And then there are all these things
that we're building up that can be digital experiences.
But it's this weird artifact of how
this stuff has been developed that the digital world
and the physical world exist in these completely
different planes.
When you want to interact with the digital world--
we do it all the time.
But we pull out a small screen.
Or we have a big screen.
And just basically, we're interacting
with the digital world through these screens.
But I think if we fast forward a decade
or more, I think one of the really interesting questions
about what is the world that we're
going to live in, I think it's going to increasingly
be this mesh of the physical and digital worlds
that will allow us to feel, A, that the world that we're in
is just a lot richer because there can be all
these things that people create that are just so much easier
to do digitally than physically.
But B, you're going to have a real physical sense of presence
with these things and not feel like interacting
in the digital world is taking you away
from the physical world, which today is just
so much viscerally richer and more powerful.
I think the digital world will be embedded in that
and will feel just as vivid in a lot of ways.
So that's why I always think-- when
you were saying before, you felt like you could look
around and see the real room.
I actually think there's an interesting kind
of philosophical distinction between the real room
and the physical room, which historically I
think people would have said those are the same thing.
But I actually think, in the future,
the real room is going to be the combination
of the physical world with all the digital artifacts
and objects that are in there that you can interact with them
and feel present, whereas the physical world is just the part
that's physically there.
And I think it's possible to build a real world that's
the sum of these two that will actually
be more profound experience than what we have today.
ANDREW HUBERMAN: Well, I was struck
by the smoothness of the interface between the VR
and the physical room.
Your team had me try a--
I guess it was an exercise class in the [INAUDIBLE].
But it was essentially like hitting mitts boxing,
so hitting targets boxing.
MARK ZUCKERBERG: Yeah, Supernatural.
ANDREW HUBERMAN: Yeah, and it comes at a fairly fast pace
that then picks up.
It's got some tutorial.
It's very easy to use.
And it certainly got my heart rate up.
And I'm in at least decent shape.
And I have to be honest, I've never
once desired to do any of these on-screen fitness things.
I mean, I can't think of anything more aversive than a--
I don't want to insult any particular products,
but riding a stationary bike while looking
at a screen pretending I'm on a road outside.
I can't think of anything worse for me.
MARK ZUCKERBERG: I do like the leaderboard.
Maybe I'm just a very competitive person.
If you're going to be running on a treadmill,
at least give me a leaderboard so I can beat
the people who are ahead of me.
ANDREW HUBERMAN: I like moving outside and certainly
an exercise class or aerobics class,
as they used to call them.
But the experience I tried today was extremely engaging.
And I've done enough boxing to at least know
how to do a little bit of it.
And I really enjoyed it.
It gets your heart rate up.
And I completely forgot that I was
doing an on-screen experience in part because, I believe,
I was still in that physical room.
And I think there's something about the mesh
of the physical room and the virtual experience that
makes it neither of one world or the other.
I mean, I really felt at the interface of those.
And I certainly got presence, this feeling
of forgetting that I was in a virtual experience
and got my heart rate up pretty quickly.
We had to stop because we were going to start recording.
But I would do that for a good 45 minutes in the morning.
And there's no amount of money you could pay me truly
to look at a screen while pedaling on a bike
or running on a treadmill.
So again, bravo, I think it's going to be very useful.
It's going to get people moving their bodies more,
which certainly--
social media, up until now, and a lot of technologies
have been accused of limiting the amount of physical activity
that both children and adults are engaged in.
And we know we need physical activity.
You're a big proponent of and practitioner
of physical activity.
So is this a major goal of Meta, to get people
moving their bodies more and getting their heart
rates up and so on?
MARK ZUCKERBERG: I think we want to enable it.
And I think it's good.
But I think it comes more from a philosophical view of the world
than it is necessarily--
I mean, I don't go into building products
to try to shape people's behavior.
I believe in empowering people to do what they want
and be the best version of themselves that they can be.
ANDREW HUBERMAN: So no agenda?
MARK ZUCKERBERG: That said, I do believe that
the previous generation of computers
were devices for your mind.
And I think that we are not brains in tanks.
I think that there's a philosophical view of people
of like, OK, you are primarily what you think about
or your values or something.
It's like, no, you are that and you
are a physical manifestation.
And people are very physical.
And I think building a computer for your whole body and not
just for your mind is very fitting with this worldview
that the actual essence of you, if you want
to be present with another person,
if you want to be fully engaged in an experience, is not just--
it's not just a video conference call that looks at your face
and where you can share ideas.
It's something that you can engage your whole body.
So, yeah, I mean, I think being physical
is very important to me.
I mean, that's a lot of the most fun stuff that I get to do.
It's a really important part of how I personally
balance my energy levels and just get
a diversity of experiences because I could spend all
my time running the company.
But I think it's good for people to do some different things
and compete in different areas or learn different things.
And all of that is good.
If people want to do really intense workouts with the work
that we're doing with Quest or with eventual AR glasses,
great.
But even if you don't want to do a really intense workout,
I think just having a computing environment and platform which
is inherently physical captures more of the essence of what
we are as people than any of the previous computing platforms
that we've had to date.
ANDREW HUBERMAN: I was even thinking just
of the simple task of getting better range of motion a.k.a.
flexibility.
I could imagine, inside of the VR experience,
leaning into a stretch, a standard lunge-type stretch,
but actually seeing a meter of whether you are
approaching new levels of flexibility
in that moment, where it's actually
measuring some kinesthetic elements
of the body and the joints, whereas normally, you
might have to do that in front of a camera, which then would
give you the data on a screen that you'd look at afterwards,
or hire an expensive coach to look at your form in resistance
training.
So you're actually lifting physical weights.
But it's telling you whether or not you're breaking form.
I mean, there's just so much that could
be done inside of there.
And then my mind just starts to spiral
into, wow, this is very likely to transform
what we think of as, quote unquote, "exercise."
MARK ZUCKERBERG: Yeah, I think so.
I think there's still a bunch of questions
that need to get answered.
I don't think most people are going to necessarily want
to install a lot of sensors or cameras
to track their whole body.
So we're just over time getting better
from the sensors that are on the headsets of being able to do
very good hand tracking.
So we have this research demo where
now, just with the hand tracking from the headset,
you can type.
It just projects a little keyboard onto your table.
And you can type.
And people type like 100 words a minute with that.
ANDREW HUBERMAN: With a virtual keyboard?
MARK ZUCKERBERG: Yeah.
We're starting to be able to--
using some modern AI techniques, be able to simulate
and understand where your torso's position is.
Even though you can't always see it,
you can see it a bunch of the time.
And if you fuse together what you
do see with the accelerometer and understanding
how the thing is moving, you can kind of
understand what the body position is going to be.
But some things are still going to be hard.
So you mentioned boxing.
That one works pretty well because we understand your head
position.
We understand your hands.
And now, we're increasingly understanding your body
position.
But let's say you want to expand that
to Muay Thai or kickboxing.
OK.
So legs, that's a different part of tracking.
That's harder because that's out of the field of view
more of the time.
But there's also the element of resistance.
So you can throw a punch, and retract it,
and shadow box and do that without upsetting
your physical balance that much.
But if you want to throw a roundhouse kick
and there's no one there, then, I
mean, the standard way that you do it when you're shadowboxing
is you basically do a little 360.
But I don't know.
Is that going to feel great?
I mean, I think there's a question about what
that experience should be.
And then if you want to go even further,
if you want to get grappling to work,
I'm not even sure how you would do
that without having resistance, or understanding what the forces
applied to you would be.
And then you get into, OK, maybe you're
going to have some kind of bodysuit that
can apply haptics.
But I'm not even sure that even a pretty advanced haptic system
is going to be able to be quite good enough to simulate
the actual forces that would be applied to you in a grappling
scenario.
So this is part of what's fun about technology,
though, is you keep on getting new capabilities.
And then you need to figure out what things you
can do with them.
So I think it's really neat that we can do boxing.
And we can do the Supernatural thing.
And there's a bunch of awesome cardio,
and dancing and things like that.
And then there's also still so much more
to do that I'm excited to get to over time.
But it's a long journey.
ANDREW HUBERMAN: And what about things like painting,
and art and music?
I imagine-- of course, different mediums--
I like to draw with pen and pencil.
But I could imagine trying to learn how to paint virtually.
And of course, you could print out a physical version
of that at the end.
This doesn't have to depart from the physical world.
It could end in the physical world.
MARK ZUCKERBERG: Did you see the demo,
the piano demo where you--
either you're there with a physical keyboard
or it could be a virtual keyboard.
But the app basically highlights what keys
you need to press in order to play the song.
So it's basically like you're looking at your piano.
And it's teaching you how to play a song that you choose.
ANDREW HUBERMAN: An actual piano?
MARK ZUCKERBERG: Yeah.
ANDREW HUBERMAN: But it's illuminating certain keys
in the virtual space.
MARK ZUCKERBERG: Yeah.
And it could either be a virtual piano or a keyboard
if you don't have a piano or keyboard.
Or it could use your actual keyboard.
So yeah, I think stuff like that is
going to be really fascinating for education and expression.
ANDREW HUBERMAN: And excuse me, but for broadening access
to expensive equipment.
I mean, a piano is no small expense.
MARK ZUCKERBERG: Exactly.
ANDREW HUBERMAN: And it takes up a lot of space
and needs to be tuned.
You can think of all these things: the kid that
has very little income, or whose family
has very little income, could learn
to play a virtual piano at a much lower cost.
MARK ZUCKERBERG: Yeah.
And it gets back to the question I
was asking before about this thought experiment of how
many of the things that we physically have
today actually need to be physical.
The piano doesn't.
Maybe there's some premium where--
maybe it's a somewhat better, more tactile experience
to have a physical one.
But for people who don't have the space for it,
or who can't afford to buy a piano,
or just aren't sure that they would want
to make that investment at the beginning of learning how
to play piano, I think, in the future,
you'll have the option of just buying an app
or a hologram piano which will be a lot more affordable.
And I think that's going to unlock a ton of creativity too
because instead of the market for piano makers
being constrained to like a relatively small set of experts
who have perfected that craft, you're
going to have kids or developers all around the world designing
crazy designs for potential keyboards and pianos
that look nothing like what we've seen before,
but maybe bring even more joy or even more
fun into the world where you have fewer
of these physical constraints.
So I think there's going to be a lot of wild stuff to explore.
ANDREW HUBERMAN: There's definitely
going to be a lot of wild stuff to explore.
I just had this idea/image in my mind
of what you were talking about merged with our earlier
conversation when Priscilla was here.
I could imagine a time not too long from now
where you're using mixed reality to run experiments in the lab,
literally mixing virtual solutions,
getting potential outcomes, and then picking the best
one to then go actually do in the real world, which
is both very financially costly and time-wise costly.
MARK ZUCKERBERG: Yeah.
I mean, people are already using VR for surgery and education
on it.
And there's some study that was done that basically tried
to do a controlled experiment of people who learned how
to do a specific surgery through just the normal textbook
and lecture method versus you show the knee
and you have it be a large, blown-up model.
And people can manipulate it and practice
where they would make the cuts.
And the people in that class did better.
Yeah, I think that it's going to be profound
for a lot of different areas.
ANDREW HUBERMAN: And the last example that leaps to mind--
I think social media and online culture
has been accused of creating a lot of real world--
let's call it physical world social anxiety for people.
But I could imagine practicing a social interaction.
Or a kid that has a lot of social anxiety
or that needs to advocate for themselves better
learning how to do that progressively
through a virtual interaction, and then taking
that to the real world because, in my very recent experience
today, it's so blended now with real experience
that the kid that feels terrified
of advocating for themselves, or just talking
to another human being, or an adult,
or being in a new circumstance of a room full of kids, you
could really experience that in silico
first and get comfortable, let the nervous system
attenuate a bit, and then take it into the, quote unquote,
"physical world."
MARK ZUCKERBERG: Yeah, I think we'll
see experiences like that.
I mean, I also think that some of the social dynamics
around how people interact in this kind
of blended digital world will be more nuanced in other ways.
So I'm sure that there will be new anxieties that people
develop too, just like teens today need to navigate dynamics
around texting constantly that we just
didn't have when we were kids.
So I think it will help with some things.
I think that there will be new issues that hopefully we can
help people work through too.
But overall, yeah, I think it's going to be
really powerful and positive.
ANDREW HUBERMAN: Let's talk about the glasses.
MARK ZUCKERBERG: Sure.
ANDREW HUBERMAN: This was wild.
I put on the Ray-Bans--
I like the way they look.
They're clear.
They look like any other Ray-Ban glasses,
except that I could call out to the glasses.
I could just say, hey Meta, I want
to listen to the Bach variations--
the Goldberg Variations of Bach.
And Meta responded.
And no one around me could hear.
But I could hear with exquisite clarity.
And by the way, I'm not getting paid to say any of this.
I'm just still blown away by this.
Folks, I want these very badly.
I could hear, OK, I'm selecting those now--
or that music now.
And then I could hear it in the background.
But then I could still have a conversation.
So this was neither headphones in nor headphones out.
And I could say, wait, pause the music.
And it would pause.
And the best part was I didn't have to, quote unquote,
"leave the room" mentally.
I didn't even have to take out a phone.
It was all interfaced through this very local environment
in and around the head.
And as a neuroscientist, I'm fascinated by this
because, of course, all of our perceptions-- auditory,
visual, et cetera--
are occurring inside the casing of this thing we call a skull.
But maybe you could comment on the origin
of that design for you, the ideas behind that,
and where you think it could go because I'm sure
I'm just scratching the surface.
MARK ZUCKERBERG: The real product
that we want to eventually get to is
this full augmented reality product
in a stylish and comfortable normal glasses form factor.
ANDREW HUBERMAN: Not a dorky VR headset, so to speak?
MARK ZUCKERBERG: No, I mean--
ANDREW HUBERMAN: Because the VR headset does
feel kind of big on the face.
MARK ZUCKERBERG: There's going to be a place for that,
too, just like you have your laptop
and you have your workstation.
Or maybe the better analogy is you have your phone
and you have your workstation.
These AR glasses are going to be like your phone in that you
have something on your face.
And you will, I think, be able to, if you want,
wear it for a lot of the day and interact
with it very frequently.
I don't think that people are going
to be walking around the world wearing VR headsets.
ANDREW HUBERMAN: Let's hope.
MARK ZUCKERBERG: But yeah, that's certainly not the future
that I'm hoping we get to.
But I do think that there is a place for having--
because it's a bigger form factor,
it has more compute power.
So just like your workstation or your bigger computer
can do more than your phone can do,
there's a place for that when you want
to settle into an intense task.
If you have a doctor who's doing a surgery,
I would want them doing it through the headset
not through the phone equivalent or the lower powered glasses.
But just like phones are powerful enough
to do a lot of things, I think the glasses will eventually
get there, too.
Now, that said, there's a bunch of really hard technology
problems to address in order to be able to get to this point
where you can put full holograms in the world.
You're basically miniaturizing a supercomputer
and putting it into a pair of glasses so that the glasses still
look stylish and normal.
And that's a really hard technology problem.
Making things small is really hard.
A holographic display is different from what
our industry has optimized for, for 30 or 40 years now,
building screens.
There's a whole industrial process
around that, which goes into phones, and TVs, and computers,
and increasingly so many things that have different screens.
There's a whole pipeline that's gotten very good
at making that kind of screen.
And the holographic displays are just
a completely different thing because it's not a screen.
It's a thing that you can shoot light
into through a laser or some other kind of projector.
And it can place that as an object in the world.
So that's going to need to be this whole other industrial
process that gets built up to doing that in an efficient way.
So all that said, we're basically
taking two different approaches towards building this at once.
One is we are trying to keep in mind what is the long-term
thing that--
it's not super far off.
Within a few years, I think we'll
have something that's a first version of this full vision
that I'm talking about.
I mean, we have something that's working internally
that we use as a dev kit.
But that one, that's a big challenge.
It's going to be more expensive.
And it's harder to get all the pieces working.
The other approach has been, all right, let's
start with what we know we can put
into a pair of stylish sunglasses
today and just make them as smart as we can.
So for the first version, we worked with--
we did this collaboration with Ray-Ban
because that's well-accepted.
These are well-designed glasses.
They're classic.
People have used them for decades.
For the first version, we got a sensor on the front,
so you could capture moments without having to take
your phone out of your pocket.
So you got photos and videos.
You had the speaker and the microphone,
so you can listen to music.
You could communicate with it.
But that was the first version of it.
We had a lot of the basics there.
But we saw how people used it.
And we tuned it.
We made the camera twice as good for this new version
that we made.
The audio is a lot crisper for the use cases
that we saw that people actually used, which is-- some of it
is listening to music.
But a lot of it is people want to take calls on their glasses.
They want to listen to podcasts.
But the biggest thing that I think is interesting
is the ability to get AI running on it, which doesn't just
run on the glasses.
It also kind of proxies through your phone.
But I mean, with all the advances in LLMs--
we talked about this a bit in the first part
of the conversation.
Having the ability to have your Meta AI assistant
that you can just talk to and basically
ask any question throughout the day is, I think,
going to be really fascinating.
And like you were saying about how
we process the world as people, eventually, I
think you're going to want your AI
assistant to be able to see what you see and hear what you hear.
Maybe not all the time.
But you're going to want to be able to tell
it to go into a mode where it can see what you see and hear
what you hear.
And what's the device design that
best positions an AI assistant to be
able to see what you see and hear
what you hear so it can best help you?
Well, that's glasses, where it basically
has a sensor to be able to see what you see
and a microphone that is close to your ears that
can hear what you hear.
The other design goal is, like you said,
to keep you present in the world.
So I think one of the issues with phones
is they pull you away from what's physically happening
around you.
And I don't think that the next generation of computing
will do that.
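As a minimal sketch of the opt-in design Zuckerberg describes a moment earlier, the assistant below can read the camera or microphone only in a mode the wearer has explicitly selected. The mode names and the GlassesAssistant class are illustrative assumptions, not Meta's actual software:

    from enum import Enum, auto

    class AssistantMode(Enum):
        IDLE = auto()        # no sensor access
        AUDIO_ONLY = auto()  # microphone only
        FULL = auto()        # camera plus microphone, explicitly enabled by the wearer

    class GlassesAssistant:
        """Sensor access is gated on a mode the wearer sets deliberately."""

        def __init__(self) -> None:
            self.mode = AssistantMode.IDLE

        def set_mode(self, mode: AssistantMode) -> None:
            # The wearer opts in and out; nothing flips this automatically.
            self.mode = mode

        def can_see(self) -> bool:
            return self.mode is AssistantMode.FULL

        def can_hear(self) -> bool:
            return self.mode in (AssistantMode.AUDIO_ONLY, AssistantMode.FULL)

The point of the sketch is simply that "see what you see" is a state the user enters, not a default.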
ANDREW HUBERMAN: I'm chuckling to myself
because I have a friend.
He's a very well known photographer.
And he was laughing about how people go to a concert.
And everyone's filming the concert on their phone
so that they can be the person that posts the thing.
But there are literally millions of other people
who posted the exact same thing.
But somehow, it feels important to post our unique experience.
With glasses, that friction would essentially disappear.
You could just capture the moment
and worry about posting it later.
There are issues, I realize, with glasses
because they blend so seamlessly into everyday experience.
Even though you and I aren't wearing them now,
it's very common for people to wear glasses,
and that raises issues of recording and consent.
[INTERPOSING VOICES]
ANDREW HUBERMAN: Like if I go to a locker room at my gym,
I'm assuming that the people with glasses aren't filming.
Whereas right now, because there's a sharp transition when
there's a phone in the room and someone's pointing it,
people generally say, no phones or recording in locker rooms.
So that's just one instance.
I mean, there are other instances.
MARK ZUCKERBERG: We have the whole privacy light.
Did you get--
ANDREW HUBERMAN: I didn't get a chance to explore that.
MARK ZUCKERBERG: Yeah.
So anytime that the camera sensor is active,
it's basically pulsing a bright white light.
ANDREW HUBERMAN: Got it.
MARK ZUCKERBERG: Which is, by the way, more than phone cameras do.
ANDREW HUBERMAN: Right.
Someone could be holding a phone.
MARK ZUCKERBERG: Yeah.
I mean, phones aren't showing a bright light
when you're taking a photo.
ANDREW HUBERMAN: People oftentimes
will pretend they're texting and they're actually recording.
I actually saw an instance of this in a barber shop
once, where someone was recording
and they were pretending that they were texting.
And it was interesting.
There was a pretty intense interaction that ensued.
And it was like, wow, it's pretty easy for people
to feign texting while actually recording.
MARK ZUCKERBERG: Yeah.
So I think when you're evaluating
the risk of a new technology, the bar shouldn't be,
is it possible to do anything bad with it?
It should be, does this new technology make it easier
to do something bad than what people could already do?
And I think because you have this privacy light that is just
broadcasting to everyone around you, hey,
this thing is recording now--
I think that makes it actually less discreet
to do it through the glasses than what you could
do with a phone already, which I think is basically the bar
that we wanted to get over from a design perspective.
ANDREW HUBERMAN: Thank you for pointing out
that it has the privacy light.
I didn't get long enough in the experience
to explore all the features.
But again, I can think of a lot of uses:
being able to look at a restaurant from the outside
and see the menu, or get a status on how crowded it is.
And as much as I love them, I won't call out specific apps,
so let's just say app-based map functions
that let you navigate, where the audio is OK.
It's nice to have a conversation with somebody on the phone
or in the vehicle.
And it'd be great if the road were traced to show where I should turn.
MARK ZUCKERBERG: Yeah, absolutely.
ANDREW HUBERMAN: These kinds of things
seem like they're going to be straightforward for Meta
engineers to create.
MARK ZUCKERBERG: Yeah, a future version
will also have the holographic display,
where it can show you the directions.
But I think that there will basically just
be different price points that pack different amounts
of technology.
The holographic display part, I think,
is going to be more expensive than doing
one that just has the AI, but is primarily communicating
with you through audio.
So I mean, the current Ray-Ban Meta glasses are $299.
I think when we have one that has a display in it,
it'll probably be some amount more than that.
But it'll also be more powerful.
So I think that people will choose
what they want to use based on what the capabilities are
that they want and what they can afford.
But a lot of our goal in building things
is we try to make things that can be accessible to everyone.
Our game as a company isn't to build things and then charge
a premium price for them.
We try to build things that then everyone can use, and then
become more useful because a very large number of people
are using them.
So it's just a very different approach.
We're not like Apple or some of these companies that just
try to make something and then sell it for as much
as they can, which, I mean, they're a great company.
So I mean, I think that model is fine, too.
But our approach is going to be we
want stuff that can be affordable
so that way everyone in the world can use it.
ANDREW HUBERMAN: Along the lines of health,
I think the glasses will also potentially solve
a major problem in a real way,
for both children and adults.
It's very clear that viewing objects up close,
screens in particular, for too many hours per day leads to myopia.
It literally changes the length of the eyeball,
causing nearsightedness.
And on the positive side, we know,
based on some really large clinical trials,
that kids and adults who spend two hours a day or more
out of doors don't experience that
and may even reverse their myopia.
It has something to do with exposure to sunlight,
but it has a lot to do with long viewing distances,
looking at things greater than three or four feet away.
And with the glasses, I realize, one
could actually do digital work out of doors.
They could measure and tell you how much time
you've spent looking at things up close versus far away.
I mean, this is just another example that leaps to mind.
But in accessing the visual system,
you're effectively accessing the whole brain,
because the eyes are the only two bits of the brain
that sit outside the cranial vault. So it just
seems like putting technology right at the level of the eyes,
seeing what the eyes see, has
got to be the best way to go.
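As a rough illustration of the kind of measurement Huberman describes, here is a minimal sketch in Python. The gaze-log format and the one-meter near/far cutoff are assumptions for illustration, not a real glasses API:

    from dataclasses import dataclass

    # Hypothetical cutoff: roughly three feet, matching the distance discussed above.
    NEAR_THRESHOLD_M = 1.0

    @dataclass
    class GazeSample:
        timestamp_s: float        # when the sample was taken
        focal_distance_m: float   # estimated distance to the fixated object

    def near_far_summary(samples: list[GazeSample]) -> dict[str, float]:
        """Accumulate time spent viewing near versus far objects from a gaze log."""
        near_s = far_s = 0.0
        for prev, cur in zip(samples, samples[1:]):
            dt = cur.timestamp_s - prev.timestamp_s
            if prev.focal_distance_m < NEAR_THRESHOLD_M:
                near_s += dt
            else:
                far_s += dt
        return {"near_seconds": near_s, "far_seconds": far_s}

A daily report built on something like this could nudge the wearer toward the two-plus hours of long-distance viewing the trials point to.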
MARK ZUCKERBERG: Yeah.
Well, multimodal, I think, is what you want:
the visual sensation, but also text or language.
ANDREW HUBERMAN: Sure.
That all can be brought to the level of the eyes, right?
MARK ZUCKERBERG: What do you mean by that?
ANDREW HUBERMAN: Well, I mean, I think
what we're describing here is essentially
taking the phone, the computer, and bringing it
all to the level of the eyes.
And of course, one would like--
MARK ZUCKERBERG: Oh, physically at your eyes?
ANDREW HUBERMAN: Physically at your eyes, right?
MARK ZUCKERBERG: Yeah.
ANDREW HUBERMAN: And one would like more kinesthetic
information, as you mentioned before: where the legs are,
maybe even lung function.
Hey, have you taken enough steps today?
But if it can be figured out by the phone,
it can be figured out by the glasses.
But there's additional information there,
such as what are you focusing on in your world.
How much of your time is spent looking at things far away
versus up close?
How much social time did you have today?
It's really tricky to get that with a phone.
If my phone were right in front of us,
as at a standard lunch nowadays,
certainly in Silicon Valley,
and we were peering at our phones,
how much real, direct attention would be
on the conversation at hand versus something else?
You can get at where you're placing your attention
by virtue of where you're placing your eyes.
And I think that information is not accessible
with a phone in your pocket or in front of you.
Maybe a little bit, but it's not nearly as rich and complete
as the information one gets when really
pulling the data from the level of vision,
from what kids and adults are actually
looking at and attending to.
MARK ZUCKERBERG: Yeah, yeah.
ANDREW HUBERMAN: It seems extremely valuable.
You get autonomic information, size of the pupils.
So you get information about internal states.
MARK ZUCKERBERG: I mean, there are internal sensors and external ones.
The sensor on the Ray-Ban Meta glasses is external,
so it basically allows the AI system to see what you're seeing.
There's a separate set of sensors for eye tracking,
which are also very powerful for enabling
a lot of interfaces.
So if you want to just look at something
and select it by looking at it with your eyes
rather than having to drag a controller over or pick up
a hologram or anything like that,
you can do that with eye tracking.
So that's a pretty profound and cool experience too,
as is just understanding what you're looking at,
so that you're not wasting compute power
drawing pixels at high resolution
in a part of the world that's going to be
in your peripheral vision.
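That last point is the idea behind foveated rendering: spend shading effort only where the eye is pointed. A minimal sketch, assuming an illustrative falloff curve rather than any real engine's API:

    import math

    def foveated_shading_rate(angle_from_gaze_deg: float,
                              fovea_deg: float = 5.0,
                              falloff: float = 0.3) -> float:
        """Return a 0-to-1 resolution multiplier for a pixel, given its angular
        distance from the current gaze point: full resolution inside the foveal
        region, smoothly decaying toward the periphery. Constants are illustrative."""
        if angle_from_gaze_deg <= fovea_deg:
            return 1.0
        return max(0.1, math.exp(-falloff * (angle_from_gaze_deg - fovea_deg)))

A renderer would evaluate something like this per tile each frame, which is why eye tracking pays for its battery and space cost.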
So yeah, all of these things, there
are interesting design and technology trade-offs,
where if you want the external sensor, that's one thing.
If you also want the eye tracking,
now that's a different set of sensors.
Each one of these consumes compute,
which consumes battery.
They take up more space.
So it's like, where are the eye tracking sensors going to be?
Well, you want to make sure
that the rim of the glasses stays quite thin,
because there's a limit to how thick glasses
can be before they look more like goggles than glasses.
So I think that there's this whole space.
And I think people are going to end up choosing what
product makes sense for them.
Maybe they want something that's more powerful,
that has more of the sensors, but it's
going to be a little more expensive,
maybe like slightly thicker.
Or maybe you want a more basic thing
that just looks very similar to the Ray-Ban glasses
people have been wearing for decades, but has AI in it,
so you can capture moments and send them to people
without having to take your phone out.
In the latest version, we added the ability to live stream.
I think that's pretty crazy.
Going back to your concert case,
or whatever else you're doing,
you can be doing sports or watching
your kids play something,
and you can be live streaming it to your family group,
so people can see it.
I think that's pretty cool, that at this point you basically
have normal-looking glasses that can live stream
and have an AI assistant.
So this stuff is making much faster progress
in a lot of ways than I would have thought.
And I don't know.
I think people are going to like this version.
But there's a lot more still to do.
ANDREW HUBERMAN: I think it's super exciting.
And I see a lot of technologies.
This one's particularly exciting to me
because of how smooth the interface is
and for all the reasons that you just mentioned.
What's happening with, and what can we expect around,
AI interfaces and maybe even avatars
of people within social media?
Are we not far off from a day where
there are multiple versions of me
and you on the internet?
For instance, I get asked a lot of questions.
I don't have the opportunity to respond to all those questions.
But with things like ChatGPT, people
are trying to generate answers to those questions
on other platforms.
Will I soon have the opportunity
to have an AI version of myself that people
can ask about what I recommend for sleep
and circadian rhythm, fitness, mental health, et cetera,
based on content I've already generated,
and that will be accurate, so they could just ask my avatar?
MARK ZUCKERBERG: Yeah, this is something
that I think a lot of creators are going to want.
We're trying to build it,
and I think we'll probably have a version of it next year.
But there are a bunch of constraints
that I think we need to make sure we get right.
So for one, I think it's really important that--
it's not that there's a bunch of versions of you.
It's that if anyone is creating an AI assistant version of you,
it should be something that you control.
I think there are some platforms out there today
that just let people make, I don't know,
an AI bot of me or other figures.
And it's like, I don't know.
I mean, we've had platform policies
since the beginning of the company,
which is almost 20 years at this point,
that basically don't allow impersonation.
Real identity is like one of the core aspects
that our company was started on.
You want to authentically be yourself.
So yeah, if you're almost any creator,
there's just going to be more demand
to interact with you than you have hours in the day.
So there are people out there
who would benefit from being able to talk
to an AI version of you.
And I think you, and other creators,
would benefit from being able to keep your community engaged
and service that demand people have to engage with you.
But you're going to want to know that that AI version of you
or assistant is going to represent you
the way that you would want.
And there are a lot of things that
are awesome about these modern LLMs.
But having perfect predictability
about how it's going to represent something
is not one of the current strengths.
So I think that there's some work that
needs to get done there.
I don't think it needs to be 100% perfect all the time.
But you need to have very good confidence, I would say,
that it's going to represent you the way that you'd
want before you'd be willing to turn it on.
And again, you should have control over
whether it's turned on at all.
So we wanted to start in a different place, which
I think is a somewhat easier problem:
creating new characters as AI personas.
One of the AIs we built is a chef,
who can help you come up with things
that you could cook and can help you cook them.
There are a couple of personas interested
in different types of fitness that
can help you plan out your workouts,
or help with recovery, or different things like that.
There's an AI that's focused on DIY crafts.
There's somebody who's a travel expert that
can help you make travel plans or give you ideas.
But the key thing about all of these
is they're not modeled off of existing people.
So we don't have to guarantee 100% fidelity,
making sure they never say something that a real person
they were modeled after would never say,
because they're just made-up characters.
So I think that that's a somewhat easier problem.
And we actually got a bunch of different well-known people
to play those characters because we thought
that would make it more fun.
So, for example, Snoop Dogg is the dungeon master.
So you can drop him into a thread
and play text-based games.
And I do this with my daughter when I tuck her in at night.
And she just loves storytelling.
And it's like Snoop Dogg, as the dungeon master,
will come up with here's what's happening next.
And she's like, OK, I turn into a mermaid.
And then I like swim across the bay.
And I go and find the treasure chest and unlock it.
And then Snoop Dogg will always
have the next iteration of the story.
So I mean, it's stuff that's fun.
But it's not actually Snoop Dogg.
He's just the actor who's playing the dungeon master,
which makes it more fun.
So I think that's probably the right place to start,
is you can build versions of these characters
that people can interact with doing different things.
But I think where you want to get over
time is to the place where any creator or any small business
can very easily create an AI assistant that can represent
them, interact with their community or customers,
and basically help them grow their enterprise.
So I think that's going to be cool.
It's a long-term project.
I think we'll have more progress on it to report on next year.
But I think that's coming.
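A minimal sketch of the control constraint Zuckerberg emphasizes: the avatar runs only if the creator has switched it on, and answers only from material they approved. The CreatorAvatar class and the retrieve/generate stand-ins are hypothetical, not a Meta API:

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class CreatorAvatar:
        creator: str
        enabled: bool = False  # the creator, not the platform, flips this switch
        approved_sources: list[str] = field(default_factory=list)  # only their own content

        def answer(self, question: str,
                   retrieve: Callable[[str, list[str]], list[str]],
                   generate: Callable[[str, list[str]], str]) -> str:
            """Refuse unless the creator has enabled the avatar; otherwise ground
            the reply in passages retrieved from their approved material."""
            if not self.enabled:
                raise PermissionError(f"{self.creator} has not enabled their avatar")
            passages = retrieve(question, self.approved_sources)
            return generate(question, passages)

Grounding answers in the creator's own content is one plausible way to get the "represents you the way you'd want" confidence he describes, even without perfect predictability from the underlying LLM.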
ANDREW HUBERMAN: I'm super excited about it
because we hear a lot about the downsides of AI.
I mean, I think people are now coming around to the reality
that AI is neither good nor bad.
It can be used for good or bad.
And there are a lot of life-enhancing spaces
it's going to show up in and really improve:
the way we engage socially and what we learn.
Mental health and physical health
don't have to suffer and, in fact,
can be enhanced by the sorts of technologies
we've been talking about.
So I know you're extremely busy.
I so appreciate the large amount of time
you've given me today to sort through all these things.
MARK ZUCKERBERG: Yeah, it's been fun.
ANDREW HUBERMAN: And to talk with you and Priscilla
and to hear what's happening and where things are headed,
the future certainly is bright.
I share in your optimism.
And it's been only strengthened by today's conversation.
So thank you so much.
And keep doing what you're doing.
And on behalf of myself and everyone listening,
thank you because, regardless of what people say,
we all use these platforms excitedly.
And it's clear that there's a ton of intention,
care, and thought about what these platforms could be,
in the positive sense.
And that's really worth highlighting.
MARK ZUCKERBERG: Awesome, thank you.
I appreciate it.
ANDREW HUBERMAN: Thank you for joining me
for today's discussion with Mark Zuckerberg and Dr. Priscilla
Chan.
If you're learning from and/or enjoying this podcast,
please subscribe to our YouTube channel.
That's a terrific zero-cost way to support us.
In addition, please subscribe to the podcast
on both Spotify and Apple.
And on both Spotify and Apple, you
can leave us up to a five-star review.
Please also check out the sponsors
mentioned at the beginning and throughout today's episode.
That's the best way to support this podcast.
If you have questions for me, or comments about the podcast,
or guests that you'd like me to consider
hosting on the Huberman Lab podcast,
please put those in the comment section on YouTube.
I do read all the comments.
Not during today's episode, but on many previous episodes
of the Huberman Lab podcast, we discuss supplements.
While supplements aren't necessary for everybody,
many people derive tremendous benefit from them for things
like enhancing sleep, hormone support, and improving focus.
If you'd like to learn more about the supplements discussed
on the Huberman Lab podcast, you can go to Live Momentous,
spelled O-U-S,
so livemomentous.com/huberman.
If you're not already following me on social media,
it's hubermanlab on all social media platforms.
So that's Instagram, Twitter-- now called X--
Threads, Facebook, LinkedIn.
And on all those places, I discuss
science and science-related tools, some of which
overlaps with the content of the Huberman Lab podcast,
but much of which is distinct from the content
on the Huberman Lab podcast.
So again, it's hubermanlab on all social media platforms.
If you haven't already subscribed
to our monthly Neural Network Newsletter,
the Neural Network Newsletter is a completely zero-cost
newsletter that gives you podcast summaries
as well as toolkits in the form of brief PDFs.
We have toolkits related to optimizing sleep, regulating
dopamine, deliberate cold exposure,
fitness, mental health, learning, neuroplasticity,
and much more.
Again, it's completely zero-cost to sign up.
You simply go to hubermanlab.com, go over
to the Menu tab, scroll down to newsletter
and supply your email.
I should emphasize that we do not
share your email with anybody.
Thank you once again for joining me for today's discussion
with Mark Zuckerberg and Dr. Priscilla Chan.
And last but certainly not least,
thank you for your interest in science.
[MUSIC PLAYING]