Welcome to the Huberman Lab Podcast, where we discuss science and science-based tools for everyday life. I'm Andrew Huberman, and I'm a professor of neurobiology and ophthalmology at Stanford School of Medicine. My guest today is Dr. Terry Sejnowski. Dr. Terry Sejnowski is a professor at the Salk Institute for Biological Studies, where he directs the Computational Neurobiology Laboratory. As his title suggests, he is a computational neuroscientist; that is, he uses math, as well as artificial intelligence and computing methods, to understand the overarching, ultra-important question of how the brain works. Now, I realize that when people hear
terms like computational neuroscience, algorithms, large language models, and AI, it can be a bit overwhelming and even intimidating. But I assure you that the purpose of Dr. Sejnowski's work—and indeed today's discussion—is all about using those methods to clarify how the brain works and, indeed, to simplify the answer to that question. So, for instance, today you will learn that regardless of who you are or your level of experience, all your motivation in all domains of life is governed by a simple algorithm or equation. Dr. Sejnowski explains how a single rule, a single learning rule, drives all
of our motivation-related behaviors, and it, of course, relates to the neuromodulator dopamine. If you're familiar with dopamine as a term, today you will really understand how dopamine works to drive your levels of motivation, or in some cases, a lack of motivation—and how to overcome that lack of motivation. Today, we also discuss how best to learn. Dr. Sejnowski shares not just information about how the brain works, but also practical tools that he and his colleagues have developed, including a zero-cost online portal that teaches you how to learn better based on your particular learning style—the way that
you, in particular, forge and implement information. Dr. Sejnowski also explains how he himself uses a particular type of physical exercise in order to enhance his cognition, that is, his brain's ability to learn information and to come up with new ideas. Today, we also discuss both the healthy brain and the diseased brain, in conditions like Parkinson's and Alzheimer's, and how particular tools that relate to mitochondrial function can perhaps be used to treat various diseases, including Alzheimer's dementia. I'm certain that by the end of today's episode, you will have learned a tremendous amount of new knowledge about
how your brain works and practical tools that you can implement in your daily life. Before we begin, I'd like to emphasize that this podcast is separate from my teaching and research roles at Stanford. It is, however, part of my desire and effort to bring zero-cost, consumer information about science and science-related tools to the general public. In keeping with that theme, I'd like to thank the sponsors of today's podcast. Our first sponsor is BetterHelp. BetterHelp offers professional therapy with a licensed therapist, carried out completely online. I've been doing weekly therapy for well over 30 years, and
initially, I didn't have a choice; it was a condition of being allowed to stay in school. But pretty soon, I realized that therapy is an extremely important component of one’s overall health. In fact, I consider doing regular therapy just as important as getting regular exercise, including cardiovascular exercise and resistance training, which of course I also do every single week. Now, there are essentially three things that great therapy provides. First of all, it provides a good rapport with somebody that you can trust and talk to about essentially all issues that you want to. Second of all,
great therapy provides support in the form of emotional support or simply directed guidance on what to do or what not to do in given areas of your life. And third, expert therapy can provide you with useful insights that you would not have been able to arrive at on your own. BetterHelp makes it very easy to find an expert therapist with whom you really resonate, and that can provide you the benefits I just mentioned that come with effective therapy. If you'd like to try BetterHelp, go to betterhelp.com/huberman to get 10% off your first month. Again, that's
betterhelp.com/huberman. Today's episode is also brought to us by Helix Sleep. Helix Sleep makes mattresses and pillows that are customized to your unique sleep needs. Now, I've spoken many times before on this and other podcasts about the fact that getting a great night's sleep is the foundation of mental health, physical health, and performance. Now, the mattress you sleep on makes a huge difference in terms of the quality of sleep that you get each night. How soft it is, or how firm it is, how breathable it is—all play into your comfort and need to be tailored to
your unique sleep needs. If you go to the Helix website, you can take a brief two-minute quiz. It asks you questions such as, do you sleep on your back, your side, or your stomach? Do you tend to run hot or cold during the night? Things of that sort. Maybe you know the answers to those questions; maybe you don't. Either way, Helix will match you to the ideal mattress for you. For me, that turned out to be the Dusk mattress. I started sleeping on a Dusk mattress about three and a half years ago, and it's been
far and away the best sleep that I've ever had. If you'd like to try Helix, you can go to helixsleep.com/huberman, take that two-minute sleep quiz, and Helix will match you to a mattress that is customized for your unique sleep needs. For the month of November 2024, Helix is giving up to 25% off on all mattress orders and two free pillows. Again, that's helixsleep.com/huberman to get up to 25% off and two free pillows. And now for my discussion with Dr. Terry Sejnowski. Dr. Terry Sejnowski, welcome! Great to be here. We go way back, and I'm
a huge, huge fan of your work because you've worked on a great many different things in the field of neuroscience. You're considered by many a computational neuroscientist, so you bring mathematical models to an understanding of the brain and neural networks. We're also going to talk about AI today, and we're going to make it accessible for everybody—biologist or no math background. To kick things off, I want to understand something. I understand a bit about the parts list of the brain, and most listeners of this podcast will understand a little bit of the parts list of the
brain, even if they've never heard an episode of this podcast before, because they understand there are cells. Those cells are neurons, and those neurons connect to one another in very specific ways that allow us to see, hear, think, etc. But I've come to the belief that even if we know the parts list, it doesn't really inform us how the brain works. Right? This is the big question: how does the brain work? What is consciousness? All of this stuff. So, where and how does an understanding of how neurons talk to one another start to give us
a real understanding about how the brain works? What is this piece of meat in our heads? Because it can't just be, okay, the hippocampus remembers stuff, and the visual cortex perceives stuff. When you sit back and you remove the math from the mental conversation, if that's possible for you, how do you think about "how the brain works"? At a very basic level, what is this piece of meat in our heads really trying to accomplish? From, let's just say, the time when we first wake up in the morning and we're a little groggy, till we make
it to that first cup of coffee or water, or maybe even just to urinate first thing in the morning, what is going on in there? What a great question! You know, Pat Churchland and I wrote a book called "The Computational Brain." In it, there's this levels diagram—levels of investigation at different spatial scales—from molecular at the very bottom to synapses and neurons, to circuits (neural circuits, how they're connected with each other), and then brain areas in the cortex, and then the whole central nervous system, spanning 10 orders of magnitude, you know, 10 to the 10th in spatial scale. So, where is consciousness in all of that? There are two approaches that neuroscientists have taken—I shouldn't just say "neuroscientists"; I should say that scientists have taken. The first one, which you describe, is: you know, let's look at all the parts. That's the bottom-up approach—a reductionist approach. You make a lot of progress; you can figure out how things are connected and understand how development works, how neurons connect. But it's very difficult to really make progress because you quickly get lost in the forest. Now, the other approach, which has been
successful but in the end unsatisfying, is the top-down approach. This is the approach that psychologists have taken, looking at behavior and trying to understand the laws of behavior. These are the behaviorists. Even people in AI were trying a top-down approach: to write programs that could replicate human behavior—intelligent behavior. I have to say, both of those approaches—bottom-up or top-down—have really not gotten to the core of answering any of those big questions. However, there's a whole new approach now that is emerging in both neuroscience and AI at exactly the same time in this moment
in history. It's really quite remarkable. There’s an intermediate level between the implementation level at the bottom, how you implement some particular mechanism, and the actual behavior of the whole system. This is called the algorithmic level; it’s in between. Algorithms are like recipes. They're like, you know, when you bake a cake. You have to have ingredients, and you have to say the order in which they’re put together and how long—that kind of thing. If you get it wrong, it doesn’t work; it’s just a mess. Now, it turns out that we’re discovering algorithms, and we’ve made a
lot of progress with understanding the algorithms that are used in neural circuits. This speaks to the computational level of how to understand the function of a neural circuit. I'm going to give you one example of an algorithm, which we worked on back in the 1990s when Peter Dayan and Read Montague were postdocs in the lab. It had to do with a part of the brain below the cortex called the basal ganglia, which is responsible for learning sequences of actions to achieve some goal. For example, if you want to play tennis, you have to be able
to coordinate many muscles, and a whole sequence of actions has to be made if you want to serve accurately. You have to practice, practice, practice. Well, what's going on there is that the basal ganglia basically takes over from the cortex and produces actions that get better and better and better. And that's true not just of the muscles; it's also true of thinking. If you want to become good at anything—in any area—if you want to become a good, uh, finance professional, or if you want to become a good doctor or a neuroscientist, right, you have
to be practicing—practicing—in terms of understanding what, uh, you know, what the details of the profession are and what works, what doesn't work, and so forth. It turns out that the basal ganglia interacts with the cortex, not just in the back, which is the action part, but also with the prefrontal cortex, which is the thinking part. Can I ask you a question about this briefly? The basal ganglia, as I understand, are involved in the, um, organization of two major types of behaviors: “go,” meaning to actually perform a behavior, and “no-go,” which instructs you not to engage
in that behavior. Learning an expert golf swing or even a basic golf swing or a tennis racket swing involves both of those things: go and no-go. Given what you just said, which is that the basal ganglia are also involved in generating thoughts of particular kinds, I wonder, therefore, if it's also involved in the suppression of thoughts of particular kinds. I mean, you don't want your surgeon cutting into, um, you know, a particular region and just thinking about their motor behaviors: what to do and what not to do. They presumably need to think about what to
think about, but also what not to think about. You don't want that surgeon thinking about how their kid was a brat that morning and how, um, frustrated they are, because the two things interact. So, is there a go-no-go in terms of action and learning, and is there a go-no-go in terms of thinking? Well, I mentioned the prefrontal cortex, and that part—the loop with the basal ganglia—is one of the last to mature, in, uh, you know, early adulthood. And you know what the problem is for adolescents: the no-go part for, you know, planning and actions isn't quite there yet, and so often it doesn't kick in to prevent you from doing things that are not in your best interest. So, yes, absolutely right. But one of the things, though, is that learning is involved, and this is really a problem that we cracked first theoretically in the 90s and then experimentally later, by recording from neurons and also with brain imaging in humans. So, it turns out we know the algorithm that is used in the brain for how to learn sequences of actions to achieve a goal, and it's the simplest possible algorithm you
can imagine: it's simply to predict the next reward you're going to get. If I do an action well, will it give me something of value? And, uh, you learn every time you try something—whether you got the amount of reward you expected or less—you use that to update the synapses (synaptic plasticity) so that the next time you'll have a better chance of getting a better reward. You build up what's called a value function, so the cortex, now over your lifetime, is building up a lot of knowledge about, you know, things that are good for you, things
that are bad for you. Like, you go to a restaurant, you order something—how do you know what's good for you? Right? You've had lots of meals in a lot of places, and now that is part of your value function. This is the same algorithm that was used by AlphaGo—this is the program that DeepMind built. This is an AI program that beat the world Go champion, and Go is the most complex game that humans have ever played on a regular basis—far more complex than chess, as I understand. Yeah, that's right. So, Go is to chess what
chess is to something like checkers. In other words, the level of difficulty is another step up above it, because you have to think in terms of battles going on all over the place at the same time, and the order in which you put the pieces down is going to affect what's going to happen in the future. So, this value function is super interesting, and I wonder whether—and I think I might know the answer to this—but I wonder whether this value function is implemented over long periods of time. So, you talked about the value function in
terms of learning a motor skill—let's say swinging a tennis racket to do a perfect tennis serve—or even just a decent tennis serve. When somebody goes back to the court, let's say on the weekend, once a month, over the course of years, are they able to tap into that same value function every time they go back, even though there's been a lot of intervening time and learning? That's question number one. And then the other question is: do you think that this value function is also being played out in more complex scenarios, not just motor learning? Such
as, let's say, a domain of life that for many people involves some, um, trial and error. It would be like human relationships. We learn how to be friends with people; we learn how to be a good sibling; we learn how to be a good romantic partner. Right? We get some things right; we get some things wrong. So, is the same value function being implemented? We're paying attention to what was rewarding, but what I didn't hear you say also was what was punishing. So, are we only paying attention to what is rewarding, or are we also
integrating punishment? We don't get an electric shock when we get the serve wrong, but we can be frustrated. You've identified, um, a very, uh, important feature, which is that rewards—by the way, you know, every time you do something, you're updating this value function, and it accumulates. And to your first question, the answer is that it's always going to be there. It doesn't matter; it's a very permanent part of your experience and who you are. And, uh, interestingly, the behaviorists knew this back in the 1950s: that, uh, you can get there two ways. One is through trial and error—you know, small rewards are good because you're constantly coming closer and closer to, uh, what you're seeking, whether that's being a better tennis player or being able to make a friend. But negative punishment is much more effective: one-trial learning. You don't need to have, you know, 100 trials like you need when you're training a rat to do some task with small food rewards. If you just shock the rat, boy, that rat doesn't forget that. Yeah, one really bad relationship will have you learning certain
things forever. And this is also PTSD, post-traumatic stress disorder, is another good example of that. That can screw you up for the rest of your life. So, but the other thing—and you pointed out something really important—which is that a large part of the prefrontal cortex is devoted to social interaction. And this is how humans—you know, when you come into the world, you don't know what language you're going to be speaking. You don't know what the cultural values are that you're going to have to be able to become a member of this society and things that
are expected of you. All of that has to come through experience, through building this value function. So, this is—and this is something we discovered in the 20th century—and now it's going into AI. It's called reinforcement learning in AI. It's a form of procedural learning as opposed to the cognitive level, where you think and you do things. Cognitive thinking is much less efficient, uh, because you have to go step by step; with procedural learning, uh, it's automatic. Can you give me an example of procedural learning in the context of, um, a comparison to cognitive learning? Like,
is there an example of perhaps how to make a decent cup of coffee using, uh, you know, purely knowledge-based learning versus procedural learning, where procedural learning wins? And I can imagine one, but you're the true expert here. Well, you know a lot of examples, but, uh—since we've been talking about tennis, can you imagine learning how to play tennis through a book? Reading a book? That's so funny! On the plane back from Nashville yesterday, the guy sitting across the aisle from me was reading a book about—um, maybe he was working on his pilot's license or something. And I looked over and couldn't help but notice these diagrams of the plane flying, and I thought, I'm just so glad that this guy is a passenger and not a pilot. And then I thought about how pilots learn, and presumably it's a combination of practical learning and textbook learning. For me—when you scuba dive, this is true: I'm scuba certified, and when you get your certification, you learn your dive tables and you learn why you have to wait between dives, etc., and gas exchange and a number of things.
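The reward-prediction learning rule Dr. Sejnowski described earlier—compare the reward you actually got with the reward you expected, and use that error to update your value estimate—can be sketched in a few lines of Python. This is a minimal illustration of temporal-difference (TD) learning under our own assumptions; the states, rewards, and parameter values are illustrative, not from the episode.

```python
# Minimal temporal-difference (TD(0)) sketch of the reward-prediction rule:
# nudge a value estimate by the difference between the reward received and
# the reward expected. States, rewards, and rates here are illustrative.

def td_update(value, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) step: compute the reward-prediction error and update."""
    prediction_error = reward + gamma * value[next_state] - value[state]
    value[state] += alpha * prediction_error  # "updating the synapses"
    return prediction_error

# Toy world: practicing a serve ("serve" -> "point") yields reward 1.
value = {"serve": 0.0, "point": 0.0}
for _ in range(100):  # practice, practice, practice
    error = td_update(value, "serve", "point", reward=1.0)

print(round(value["serve"], 2))  # -> 1.0: the expected reward has been learned
```

Note how the prediction error shrinks with practice: early trials produce large updates, and once the value estimate matches the reward actually delivered, the error approaches zero—the signature attributed to dopamine neurons later in this discussion.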
But there's really no way to simulate what it is to take your mask off underwater, put it back on, and then, you know, blow the water out of your mask like that. You just have to do that in a pool, and you actually have to do it when you need to for it to really get drilled in. Yes, uh, it's really essential for things that have to be executed quickly and expertly to get that, you know, really down pat so you don't have to think. And this happens in school, right? In other words,
you have classroom lessons where you're given explicit instruction, but then you go do homework—that's procedural learning. You do problems; you solve problems. And, you know, I'm a PhD physicist, so I went through all of the classes, you know, in theoretical physics, and, um, it was really the problems that were really the core of becoming a good physicist. You know, you can memorize the equations, but that doesn't mean you understand how to use the equations. I think it's worth highlighting something: a lot of times on this podcast we talk about what I call protocols. It would
be, you know, like get some morning sunlight in your eyes to stimulate your suprachiasmatic nucleus by way of your retinal ganglion cells. Audiences of this podcast will recognize those terms. It's basically get sunlight in your eyes in the morning and set your circadian clock. That's right, and you can hear that a trillion times. But I do believe that there's some value to both knowing what the protocol is and the underlying mechanisms. There are these things in your eye that, you know, encode the sunrise qualities of light, etc., and then send them to your brain, etc.,
etc. But then once we link knowledge—pure knowledge—to a practice, I do believe that the two things merge someplace in a way that, um, let's say reinforces both the knowledge and the practice, right? So, these things are not necessarily separate; they bridge. In other words, doing your theoretical physics, uh, problem sets reinforces the examples that you learned in lecture and in your textbooks, and vice versa. So this is a battle that's going on right now in schools. Uh, you know, what you've just said is absolutely right; you need both. We have two major learning systems: we
have a cognitive learning system, which is cortical, and we have a procedural learning system, which is subcortical, basal ganglia. The two go hand in hand if you want to become good at anything; the two are going to help each other. What's going on right now in schools in California, at least, is that they're trying to get rid of the procedural. That's ridiculous! They don't want students to practice because it's going to be, uh, you know, you're stressing them; they don't want them to feel that, you know, that they're having difficulty. But for those listening, I'm
covering my eyes because, I mean, this would be like saying—um, goodness, there's so many examples—like here's a textbook on swimming, and then you're going to go out to the ocean someday and you will have never actually swum, right? And now you're expected to be able to survive, let alone swim well. It's crazy; it's crazy. But I'll tell you, Barbara Oakley and I have a massive open online course on learning how to learn, and it helps students. We aimed it at students, but it actually has been taken by four million people in 200 countries, ages 10
to 90. What is this called? Learning How to Learn. Is there a paywall? No, it's completely free. Amazing! And, you know, I get incredible feedback—fabulous letters—almost every day. Well, you're about to get a few more! Okay, I did an episode on Learning How to Learn, and my understanding of the research is that we need to test ourselves on the material. The testing is not just a form of evaluation; it is a form of identifying the errors that help us then compensate for the errors and learn. But it's very procedural; it's not about just listening and
regurgitating. You know, you put your finger on it, which is that—and this is what we teach the students—is that the way the brain works, right, is not that it memorizes things like a computer. You have to have active learning; you have to actively engage. In fact, when you're trying to solve a problem on your own, right, this is where you're really learning by trial and error. That's the procedural system. If someone tells you what the right answer is, you know, that's just something that gets stored away somewhere, but it's not going to automatically come up
if you are faced with something that's not exactly the same problem but is similar. And by the way, this is the key to AI—completely essential for the recent success of these large language models, you know, that the public is now beginning to use. They're not parrots; they're not just memorizing what they've taken in. They have to generalize, which means to be able to do well on new things that come in that are similar to the old things that you've seen, but allow you to solve new problems. That's the key to the brain: the brain is
really, really good at generalizing. In fact, in many cases, you only need one example to generalize. Like going to a restaurant for the first time: there are a number of new interactions, right? There might be a host or a hostess, you sit down at these tables you've never sat at, somebody asks you questions, you read a menu—okay, maybe it's a QR code these days—but, right? Forever after, you understand the process of going into a restaurant. It doesn't matter what the genre of food happens to be or what city you're sitting in; you can pretty much work it out: sit at the counter, sit outside, sit at the table. There are a number of key action steps that I think pretty much translate everywhere, unless you go to some super high-end thing or some super low-end thing where it's a buffet or whatever. You know, you can start to fill in the blanks here. If I understand correctly, there's an action function that's learned from the knowledge and the experience. Exactly! And then where is that action function stored? Is it in one location in the brain, or is it kind of an emergent property
of multiple brain areas? So, you’re right at the cusp here of where we are in neuroscience right now; we don't know the answer to that question. In the past, it had been thought that, you know, the cortex had regions, like countries, each of which was dedicated to one function. Right? You know, and interestingly, you can record from neurons, and it certainly looks that way. In other words, there's a visual cortex in the back, and there's a whole series of areas, then there's the auditory cortex here in the middle, and then the prefrontal cortex for social
interaction. So it looked really clear-cut that it's modular. Now, what we're facing is that we have a new way to record from neurons optically; we can record from tens of thousands of neurons across dozens of areas simultaneously. What we're discovering is that in any task, you're engaging not just the area that you might think—you know, the one the input is coming into; the visual system is getting input from the motor system, right? In fact, you know, there's more input coming from the motor system than from the eye. Really? Yes, yeah, and Churchland at UCLA has shown that in the mouse. So now we're looking at global interactions between all these areas, and that's where real complex cognitive behaviors emerge: from those interactions. Now we have the tools, for the first time, to actually be able to see them in real time, and we're doing that now, first in mice and monkeys, but we can now do this in humans. So I've been collaborating with a group at Mass General Hospital to record from people with epilepsy. Patients who are drug-resistant have to have an operation to find out where it starts
in the cortex—where it is initiated, where the seizure starts. To find that, you have to record simultaneously from a lot of parts of the cortex for weeks until you find out where it is. After that, you go in and try to take it out, and often that helps. It's very, very invasive, but for two weeks, we have access to all those neurons in the cortex that are being recorded from constantly. I started out because I was interested in sleep, and I wanted to understand what happens in the cortex of a human
during sleep. But then we realized that you can also help people who have these debilitating problems with seizures. They’re there for two weeks and have nothing to do, so they love the fact that scientists are interested in helping them, teaching them things, and finding out where in the cortex things are happening when they learn something. This is a gold mine; it's unbelievable. I've learned things from humans that I could have never gotten from any other species. Amazing! Obviously, language is one of them, but there are other things in sleep that we've discovered, having to do
with traveling waves. There are circular traveling waves that go on during sleep, which is astonishing. Nobody ever really saw that before. But if you were to ascribe one or two major functions to these traveling waves, what do you think they are accomplishing for us in sleep? And by the way, are they associated with deep sleep—slow-wave sleep—or with rapid eye movement sleep? This is non-REM sleep, during intermediate transition states. Okay. Our audience will probably... they've heard a lot about slow-wave sleep from me and Matt Walker, and about rapid eye movement sleep.
So what do these traveling waves accomplish for us? Okay—so in this case, they're called sleep spindles. The waves last for about a second or two, and they travel, like I say, in a circle around the cortex. It's known that these spindles are important for consolidating experiences you've had during the day into your long-term memory storage. So it's a very important function. It's the hippocampus that is replaying the experiences—a part of the brain that is very important for long-term memory. If you don't have a hippocampus, you can't learn
new things. That is to say, you can’t remember what you did yesterday or, for that matter, even an hour earlier. But the hippocampus plays back your experiences, causing the sleep spindles now to knead that into the cortex. It’s important you do that right because you don't want to overwrite the existing knowledge you have. You just want to basically incorporate the new experience into your existing knowledge base in an efficient way that doesn’t interfere with what you already know. So that's an example of a very important function that these traveling waves have. I'd like to take
a quick break and acknowledge our sponsor, AG1. AG1 is a vitamin, mineral, probiotic drink that includes prebiotics and adaptogens. I've been drinking AG1 since 2012, and I started doing it at a time when my budget was really limited. In fact, I only had enough money to purchase one supplement, and I'm so glad that I made that supplement AG1. The reason for that is, even though I strive to eat whole foods and unprocessed foods, it's very difficult to get enough vitamins and minerals, micronutrients, and adaptogens from diet alone in order to make sure that I'm at
my best—meaning having enough energy for all the activities I participate in from morning until night, sleeping well at night, and keeping my immune system strong. When I take AG1 daily, I find that all aspects of my health—my physical health, my mental health, my performance, recovery from exercise—all of those improve. I know that because I've had lapses when I didn't take my AG1, and I certainly felt the difference. I also noticed, and this makes perfect sense given the relationship between the gut microbiome and the brain, that when I regularly take AG1, I have more mental clarity
and more mental energy. If you'd like to try AG1, you can go to drinkAG1.com/huberman to claim a special offer. For this month only, November 2024, AG1 is giving away a free one-month supply of omega-3 fatty acids from fish oil, in addition to their usual welcome kit of five free travel packs and a year's supply of vitamin D3 and K2. As I've discussed many times before on this podcast, omega-3 fatty acids are critical for brain health, mood, cognition, and more. Again, go to drinkAG1.com/huberman to claim this special offer. Today's episode is also brought to us by David.
David makes a protein bar unlike any other. It has 28 grams of protein, only 150 calories, and 0 grams of sugar. That's right: 28 grams of protein, and 75% of its calories come from protein. These bars from David also taste amazing! My favorite flavor is chocolate chip cookie dough, but then again, I also like the chocolate fudge flavored one, and I also like the cake flavored one. Basically, I like all the flavors; they're incredibly delicious. For me personally, I strive to eat mostly whole foods. However, when I'm in a rush, away from home, or just
looking for a quick afternoon snack, I often find that I'm looking for a high-quality protein source. With David, I'm able to get 28 grams of protein with the calories of a snack, which makes it very easy to hit my protein goals of 1 gram of protein per pound of body weight each day. It allows me to do that without taking in excess calories. I typically eat a David bar in the early afternoon, or even mid-afternoon, if I want to bridge that gap between lunch and dinner. I like that it's a little bit sweet, so it
tastes like a treat, but it also gives me that 28 grams of very high-quality protein with just 150 calories. If you would like to try David, you can go to davidprotein.com/huberman. Again, the link is davidprotein.com/huberman. As I recall, there are one or two things that one can do in order to ensure that one gets sufficient sleep spindles at night and thereby incorporate new knowledge. This was from the episode that we did with Gina Poe from UCLA, I believe, and others, including Matt Walker. My recollection is that the number one thing is to make
sure you get enough sleep at night so you experience enough of these spindles. We're all familiar with the cognitive challenges, including memory challenges and learning challenges, associated with lack of sleep—insufficient sleep. But the other was that there was some interesting relationship between daytime exercise and nighttime prevalence of sleep spindles. Are you familiar with that literature? Oh yes, no, this is a fascinating literature, and it's all pointing in the same direction, which is that we always neglect to appreciate the importance of sleep. I mean, obviously, you're refreshed when you wake up, but a lot of things
happen. It's not that your brain turns off; it's that it goes into a completely different state, and memory consolidation is just one of those things that happen when you fall asleep. Of course, you know there's dreaming and so forth. We don't fully appreciate or understand exactly how all the different sleep stages work together, but exercise is a particularly important part of getting the motor system tuned up, and it’s thought that REM (rapid eye movement) sleep may be involved in that. So that’s yet another part of the sleep stages you go through: you go back and
forth between dream sleep and slow-wave sleep, back and forth, back and forth during the night. And as the night goes on, you get more and more REM, so when you wake up, you're in the REM stage. But you know, that's all observation. As a scientist, what you want to do is perturb the system and see if maybe, if you had more sleep spindles, you'd be able to remember things better. So it turns out Sarah Mednick, who was at UC Irvine, did this fantastic experiment. It turns out there’s a drug called zolpidem, which goes by the brand name Ambien. You may
have some experience with that. I don't, but I'm aware of what people use it as—a sleep aid. That’s right; a lot of people take it in order to sleep. Okay, well, it turns out that it causes more sleep spindles. Really? Yeah, it doubles the number of sleep spindles if you take the drug. You take the drug after you've done the learning: you do the learning at night, and then you take the drug, and you have twice as many spindles. You wake up in the morning, and you can remember twice as much of what you learned, and
the memories are stable over time. It’s like it’s in there. Yes, yeah, no, it consolidates it. I mean, that’s the point. What’s the downside of Ambien? Okay, here’s the downside: people who take the drug—say, if you're flying to Europe and you take it and then sleep really soundly—often find themselves in the hotel room with completely no clue, no memory of how they got there. I’ve had that experience without Ambien or any other drugs, where I am very badly jetlagged, and I wake up. For a few seconds, what feels
like an eternity, I have no idea where I am. Okay, it’s terrifying. Well, that’s another problem you have with jet lag. Jet lag really screws things up, but with Ambien it could be an hour: you know you took the train, or you took a taxi or something, and now you're here. This seems crazy: how could the same drug improve learning and recall on the one hand and cause forgetfulness on the other? Well, it turns out what's important is when you take the drug. In other words, it helps consolidate the experiences you had in the past, before you took the drug, but it will wipe out experiences you have in the future, after you take the drug. Right? Yeah, you—sorry, I'm not laughing. It must be a terrifying experience, but I'm laughing because you know there's some beautiful pharmacology and indeed some wonderfully useful pharmaceuticals out there. Some people may cringe to hear me say that, but there are some very useful drugs out there that save lives and help people deal with symptoms, etc. Side effects are always a concern, but this particular drug profile, Ambien, seems to reveal
something perhaps even more important than the discussion about spindles, or Ambien, or even sleep, which is that you've got to pay the piper somehow, as they say. That's right, that you tweak one thing in the brain, and something else goes. You don’t get anything for free. I think that this is something that is true not just of drugs for the brain but steroids for the body, right? Sure, yeah. I mean, steroids—even low-dose testosterone therapy, which is very popular nowadays—will give people more vigor, etc., but it is introducing a sort of second puberty. Puberty is perhaps
the most rapid phase of aging in the entire lifespan. People who take growth hormone would probably be a better example: certainly those therapies can be beneficial, and growth hormone gives people more vigor, but it accelerates aging. Look at the quality of skin that people have when they take growth hormone: it looks more aged. They physically change. I'm not for or against these things; it’s highly individual. But I completely agree with you. I would also venture that with the growing interest in so-called nootropics, and people taking things like modafinil—not
just for narcolepsy or daytime sleepiness but also to enhance cognitive function—okay, maybe they can get away with doing that every once in a while for a deadline task or something, but my experience is that people who obsess over the use of pharmacology to achieve certain brain states pay in some other way. Absolutely. Whether it's stimulants, or sedatives, or sleep drugs—behaviors will always prevail. Behaviors will always prevail as tools. Yeah, and one of the things about the way the body evolved is that it really has to balance a lot of things, and so
with drugs, you’re basically unbalancing it somehow, and the consequence is, as you point out, that in order to make one part better, you sacrifice something else somewhere else. As long as we're talking about brain states and connectivity across areas, I want to ask a particular question, then I want to return to this issue about how best to learn, especially in kids, but also in adulthood. I've become very interested in and spent a lot of time with the literature and some guests on the topic of psychedelics. Let’s leave the discussion about LSD aside because—do you know
why there aren't many studies of LSD? This is kind of a fun one. No one is expected to know. Against the law, I think. Oh, but there’s so much with psilocybin and MDMA, and there are lots of studies going on about those—yeah, that's changed. But when I was growing up, you know, as you know, it was against the law. So, what I learned is that there are far fewer clinical trials exploring the use of LSD as a therapeutic because, with the exception of Switzerland, none of the researchers are willing to stay in the laboratory as long as
it takes for the subject to get through an LSD journey, whereas psilocybin tends to be a shorter experience. Okay, let’s talk about psilocybin for a moment. My read of the data on psilocybin is that it’s still open to question, but that some of the clinical trials show pretty significant recovery from major depression. It’s pretty impressive, but if we just set that aside and say, okay, more needs to be worked out for safety, what is very clear from the brain imaging studies that look at before and after, resting state, task-related, etc., is that you get more
resting state global connectivity—more areas talking to more areas than was the case prior to the use of the psychedelic. Given the similarity of the psychedelic journey—and here I'm specifically talking about psilocybin—to things like rapid eye movement sleep and things of that sort, I have a very simple question: do you think that there's any real benefit to increasing brain-wide connectivity? To me, it seems a little bit haphazard, and yet the clinical data are promising—if nothing else, promising. Is what we’re seeking in life, as we acquire new knowledge, as we learn tennis or golf or take up
singing or what have you, as we go from childhood into the late stages of our life, that whole transition—are we increasing connectivity and communication between different brain areas? Is that what the human experience is really about, or is it that we’re getting more modular? Are we getting more segregated in terms of this area talking to that area in this particular way? Um, feel free to explore this in any way that feels meaningful or to say "pass" if it's not a good question. No, it's a great question. I mean, you have all these great questions, and
we don't have complete answers yet, but, uh, specifically with regard to connectivity, um, if you look at what happens in an infant's brain during the first two years, there's a tremendous amount of new synapses being formed. This is your area, by the way; you know more about this than I do. Yeah, that's true. But then you prune them, right? The second phase is that you have an overabundance of synapses, and now what you want to do is prune them. Why would you want to do that? Well, you know, synapses are expensive. It takes a lot
of, uh, energy to activate all of the neurons and the synapses, especially because there's the turnover of the neurotransmitter. What you want to do is reduce the amount of energy and only use those synapses that have proven to be the most important. Now, unfortunately, as you get older, you find that the pruning slows down but doesn't go away, so the cortex thins and so forth. So, I think it goes in the opposite direction. I think that as you get older, you lose connectivity, but interestingly, you retain the old memories. The old memories
are really rock solid because they were put in when you were young. Yeah, the foundation—the foundation upon which everything else is built. But it's not totally one way in the sense that even as an adult, as you know, you can learn new things, maybe not as quickly. By the way, uh, this is one of the things that surprised me. So Barbara and I have, you know, looked at the people who really benefited the most. It turns out that the peak of the demographic is 25 to 35. Barbara Oakley—yeah, she's really the mastermind. She's a fabulous
educator and has a background in engineering. But what's going on? So, it turns out we aimed our MOOC at kids in high school and college because that's their business. They go in every day, and they have to learn, right? That's their business. But, in fact, very few of those students were actually taking the course. Why should they? They spend all day in class, right? Why would they want to take another class? So, this is your, um, learning to learn class. Learning How to Learn.
Okay, so you did this with Barbara? So we did this. I did it with Barbara. And now, 25 to 35, we have this huge peak—huge. So what's going on? Here's what's going on; it's very interesting. So, you're 25, you've gone to college. Half the people, by the way, who take the course went to college, right? So it's not like, you know, filling in for college. This is like topping it off. But you're in a workforce; you have to learn new skills. Maybe you have a mortgage; maybe you have children, right? You can't afford to go
off and take a course or get another degree, so you take a MOOC, and you discover, you know, I'm not quite as agile as I used to be in terms of learning. But it turns out with our course you can boost your learning. So that, even though your brain isn't learning as quickly, you can do it more efficiently. This is amazing! I want to take this course. Um, I will take this course. What, um, what sort of time commitment is the course? You already pointed out that it's zero cost, which is amazing. Yeah, yeah, okay.
So, it's bite-sized videos lasting about 10 minutes each, and there are about 50 or 60 over the course of one month. And are you tested? Are you self-tested? Yeah, there are tests, there are quizzes, there are tests at the end, and there are forums where you can go and talk to other students if you have questions. We have TAs, too. Anyone can do this—anyone in the world! In fact, we have people in India, housewives who say thank you, thank you, thank you, because I could have never learned about how to be a better learner, and
I wish I had known this when I was going to school. Why do more people not know about this learning to learn course? Although, as people know, if I get really excited about something, I'm never going to shut up about it—but I'm going to take the course first because I want to understand the guts of it. You'll enjoy it. Uh, we have like 98% approval, which is phenomenal. It's sticky. Is it math? Vocabulary? No math, no vocabulary. We're not teaching anything specific. We're not trying to give you knowledge; we're trying to tell
you how to acquire knowledge. How to deal with exam anxiety, for example. Or, you know, we all procrastinate, right? We put things off. Well—no, I'm kidding, we all procrastinate. We teach you how to avoid that. Fantastic! Okay, I'm going to skip back a little bit now with the intention of double-clicking on this learning to learn thing. You pointed out that, in California in particular but elsewhere as well, um, there isn't as much procedural, practice-based learning anymore. Um, I'm going to play devil's advocate here,
uh, and I'm going to point out that this is not what I actually believe, but you know, when I was growing up, you had to do your times tables and your division, and then your fractions and your exponents, and they build on one another. Um, and then at some point you take courses where you might need something like a graphing calculator. Some people are going to be like, “What? What is this?” But the point being that there were a number of things that you had to learn to
implement functions, and you learn by doing. You learn by doing. Um, likewise, in physics class, we were attaching things to strings for mechanics and learning that stuff, and also learning from the chalkboard lectures. I can see the value of both, certainly, and you explained that the brain needs both to really understand: knowledge and how to implement it, back and forth. But nowadays you’ll hear the argument, “Well, why should somebody learn how to read a paper map unless it's the only thing available?” Because you have Google Maps, or if they want to
do a calculation, they just put it into the top bar function on the internet, and boom, out comes the answer. So there is a world where certain skills are no longer required, and one could argue that the brain space, activity, time, and energy in particular could be devoted to learning new forms of knowledge that are going to be more practical in the school and workforce going forward. So how do we reconcile these things? I mean, I'm of the belief that the brain is doing math, and you and I agree it's electrical signals and chemical signals,
and it's doing math, and it's running algorithms. I think you convinced us of that, um, certainly. But how are we to discern what we need to learn versus what we don't need to learn in terms of building a brain that's capable of learning the maximum number of things, or even enough things, so that we can go into this very uncertain future? Because, as far as you know and I know, neither of us has a crystal ball. So what is essential to learn? And for those of us that didn't learn certain things in our formal education,
what should we learn? How to learn? Well, uh, this is generational. Okay, so technologies provide us with tools. You mentioned the calculator, right? Uh, well, a calculator didn't eliminate, uh, you know, the education you need to get in math, but it made certain things easier. It made it possible for you to do more things and more accurately. However, interestingly, uh, students in my class often, uh, come up with answers that are off by, you know, eight orders of magnitude. That’s a huge amount, right? It's clear that they didn't key in the calculator properly, but they
didn't recognize that the answer was completely off the beam, because they didn't have a good feeling for the numbers. They don't have a good sense of roughly how big it should have been—a basic order-of-magnitude understanding. So there's a benefit in that you can do things faster and better, but then you also lose some of your intuition if you don't have the procedural system in place. I'm thinking about a kid that wants to be a musician who uses AI to write a song about a bad breakup
that then is kind of recovered when they find new love. I'm guessing that you could do this today and get a pretty good song out of AI, but would you call that kid a songwriter or a musician? On the face of it, yeah, the AI is helping, and then you'd say, “Well, that's not the same as sitting down with a guitar and trying out different chords, and feeling the intonation in their voice.” But I'm guessing that when people first picked up the electric guitar, the people on the acoustic guitar were criticizing them, you know? So we
have this generational thing where we look back and say, “That’s not the real thing; you need to get the…” So what the key fundamentals are is really a critical question. Okay, so I'm going to come back to that, because the way you put it at the beginning had to do with how your brain is allocating resources. Okay, so when you're younger, you can take in things; your brain is more malleable. For example, uh, how good are you on social media? Well, I do all my own Instagram and Twitter, and
those accounts have grown in proportion to the amount of time I've been doing it. So yeah, I would say pretty good. I mean, I'm not the biggest account on social media, but for a science health account, we’re doing okay, um, thanks to the audience. Well, well, well, this speaks well for the fact that you've managed to break, you know, to go beyond the generation gap. Because I can type with my thumbs, Terry! Okay, there you go. That's a manual skill that you learn—a new phenomenon in human evolution. I couldn't believe it; I saw people doing
that, and now I can do it too. But the thing is that if you learn how to do that early in life, you're much better at it. You can move your thumbs much more quickly. Also, you can have many more, you know, tweets go out—what are they called now? They're not called tweets on X. I think they still call them tweets because it's hard to verb the letter X. Elon didn’t think of that one. I like X because it's cool; it's kind of punk, and it's got a black, uh, kind of
format, and it fits with kind of the, you know, engineer-like black X, and this kind of thing. But yeah, we'll still call them tweets. Well, okay, we'll call them tweets. Okay, that's good. But you know, I walk across campus and I see everybody—like half the people are tweeting, or you know, they're doing something with their cell phones. I mean, it's unbelievable. You have beautiful sunsets at the Salk Institute; we'll put a link to one of them. I mean, it is truly spectacular, awe-inspiring to see a sunset at the Salk
Institute. Every day is different, and everyone's on their phones these days. It's sad; they're looking down at their phones as they walk along—even people who are skateboarding! Unbelievable. I mean, you know, it's amazing what the human being can do when they learn to get into something. But what happens is the younger generation picks up whatever technology it is, and the brain gets really good at it. You can pick it up later, but you know, you're not quite as agile, not quite as, uh, maybe obsessive. It fatigues me. I will point this out: doing anything
on my phone feels fatiguing in a way that reading a paper book, or even just writing on a laptop or desktop computer, does not; it's fundamentally different. I can do those for many hours. If I'm on social media for more than a few minutes, I can literally feel the energy draining out of my body. Interesting. I could do, um, sprints or deadlifts for hours and not feel the kind of fatigue that I feel from doing social media. So, you know, this is fascinating. I'd like to know what's going on in your brain. Why is it? And
also, I'd like to know from younger people whether they have the same experience. I think not. I think my guess is that they don’t feel fatigued because they got into this early enough. This is actually a very, very, uh—I think that has a lot to do with the foundation you put into your brain. In other words, things that you learn when you're really young are foundational, and they make things easier—some things easier. Yeah. I spent a lot of time in my room as a kid, either playing with Legos, or action figures, or building fish tanks,
or reading about fish. I tended to read about things and then do a lot of procedural-based activities. You know, I read skateboard magazines and skated. I was never one to really just watch a sport and not play it. So, you know, bridging across these things, social media to me feels like an energy sink. But of course, I love the opportunity to be able to teach people and learn from people at such scale. But at an energetic level, I feel like I don’t have a foundation for it. It's like I'm trying to, like, jerry-rig my cognition
into doing something that it wasn't designed to do. Well, there you go. And it's because you don't have the foundation. You didn't do it when you were younger, and now you have to use your cognitive powers to do a lot of what, in a younger person, is being done procedurally. I'd like to take a quick break and thank one of our sponsors: Element. Element is an electrolyte drink that has everything you need and nothing you don't. That means the electrolytes—sodium, magnesium, and potassium—in the correct ratios, but no sugar. We should all know
that proper hydration is critical for optimal brain and body function. In fact, even a slight degree of dehydration can diminish your cognitive and physical performance to a considerable degree. It's also important that you're not just hydrated but that you get adequate amounts of electrolytes in the right ratios. Drinking a packet of Element dissolved in water makes it very easy to ensure that you're getting adequate amounts of hydration and electrolytes. To make sure that I'm getting proper amounts of both, I dissolve one packet of Element in about 16 to 32 ounces of water when I wake
up in the morning and drink it, basically, first thing. I'll also drink a packet of Element dissolved in water during any kind of physical exercise that I'm doing, especially on hot days when I'm sweating a lot and losing water and electrolytes. There are a bunch of different great-tasting flavors of Element. I like the watermelon, I like the raspberry, I like the citrus—basically, I like all of them. If you'd like to try Element, you can go to DrinkLMNT.com/huberman to claim an Element sample pack with the purchase of any Element drink mix. Again, that's
DrinkLMNT, spelled L-M-N-T. So it's DrinkLMNT.com/huberman to claim a free sample pack. Today's episode is also brought to us by Joovv. Joovv makes medical-grade red light therapy devices. Now, if there's one thing that I've consistently emphasized on this podcast, it is the incredible impact that light can have on our biology. Now, in addition to sunlight, red light and near-infrared light have been shown to have positive effects on improving numerous aspects of cellular and organ health, including faster muscle recovery, improved skin health and wound healing, improvements in acne, reduced pain and inflammation, improved mitochondrial function, and
even improving vision itself. Now, what sets Joovv lights apart, and why they're my preferred red light therapy devices, is that they use clinically proven wavelengths, meaning specific wavelengths of red light and near-infrared light in combination, to trigger the optimal cellular adaptations. Personally, I use the Joovv whole-body panel about three to four times a week, and I use the Joovv handheld light both at home and when I travel. If you'd like to try Joovv, you can go to Joovv.com/huberman. Joovv is offering Black Friday discounts of up to $1,300 now through December 2nd, 2024.
Again, that's Joovv.com/huberman to get up to $1,300 off select Joovv products. I'm going to tell you something that is going to help all of your listeners; it's from my book, "ChatGPT and the Future of AI." While writing it, I went through and looked at other people's experiences with ChatGPT; I just wanted to know what people were thinking. I came across an article, I think it was in The New York Times, about a technical writer who decided she would spend one month using it to help her write her articles. She said that when she started out, by the end of the
day, she was completely drained. It was like working on a machine, struggling to get it to work. Then she thought, wait a second, what if I treat it like a human being? What if I'm polite instead of, you know, being curt? She said suddenly she started getting better answers by being polite and engaging in the back-and-forth way you would with a human. For example, she would say, "Could you please give me information about so and so? I'm really having trouble. Oh, the answer you gave me was fabulous; it's exactly what I was looking for, and
now I need to move on to the next part; can you help me with that too?" In other words, the way you talk to a human. So, is it that she was talking to the AI, to ChatGPT, in a way that her brain was familiar with asking questions to a human? In other words, is the AI learning from her and therefore giving her the sorts of answers that are more facile for her to integrate with? I think it's both. First of all, ChatGPT mirrors the way you treat it. If you treat it like a machine,
it will treat you like a machine because that's what it's good at. But here's the surprise: she said that once she started treating it like a human, at the end of the day, she wasn't fatigued anymore. Why? Well, it turns out that all your life you interact with humans in a certain way, and your brain is wired to do that, so it doesn't take any effort. By treating ChatGPT as if it were a human, you're taking advantage of all of those circuits in your brain. This is incredible, and I'll tell you why. Many people—not just
me, but many individuals—really enjoy social media and learn from it. I mean, yesterday, I learned a few things that I thought were just fascinating about how we perceive our own identity based on whether or not we're filtering it through the responses of others, or whether or not we take a couple of minutes and really sit and think about how we actually feel about ourselves. There are very interesting ideas about the locus of self-perception and things like that. I also watched a really cool video of a baby raccoon popping bubbles while standing on its hind limbs,
and that was really delightful. Social media provided me both of those things in a series of minutes, and I thought to myself, this is crazy! The raccoon is kind of trivial, but it delighted me, and that’s not trivial. But here's the question: could it be that one of the detrimental aspects of social media is that, if we're complimenting one another, or if we are giving hearts, or we're giving thumbs downs, or we're in an argument with somebody, or we're doing a clapback or they're clapping back on us—as it's called on X—that it isn't necessarily the
way that we learned to argue? It’s not necessarily the way that we learn to engage in healthy disputes. As a consequence, it feels like, and this is my experience, that certain online interactions feel really good, and others feel like they grate on me because there's almost like an action step that isn't allowed. You can't fully explain yourself or understand the other person. I am somebody who believes in the power of real face-to-face dialogue, or at least phone dialogue. I feel the same way about text messaging. When text messaging first came out, I remember thinking, "I
was not a kid that passed notes in class; this feels like passing notes in class. In fact, this whole text messaging thing is beneath me." That's how I felt, and over the years, of course... I became a text messenger, and it's very useful for certain things. "Be there in five minutes," "Running a few minutes late"—in my case, that's a common one. Um, but I think this notion of what grates on us, and how it relates to whether or not it matches our childhood-developed template of how our brain works, is really key because it touches
on something that I definitely want to talk about today, that I know you've worked on quite a bit, which is this concept of energy. What we're talking about here is energy—not woo biology, not woo-science wellness energy. We're talking about the fact that we only have a finite amount of energy. Years ago, the great Ben Barres—our former colleague and my postdoctoral adviser, who has sadly since passed away—stopped me in the hallway one day. He called me Andy, like you do, and he said, "Andy, why do we get so run down of energy as we
get older? Why am I more tired today than I was 10 years ago?" I was like, "I don't know. How are you sleeping?" He's like, "I'm sleeping fine." Ben never slept much in the first place, but he had a ton of energy. I thought to myself, "I don't know; what is this energy thing that we're talking about?" I want to make sure that we close the hatch on this notion of a template neural system, in which you either find experiences invigorating or depleting. I want to make sure we close the hatch on that, but I
want to make sure that we relate it at some point to this idea of energy and why it is that, with each passing year of our life, we seem to have less of it. You know, you asked these great questions, and I wish that I had great answers. Well, so far, you really do have great answers. They're certainly novel to me in the sense that I've not heard answers of this sort. So, there's a tremendous amount of learning for me today, and I know for the audience. But let's say you're somebody who's 20 years old
versus 50 years old—what should they do? I mean, we need to integrate with the modern world; we also need to relate across generations. Oh, yeah, no, this is true. People aren't retiring as much; they're living longer. Birth rates are down, but we all have to get along, as they say. So, you know, it is interesting. I think it's true that we all, as we get older, have less of the vigor—if I could use a somewhat different word from energy. We'll come back to that. But I think there are some who manage to keep an active
life. Here's something that, again, in our MOOC, we really emphasize. Could you explain a MOOC? I think most people won't know what a MOOC is, just for their sake. Okay—a MOOC is a massive open online course. It's been around for a while; it started at Stanford, actually, with Andrew Ng and Daphne Koller, who have a company called Coursera. What happens is that you get professors, and in fact anybody who has knowledge or professional expertise, to give lectures that are available to anybody in the world who has access to the Internet. And you know, there are probably tens of thousands now—any specialty: history, science, music—you
name it. There's somebody who's an expert on that who wants to tell you because they're excited about what they're doing. So, what we wanted to do was help people with learning. And so part of the problem is that it gets more difficult; it takes more effort as you get older. It depletes your vigor more, if we're going to stay with this language of energy and vigor. Yeah, that's right. So, let's actually use the word energy. As you know, in the cell, there is a physical power plant called the mitochondrion, which supplies us with ATP, the
coin of the realm for the cell to be able to operate all of its machinery, right? So, one of the things that happens when you get older is that your mitochondria run down. You have fewer of them and they're less efficient. That's right—they're less efficient. And actually, drugs can do that to you too. They can harm mitochondria. Or recreational drugs? No, the drugs you take for illness. I'm not sure about recreational drugs, but I know it's the case that there are a lot of drugs that people take because they have to. But the other thing—and
this is something—that's the bad news. Here's the good news: the good news is that you can replenish your energy through exercise. Exercise is the best drug you could ever take; it's the cheapest drug you could ever take, and it can help every organ in your body. It helps, obviously, your heart; it helps your brain; it rejuvenates your brain. It helps your immune system. Every single organ system in the body benefits from regular exercise. I run on the beach every day at the Salk Institute—it's on a mesa, 340 feet above sea
level. So, I go down every day, and then I climb up the cliff. Yeah, those steps down to Black's Beach are a good workout. They are. They are. And so, this is something that has kept me active. And I also do hiking—I went hiking in the Alps last fall, in September. So this is, I think, something that people really ought to realize: it's like putting away reserves of energy for when you get older. The more you put away, the better off you are.
Here's something else, okay? Now, this is jumping to Alzheimer's. So, a study that was done in China many, many years ago—when I first came to La Jolla, in San Diego—I heard this from the head of the Alzheimer's program. He had done a study in China on onset, and they had three populations: peasants who had almost no education; then another group that had a high school education; and then people who were, you know, advanced in education. So, it turns out that the onset of Alzheimer's was earlier for the
people who had no education, and it was the latest for the people who had the most education. Now, this is interesting, isn't it? Because it's—and presumably the genes aren't that different, right? I mean, they're all Chinese. So, one possibility—and obviously we don't really know why—but one possibility is that the more you exercise your brain with education, the more reserve you have later in life. I believe in the notion, and I don't have a better word for it—maybe you do, or a phrase for it—of kind of a cognitive velocity. You know, I sometimes will play with
this. I'll read slowly, or I'll see where my default pace of reading is at a given time of day, and then I'll intentionally try to read a little bit faster while also trying to retain the knowledge I'm reading. Right? So, I'm not just reading the words; I'm trying to absorb the information. And you can feel the energetic demand of that. And then I'll play with it. I'll kind of back off a little bit, and then I'll go forward, and I try to find the sweet spot where I'm not reading at the pace that is reflexive,
but just a little bit quicker while also trying to retain the information. I learned this when I had a lot of catching up to do at one phase of my educational career. Fortunately, it was pretty early, and I was able to catch up on most things. You know, occasionally things slip through, and I have to go back and learn how to learn. You know? And if I get anything wrong on the internet, they sure as heck point it out, and then we go back and learn. And guess what? I never forget that because punishment—social punishment—is
a great signal. So, thank you all for keeping me learning. But I picked that up from my experience of trying to get good at things like skateboarding or soccer when I was younger. There's a certain thing that happens with skateboarding—that was my sport growing up—where it's actually easier to learn something going faster. You know, most kids try to learn how to ollie and kickflip standing in the living room on the carpet. That's the worst way to learn it. It's all easier going a bit faster than you're comfortable with. It's also the case that
if you're not paying attention, you can get hurt. It's also the case that if you pay too much cognitive attention, you can't perform the motor movements, right? So there's this sweet spot that eventually I was able to translate into an understanding of when I sit down to read a paper or a news article or even listen to a podcast. There's a pace of the person's voice, and then I'll adjust the rate of the audio where I have to engage cognitively, and I know I'm in a mode of retaining the information and learning. Whereas if I
just go with my reflexive pace, it's rare that I'm in that perfect zone. So, I point this out because perhaps it will be useful to people. I don't know if it's incorporated into your learning-how-to-learn course, but I do think there is something—which I call cognitive velocity—that is ideal for learning, versus kind of leisurely scrolling. And this is why I think social media is detrimental. I think we train our brain to be slow, passive, and multi-context—cycling through contexts—and unless something is very high salience, it kind of
makes us, kind of, fat and lazy—forgive the language, but I'm going to be blunt here—fat and lazy cognitively, unless we make it a point to also engage in learning. Right? And my guess is it's tapping into this mitochondrial system. Very likely, that's one part of it. By the way, the way that you've adjusted the speed is very interesting, because it turns out that stress—you know, everybody thinks, "Oh, stress is bad," but no. It turns out that stress that lasts only for a limited amount of time, and that you control, is good for
you. It's good for your brain; it's good for your body. I run intervals on the beach just the way that you do cognitive intervals when you're reading. In other words, I run like hell for about 10 seconds, then I drop to a jog, and then I run like hell for another 10 seconds, and it's pushing your body into that extra gear that helps the muscles. The muscles need to know that this is what they've got to put out, and that's where you gain muscle mass—not from just doing the same
running pace every day. Well, your intellectual and physical vigor is undeniable. I've known you a long time; you've always had a slight forward center of mass in your intellect and even the speed at which you walk. Terry, I'd say, for a Californian, you're a quick walker. And that's a compliment, by the way. East Coasters know what I'm talking about, and Californians would be like, you know, why not slow down? The reason not to slow down too much for too long is that these mitochondrial systems, the energy
of the brain and body, as you point out, are very linked. I do think that below a certain threshold, it becomes very hard to come back, just like below a certain threshold it's hard to exercise without getting depleted or even injured. We need to maintain this, so perhaps now would be a good time to close the hatch on this issue of how to teach young people—everyone should take this learning-to-learn course as a free resource. Amazing! As it relates to AI, do you think that young people and older people (now I'm 49,
so I put myself in the older bracket) should be learning how to use AI? They are already learning how to use AI, and, again, it's just like when new technology comes along—who picks it up first? It's the younger people, and it's astonishing. You know, they're using it a lot more than I am. I use it almost every day, but I know a lot of students who basically—and by the way, it's like any other tool. It's a tool that you need to know how to use. Where do
you suggest people start? So, I have started using Claude AI. This was suggested to me by somebody expert in AI as an alternative to ChatGPT. I don't have anything against ChatGPT, but I'll tell you, I really like the aesthetic of Claude AI. It's a bit of a softer beige aesthetic; it feels kind of Apple-like. I like the Apple brand, and it gives me answers. Maybe it's the font, maybe it's the feel; maybe this goes back to the example I used earlier. But I like Claude AI, and I'm a big fan of
it. They don't pay me to say this; I have never met them. I have no relationship to them, except that it gives me answers in a bullet-pointed format that feels very aesthetically easy for transferring information into my brain or onto a page. Right? So I like Claude AI. You use ChatGPT. How should people start to explore AI—for the sake of getting smarter, learning knowledge just for the sake of knowledge, having fun with it? What's the best way to do that? Well, I think exactly what you did. There are now dozens and
dozens of different chatbots out there, and different people will feel comfortable with one or the other. ChatGPT was the first, so that's why it's kind of taken over a lot of the cognitive space, right? It's become like Kleenex, right? That word—that's why I used it as the first word in my new book, because it's iconic. But some of them are really much better at math than others. Google's Gemini, for example, recently did some fine-tuning with what's called chain of reasoning. In other words, when you reason, you go through a sequence of steps, and when you solve a math problem, you go through a sequence of steps: doing the first part, finding out what's missing, and then adding that. It went from 20% correct to 80% right on those problems. And as people hear that, they probably think, "Well, that means 20% wrong still." But could you imagine any human or panel of humans behind a wall where, if you asked it a question and then another question and another question, it would give you back better
than 80% accurate information in a matter of seconds? So, I think we are perhaps being a little bit unfair to compare these large language models to the best humans rather than the average human. Right? As you said, most people couldn't pass the LSAT, the law test to get into law school, or the MCAT, the test to get into medical school, and ChatGPT has— Is there a world now where we take the existing AI, the LLMs—these computers, basically, that can learn like a collection of human brains—and send that somehow into the future? Right? Give
them an imagined future. Okay, could we give them outcome A and outcome B and let them forage into future states that we are not yet able to get to, and then harness that knowledge and explore the two different outcomes? I think that's perhaps the better question in some sense, um, because we can't travel back in time, but we can perhaps travel into the future with AI if you provide it different scenarios and you say, unlike a panel of people—a panel of experts, medical experts or, um, space travel experts or, um, sea travel experts—you can't say,
"Hey, you know what? Don't sleep tonight. Um, you're just going to work for... The next 48 hours—in fact, you're going to work for the next three weeks or three months—um, and you know what? You're not going to do anything else. You're not going to pay attention to your health; you're not going to do anything else. But you can take a large language model, and you can say, "Just forage for knowledge under the following different scenarios," and then have that fleet of large language models come back and give us the information, like, I don't know, tomorrow.
Okay, so I've lived through this myself. Back in the 1980s, I was just starting my career, and I was one of the pioneers in developing learning algorithms for neural network models. Geoff Hinton and I collaborated on something called the Boltzmann machine, and he actually won a Nobel Prize for this work just this year. Yeah, one of my best friends—brilliant—and he well deserved it, for not just the Boltzmann machine but all the work he's done since then on machine learning, backpropagation, and so forth. But back then, Geoff and I had this view of the future. AI
was dominated by symbol processing, rules, and logic, right? Writing computer programs for every problem—you need a different computer program for each one, and it was very, you know, human-resource-intensive to write programs. So, it was very, very slow going, and they never actually got there. They never wrote a program for vision, for example, even though the computer vision community really worked hard for a long time. But you know, we had this view of the future. We had this view that nature has solved these problems and is an existence proof that you can solve the vision problem. Look, every animal can
see, even insects! Right? Come on—well, let's figure out how they did it. Maybe we can help ourselves by following nature's lead. We can actually—again, going back to the algorithms I was telling you about—ask, in the case of the brain, what makes it different from a digital computer? Digital computers can basically run any program, but a fly brain, for example, only runs the program that its special-purpose hardware allows it to run. Not much neuroplasticity; there's enough there—just enough habituation and so forth—so that it can survive, day to day. I'm not trying to be disparaging
to the fly biologists, but when I think of neuroplasticity, I think of the magnificent neuroplasticity of the human brain to customize to a world of experience. I agree. You know, when I think about a fly, I think about a really cool set of neural circuits that work really well to avoid getting swatted, to eat, and to reproduce—and not a whole lot else. They don't really build technology; they might have interesting relationships, but who knows? Who cares? It's just sort of like... It's not that it doesn't matter; it's just a question of the lack of plasticity
that makes them kind of a "meh" species. Okay, I can see I've pressed your button here. No, no, no! I love fly biology. They taught us about algorithms for direction selectivity in the visual system. No, I love fly biology; I just think that the lack of neuroplasticity reveals a certain, um, key limitation, and that the reason we're the curators of the earth is because we have so much plasticity. Of course! Of course. But you have to take, you know, one step at a time. Nature first has to be able to create creatures that can survive,
and then, you know, their brains get bigger as their environment gets more complex. And, you know, here we are. But the key is that it turns out that certain algorithms in the fly brain are present in our brain, like conditioning. Classical conditioning—you can classically condition a fly: you train it so that, when you give it a reward, it will produce the same action. Right? That's conditioned behavior. And that algorithm I told you about—the one in your value function, right? Temporal difference learning—that algorithm is in the fly brain; it's in
your brain. So we can learn from many species. I was just having a little fun poking at the fly biologists. I actually think Drosophila research has contributed a great deal, as has honeybee biology. For instance, if you give caffeine to bees on particular flowers, they'll actually try to pollinate those flowers more, because they like the feeling of being caffeinated. There's a bad pun about a buzz here, but I'm not going to make it because everyone's done it before. Right. No, I fully absorb and agree with the value of studying simpler organisms to
find the algorithms, right? That's where we are right now. But now, to go into the future—I'm telling the story about where we were. We were predicting the future. We were saying, "This is an alternative to traditional AI." We were not taken seriously; everybody—the experts—said, "No, no, write programs, write programs." They were getting all the resources, the grants, the jobs, and we were just like the little furry mammals under the feet of these dinosaurs. Right? I love the analogy—and in retrospect, the dinosaurs died off. But the point I'm making is that it's
possible for our brain to make these extrapolations into the future. Why not AI versions of brains? Why not? I think your idea is a great one! Yeah, I mean, the reason I'm excited about AI—and increasingly so across the course of this conversation—is because there are very few opportunities to forage for information at such a large scale, particularly around the circadian clock. I mean, if there's one thing that we are truly a slave to as humans, it is circadian biology, right? You've got to sleep sooner or later, and even if you don't, your cognition really waxes and
wanes across the circadian cycle. If you don't sleep, you're going to die early; we know this. Computers can work, work, work. Sure, you've got to power them; there are a bunch of things related to that, but that's tractable. So, computers can work, work, work, and the idea that they can provide a portal into the future—that they can just bring it back so we can take a look—excites me. I'm not saying we have to implement their advice, but to be able to send a panel of diverse, computationally diverse, experientially diverse AI experts into the future and
bring us back a panel of potential routes to take is so exciting. Maybe a good example would be treatments for schizophrenia. This is an area that I want to make certain that we talk about. You know, I grew up learning as a neuroscience student that schizophrenia was somehow a disruption of the dopamine system because if you give neuroleptic drugs that block dopamine receptors, you get some improvement in the motor symptoms and some of the hallucinations, etc. Now, we also have people who say, "No, that's not really the basis of schizophrenia." I love your thoughts on
this, and we have incredible work from people like Chris Palmer at Harvard. We even have a department at Stanford now focusing on what Chris really founded as a field, which is metabolic psychiatry—the idea that, who could imagine (I'm being sarcastic here), what you eat impacts your mitochondria, how you exercise impacts your mitochondria, and mitochondria impact brain function. Lo and behold, the metabolic health of the brain and body impacts schizophrenia symptoms. Chris has explored ways that people can use a ketogenic diet, maybe not to cure, but to treat—and in some cases, maybe even cure—schizophrenia. So here
we are at this place where we still don't have a "cure" for schizophrenia, but you could send large language models into the future and start to forage all of the data in those fields. It could do that in an hour, plus come up with a bunch of hypothesized clinical trials, with different positive and negative results, that don't even exist yet: 10,000 subjects in Scandinavia who go on a ketogenic diet and have a certain level of susceptibility to schizophrenia based on what we know from twin studies. Things that never, ever, ever would be possible to do in
an afternoon, maybe even in a year—there isn't funding. And boom, get the answers back and let them present us those answers. Then you say, "Well, it’s artificial,” but so are human brains coming up with these experiments. So to me, I'm starting to realize that it's not that we have to implement everything that AI tells us or offers us; it just sure as hell gives us a great window into what might be happening or is likely to happen. Specifically for schizophrenia, I'm pretty sure that if we had these large language models 20 years ago, we would
have known back then that ketamine would have been a really good drug to try to help these people. Tell us about the relationship between ketamine and schizophrenia—because, I think, a lot of people—and maybe you could define schizophrenia—though most people think about hearing voices and psychosis, there's a bit more to it that maybe we just need to bring out. One of the things we know now is—see, the problem is that if you look at the endpoint, it doesn't tell you what started the problem. It starts early in development. Schizophrenia appears,
you know, during late adolescence or early adulthood, but it actually is already a genetic problem from the get-go. So, what is the concordance in identical twins? Meaning, if you have identical twins, and one is destined to be full-blown schizophrenic, what's the probability that the other will be as well? Here's the experiment—this has been replicated many, many times—but first, let me start with humans. Ketamine was, for a long time—and still is—a party drug, also known as Special K. I've never taken it, but this is what I hear. It's an
anesthetic, but I'll tell you what happens because I've talked to people who have done this. You take ketamine sub-anesthetically. By the way, it's an anesthetic; it's given to children, and it's a pretty good anesthetic. It's also used in veterinary medicine. But in any case, you give it to young adults. Here’s what they experience: they experience an out-of-body experience, a wonderful feeling of energy, and it’s a high—a very unusual high. Now, if they just go and have one experience, that’s one thing, but if they party two days in a row, a lot of them come into
the emergency room, and here's what happens: the symptoms are full-blown psychosis—full-blown! We're talking, you know, indistinguishable from a schizophrenic break: auditory hallucinations—yeah, auditory hallucinations; you know, paranoia, very, very advanced. You'd say, my God, this person is really gone—has become a schizophrenic—and, like I say, the symptoms are the same. However, if you isolate them for a couple of days, they'll come back, right? So, it means that ketamine can induce a form of schizophrenic psychosis temporarily—not permanently, fortunately. Okay, so what
does it attack? Okay, and there's another literature on this. It turns out that it binds to a form of receptor—a glutamate receptor called the NMDA receptor—which is very important, by the way, for learning and memory. But we know the target, and we also know what the acute outcome is: it reduces the strength of the inhibitory circuit. In the interneurons that use inhibitory transmitters, the enzyme that creates the inhibitory transmitter is downregulated. And what does that do? It means that there's more excitation. And what does that mean? When there's more excitation, it means that there's more
activity in the cortex, and there's actually much more vigor, and you start becoming crazy, right, if there's too much activity. So this is interesting. This is telling us, I think, that we should be thinking about—there's a whole field now in psychiatry that has to do with the glutamate hypothesis for where the actual imbalance first occurs. It's an imbalance between the excitatory and inhibitory systems in the cortex that keep you in balance. And NMDA—N-methyl-D-aspartate—receptors are glutamate receptors, one class. That's one class; that's right. Okay, so now here is the hypothesis for
why ketamine might be good for depression. People are taking it now who are depressed, right? So here you have a drug that causes over-excitation, and here you have a person who's under-excited. Depression is associated with lower excitatory activity in some parts of the cortex. Well, if you titrate it, you can come back into balance, right? So what you do is you fight depression with a touch of schizophrenia. Now, you know you have to keep giving—I think once every three weeks—they have to have a, you know, a new dose of ketamine, but it's helped an enormous
number of people with very, very severe, you know, clinical depression. So, the more we learn about the mechanisms underlying some of these disorders, the better we are going to be at extrapolating and coming up with some solutions—at least to prevent them from getting worse. By the way, I'm pretty sure that the large language models could have figured this out, you know, long ago. So, in an attempt to understand how we might be able to leverage these large language models now—how would we have used them long ago? Let's say you had 2024 AI
technology in, let's have fun here, 1998, the year that I started graduate school. Right at that time, the dopamine hypothesis of schizophrenia was in every textbook; there was a little bit about glutamate perhaps, but, you know, it was all about dopamine. So how would the large language models have discovered this? Ketamine was known as a drug. Ketamine, by the way, is very similar to PCP, phencyclidine, which also binds the NMDA receptor—a drug that I also don't recommend, nor
ketamine, frankly. I don't recommend any recreational drugs; I'm not a recreational drug guy. But what would those large language models do? If you got 2024 technology placed into 1998, they're foraging for existing knowledge, but then are they able to make predictions? Like, hey, this stuff is going to turn out to be wrong, or hey, this stuff—okay? Okay. You know, this is all very, very speculative, but really, we can actually begin to see this happening now. So, I have a colleague at the Salk Institute, Rusty Gage, a very distinguished neuroscientist. He
discovered that there are new neurons being born in the hippocampus of adults, right? This is something the textbooks said doesn't happen, right? So that was around 1998, right? That's right. And I actually have a paper with him where we tested LTP—long-term potentiation—and the effects of exercise on neurogenesis. Exercise increases neurogenesis. It increases the cells that become part of the circuit; more cells become integrated, and this is true in humans as well, right? Yeah. And there was a drug given to cancer patients that labeled dividing cells, and they showed that there
were new cells—they were later able, postmortem, to actually see that those cells were born in the adult. Okay, so here we are in 1998, and the question is, can you jump into the future? Okay, so Rusty and I happened to talk about this issue—you know, he's using these large language models now for his research. I said, "Oh wow, how do you use it?" And he said, "We use it as an idea pump." What do you mean, "idea pump"? Well, we—you know, we give it all of the experiments that we've done, and,
uh, we give it the literature—it has access to the literature, and so forth—and we ask it for ideas for new experiments. Oh, I love it! I love it! I was on a plane where I sat next to a guy who works at Google, and he's one of the main people there in terms of voice-to-text and text-to-voice software. He showed me something; I'll provide a link to it because it's another one of these open-resource things. I'm not super techy; I don't get an F in technology, but I don't get an A+;
I'm kind of in the middle. So I think I'm pretty representative of the average listener for this podcast. Basically, what he showed me is that you open up this website, and you can take PDFs or URLs—so websites, website addresses—and you just place them in the margin. You literally just drag and drop them there, and then you can ask questions, and the AI will generate answers that are based on the content of whatever you put into this margin: those PDFs, those websites. The cool thing is it references them, so you know
which article it came from, right? And then you can start asking it more sophisticated questions, like: given two papers on the effects of a drug—one finding a very strong effect and one a very weak effect—which do you think is more rigorous, based on, you know, subject number but also the strength of the findings? "Strength of findings" is pretty vague, right? To anyone who argues those are weak findings—there aren't enough subjects—well, we know a hell of a lot about human memory from one patient, H.M. So, strength of findings, when people are involved,
is a subjective thing, right? You really have to be an expert in a field to understand the strength of findings, and even then, what's amazing is it starts giving back answers like, "Well, if you're concerned about the number of subjects, this paper..." But that's a pretty obvious one— which one had more subjects? But it can start critiquing the statistics that they used in these papers in very sophisticated ways, explaining back to you why certain papers may not be interesting and others are more interesting, and it starts to weight the evidence. Oh my God! And then
you say, "Well, with that weighted evidence, can you hypothesize what would happen if..." And so I've done a little bit of this where it starts trying to predict the future based on, you know, 10 papers that you gave it five minutes ago. Amazing! I don't think any professor could do that except in their very specific area of interest, and if they were already familiar with the papers, and it would take them many hours, if not days, to read all those papers in detail. And they might not actually come up with the same answers, right? Right?
Yeah, so this is—so actually, this is something that is happening in medicine, by the way, for doctors who are using AI as an assistant. This is really interesting. So, there was a paper in Nature about dermatology—you know, skin lesions. There are several thousand skin lesions; some of them are cancerous, and others are benign. In any case, they tested the expert doctors, and then they tested an AI, and they were both doing about, you know, 90% right. However, if you let the doctor use the AI, it boosts the doctor to 98%—98% accuracy! Yes! And what's going
on there is very interesting. It turns out that although they got the same 90%, they had different expertise: the AI had access to more data, so it could handle the rare lesions that the doctor may never have seen. But the doctor has more in-depth knowledge of the most common ones that he's seen over and over again—the subtleties, and so forth. Putting them together makes so much sense; they're going to improve if they work together. I think what you're saying is that using AI as a
tool for discovery—with, you know, the expert interpreting and looking at the arguments, the statistical arguments, and also maybe looking at the paper in a new way—maybe that's the future of science. Maybe that's what's going to happen. Everybody's worried: "Oh, AI is going to replace us; it's going to be much better than we are at everything—humans are obsolete." Nothing could be further from the truth. Our strengths and weaknesses are different. By working together, it's going to strengthen both what we do and what AI does. It's going to be a partnership; it's not going
to be adversarial; it's going to be a partnership. Would you say that's the case for things like understanding or discovering treatments for neurologic illness, or for avoiding large-scale catastrophes? Like, can it predict macro movements? Let me give an example: here in Los Angeles, there's occasionally an accident on the freeway. You have a lot of cameras over freeways nowadays. You have cameras in cars— you can imagine all of the data being sent in in real time. And you could probably predict accidents pretty easily. I mean, these are just moving objects, right, at a specific rate—who's driving
haphazardly? But you could also potentially, um, signal a takeover of the brakes or the steering wheel of a car and prevent accidents. I mean, certain cars already do that. But could you essentially eliminate... well, let's do something even more important: let's eliminate traffic. I don't know if you can do that, because that's a funnel problem, but could you predict physical events in the world into the future? Okay, this has already been done—not for traffic, but for hurricanes. So, you know, as you know, the weather is extremely difficult to predict—except here in California,
where it's always going to be sunny. But now, what they've done is feed in a lot of data from previous hurricanes, and also simulations of hurricanes. You can simulate them on a supercomputer; it takes days and weeks, so it's not very useful for accurately predicting where a storm is going to hit Florida. But what they did was, after training up the AI on all of this data, it was able to predict with much better accuracy exactly where in Florida a hurricane is going to make landfall. And it does that on your laptop in 10
minutes. Incredible! So, something just clicked for me, and it's probably obvious to you and to most people, but I think what I'm about to say is true. At the beginning of our conversation, we were talking about the acquisition of knowledge versus the implementation of knowledge—just learning facts versus learning how to implement those facts in the form of physical action or cognitive action, right? A math problem is cognitive action. Okay, AI can do both. Knowledge acquisition: it can learn facts—long lists of facts and combinations of facts—but, presumably, it can
also run a lot of problem sets and solve a lot of problem sets. I don't think, except for some examples of robotics that still seem crude to me, that it's very good at action yet, but it will probably get there at some point. Robots are getting better, but they're not doing what we're doing yet. But it seems to me that as long as they can acquire knowledge and then solve different problem sets—different iterations of combinations of knowledge—they are basically in a position to take any data about prior or current events and make pretty darn good
predictions about the future and run those back to us quickly enough, and to themselves quickly enough, that they could play out the different iterations. And so I'm thinking, you know, one of the problems that seems to have really vexed neuroscientists, the field of medicine, and the general public has been the increase in—at least in the diagnosis of—autism. I've heard so many different hypotheses over the years. I think we're still pretty much in the fog on this one. Um, could AI start to come up with new and potential solutions and
treatments, if they're necessary, but maybe get to the heart of this problem? It might, and it depends on the data you have; it depends on the complexity of the disease. Um, but it will happen. In other words, we will use those tools the best we can because, obviously, if you can make any progress at all and jump into the future, wow, that would save lives. That would help so many people out there. I really think the promise here is so great that even though there are flaws and regulatory problems, we just— we really, really have
to push, and we have to do that in a way that is going to help people—you know, in terms of making their jobs better and helping them solve problems that they would otherwise have had difficulty with, and so forth. And it's beginning to happen, but these are early days. So we're at a stage right now with AI that is similar to what happened after the first flight of the Wright Brothers. In other words, it's that significant. The achievement that the Wright Brothers made was
to get off the ground 10 feet and to power forward with a human being 100 feet. Right? That was it. That was the first flight. And it took an enormous amount of improvement. The most difficult thing that had to be solved was control: how do you control it? How do you make it go in the direction you want it to go? There are shades of that in what's happening now with AI: we are off the ground; we're not going very far yet; but who knows where it will take us in the future? Let's talk about
Parkinson's disease—a depletion of dopamine neurons that leads to difficulty in smooth movement generation, and also some cognitive and mood-based dysfunction. Tell us about your work on Parkinson's, and what did you learn? So, as you point out, Parkinson's is, first, a degenerative disease. It's very interesting because the dopamine cells are in a particular part of the brain, the brain stem, and they are the ones that are responsible for procedural learning. I told you before about temporal difference learning: that's the dopamine cells, and it's a very powerful mechanism—it's a global signal called a
neuromodulator because it modulates all the other signals taking place, you know, throughout the cortex. And it's also very important for learning sequences of actions that promote survival. But the problem is that with certain environmental insults—especially, you know, toxins like pesticides—those neurons are very vulnerable. And when they die, you get all of the symptoms that you just described. Before the treatment—you know, L-DOPA, which is a dopamine precursor—the people who had lost those cells actually became "locked in." They
didn't move; they were still alive but just didn't move at all. You know, it's tragic—yes, "locked in," it's called. Yeah, it's tragic, tragic. So when the first trials of L-DOPA were given to them, it was magical, because suddenly they started talking again. I mean, this is amazing, amazing! I'm curious—when they started talking again, did they report that their brain state during the locked-in phase was slowed, low velocity? Like, was it sort of a dreamlike state, or did they feel like they were in a nap, or were they, like, screaming to get out?
Because their physical velocity obviously was zero—they're locked in, after all. I've long wondered, when coming back from a run, or when waking up from a great night's sleep and shifting into my waking state, whether or not physical velocity and cognitive velocity are linked. Okay, that's a wonderful observation—or question. You know, I bet you know the answer. Okay, here's something that is really amazing: it was discovered, interestingly, that they tend to move slowly, as you said, but cognitively, they think they're moving fast. Now, it's not because
they can't move fast—because you can say, "Well, can you move faster?" Sure, and they move normally, right? But to them, they think they're moving at, you know, super velocity. So it's a set point issue. Yes, it's all about set points—what's really going on—and the set point gets further and further down. You know, now they, without moving at all, think they're moving. Right? I mean, this is what's going on. By the way, you can ask them, you know, "What was it like? You know, we were talking to you, and you didn't respond." "Oh, I didn't feel
like it." The brain confabulates an answer. Well, they confabulated it because they didn't have enough energy, or they couldn't initiate actions. That's one of the things they have trouble with—movements, starting a movement. Yeah, as you can tell, I'm fascinated by this notion of cognitive velocity. And again, there may be a better or more accurate or official language for it, but I feel like it encompasses so much of what we try to do when we learn. And the fact that during sleep, you have these very vivid dreams during rapid eye
movement sleep, so cognitive velocity is very fast. Time perception is different than in slow-wave sleep dreams. Um, I really think there's something to it as a, um, at least one metric that relates to brain state. Yes, I've long thought that we know so much more about brain states during sleep than we do about wakeful brain states. Like, we talk about focus, motivation, and flow. I mean, these are not scientific terms. I'm not being disparaging of them; they're pretty much all we've got, um, until we come up with something better. But, like, we're biologists and neuroscientists
and computational neuroscientists, in your case, and we're trying to figure out what brain state we're in right now—our cognitive velocity is at, you know, a certain value. I'll venture to say that the more people think a little bit about their cognitive velocity at different times of day, the more they'll notice patterns. I've started to notice that there tend to be a few times of day for me: early to late mid-morning, and then again in the evening, after a
little bit of a trough in energy. Boy, that hour and a half each—like, that's the time to get real work done! I can mentally sprint far at those times, right? But there are other times of day when, I don't care how much caffeine I drink—unless it's a stressful event whose demands I need to meet—I just can't get to that faster pace while I'm also engaging. You can read faster; you can listen; but you're not using the information—you're not storing it. Right. What time of day
for you? No, I get the most done in the morning, and then you're right—later, after dinner, is also different, though; I think in the morning I'm better at creative stuff, and then in the evening I'm better at actually just cranking it out. You know, it's interesting: given the relationship between body temperature and circadian rhythm, I would like to run an experiment that relates core body temperature to cognitive velocity. You know, I've actually noticed—this is just purely subjective—that the temperature inside the Salk building is
kept at 75 degrees—it's, like, you know, rock solid—but in the afternoon, I feel a little chilly. It's probably my internal—sure—body temperature. Body temperature, yeah, is probably going down, and that may correspond to the loss of energy—you know, the loss of capacity, for the brain and everything else. By the way, this is Q10. That's jargon: every single enzyme in every cell of your body can go at different rates depending on the temperature, right? And so, yeah, if the body temperature is doing this, then all the cells are doing this
too, right? So, this is an explanation—I'm not sure if it's the right one—but, yeah, Craig Heller, my colleague at Stanford in the biology department, has beautifully described how enzymatic control—over pyruvate, I believe it is—governs local muscular failure. You know, when people are trying to move some resistance? It has everything to do with the temperature, the local temperature. Wow! That shuts down certain enzymatic processes that don't allow the muscles to contract the same way. He knows the details, and he covered them on this podcast; I'm forgetting the details.
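[Editor's note: the Q10 relationship mentioned above can be written down in a couple of lines. This is a minimal sketch of the standard textbook Q10 rule, not anything from Heller's specific work; the reference temperature of 37 °C and the Q10 value of 2.5 are assumed, illustrative numbers (enzymes typically have Q10 values around 2 to 3).]

```python
def reaction_rate(temp_c, rate_ref=1.0, temp_ref_c=37.0, q10=2.5):
    """Relative enzymatic reaction rate at temp_c, scaled from a
    reference rate at temp_ref_c by the Q10 rule: the rate is
    multiplied by q10 for every 10 degree C rise in temperature."""
    return rate_ref * q10 ** ((temp_c - temp_ref_c) / 10.0)

# A 10-degree rise multiplies the rate by Q10 exactly:
print(reaction_rate(47.0))             # 2.5
# A modest 2-degree afternoon dip (37 -> 35 C) already slows the
# reaction to roughly 83% of its reference rate:
print(round(reaction_rate(35.0), 2))   # 0.83
```

On this reading, even a small circadian drop in core body temperature rescales every temperature-sensitive reaction at once, which is the point being made here about the whole-body loss of energy.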
You start to go, wow, these enzymes are so beautifully controlled by temperature. And, of course, his laboratory is focused on ways to bypass those temperature constraints, or to change temperature locally in order to bypass those limitations, and they have shown this again and again. It's just incredible! Yeah—here we're speculating about what it would mean for cognitive velocity, but I think it's such a different world to think about the underlying biology, as opposed to just thinking about, like, a drug. You know, you increase dopamine, norepinephrine, and epinephrine—the so-called catecholamines—and you're going to increase
energy, focus, and alertness. But you're going to pay the price: you're going to have a trough in energy, focus, and alertness that's proportional to how much greater they were when you took the drug. Boy, amphetamines are a good example! You know, you're going a mile a minute when you're taking the drug. Of course, that's your impression; the reality is you don't actually accomplish that much more. Have any LLMs been used to answer this really pressing question of what the consequence on cognition is going to be for these young brains that have been raised
on Ritalin, Adderall, Vyvanse, and other stimulants? Because we have millions of kids—effectively an experiment run on a whole cadre, a whole generation—and, you know, I really would like to know the answer. I wonder if anybody is studying that. That's really a great question, because we gave them speed, effectively—you know, a drug that causes the brain to be activated. But, by the way, the consequence is that when it wears off, you have no energy, right? You're just completely spent. Yeah, that's it. That's the pit. And so, that's why you
take more of it. You see, that’s the problem. It’s a spiral. I love how today you’re making it so very clear how computation, how math, and computers, and AI now are really shaping the way that we think about these biological problems, which are also psychological problems, which are also daily challenges. I also love that we touched on mitochondria and how to replenish mitochondria. I want to make sure that we talk about a couple of things that I know are in the back of people’s minds—no pun intended here—which are consciousness and free will. Normally, I don’t
like to talk about these things—not because they're sensitive, but because I find the discussions around them typically to be more philosophical than neurobiological, and they tend to be pretty circular. So you get people like Kevin Mitchell—I think he has a book about free will—who believes in free will. You've got people like Robert Sapolsky, who wrote the book "Determined"; he doesn't believe in free will. How do you feel about free will, and is it even a discussion that we should be having? Well, if you go back 500 years—you
know, the Middle Ages—the concept didn't exist, or at least not in the way we use it, because the way that humans felt about the world, how it worked, and its impact on them was that it's all fate. They had this concept of fate: there's nothing you can do; something is going to happen to you because of what's going on with the gods up above, or whatever it is, right? You attribute it to the physical forces around you that caused it—not to your own free will, not to
something that you did that caused this to happen to you, right? So, I think that these words we use—free will, consciousness, intelligence, understanding—are weasel words, because you can't pin them down. There is no definition of consciousness that everybody agrees on, and it's tough to solve a scientific problem if you don't have a definition you can agree on. And, you know, there's this big controversy about whether these large language models understand language or not, right—the way we do. And what it's really revealing is that we don't understand what understanding is. We
literally don't have a really good argument or measure—you know, something you could use to measure someone's understanding and then apply to ChatGPT and see whether it's the same. It probably isn't exactly the same, but maybe there's some continuum here we're talking about, right? Um, you know, the way I look at it, it's as if an alien suddenly landed on Earth and started talking to us in English, right? And the only thing we could be sure of is that it's not human. I've met some people that I wondered about their, uh, terrestrial
origins. Okay, okay—well, there's a big diversity amongst humans too. You're right about that, yeah. Yeah—certain colleagues of ours at UCSD years ago; one in particular, in the physics department, whom I absolutely adore as a human being, just had such an unusual pattern of speech and behavior—totally appropriate behavior, but just unusual. In the middle of a faculty meeting, he would just kind of turn to me and start talking while another person was presenting, and I was like, "Maybe not now," and he would say, "Oh, okay." But
in any other domain, you'd say he was very socially adept. And so, you know, there are certain people who just kind of discard convention, and you kind of wonder, like, "Is he an alien?"—in a cool way. You know, he's, again, a friend. It's true, it's true. No, not everybody has adopted the same social conventions. You know, it could be a touch of autism. I mean, in other words, there are very high-functioning autistic people out there.
Well, he's brilliant. And often they are—you know, there are high-functioning people with autism who are brilliant. But could you build an LLM that was more on one end of the spectrum versus the other, to see what kind of information it picks up? That seems like it would be a really important thing to do. It's been done! Okay, there was a paper that I reviewed where they took an LLM and fine-tuned it with data from people with different disorders—you know, autism and so forth—and,
um, sociopaths. You know, that's scary—but do you want to know the answer? No! And they got these LLMs to behave just like people who have those disorders. You can get them to behave that way. Yes! Could you do political leanings and values? I haven't seen that, but it's pretty clear to me, at least, that if you can do sociopathy, you can probably do any political belief. But you could also take this down benevolent tracks. You could also make one, say, hyper-creative, or sensitive to the emotional
tone of voices, and find out what kind of information that person—excuse me, that LLM—brings back, versus one that is very oriented toward just the content of people's words. Because among people, you find this. You know, if you've ever left a party with a significant other—I've had this experience—sometimes they'll say, "Did you see that interaction between so-and-so?" And I'm like, "No, what are you talking about?" "Did you hear that?" "No, not at all." I heard the words, but I
did not pick up on what you were picking up on, right? And it was clear that there were two very different experiences of the same content, based purely on a difference in interpretation of the tonality. Okay, there's a lot of information that, as you point out, has to do with the tone, the facial expressions. You know, there's a tremendous amount of information that is passed not just with words but with all the other parts—the visual input and so forth. And some people are good at picking that up, and others are not.
There's tremendous variability between individuals, and, you know, biology is all about diversity—it's all about needing a gene pool that's very diverse so that you can evolve and survive catastrophic changes that occur in the climate, for example. But wouldn't it be wonderful if we could create an LLM that could understand what those differences are? Just think about it, right? Like a truly diverse LLM that integrated all those differences. But here's what you'd have to do: you'd have to train it up on data from
a bunch of individuals—human individuals. Now, one of the things about these LLMs is that they don't have a single persona; they can adopt any persona. You have to tell it what you're expecting from it, or ask it in a way that works for you, and you'll get back a certain persona. I once gave it an abstract from a very technical computational paper, and I said, "You are a neuroscientist; I want you to explain this abstract to a 10-year-old," and it did it in a way that I could never have done. It really simplified
it. Some of the subtleties were not in it, but it explained, you know, what plasticity is and what a synapse is. It did that. It's almost like a qualifying exam for a graduate student. I saw something today on X, formerly known as Twitter, that blew my mind, that I wanted your thoughts on, and that is very appropriate to what you're saying right now. Someone was asking questions of an LLM on ChatGPT—or maybe one of these others, Anthropic's Claude or something like that; I probably misuse those names—one of
the AI online sites—and somewhere in the middle of its answers, the LLM decided to just take a break and start looking at pictures of landscapes. It was as if the LLM was doing what a cognitively fatigued person would do online: taking a break, looking at a couple of pictures of something they may be thinking about—like going camping there or something—and then getting back to the task. We hear about hallucinations in AI, that it can imagine things that aren't there, just like a human brain. But that blew my
mind. I haven't encountered that, but isn't it fascinating? That's a sign of a real generative internal model. See, here's the thing: the thing that, I think, most distinguishes an LLM from a human is that if you go into a quiet room and just sit there without any sensory stimulation, your brain keeps thinking. In other words, you think about what you want to do, you know, planning ahead, or something that happened to you during the day. Your brain is always generating internally. Whereas after talking to one of these large language models,
it just goes blank. There is no continuous self-generated thought. Yet we know self-generated thought, and in particular, brain activity during sleep— as you illustrated earlier with the example of sleep spindles and rapid eye movement sleep—are absolutely critical for shaping the knowledge that we experience during the day. So yes, these LLMs are not quite where we are at yet. I mean, they can outperform us in certain things, like Go, but how soon will we have LLMs, AI that has self-generated internal activity? We're getting closer. This is something I'm working on myself, actually, trying to understand how
that's done in our own brains, generating continual brain activity that leads to planning and things. We don't know the answer to that question yet in neuroscience. By the way, you go to a lecture, and you hear the words one after the next over an hour, and you see the slides one after the next. At the end, you ask a question. Just think about what you just did! Somehow, you were able to integrate all that information over the hour and then use your long-term memory to come up with some insight or some issue you wanted to
discuss. How did your brain remember all that information? Traditional working memory that neuroscientists study only holds information for a few seconds—maybe a telephone number or something—but we're talking about long-term working memory. We don't understand how that is done. LLMs—large language models—can actually do something called in-context learning. It was a great surprise, because there is no plasticity: the thing learns at the beginning, when you train it up on data, and all it does after that is inference—a fast loop of activity, one word after the next. That happens with no learning at all. But it's
been noticed that as you continue your dialogue, it seems to get better at things. How could that be? How could there be in-context learning, even though there's no plasticity? That's a mystery; we don't know the answer to that question yet. But we also don't know the answer for humans either, right? Could I ask you a few questions about you, and how this relates to science and your trajectory? Building off of what you were just saying: do you have a practice of meditation, or eyes closed, sensory input reduced or shut down, to drive your thinking in
a particular way? Or are you, you know, at your computer, talking to your students and postdocs, or sprinting on the beach? You know, it's funny you mention that, because I get my best ideas—not sprinting on the beach, but, you know, just either walking or jogging. It's wonderful! I think serotonin goes up—it's another neuromodulator—and I think that stimulates ideas and thoughts. Inevitably, I come back to my office and I can't remember any of those great ideas. What do you do about that? Well, now I take notes—okay, voice memos, yeah. Some of them pan out; there's
no doubt about it. You're put into a situation that is a form of meditation, you know? If you're running at a steady pace, there's nothing distracting about the beach. Do you listen to music or podcasts? No, I never listen to anything except my own thoughts. There's a former guest on this podcast who happens to be triple-degreed from Harvard, but she's more in the personal-coach space—very high-level, an impressive mind, an impressive human all around—and she has this concept of "wordlessness" that can be used to accomplish a number of different things. It's this idea that
allowing oneself, or creating conditions for oneself, to enter states throughout the day—or maybe once a day—of very minimal sensory input: no lecture, no podcast, no book, no music, nothing, and allowing the brain to just kind of idle and go a little bit nonlinear, if you will. Right? Where we're not constructing thoughts or paying attention to anyone else's thoughts through those media venues in any kind of structured way—as a source of great ideas and creativity. It's been studied; psychologists call it mind wandering. Mind wandering! Yeah, there is a significant literature, and it's
often when your mind is wandering and thinking nonlinearly—in the sense of not following a logical sequence, hopping from thing to thing—that you have an "aha" moment. Often, that's when you get a great idea, just letting your mind wander. Yeah, and that happens to me. I wonder whether social media, and just texting on phones in general, have eliminated a lot of the, you know, walks to the car after work, where one would normally not be on a call or in communication with anyone or anything.
I used to do experiments where I was, you know, pipetting and running immunohistochemistry, and it was very relaxing, and I could think while I was doing it—I'm relaxing and thinking of things—and then I would listen to music sometimes. Okay, so we have a whole session, you know, a clip, in Learning How to Learn about exactly this phenomenon. Here's what we tell our students: if you're having trouble with some concept, you know, you don't understand something, you're beating your head against the wall—stop! Just go off and do something.
Go off and clean the dishes; go off and walk around the block. And inevitably, what happens is that when you come back, your mind is clear and you figure out what to do. That's one of the best pieces of advice anybody could get, because, you know, nobody has told us how the brain works, right? Some people are really good at intuiting it, maybe because they've experienced it. But everybody I know who's really made important contributions—and I'll bet you're one of them—you know, you're struggling with some problem at night and
you go to bed, and you wake up in the morning: "Ah, that's the solution! That's what I should do." Right? First thing in the morning, when I wake up, is when I'm almost bombarded with—I wouldn't say insight, not always meaningful insight—but certainly, what was unclear becomes immediately clear. That's the thing that is so amazing about sleep. And people who know this can count on it. In other words, the key is to think about it before you go to sleep, right? Your brain works on it during the sleep period. And so,
you know, don't watch TV, because then who knows what your brain's going to work on. Use the time before you fall asleep to think about something that is bothering you, or something that you're trying to understand—maybe a paper that you read. You say, "Oh, you know, I'm tired; I'm going to go to sleep," and you wake up in the morning and say, "Oh, I know what's going on in that paper!" Yeah, that's what happens. Once you know something about how
the brain works, you can take advantage of that. Do you pay attention to your dreams? Do you record them? No? No? Okay, so here's the problem: dreams seem so iconic, and a lot of people somehow attribute things to them, but there's never been any good theory or any good understanding, first of all, of why we dream. It's still not completely clear. I mean, there are some ideas, but why this particular dream? Does it have some significance for you? The only thing that I know that might explain
a little bit is that dreams are often very visual—you know, rapid eye movement sleep—so there's something happening there. Actually, it's interesting: all the neuromodulators are downregulated during sleep, and then during REM sleep, acetylcholine comes up, right? That's a very powerful neuromodulator—it's important for attention, for example—but it doesn't come up in the prefrontal cortex, which means that the circuits in the prefrontal cortex that interpret the sensory input coming in are not turned on. So whatever happens in your visual cortex is not being
monitored anymore, so you get bizarre things—you know, you start floating, things happen to you—it's not anchored anymore. But that still doesn't explain why you have that period. It's important, because if you block it—and there are some sleeping pills that do block it—it really does cause problems with normal cognitive function. Cannabis as well—people who come off cannabis experience a tremendous REM rebound and lots of dreaming in the days and weeks and months after stopping
cannabis. Wow—I don't want to call it withdrawal, because that has a different meaning. No, no, it's an imbalance that was caused because the brain adjusted to the endocannabinoid levels, and now it's got to go back, and that takes time. But it's interesting, isn't it? Interesting how it affects dreams—I think that may be a clue. Maybe a very, very common phenomenon, I'm told—I'm not a cannabis user, but no judgment there; I just am not. There's actually a book I read years ago when I was in college, so
a long time ago, by Allan Hobson, who was at Harvard. Oh yeah, I know him! Oh cool—I never met him—but he had this interesting idea that dreams, in particular rapid eye movement dreams, were very similar to the experience that one has on certain psychedelics, like LSD (lysergic acid diethylamide) or psilocybin, and that perhaps dreams are revealing the unconscious mind. You know, not using any psychological terms, but when we're asleep, our conscious mind can't control thought and action in the same way, obviously. It's sort of a recession of the water
line, so we're getting more of the unconscious processing revealed. You know, that's an interesting hypothesis. How would you test it? Probably, you'd have to put someone in a scanner and have them go to sleep, and then put them in the scanner on a psilocybin journey—this kind of thing. You know, it's tough. I mean, any of these observational studies, of course, as we both know, are deficient in the sense that what you'd really like to do is control the neural activity. That's right—you'd like to get in there and tickle the neurons over here and see how the brain
changes. And you'd love to get real-time subjective reports. This is the problem with sleep and dreaming: you can wake people up and ask them what they were just dreaming about, but you can't really know what they're dreaming about in real time. It's true, yeah, it's true. By the way, you know there are two kinds of dreams—very interesting! If you wake someone up during REM sleep, you get very vivid dreams, and they're always different and changing. But if you wake someone up during slow wave sleep, you often get a dream report, but it's the kind of dream that keeps repeating over and over again every night, and it has a very heavy emotional content. Interesting—that's in slow wave sleep, huh? Yeah. Because I've had a few dreams recur over and over throughout my life, so those would be in slow wave sleep. Yeah, probably slow wave sleep. Fascinating! As a neuroscientist who's computationally oriented, you really incorporate the biology so well into your work—that's one of the reasons you're a luminary of your field—and you're also now really excited about AI. What are you most excited about now? Like, if you
had—and of course, this isn't the case—but if you had, say, 24 more months to just pour yourself into something, and then you had to hand the keys to your lab over to someone else, what would you go all in on? Well, the NIH has something called the Pioneer Award, and what they're looking for are big ideas that could have a huge impact. I put one in recently, and here's the title: "Temporal Context in Brains and Transformers." In brains and Transformers—AI, right? The key to ChatGPT is that there's this new architecture—a deep learning architecture, a feedforward network, but it's called a Transformer, and it has certain unique parts. There's one called self-attention, and it's a way of handling what is called temporal context. What it does is connect words that are far apart. You give it a sequence of words, and it can tell you the associations. Like if you use the word "this," you have to figure out what it referred to in the last sentence—well, there are three or four nouns it could have referred to, but from context, you can figure out
which one it does, and you can learn that association. Could I just play with another example to make sure I understand this correctly? I've seen these word-bubble charts. If we were to say "piano," you'd say "keys," you'd say "music," you'd say "seat," and it kind of builds out a word cloud of associations. And then over here, we'd say—I don't know—I'm thinking about the Salk: "sunset," "Stonehenge." Anyone can look it up—there's this phenomenon called Salkhenge. Then you start building out a word cloud over there. These are disparate things, except I've been to a classical music concert at the Salk Institute—Symphony at Salk—twice, so they're not completely non-overlapping. So you start getting associations at a distance, and eventually they bridge together. Is this what you're referring to? Yes, I think that's an example, but it turns out that every word is ambiguous—it has three or four meanings—and so you have to figure that out from context. In other words, there are words that live together and come up often, and you can learn that just by predicting the next word in a sentence. That's how
a Transformer gets trained: you give it a bunch of words, and it keeps predicting the next word in the sentence—like the autocomplete in my email now, which tries to write part of it for me some of the time. Okay, well, that's a very primitive version of this algorithm. What happens is, if you train it up on enough text, it not only can predict the next word, it builds up a semantic representation in just the way you described—words that are related to each other having associations. It can figure that out, and it has representations inside this very large network with trillions of parameters—unbelievable how big they've gotten. And those associations form an internal model of the meaning of the sentence. We've now probed these Transformers, and we're pretty confident that they form an internal model of the outside world—in this case, a bunch of words—and that's how they're able to respond to you in a way that is sensible, that makes sense, and is actually interesting. And it's all done through the self-attention I'm talking about.
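The self-attention idea Dr. Sejnowski describes—every position in a word sequence comparing itself against every other position to resolve context—can be sketched concretely. The following is a minimal, illustrative scaled dot-product self-attention in plain NumPy, not anything from the episode or from an actual trained model; the embeddings, weight matrices, and dimensions are random toy values chosen just to show the mechanism:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one token sequence.

    X: (seq_len, d_model) token embeddings.
    Returns (output, weights) where weights[i, j] is how much
    position i attends to position j.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # all-pairs affinities
    weights = softmax(scores, axis=-1)         # each row sums to 1
    return weights @ V, weights

# Toy setup: 5 "words", 8-dim embeddings, 4-dim attention heads.
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 4
X = rng.normal(size=(seq_len, d_model))
Wq = rng.normal(size=(d_model, d_k))
Wk = rng.normal(size=(d_model, d_k))
Wv = rng.normal(size=(d_model, d_k))

out, w = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

Each row of `w` is a distribution over all positions in the sequence, which is how a Transformer can link a word like "this" back to a noun several words away: the attention weight between those two positions becomes large, regardless of the distance between them.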
In any case, my Pioneer proposal is to figure out how the brain does self-attention. It's got to do it somehow, and I'll give you a little hint: basal ganglia. It's in the basal ganglia—that's my hypothesis. Well, we'll see. I'll be working with experimentalists. I've worked with John Reynolds, for example, who studies primate visual cortex, and we've looked at traveling waves there. Other people have looked at these waves in primates too, and I think these traveling waves are also pieces of the puzzle that are going to give us a much better view of how the cortex is organized and how it interacts with the basal ganglia. We've already been at it, but neuroscientists have studied each one of these parts of the brain independently, and now we have to start thinking about putting the pieces of the puzzle together—taking everything we know about these areas and seeing how they work together in a computational way. That's really where I want to go. I love it, and I do
hope they decide to fund your Pioneer Award. I do too—and should they make the bad decision not to, maybe we'll figure out another way to get the work done. Certainly you will. Terry, I want to thank you, first of all, for coming here today—taking time out of your busy running, teaching, and research schedule to share your knowledge with us—and also for the incredible work that you're doing on public education: giving the public resources, at zero cost, to learn how to learn better. So we will certainly provide links to Learning How to Learn, to your book, and to these other incredible resources that you've shared. You've also given us a ton of practical tools today related to exercise, mitochondria, and some of the things that you do—which of course are just your own versions—but that are certainly going to be of value to people, including me, in our cognitive and physical pursuits and, frankly, in longevity. It is not lost on me, or on those listening, that your vigor is, as
I mentioned earlier, undeniable, and it's been such a pleasure over the years to see the amount of focus, energy, and enthusiasm that you bring to your work—and to observe that it not only hasn't slowed, but that you're picking up velocity. So thank you so much for educating us today. I know I speak on behalf of myself and the many, many people listening and watching: this is a real gift—a truly incredible experience to learn from you. So thank you so much. Well, thank you, and I have to say that I've been blessed over the years
with wonderful students and wonderful colleagues, and I count you among them—someone I've really learned a lot from. Thank you. But, you know, science is a social activity; we learn from each other, and we all make mistakes, but we learn from our mistakes, and that's the beauty of science—we can make progress. Your career has been remarkable too, because you have affected and influenced more people than anybody else I know personally with the knowledge that you're broadcasting through your interviews, and also just through your interests. I'm really impressed with what you've done, and I want you to keep at it, because we need people like you. We need scientists who can actually express these ideas and reach the public. If we don't do that, everything we do stays behind closed doors—nothing gets out—and you're one of the best of the breed in terms of being able to explain things in a clear way that gets through to more people than anybody else I know. Well, thank you. I'm very honored to hear that. It's a labor of love for me, and I'll take
those words in, and I really appreciate it. It's an honor and a privilege to sit with you today, and please come back again. I would love to. All right, thank you, Terry. You're welcome. Thank you for joining me for today's discussion with Dr. Terry Sejnowski. To find links to his work, to the zero-cost online learning portal that he and his colleagues have developed, and to his new book, please see the show note captions. If you're learning from and/or enjoying this podcast, please subscribe to our YouTube channel—that's a terrific, zero-cost way to support
us. In addition, please follow the podcast on both Spotify and Apple, and on both Spotify and Apple, you can leave us up to a five-star review. Please check out the sponsors mentioned at the beginning and throughout today's episode; that's the best way to support this podcast. If you have questions or comments about the podcast, or guests or topics that you'd like me to consider for the Huberman Lab podcast, please put those in the comment section on YouTube. I do read all the comments. For those of you that haven't heard, I have a new book coming
out—it's my very first book! It's entitled *Protocols: An Operating Manual for the Human Body*. This is a book that I've been working on for more than five years, and it's based on more than thirty years of research and experience. It covers protocols for everything from sleep to exercise to stress control, protocols related to focus and motivation, and of course, I provide the scientific substantiation for the protocols that are included. The book is now available for pre-sale at protocolsbook.com; there you can find links to various vendors, and you can pick the one that you like best.
Again, the book is called *Protocols: An Operating Manual for the Human Body*. If you're not already following me on social media, I am Huberman Lab on all social media platforms, so that's Instagram, X (formerly known as Twitter), Threads, Facebook, and LinkedIn. On all those platforms, I discuss science and science-related tools, some of which overlap with the content of the Huberman Lab podcast but much of which is distinct from the content on the Huberman Lab podcast. Again, that's Huberman Lab on all social media platforms. If you haven't already subscribed to our Neural Network newsletter, our Neural
Network newsletter is a zero-cost monthly newsletter that includes podcast summaries as well as protocols in the form of brief one-to-three-page PDFs. Those PDFs cover things like deliberate heat exposure, deliberate cold exposure, foundational fitness protocols, as well as protocols for optimizing your sleep, dopamine, and much more. Again, all available completely free of charge—simply go to hubermanlab.com, go to the menu tab, scroll down to newsletter, and provide your email. We do not share your email with anybody. Thank you once again for joining me for today's discussion with Dr. Terry Sejnowski, and last
but certainly not least, thank you for your interest in science! [Music]