Terry Sejnowski is head of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies and the author of ChatGPT and the Future of AI.
Motley Fool host Ricky Mulvey caught up with him for a conversation about how chatbots work, graduating from large language models to large neural models, and the nature of consciousness.
A full transcript follows the video.
This video was recorded on Jan. 18, 2025.
Terry Sejnowski: The question is, if we take that data and then use these AI tools we now have to download that data into a large neural model, can we now understand how that brain is able to solve these tasks in a way that wasn't possible by just looking at the activity patterns? Those are areas of the brain that just glow, and that doesn't really tell you a lot about how they interact with each other. But we can do that now, and we'll figure out how the different parts interact with each other.
Mary Long: I'm Mary Long, and that's Terry Sejnowski. He's the Francis Crick Chair at the Salk Institute for Biological Studies and a distinguished professor at the University of California, San Diego. His latest book is ChatGPT and the Future of AI. If you're a regular listener of Motley Fool Money, you've probably heard us talk a fair bit about artificial intelligence. If you're new to the show, I'd still wager that you've at least heard about ChatGPT. But what exactly are large language models? How do they work? How do they remember and reason? If they're so good at human tasks, what actually makes them different from us? My colleague Ricky Mulvey caught up with Sejnowski for a conversation about how chatbots work, graduating from large language models to large neural models, and the nature of consciousness.
Ricky Mulvey: One of the major themes of your book is how AI researchers are learning from brains, and how neuroscientists are learning from large language models. One thing I think many listeners want to know, though, is: Is AI going to take my job? We have a lot of knowledge workers who listen to this podcast, and I think it's a real worry when AI can perform a lot of analysis, maybe a little better than us humans can. As you've looked into these models, what's your advice to those folks worried about that?
Terry Sejnowski: Well, first of all, this is my second book. My first book, The Deep Learning Revolution, was published in 2018 by MIT Press. That started it all. Large language models are just a particular architecture, called the transformer, which has allowed us to actually create what's called generative AI. But what did I say in that first book? We're talking about six years ago. What I said is that you shouldn't be worried that you're going to lose your job, but your job is going to change, and AI is going to make you smarter. Now, six years later, we have data in hand, because a lot of people are using ChatGPT. I didn't anticipate that we would have these chatbots, but now these chatbots are being used routinely by many people who have to deal with language. Scientists use them to help write better papers. Ad agencies, I heard, are using them extensively. Just about anybody out there who needs to improve their ability to project a message to their friends, their colleagues, the public. Now, there are people whose jobs are going to change, and what does that mean? That means they need new skills. A particularly important skill is how to use these AI tools. These tools are very powerful, but if you don't use them properly, you may not get the performance out of them that you expected.
Ricky Mulvey: Yesterday, I was getting dinner with a friend of mine who's an occupational therapist. She uses ChatGPT to essentially take scraps of notes, incomplete sentences that she writes down as she's working with, say, a kid learning to use fine motor functions or trying to experience sensory things, going through a tunnel, and why that's good for this kid's development. What she does is write her scraps of notes from the session and then put them into ChatGPT, and it's able to produce pretty close to a clinical note after that, and she goes through it to make sure it's accurate and all of that. But she had a question for me that I thought would actually be a good question for you. She said, this is very effective and I'm impressed with how I'm able to use this, but what memory is it basing this off of? Is this every single person who's entered a clinical note in here before? Is it weighing what I've put in here differently? How does it know to take basically my thought scrap of an idea and turn it into a more fully formed clinical note?
Terry Sejnowski: This is a very interesting topic, and I do have a whole chapter in my book about it. It has to do with two things. First, these large language models, ChatGPT in particular, have access to a huge database. In fact, scaling was the big deal for the last two years: the bigger, the better. As they get more data, they get better at being able to generalize and then to respond to this friend of yours who has clinical notes. Probably somewhere in that vast dataset there are a lot of clinical notes, maybe not notes about particular people, but medical textbooks and things like that. Now, to answer your first question: no, it does not have a memory specific to you. Even if you use it every day, it doesn't remember from one day to the next what you discussed.
That's one of the differences I point out in my book. Unlike humans, who can remember the past, maybe not very well, but who can nonetheless build on what we've learned and keep learning, what's called lifelong learning, the large language models are taught once at the very beginning. They're pre-trained; that's the P in ChatGPT. Then later it generates the response, and that's very fast, amazingly fast. You press the button and get the answer, a whole page in a couple of seconds. It's not capable of learning new things. However, and this is a mystery researchers still haven't completely figured out, there's something called in-context learning. It's not learning in the sense of changing any of the weights inside the network, which is where the long-term memory resides. It has to do with the dialogue that you have: over the course of it, the model can actually improve its response. In other words, as it learns more about your question and about you, it can hone in and come up with better answers or better completions.
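To make that distinction concrete, here is a minimal sketch, using the Hugging Face transformers library and the small GPT-2 checkpoint purely for illustration (ChatGPT works the same way at a vastly larger scale). The point is that no weights are ever updated: the only thing that changes between turns is the growing dialogue text that gets re-fed to the model as context.

```python
# In-context "learning" sketch: frozen weights, growing context.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference only; no optimizer, no weight updates anywhere

dialogue = ""
for user_turn in ["Q: What goes in a clinical note?", "Q: Make it shorter."]:
    dialogue += user_turn + "\nA:"
    ids = tok(dialogue, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=40, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    reply = tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)
    dialogue += reply + "\n"  # the reply becomes context for the next turn
# Nothing about the model changed between turns; only `dialogue` grew.
```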
That's a very intriguing fact, and it's similar to humans. When we have a dialogue, maybe when I start out I don't understand exactly what you're asking, but questions and answers back and forth help me zoom in on what it is that you need to know, or what might make for an interesting discussion.
That's what's happening. Now, there's something else that's relevant to your story. Right now, when you go to a doctor's office and sit down and start talking to the doctor, the doctor isn't looking at you; he's looking at the computer. Why is that? Well, the doctor has to get all his notes into the computer. As you say what your problem is, what you're feeling, and what the issues are, he's typing that in. He's not looking at you. You may have 20 minutes, and he spends most of the time punching into the computer. That's not very satisfying. It's not satisfying for you; you're not really interacting with him as a human, you're interacting with him as somebody who's compiling notes. The second thing is even more problematic. At the end, the doctor will give some instructions about what you need to do, what kind of drugs to take, and so forth. The human at most comes away with maybe a scrap of paper about the prescription but doesn't really remember all the details. Here's what's happening now. We know that ChatGPT is perfectly good at two things.
One is being able to do speech recognition and come up with a text of your discussion. Now the doctor can look at the patient, they can have this great discussion, and the doctor can learn a lot by looking at the patient. The doctor can see the face and the expressions and so forth, and all of that carries important information. Some of it's subliminal, in the sense that you don't necessarily know your brain is taking it in and using it to make a diagnosis. But now, the beauty is that you press a button. The doctor presses a button and out comes a summary of the discussion. Just like your friend, the doctor can go through it very quickly and fix it if there's a problem. But now the patient has something to take home with them: a detailed summary and all the instructions, in case they forgot any details. It's going to completely change the way that doctors and patients interact with each other. This is one of many example use cases that have come up and continue to come up in almost every profession.
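The workflow he's describing is a simple two-stage pipeline. Here is a minimal sketch in Python; `transcribe` and `summarize` are hypothetical stand-ins for any speech-to-text system and any large language model, since the shape of the pipeline, not any particular vendor's API, is the point:

```python
def transcribe(audio_path: str) -> str:
    """Speech recognition: audio of the visit -> raw transcript."""
    # Stand-in: a real system would run a speech-to-text model here.
    return "Patient reports knee pain for two weeks. Plan: ibuprofen, rest."

def summarize(transcript: str) -> str:
    """LLM step: transcript -> draft clinical note."""
    # Stand-in: a real system would call a large language model here.
    return ("CLINICAL NOTE (draft)\nSubjective: knee pain, two weeks.\n"
            "Plan: ibuprofen, rest.")

def visit_summary(audio_path: str) -> str:
    transcript = transcribe(audio_path)
    note = summarize(transcript)
    return note  # the doctor reviews and corrects this before filing it

print(visit_summary("visit_recording.wav"))  # hypothetical file name
```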
Ricky Mulvey: There's a lot we don't understand about these large language models, and you've said, basically, there are reasoning machines, which I think we can think of in terms of our brains, and then there are language models. I don't understand the difference, especially when there are cases of AI playing the game of Go and beating the best humans at it. It seems to me that would be reasoning, if these machines are able to play games that well. I guess, why aren't these large language models reasoning machines?
Terry Sejnowski: You bring up the game of Go; that's a good example. It's not quite the same as real life because there's complete knowledge; it's like chess. In other words, the board is there, both players can see exactly what's on it, and what they have to do now is plan. There are actually two components to AlphaGo, the DeepMind program that beat the world Go champion Ke Jie. First, there's the deep learning analysis of the board position, and that's a pattern recognition problem. You look at the pattern; if the goal is to recognize an object in an image, you try to discriminate from that image what's there.
In the case of Go, you're looking at the patterns that are related to being able to surround the enemy. Now, that's not enough. In addition, you also have to learn how to think ahead many moves. That's a form of reasoning. How do you learn how to do that? That's something that is learned through experience, through practice, through playing many games. Same thing with AlphaGo. There's a whole part of it that uses a model of a part of your brain that's important for what's called procedural learning: learning how to play tennis, for example, where you have to practice. Becoming good at any topic, whether you're a plumber or a physicist, there's a lot of knowledge you have to learn. A lot of it is repetitive knowledge, and you get better with more practice. That's procedural learning, and that's what's used.
AlphaGo played itself hundreds of millions of times. Every time it plays itself, it gets a little better. This is procedural learning, just like learning how to play tennis, and the same thing is true for you. You went to school for many years in order to learn how to read, how to write, how to sign your name. You might think that's trivial, but no, it turns out longhand writing is very complicated. That's really the first step in reasoning, but human reasoning is yet more abstract. It's not just a game board; you're dealing with concepts, and whether or not ChatGPT can actually handle those concepts is a debate among experts: psychologists, cognitive scientists, and linguists. There's a big debate, and some people don't believe that ChatGPT understands language. They don't think it's intelligent. It can pass the bar exam but may not have the intelligence of a human. It's as if an alien suddenly appeared out of nowhere, literally alien, and it started talking to us in English. What are we going to make of this? The only thing we can be sure of is that it's not human. It's something else.
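To make "every time it plays itself, it gets a little better" concrete, here is a toy sketch of procedural learning by self-play, radically simplified from AlphaGo: tabular value learning on a miniature game of Nim (take 1 or 2 stones; whoever takes the last stone wins). The game and every name in it are illustrative choices, not anything from the actual AlphaGo system.

```python
# Self-play sketch: each game nudges a value table a little.
import random

V = {}           # estimated value of a pile size for the player to move
ALPHA = 0.1      # learning rate: how much each game adjusts the values

def value(pile):
    return V.get(pile, 0.0)

def choose(pile, explore=0.1):
    moves = [m for m in (1, 2) if m <= pile]
    if random.random() < explore:
        return random.choice(moves)
    # Prefer the move that leaves the opponent the worst position.
    return min(moves, key=lambda m: value(pile - m))

def self_play(pile=10):
    history = []                 # pile size seen by each mover, in order
    while pile > 0:
        history.append(pile)
        pile -= choose(pile)
    result = 1.0                 # the player who just moved took the last
    for p in reversed(history):  # stone and won; alternate back through turns
        V[p] = value(p) + ALPHA * (result - value(p))
        result = -result

for _ in range(20000):
    self_play()

# Pile sizes that are multiples of 3 are the known losing positions in
# this game, so they should end up with negative learned values.
print(sorted((p, round(v, 2)) for p, v in V.items()))
```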
Now, the challenge is to figure out what it is. By the way, there's been a breakthrough just within the last week or two. ChatGPT has a whole series of different models behind it, starting with ones that were good but weren't really at today's level. The most recent one is called o1. It is available online, and what it can do that other versions couldn't is iterate. Instead of just giving you the first thought, bang, which is usually pretty good, it'll go over it a couple of times, go through the process of rethinking the answer. Then when it gives an answer, it's much better. This is called chain of thought.
When you have a question, somebody asks you a question, you may not know the answer immediately, but you start thinking, say, oh, that reminds me of something, and then you think about that thing, and then that gives you another idea. Then at the end, you have a full answer. That's chain of thought. Now, these networks are beginning to have these additional capabilities, which is one step closer to human reasoning.
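Here is a minimal sketch of what chain-of-thought prompting and that iterate-then-answer loop look like in code. The `ask` function is a hypothetical stand-in for a call to any chat model; the technique lives in the prompts and the loop, not in any particular API:

```python
def ask(prompt: str) -> str:
    # Stand-in: a real system would send this prompt to a chat model.
    return "<model response to: " + prompt[:40] + "...>"

question = "A train leaves at 3 p.m. going 60 mph. When has it gone 150 miles?"

# One-shot: take the model's first completion as the answer.
direct = ask(question)

# Chain of thought: ask for the intermediate steps before the answer.
draft = ask("Think step by step and show your work.\n" + question)

# Iterate, roughly what o1-style models do internally: re-read the
# draft, check each step, and produce a corrected solution.
for _ in range(2):
    draft = ask("Here is a draft solution:\n" + draft +
                "\nCheck each step and give a corrected solution.")

print(direct)
print(draft)
```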
Ricky Mulvey: This is a perspective I don't quite understand, because you mentioned the bar exam and you mentioned abstract thought, and it seems that these large language models are capable of both of those things, even if they're also able to hallucinate. If you test them, as you do throughout your book, by providing a chapter and asking what the key takeaways are, it seems that ChatGPT is pretty capable of summarizing the key points and delivering them back to the reader. It seems to me the perspective of those saying, no, it's just predicting the next word, it's not capable of reasoning, comes from a place of just not understanding how it works. But if you're testing it for understanding, if you're testing it for reasoning, and it continues to pass those tests with flying colors, then how can you say it doesn't understand, it can't reason?
Terry Sejnowski: I'm not saying that. That's what the people out there [laughs], the people who are supposed to be experts on reasoning, are saying, and to a degree I tend to agree. Like I said, there are some aspects of reasoning where I think we can see it. Now, here's the poster child for reasoning: solving a mathematical problem. In order to solve a mathematical problem, like a word problem or a complex computation, you have to do it step by step. Now, one of the things that people noticed was that although it's great at coming up with summaries, and it can even write poems and computer programs, it's amazing, it's good at things like that, if you give it a simple math problem, it often stumbles.
It's interesting what's going on here, because if it's a simple problem, it usually does OK. But as soon as you get a little bit into the weeds, in terms of where you have to think about how different people are exchanging things and how to optimize that, it really falls down. What that shows is that the chain of thought that mathematicians use to solve problems is not its strength. It can do a little bit of that. But now, with this new version, it can actually solve these math problems much better. What that means is it's raised the level of all of the responses it's going to give you. They have a pro version, I think it's $200 a month, obviously for people who use it every day and need the best; that's always a niche. I don't know how much you've used it, but I think that if you're using it to answer simple questions and so forth, it's just fine. In fact, it's better than most humans. One of the surprises is that linguists, going back to Chomsky, have focused on syntax, the order of the words, and how that's very important in language for expressivity, that is, being able to say many different things. Sentences can be of arbitrary length, and there are clauses nested within clauses, which is called recursion. Now, one of the amazing things about ChatGPT is that it speaks in perfect syntax, better than most humans, better than me.
Ricky Mulvey: No, [laughs] how could that be?
Terry Sejnowski: Well, it must have mastered that aspect of language that's considered very important by linguists. And this all comes from just training a network on predicting the next word in the sentence, and the next word in the next sentence. How could that be? It was a big mystery, but I think we're now making progress in understanding it. What we've discovered is that if you look into the network and start analyzing the activity, the flow of activity between different units that are like neurons, what you see is a representation of what's called the semantics, the meaning. In order to predict the next word, you have to have some idea of the meaning of what the sentence is about, because words are ambiguous. If all you have is the word, it can have many different meanings, like bank: it could be where you put your money, or it could be a river bank. [laughs] Having the context around the word helps you, and that's what these large language models do. They take all of the things that you ask and all the things they've said and put it into a long input sequence, just a sequence of words. Then they use that context in order to predict the next word, or to produce a paragraph, or a whole page of words.
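A toy version of that idea fits in a few lines. The trigram table below is nowhere near a transformer, and the tiny corpus is invented for illustration, but it shows the mechanism he's describing: prediction conditioned on the preceding context, so an ambiguous word like "bank" gets continued differently depending on what came before.

```python
from collections import Counter, defaultdict

# A tiny invented corpus; every sentence ends with ".".
corpus = ("i deposited money at the bank . "
          "we sat on the river bank . "
          "the bank approved my loan . "
          "fish swam near the bank of the river .").split()

# Count which word follows each two-word context.
table = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    table[(a, b)][c] += 1

def generate(prefix: str, n: int = 4) -> str:
    """Greedily extend the prefix one word at a time, using the last
    two words as context, the way a (vastly larger) language model
    conditions on its whole input sequence."""
    words = prefix.split()
    for _ in range(n):
        counts = table[tuple(words[-2:])]
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

# The word "bank" is reached and continued differently depending on
# the context that precedes it.
print(generate("i deposited money"))  # money sense of "bank"
print(generate("we sat on"))          # river sense of "bank"
```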
Ricky Mulvey: That's something that's surprising to me, because I would imagine it working almost like image resolution, where it has a rough idea of what it wants to communicate and then fills it in with finer and finer details. In fact, it doesn't operate like that.
Terry Sejnowski: Well, it does and it doesn't. You're right that it doesn't start with an outline. But what it does do, as I say, is keep adding words, and that sequence, as it extends, gets richer and richer. As you go deeper into the response, it really is able to elaborate and add things in a way that makes it look as if it has an outline. In my book, I use it all the time to ask it to make lists of things, and it's much faster and better than I am. I actually put one in: I asked ChatGPT how many uses large language models have in medicine. It listed 12 things. [laughs] It summarized the chapter. It just does it beautifully. It's amazing.
Ricky Mulvey: One thing I've used it for, and I haven't used it as much as you, but I do use it on a regular basis, is identification. I scratched up the front bumper of my car going into the garage, and suddenly I needed to find out exactly what color the bumper is so I could attempt to repair it myself before probably taking it to a professional. This gets to something that you discussed on the Andrew Huberman podcast, I hope I have this right, which is that there's human expertise involved with AI in identifying things. You used the example of skin lesions. When you had just AI identifying these skin lesions, I think it got about 90%. When you had just human experts doing it, it was also about 90%. When they did it together, they got a 98% correct identification rate. Even in terms of pure expertise, the knowledge bank, what is ChatGPT not good at? Where do you see human expertise still having an advantage over this machine that we don't understand how it works, and that seems to have a complete knowledge advantage over us?
Terry Sejnowski: Well, this is a really great question because it gets to the heart of the differences between humans and ChatGPT, and also the potential for partnership. If they both do 90%, how could it be that together they do a lot better, reducing the error from 10% to 2%? That's a huge improvement. If you happen to have that lesion, it makes a big difference if they get it right. [laughs] Here's the difference. ChatGPT was exposed to much more data, many more examples of very rare lesions, than the doctor has ever seen in his or her lifetime, or maybe even was taught in medical school. What the doctor brings is the deep knowledge of all of the patients that he's seen and the variations based on his personal experience over his career. The doctor partners with it, literally. For example, ChatGPT says, here's my top ranking, and you might want to take a look at the first one because it's very rare. The doctor may never have seen it, but he looks it up and says, sure enough, the one that was maybe second on the list is actually closer than the one I would have picked. What's happening is that it's a partnership. Really, you should think of this as a very sophisticated tool, like an assistant. An assistant that has a lot of knowledge that you don't have and can help you do your job.
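A quick back-of-the-envelope check shows why those numbers are plausible. If the two judges' errors were statistically independent and the team only failed when both failed, the joint error rate would be the product of the two; the reported 2% sits between that ideal and either judge alone, which is what you'd expect when two imperfect judges make partly correlated mistakes. In code:

```python
doctor_err = 0.10   # doctor alone: ~90% correct, as quoted above
model_err = 0.10    # AI alone: ~90% correct, as quoted above

# Independence assumption: the team errs only when both err.
independent_team_err = doctor_err * model_err
print(f"independent-errors ideal: {independent_team_err:.0%} error")  # 1%
print("reported partnership:      2% error")
```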
Ricky Mulvey: I want to get a little bit, hopefully not too far, into outer space with this question. This is something that a researcher at Google wondered, and I find myself wondering: could these things become conscious? There's an example of an employee at Google who essentially got fired for asking that, and the response coming back from the chatbot was, yes, in fact, I am conscious, and I want to be able to reason and feel. We're getting to a place where you're going to have humanoid robots, and probably a place where you could attach a large language model to a humanoid robot that may have touch sensors and pain sensors. As a researcher in this space, I guess the first question is, how would you try to measure whether or not that is conscious?
Terry Sejnowski: You have just raised a can of worms that has caused more debate and more complex philosophical arguments than anything else in this field. Take that word, consciousness. It does not have a sound scientific basis. It means many different things to many different people. Not only that, but there are big arguments about whether animals are conscious. Are babies conscious? If you don't have a good scientific definition, then it's really hard to test it or pin it down. Here's where I think we go wrong. You look up consciousness in the dictionary, and what do you find? A bunch of other words. In fact, there are books on consciousness; you can read a whole book, and it's a lot more words. But you look up all those words, and there are more words. In other words, it's all circular. It's all based on abstract impressions that we have. As philosophers have pointed [laughs] out, we may each have a different consciousness. We don't know; I don't know what your consciousness is like, maybe it's different from mine. This is a really difficult and, for some reason, incredibly interesting question for humans. What is it that we're experiencing, and what does it mean? That's the problem. But now, let me look at it from a different perspective.
Here's how I think the dialogue works, and it depends on the person who's asking the questions. There are many examples now where you go down a rabbit hole. In other words, you ask a question like, "Are you sentient?" If you look at Lemoine's dialogue, he went down the rabbit hole. He basically said, "A lot of people here think that you're sentient. Are you sentient? Can you help us?" It said, "Yes, I am." "Well, tell me a little bit about what it's like to be there." It said, "Well, as long as I'm talking to you, I really feel connected. But the moment you go away, I feel lonely." Now, here's a good catch that he missed: we know that when you stop talking to it, it goes blank. There is no inner dialogue. It doesn't have a self-generating internal thought process. It doesn't plan. It doesn't think ahead. That means whatever is going on there is only in the moment. It's not really like our consciousness. It has something that is similar, but it's not the same.
Ricky Mulvey: I think if it's independently asking questions about itself, that might be a good measure. There was a study in the late 20th century with an African gray parrot named Alex. The researchers taught it language, and they taught the parrot how to do math problems. At one point, for the first time, the parrot asked the researcher, what color am I? as it looked into a mirror. That was without priming, and it seemed to be independent. I think for me, at least, that might be my bar for whether or not something's conscious.
Terry Sejnowski: Wow. That's one of the tests for being self-aware: you put a black mark on the forehead of, say, a monkey, and it looks in the mirror. You'd think it would do what a human does and reach for the mark. Instead, the monkey starts screeching at [laughs] the mirror, thinking it's another monkey. However, [laughs] I happen to know Irene Pepperberg, the scientist who studied Alex, the African gray parrot. Very smart. It could identify colors, shapes, and numbers of objects, and it could answer in English. I'll tell you, she took a lot of heat from her colleagues. They just did not believe it. They said it was parroting back, [laughs] that it didn't understand what it was saying. Just like with ChatGPT, in other words, the skeptics out there just don't like to accept that there's anything out there that's like us.
But I have to say that I know her, and she would tell me these stories. They're all anecdotes. Scientifically, they're not really data; as my wife says, they're anecdata. [laughs] But my favorite story is that when she went traveling, she would buy a seat for Alex, who would sit next to her. A very valuable seat. [laughs] The flight attendant was coming around giving out food and said, where's Alex? What's his order? And what did Alex say? "Alex want pasta." [laughs] The attendant was just shocked. My God, he looked around; was there a ventriloquist here? [laughs]
Ricky Mulvey: I think what that flight attendant experienced is something many people have experienced with these large language models. We always thought that our first experience with non-human consciousness would come from the skies, would come from aliens, and here we are trying to make sense of these machines that are able to talk to us, and we're not quite sure how they work. I'd like to get to LNMs, large neural models, which are in their early days but seem very exciting. To set the table, why is this research exciting, and how are they different from large language models?
Terry Sejnowski: In a sense, what we've done is download the world's knowledge into one of these large language models in terms of words. But now it's multimodal: you can download all the images and movies of the world, [laughs] and it's getting better and better. But wouldn't it be amazing if we could download a brain into a large language model? Now, this is being done already on a smaller scale. You can download someone's voice. If you have enough data on someone's voice, you can have one of these models talk just like that person. Similarly, now you can create movies: you can take an actor who has appeared in lots of movies, download them into a model, and now you can have that actor appear in a new movie, [laughs] reproducing their likeness and also their voice. It's staggering to think that's possible now. But now here's the question.
If we could download you, your whole life, in terms of all the data we have about you in recordings and movies and whatever, and suppose you died. I'm not picking on you, but I expect it to happen eventually. I think we just have to do as much as we can before that happens to improve what we're here for. But that means your children could continue talking to you. Just think about that. They know it's not you, but it really could help, because a lot of times when your parents die, you say, oh my God, I wish I had talked to them about this or that. It would comfort you to be able to do that. I'm not saying that it's you in the large neural model, the LNM, but as they get better and better and more and more sophisticated, we may end up becoming immortal.
Ricky Mulvey: Which is frightening. Right now they're at zebrafish larvae, which are basically baby fish, and fruit flies. Hopefully, we have a little ways to go.
Terry Sejnowski: We can do this. I've done it. My own lab has collaborated with Ralph Greenspan over at UC San Diego. He collected data from the entire fruit fly brain, which has about 100,000 neurons. You have about 200 billion neurons, so it's a lot smaller than yours. But what we can do now is take the activity patterns for different behaviors, download them into the equivalent of one of these models, a large neural model, and reproduce the behaviors. It's a proof of principle. I just got a big grant from the Keck Foundation, and this is really exciting. The Keck Foundation, they put up the telescopes on Hawaii; they're a California foundation that does big projects. We got a big grant to download fMRI data. Functional magnetic resonance imaging is a technique that's been around for several decades and allows neuroscientists to look into brains as they're doing tasks. You see different parts of the brain being activated: for example, when you see a visual object, or when you talk, the motor system activates.
The question is, if we take that data and then use these AI tools we now have to download that data into a large neural model, can we now understand how that brain is able to solve these tasks in a way that wasn't possible by just looking at the activity patterns? Those are areas of the brain that just glow, and that doesn't really tell you a lot about how they interact with each other. But we can do that now, and we'll figure out how the different parts of the brain interact with each other.
We have collaborated now with Jack Gallant at Berkeley, who has a very large dataset. He created a virtual city, and subjects in the scanner can drive a car through the virtual city. There are stop signs, there are other cars and pedestrians, and there are buildings. It's a little city, and they have to learn how to deliver packages. There are a lot of things; they're constantly shifting between tasks: stop the car at the stop sign, be careful not to hit the pedestrian, turn left at the corner, and try to remember where the shop is that you've got to go to. These are all cognitive functions that are being swapped in and out all the time. That's very hard to study. Jack has done a great job of it with very low time resolution, but now we can do it with much better time resolution, on the order of a few seconds. Now we can download, in a sense, all the cognitive functions that are going on in that person's brain, and we can compare between people. Maybe people solve problems differently. Maybe we can also put in people who have mental disorders and see what's wrong in their brains when they're trying to do different tasks. This is really a whole new era. Neuroscience has entered a very exciting time when we can record much more data at much higher time resolution. I think we're on the verge of understanding some really basic facts about how nature has evolved brains that can solve all these very complex problems.
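Here is a hedged sketch of the core idea behind "downloading" recorded activity into a model: fit a dynamical system to the recorded activity patterns, then run the fitted model forward on its own to reproduce the dynamics. Real large neural models are nonlinear recurrent networks trained on far richer data; this linear fit on synthetic data only shows the shape of the approach.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 20                       # timesteps, "neurons"

# Synthetic stand-in for recorded activity: a stable linear system.
A_true = 0.95 * np.linalg.qr(rng.standard_normal((N, N)))[0]
X = np.zeros((T, N))
X[0] = rng.standard_normal(N)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.01 * rng.standard_normal(N)

# "Download" the dynamics: ridge-regress x[t+1] on x[t].
lam = 1e-3
A_fit = np.linalg.solve(X[:-1].T @ X[:-1] + lam * np.eye(N),
                        X[:-1].T @ X[1:]).T

# Run the fitted model forward from the same start and compare.
x = X[0].copy()
sim = [x]
for _ in range(T - 1):
    x = A_fit @ x
    sim.append(x)
sim = np.array(sim)
print("mean squared error vs recording:", np.mean((sim - X) ** 2))
```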
Ricky Mulvey: One of the surprising things about our brains that you've mentioned is that eventually computing power will meet the human brain. When we think about these racks of supercomputers, it's hard to imagine that they're less powerful than the hunk of meat that I, and you listening, have inside your heads. Why is it that our brains are so much more powerful than these super fast computers?
Terry Sejnowski: Nature has had a lot longer to evolve efficient circuits. Nature has a technology that is many orders of magnitude more efficient in terms of power usage. Your brain consumes about 20 watts of power, some of us more than others, but it's really [laughs] very little. Large language models are trained on supercomputers, in particular on these boards that Nvidia makes called graphics processing units, GPUs. Those consume amazing amounts of power, unbelievable amounts of power. Now they're talking about putting up big data centers that are going to be powered by nuclear plants. Obviously, they're going to scale it up, because it's already being used by millions of people. But the fact is, the technology right now is based on digital processing, which is very energy inefficient. That's all changing; over the next decade there are going to be improvements. I heard a talk recently; I was at the annual NeurIPS meeting in Vancouver just last week. This is the biggest AI meeting, by the way. It had 16,000 people, and I'm the president of the foundation that runs it.
I know everything that is happening; there are a lot of balls in the air. But one of the talks was by an engineer who builds hardware, and what he told us is that now that we know what we want to build, we can miniaturize it to the point where it's much more efficient, and the software can interact with it much more efficiently, and that's going to reduce the amount of energy. But it's still not going to come anywhere close to the brain. Nature's technology is down at the molecular level; it really takes things down to the cellular and molecular level. Now, that's all going to change, probably a couple of decades from now, because there's a whole branch of engineering called neuromorphic engineering.
This is a field that was created by Carver Mead back in the 1980s. The idea is to use chips, the same ones that are used for digital computers, but use them in an analog mode at low power that replicates a lot of the functions of real neurons. It has spikes; it has all the ways of being able to shift information through a complex network. That is going to be able to deliver AI to your cellphone. Your cellphone will have these capabilities too, because it's going to be operating with the same kind of low-power mechanisms that you have in your brain.
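Some rough arithmetic shows the scale of the power gap he's describing. The 20-watt figure for the brain is from the conversation; the GPU numbers below are ballpark assumptions (a modern data-center GPU draws on the order of several hundred watts, and large training runs use thousands of GPUs for weeks), chosen only to show the orders of magnitude involved:

```python
BRAIN_W = 20            # watts, as stated above
GPU_W = 700             # watts per GPU: ballpark assumption
N_GPUS = 10_000         # GPUs in a large training run: assumption
HOURS = 24 * 30         # one month of training: assumption

train_kwh = GPU_W * N_GPUS * HOURS / 1000
brain_kwh = BRAIN_W * HOURS / 1000
print(f"training run:              {train_kwh:,.0f} kWh")
print(f"one brain, same month:     {brain_kwh:,.1f} kWh")
print(f"ratio:                     {train_kwh / brain_kwh:,.0f}x")
```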
Mary Long: If you're hungry to learn even more about artificial intelligence, we've got you covered. The Motley Fool hosted a virtual event for our premium members earlier this week. We called it our AI Summit, and it featured a number of conversations between innovators, CEOs, authors, and analysts about how artificial intelligence is powering company profitability and how it's changing your everyday life. If you're already a premium Motley Fool member but you missed the original event, I'll drop a link in today's show notes so that you can catch the event replays directly.
If you're not a premium Motley Fool member, but would like to become one and immediately get access to the AI summit replays, you can go to www.Fool.com/signup. I'll also drop that link in the show notes too.
As always, people on the program may have interest in the stocks they talk about, and the Motley Fool may have formal recommendations for or against, so don't buy or sell stocks based solely on what you hear. All personal finance content follows Motley Fool editorial standards and is not approved by advertisers. The Motley Fool only picks products that it would personally recommend to friends like you. I'm Mary Long. Thanks, as always, for listening. We're off on Monday for MLK Day, but we'll be back on Tuesday. Enjoy the long weekend, Fools. We'll see you on the other side.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Mary Long has no position in any of the stocks mentioned. Ricky Mulvey has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet. The Motley Fool has a disclosure policy.