A lot has changed in the world of AI since we first spoke, and I wanted to get your thoughts on everything that's happening and where it's going. But I think it's good to first give people a basic idea: what was the first version of AI? How can we define it, to contextualize where it's going? Everybody has different implications, but as I know it, the term AI was coined by John McCarthy, who I know you didn't get on with that well, but that's a different story. But why is that?

So, to begin the AI story: in 1956 there was a conference at Dartmouth where John McCarthy coined the term AI. One of the things McCarthy did in the years right after that was invent a language called Lisp, a very early but conceptually sophisticated computer language. It was hard to implement, and it wasn't really implemented fully for decades. The reason John
McCarthy and I perhaps didn't see completely eye to eye was that he saw Lisp as the only language in which you could implement artificial intelligence, and in particular what was then thought to be an important piece of artificial intelligence: things like symbolic mathematical computation. Well, in 1979 I got into building my own symbolic computation system, and it wasn't practical to use Lisp at that time; there just weren't good implementations and so on. So I used this then-newfangled language called C, which isn't very newfangled anymore, and John McCarthy never quite forgave me for that. That was also the beginning of my realizing that, while Lisp was a very interesting language, it wasn't yet very practical. Had history worked out differently, Lisp would have been a much more prominent language today; it probably would have been the thing I implemented my system in, too. Anyway, John McCarthy coined the term artificial intelligence back in 1956. The backstory of that whole thing was that
when electronic computers were first coming on the scene in the late 1940s, the typical description of them was "giant electronic brains." So people had the idea from very early on that what computers were going to do was automate thinking, and people assumed that wouldn't be terribly hard. In the 1950s and into the early 1960s it was kind of like, yeah, we just build a slightly bigger computer and we'll be able to automate thinking, just as we've been able to automate making a bulldozer or something like that. There were a couple of different approaches that were taken. Actually, as one indication of the way people were thinking in the 1960s, when a lot of money and effort was put into AI: one of the problems of the time was the Cold War, and people were thinking, well, there are occasionally these high-level diplomatic exchanges between the US and the Soviet Union, and in those exchanges there would be some interpreter translating Russian to English and so on. And they were like: we're really worried the interpreter is going to mislead everybody, and it's going to lead to World War III or whatever. So let's put a machine in place instead. Let's have an automatic translator. Way better, right? No possibility of human error. Well, people thought that was going to work in the 1960s. It's actually very interesting to read what people wrote in the early '60s about AI
and what was going to happen with it, including the idea that we were a species mostly going to go down in the history of the Earth as the thing that created the next species, which was AI. It's really funny to read these things now, because pretty much word for word they're the same as what people say today, with the one exception that the language of "men will do this," so to speak, has been adjusted in modern times to "people will do it." Everything else is pretty constant; people have been saying and thinking the same things about the future of, well, essentially giant electronic brains for a long time. But starting in the 1960s there were really two different approaches taken to AI, and there continue to be two different approaches today: the symbolic approach and the statistical approach. The idea of the symbolic approach was that you can have a computational representation of the world, and you can have your AI figure out things about the world in a computational fashion. That's an approach I've been deeply involved in for a very long time. The other approach was the statistical one: forget about having rules for how the world works. Just say, we notice this and that and the other thing; let's extrapolate from what we notice. Let's just do the statistics of the world
to guess how things work. The main approach that got used there was neural networks. Neural networks are famous today, but they're an incredibly old idea. I've actually been researching the ancient history of neural networks a bit, and it goes back much further than I imagined, to the mid-1800s.

How far does it go back?

So, what happened was that neural nets were all entangled with the question of how our brains actually work. People had looked at brains under
microscopes. There was a chap called Golgi who figured out, I think in the 1870s, how to stain nerve cells, because when you look at a slice of brain tissue under a microscope it just looks really complicated; you can't see anything. Golgi figured out a way to stain it so that some tiny fraction of the cells would turn purple, and then you could see the cells that had turned purple and the pattern they made. For a while there was a big dispute between Golgi and a chap called Ramón y Cajal, because Golgi thought the nerve cells you would see were really all connected in one big net, while Ramón y Cajal thought they were all separate cells with synapses, gaps, between them. That was a dispute of the late 1800s. The two of them, even while disagreeing violently about it, shared one of the early Nobel Prizes for studying those kinds of things. I think there were more things to give interesting Nobel Prizes for in those days than there are perhaps today, but that's a different story. In any case, people knew that nerves were electrical, because the electrical nature of nerves had been discovered by Galvani noticing that frog legs kicked when you gave them electric shocks. So people knew it was electrical, and they knew there was a network of nerves in the brain. What did that mean? Well, by the 1870s people were talking about how the network of nerves in the brain might be implementing logic in some kind of electrical way. This was before there were really electrical machines that did anything like logic. The next big step was probably in 1943: a chap called Warren McCulloch and a chap called Walter Pitts. McCulloch was a neurophysiologist and psychiatrist; Pitts was a very young math person. They worked together and wrote a paper about the logical theory of neural nets. That paper is the foundation of everything that's been done
since. They laid out a way of idealizing the thing that might actually be in brains, but idealizing it in a mathematical way. Computers didn't yet exist. By 1946 there were starting to be electronic computers, and by the mid-1950s people were implementing this idea of neural nets on early computers, and on special-purpose computers built particularly for doing neural nets. In particular there was this idea of the so-called perceptron, a very simplified artificial neural net that, given an image, would figure out from the statistics of that image what was in the image. It was all going reasonably well, but there were some glitches. A famous glitch was a trial of these things for the military, with a bunch of pictures that had tanks in them and other pictures that didn't. The perceptron did really well at figuring out which pictures had tanks in them. But then somebody realized that all the pictures with tanks were taken during the day, and the pictures without tanks were taken at night. So really all the perceptron was doing was something very trivial. And that's a repeated issue in trying to understand what's happening in AI: is what you see just a consequence of some feature of the data that you didn't happen to notice, something trivial, or are you seeing a deep piece of figuring-out being done by the AI?
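Just to make the mechanism concrete: a perceptron is essentially a weighted sum of input features passed through a hard threshold, with a simple rule for nudging the weights when it's wrong. Here's a minimal sketch in Python; the two features and the toy data are invented for illustration, not from the actual military trial.

import numpy as np

# Toy, invented data: each row summarizes an image by two crude statistics,
# say average brightness and edge density; label 1 = "tank", 0 = "no tank".
X = np.array([[0.9, 0.2], [0.8, 0.3], [0.2, 0.7], [0.1, 0.8]])
y = np.array([1, 1, 0, 0])

w = np.zeros(2)  # weights, one per feature
b = 0.0          # bias

# Classic perceptron learning rule: nudge weights whenever the guess is wrong.
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi
        b += (yi - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # should match y

Note that if brightness alone happens to separate the classes, as with the daytime tank photos, the perceptron will happily latch onto that trivial feature, and nothing in the weights announces that it has.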
So then, in the early 1960s, a person I actually knew pretty well, Marvin Minsky, an AI pioneer who'd originally been interested in neural networks, who wrote his PhD thesis about them and even built a neural net machine, decided that perceptrons and all things neural-net were trivial. He and a chap called Seymour Papert wrote a book about perceptrons which argued that perceptrons and neural nets can't do anything interesting. Game over. So by the late 1960s everybody said neural nets are doomed: they'll never do anything interesting, they're all trivial, et cetera. So at the time, if anything was going to happen with AI, it was going to be on the symbolic side: figuring out rules for how the world works, taking ideas that had been expressed vaguely in psychology and trying to tighten them up into things you could implement on a computer. That was the idea. So then we get to the beginning of the 1980s, and by this point I was actively interested in these kinds of things, though this part of the history is a bit complicated. The dominant theme of AI in the 1980s was these things called expert systems. The idea was that you would have a rules-based way of describing the world that you would learn from an expert: somebody would essentially write down the rules based on some expert in geology or medicine or something like that, and then the computer would be able to do whatever the expert could do. That was a significant idea in the 1980s about how AI would work. Meanwhile, I myself had been interested in how you would do things like
math automatically: symbolic, algebraic math. That had been viewed as the kind of thing where, if we could do it, we'd know we had AI; it had been viewed as a big test case for AI. As I mentioned, in 1979 I started building a system called SMP, the Symbolic Manipulation Program, which did symbolic math and many other kinds of symbolic things, and it did them in a way that I would never have claimed was anything like how brains do it. Meanwhile, since I was an early teenager I had been interested in how you would take the knowledge of the world and make it somehow automatically accessible. Having had success building up this symbolic computation system, I got to thinking: could I do something with vaguer knowledge, knowledge that wasn't as precise as the kind in math? So around 1980 I got to thinking about how I would make something that deals with that. I assumed that to make a thing that could deal with vaguer kinds of knowledge, I would have to make a brain-like thing; I would have to solve the general problem of artificial intelligence. So I started thinking about how to do that. I was interested in pattern matching, and in how you would fuzzily match things: is this roughly a picture of that or not? I had a bunch of ideas, but never figured out how to do it at the time. Meanwhile, neural nets had a comeback, around 1982 or so. There were a couple of experiments done with neural nets where it was like, wow, they're actually able to do things. The sense that they'd been squashed flat by the perceptrons analysis wasn't really right: if you had deeper neural nets, with more layers of computation in them, then they might be able to do something interesting.
So I did some experiments on neural nets. I could never get them to do anything terribly interesting, and I kind of lost interest in those things. Meanwhile, in my own personal trajectory, I had started my first company, aimed at doing mathematical computation kinds of things. Through a series of probably not-great business decisions, deciding that I should bring in other people to run the company and things like that, we ended up getting venture capital, and the venture capitalists were like: these expert systems things are amazing, you should go chase that particular shiny direction, so to speak. So the company pivoted to having a division doing expert systems kinds of things. It's actually kind of strange to realize that the company eventually changed its name to Inference Corporation, probably in 1983; I hadn't thought about that in years, but it seems so very modern today. It was an AI company, doing expert systems AI, and it built all kinds of things: early credit assessment systems, a bunch of testing systems for NASA, and so on. So it was really doing what in those days was symbolic AI. I wasn't much involved in that side of the company. The company eventually went public in a very undistinguished IPO sometime in the 1990s, but I hadn't been involved with it for a long time by then. The other thing that happened in AI in the 1980s: I told you this is
a long, shaggy story, but it's interesting to understand if you want to know where AI is today and what the roots of what's going on have been. Two other things of note happened in the 1980s with AI. There was a time when Japan was viewed as a country that just copied American technology; everybody was very down on that. The Japanese government had this great idea: let's do a research project that's going to jump ahead of everybody else. It was called the Japanese Fifth Generation Computer project, and it started sometime in the early '80s. It was a project in which Japan was going to solve AI, and their methods particularly used a language called Prolog, which is a sort of Lisp-ish kind of thing, but with a particular idea about problem solving that didn't work out so well in the end. But that was another injection of "AI is coming," of "Japan is going to solve AI." Another piece in the 1980s was, as I mentioned, the rebirth of interest in neural networks, from a bunch of people who continued working on them to the present day, people like Geoff Hinton and my friend Terry Sejnowski, who were the early people interested in whether neural nets could really be made to do something interesting. They talked about it as parallel distributed processing, and
it was a big mixture of thinking about things computationally and thinking about actual brains, dissecting brains and trying to figure out how they worked, and so on. So again, that was happening in the '80s. By the end of the '80s or early '90s, this stuff basically hadn't worked out, and people said, "Ah, AI is doomed." Everybody who might have been saying they were doing AI didn't say they were doing AI anymore, and AI was really at a very low ebb for quite a long time. The thing that relaunched AI happened in 2011, actually. The aforementioned Geoff Hinton had been continuing to study deep neural nets, and people, including myself, were just like, I don't think anything interesting is going to happen here. But he had been trying to do image recognition: trying to tell whether this is a picture of a cat or a dog or whatever. There was a big collection of images, ImageNet, and there were annual competitions for who could recognize images best. Well, a student of Geoff Hinton's left a neural net training, kind of by mistake, for a month: going through millions of images, with the neural net being told this is a cat, this is a dog, there's a cat, there's a dog, and being tweaked accordingly. That's how neural net training works: you tweak the net's weights to get the answers right more and more often.
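Schematically, and only as a sketch, "tweaking" means nudging the weights so the answers become a little less wrong each time. Here's the bare idea for a single artificial neuron in Python, with made-up data; real training does the same thing across millions of images and billions of weights.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))       # made-up "images": 3 features each
y = (X[:, 0] + X[:, 1] > 0) * 1.0   # made-up labels: 1 = "cat", 0 = "dog"

w = np.zeros(3)
for step in range(1000):
    p = 1 / (1 + np.exp(-(X @ w)))  # the neuron's guess that each image is a cat
    grad = X.T @ (p - y) / len(y)   # how the error changes as each weight changes
    w -= 0.1 * grad                 # tweak the weights to be wrong a bit less

print(((p > 0.5) == y).mean())      # fraction of answers now right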
So it was kind of a mistake; the computer was just sitting there doing this. It was an early GPU computer. And before throwing the neural net away, and I don't actually know every detail of what happened in those days, it was like, okay, let's just try this neural net and see how it does. It did pretty well: it won the ImageNet competition that year. That was a big wake-up call that neural nets were back and able to do interesting things. The buzzword of the time was "deep learning," and people got very excited about it. I have to say, I myself had been just about to use image processing to make little functions that would count, say, people in an image and things like this, doing it with a bunch of image-processing hacks. My friend Sarah had said, no, no, no, we're going to be able to do it with neural nets one day. It actually arrived within the year, and I'm happy I didn't do that project, because it would have been dead meat. So in any case, what then happened was a big wave of excitement about neural nets and deep learning and so on, particularly applied to image identification, and that problem was solved at some level. By "solved" I mean you get it right ninety-something percent of the time or whatever, and that's typically the story of machine learning: you get it right
80, 90, 95 percent of the time, something like that. It's not 100 percent, but it's not even clear what you'd mean by getting it 100 percent right. Is a dog that has been put in a cat suit a dog or a cat? It's hard to define what the answer should be, so it's hard to know whether you got it exactly right. In any case, the thing that's also interesting to notice, in terms of the evolution of AI, is that the image identification systems one had by 2012 or so, built using these neural net ideas, are not that much worse than what one has today. In other words, in the last 13 years or so, things in that particular domain haven't improved that much. You reach a threshold, you start to be able to do something, that works, and then you build that capability into a bunch of systems and it becomes useful. But just because the capability made that one jump doesn't mean
it's going to make lots of other jumps.

It's interesting you say that, because most people playing around with the consumer version of what they think AI is, ChatGPT or Gemini, think that's pretty much AI. In the last two or three years, people think AI just popped out of nowhere, right? Like before 2021, AI didn't even exist; it just started coming, because now they can play around with it. You mentioned there was a tipping point from the image recognition perspective. So what was the difference? What was it that made people see and feel the new revolution of AI in the last couple of years? What was the segment that particularly improved?

This is a very shaggy story; you asked what the history of this is, and it's quite a long, complicated story.

For sure, yeah.

So the big thing that happened at the end of 2022 was ChatGPT and the idea that you could get a neural net to write text based on a prompt. That has a long history, going back into the 1960s, even into the 1950s. There are a couple of branches here; let me explain. Perhaps people will find it interesting; I'm not sure this history is well told, actually.

Yeah, you can go through the whole thing.
Let's talk about a couple of branches: there's a branch I was involved in, and a branch I wasn't. The branch I was involved in was this thing I'd been interested in doing since before 1980 or so: can one answer questions about the world from the knowledge that we've accumulated in the world? I had made some science advances that convinced me there wasn't a bright-line difference between intelligence and mere computation. So if I'm right about that almost philosophical claim, it should be the case that we can build a system that does this kind of answering of questions based on the knowledge our civilization has accumulated. In the mid-2000s I decided, okay, it's time to actually try to do this, and that led to the system that came out in 2009 called Wolfram|Alpha. The idea of Wolfram|Alpha is to take pure natural language, turn it into a precise computational language, and then compute answers from that and tell them to people.
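Just to make that pipeline concrete with a deliberately tiny toy, and this is in no way how Wolfram|Alpha works inside, here's the shape of it in Python: restricted natural language goes to a precise expression, and the expression gets evaluated. The vocabulary and grammar are invented for illustration.

# Toy sketch: natural language -> precise expression -> computed answer.
WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
OPS = {"plus": "+", "minus": "-", "times": "*"}

def to_expression(question: str) -> str:
    # "What is two plus three?" -> "2 + 3"
    tokens = question.lower().replace("?", "").split()
    parts = [str(WORDS[t]) if t in WORDS else OPS[t]
             for t in tokens if t in WORDS or t in OPS]
    return " ".join(parts)

expr = to_expression("What is two plus three?")
print(expr, "=", eval(expr))  # 2 + 3 = 5

The point is the division of labor: the hard, open-ended part is the translation into a precise computational form; once you have that form, computing the answer is the reliable part.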
When we were introducing Wolfram|Alpha, it was interesting, because AI was absolutely dead at the time. Everybody thought nothing like this was going to work. People had tried for decades to make question-answering systems with various kinds of AI techniques, statistical, symbolic, whatever, and it had never worked. To tell a story that indicates what was happening: a couple of weeks before we released Wolfram|Alpha, I happened to see Marvin Minsky, who I mentioned earlier, the big AI pioneer. I said to Marvin, we've got this cool new thing coming out, let me show it to you. So I show him a couple of things, and he changes the subject. Not interested. Because for him, he'd seen a zillion examples of people saying, "I built a question-answering system." So I said, "Look, Marvin, you should look more carefully. This time it actually works." And so he types a few more things, and then he's like, "Oh my god, it actually works." And he's running around this event we were at telling people, "You've got to see this. It actually works." That was a kind of snapshot of the fact that in 2009 people just thought AI was dead. And with Wolfram|Alpha we had, for the first time,
more or less solved the natural language understanding problem of taking plain text. People had thought, we'll just make a computer understand it, but it wasn't clear what it meant to make a computer understand. I only realized this after we'd built what we built: we had a huge advantage, because we already had an underlying computational language, built starting with Mathematica, which came out in 1988, and what's now Wolfram Language, which is a way of representing things in the world computationally. We already had that precise computational representation of the world. So our natural language understanding wasn't an abstract "get the computer to understand this"; it was: translate what those pesky humans say into this precise computational language that we can then do computations with. So that was one piece of the story. The next question is what tradition ChatGPT came out of, which is a very different line of work. It started, I guess, in the 1940s, when people were doing cryptanalysis
in World War II. When you encrypt a message, you're turning it from a meaningful sequence of English letters into something where the letters look completely random, and what you have to do to fish out the English is statistically figure out what was done. The critical idea was: well, English isn't just a random sequence of letters. English has certain statistical regularities. If you see a Q, there's probably going to be a U next. You see more E's than X's, and so on. A chap called Claude Shannon worked out this thing he called information theory; it was largely worked out, I think, during the war, and Alan Turing was probably somewhat involved in it too. Shannon wrote the paper introducing information theory in, I think, 1948, with this idea of the statistics of things like language. Given that you had the statistics of language, you knew which letters were more common, which pairs of letters were more common, and so on, and you could start imagining generating language statistically: if you happen to randomly pick a Q, the next letter is going to be a U, et cetera. So that was a way of thinking about language. And the thought immediately came: maybe we can just generate language statistically and it'll be meaningful. Well, it didn't work out that way.
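Here's a minimal sketch of that statistical-generation idea in Python, a character-level bigram model trained on an arbitrary scrap of text:

import random
from collections import defaultdict

text = "the quick brown fox jumps over the lazy dog and the queen quietly quits"

# Bigram statistics: which character tends to follow which.
follows = defaultdict(list)
for a, b in zip(text, text[1:]):
    follows[a].append(b)

# Generate "language" by repeatedly sampling a statistically likely next character.
random.seed(1)
c, out = "t", ["t"]
for _ in range(40):
    c = random.choice(follows[c])  # after "q", this is always "u" in our sample
    out.append(c)
print("".join(out))

The output has the local texture of English, plausible letter pairs, but no meaning, which is exactly what those early statistical-generation efforts ran into.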
There was a whole sequence of efforts. It was of particular interest to people doing speech recognition: we hear these speech sounds, and we can statistically work out that a given sound is roughly a vowel, but the question was how you would assemble those imperfect guesses, that might have been an L, that might have been an R, about what those phonemes, those fragments of speech, were, into something that could actually be a meaningful piece of text. So people were very interested in the statistical structure of text as a way to do those kinds of things, and that became a whole area. What happened around the early 2010s was this: there were speech recognition systems that worked by taking speech, breaking it down into phonemes, different speech sounds, recognizing particular patterns of phonemes, and trying to do the statistical reconstruction of language. It was a big, painful process. Then people started just trying to use neural nets to go straight from the audio you hear to the text being spoken. And it turned out that worked. So just as image recognition got solved, speech-to-text got solved too. That was around 2010, when Siri came into the world. We were the computational back end for Siri, and it had a voice recognition system at the front end. We were constantly frustrated, because whenever people were asking about math, and
pi in math, the voice-to-text system would send "pie" as the word they were saying. But that got solved. So that was the next big success of neural nets. Then there was the question of sequence prediction: could you do a better job of knowing, given a piece of text that starts this way, what will come next? What existed for sequence prediction even around 2021 was really cruddy. It really didn't work well at all. We tried to use it a bunch for things like predicting pieces of code for autocomplete, and it really did not work well. And then the folks at OpenAI collected this huge amount of training data. There was one technical idea, and I'm not sure how significant it will be in the long view of history: transformer nets. The question is: if you look
at a slice of brain, so to speak, I'm too squeamish to actually do that in real life, you'll see that the neurons are all very randomly connected to each other; it's as if every neuron is more or less connected to every other. But there are areas, like the retina, where it's photoreceptors, and the visual cortex, where it's actual neurons, where things are connected more locally. If you're dealing with an image, the image has all its pixels laid out in two dimensions, and you want your neurons laid out in two dimensions as well. That led to so-called convolutional neural nets, which are the things that got used for the image identification problem. When it comes to language, language has the feature that it's sequential: it's a stream of letters or words or sounds coming out. And there are these things called transformer nets, which are networks that only have connections sequentially, but whose connections can be quite long-range, just as a word somewhere in a sentence can refer back to a word that's much earlier in the sentence.
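A heavily simplified sketch of the attention idea inside transformers, in Python: each position in a sequence can look back at every earlier position, however far away, with weights computed from the data. The random matrices here are stand-ins for what would actually be learned.

import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 8                    # 6 tokens, 8-dimensional vectors
x = rng.normal(size=(T, d))    # stand-ins for learned token representations

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(d)  # how much each token "cares about" each other token
mask = np.triu(np.ones((T, T)), k=1).astype(bool)
scores[mask] = -np.inf         # sequential: each position looks only backward

weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over earlier positions
out = weights @ V              # each position: weighted mix of what came before

print(weights.round(2))        # the last token can weight the first one heavily

The contrast with a convolutional net is the shape of the connectivity: convolution hard-wires a small local neighborhood, while attention leaves the whole backward range open and lets training decide what refers to what.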
So anyway, a big training effort was done, using a lot of material from the web and elsewhere.

Didn't they take that from Google as well? Didn't Google have some open-source information out there that OpenAI used for their initial algorithm?

You know, I don't know. When the original image identification thing was done, Google had swooped in and bought the company that was created overnight, so to speak, by the academic researchers who'd been working on it. Google developed that, and I think they were using it quite a bit for search, because when you're ranking, there's always this problem of how you rank the pages that come out of a search: what's most relevant, and so on. There are many different signals you can use, and figuring out how to combine those signals is a pretty good case for using neural nets; I think they were using neural nets for that for a while. But yes, there really was quite a lot of excitement about what neural nets were going to be able to do, though it wasn't clear what that would be. And when ChatGPT arrived, I remember chatting with the folks who worked on it just after it came out, and my obvious question was: did you know it was going to work? They were like, no. In fact, had they known it was going to work as well as it did, they would probably have tried to constrain it in a lot more ways than they did. A lot of the things they did were the kind of thing you do if you're just running the experiment to see whether
it works, not what you do if you're trying to build a production system. So then the big surprise was that it worked. I liken it to a moment in the history of technology: the invention of the telephone. People had known that in principle you could transmit sounds over electrical wires. But when people had done that, you would try to listen at one end of the wire and you wouldn't understand anything that was being said. Eventually Alexander Graham Bell found a bunch of hacks that, again, I don't think he knew were going to work, but suddenly people could actually understand what was being said. I don't think it sounded very good, but it was good enough that people could understand it. I think the same kind of thing happened with neural nets: the text got good enough that you could read it and it read like meaningful text a human might write. And that was the ChatGPT moment: being able to generate meaningful text, being able to
start from a prompt. If on the web it was like, this question was asked and this answer was given, you could expect it to statistically follow that. Now, there was an additional trick, which was this idea of reinforcement learning, and particularly reinforcement learning with human feedback, which went beyond just "how would this sentence continue?" You know: the cat sat on the... what's the next word? Statistically, from the web, it's probably "mat," et cetera. The idea was to get the thing to actually do what it's told, and to do things like answer questions. That was another layer of training that was done, and it worked out.
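The core of that human-feedback layer can be sketched very compactly. This is a cartoon of the reward-model step, not OpenAI's actual implementation: humans mark which of two candidate responses they prefer, and you train a scoring function so preferred responses score higher; the language model is then tuned toward high-scoring output.

import numpy as np

rng = np.random.default_rng(0)
d = 4
w = np.zeros(d)  # a linear "reward model": score = features @ w

# Made-up data: pairs of candidate responses, each summarized by d features,
# where a human marked the first of each pair as the better one.
preferred = rng.normal(loc=0.5, size=(50, d))
rejected = rng.normal(loc=-0.5, size=(50, d))

for _ in range(200):
    margin = (preferred - rejected) @ w
    p = 1 / (1 + np.exp(-margin))  # model's probability of agreeing with the human
    grad = ((p - 1)[:, None] * (preferred - rejected)).mean(axis=0)
    w -= 0.1 * grad                # push preferred responses' scores upward

print(((preferred @ w) > (rejected @ w)).mean())  # how often it now agrees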
Right when ChatGPT came out, so many people were asking me: how does this work, why is this working? I eventually wrote what turned into a little book, "What Is ChatGPT Doing ... and Why Does It Work?", which became very popular. In a sense that was disappointing for me, because it took me a week to write, and there are lots of other things I've written that took me years; the one that took a week is the one people seem to like very much. It's now translated into lots of languages, whether by humans or AIs I'm not sure, but it's always fun to see the book covers from the different countries. Anyway, two things came out of that. One was: why does it actually work? The other was: how does it fit into the ecosystem of the world? On the why-does-it-work question, the thing I realized is that, in a sense, the reason it works, and the thing that surprises us so much, is a little bit like the thing with the tanks in the daylight. There's something about language that we hadn't noticed, something that is actually more regular than we had
imagined. We're all used to the idea of grammar: we know that in English, sentences are formed as noun, verb, noun, with different parts of speech in different combinations. But there's more to it. Most sentences that are just noun-verb-noun are completely meaningless. There's a kind of semantic grammar, a grammar based on meaning, that says which noun, which verb, which noun can go together, and I think there's a lot of regularity in that structure. There's a kind of semantic grammar of language, and that is effectively what ChatGPT discovered statistically by looking at the web. People were very surprised that, for example, it could discover logic, that it could make arguments that made logical sense. I think the way it did that is the same way Aristotle discovered logic a couple of thousand years ago: you look at a bunch of sentences that people say and you ask what structural pattern they have. That led to syllogisms in Aristotle's time, and somewhere in the statistical knowledge ChatGPT has are things like syllogisms. So it's able to produce something that is logical, just as it can produce "mat" as the next word after "the cat sat on the." It has learned the semantic grammar of language, which includes things like logic. So the thing that was very clear was that ChatGPT was good at generating text. Meanwhile, one of the things we had with Wolfram|Alpha was a system that could
take text and compute from it. That was a pretty nice combination, because you can take the things ChatGPT produces as text, which includes ChatGPT making up a question that it wants answered. The thing we realized, very immediately actually, was that you could use English effectively as the transport layer between the AI and our computational system, and you could have ChatGPT call Wolfram|Alpha as a tool. One of the surprises early on was that it could do that not only using English as the transport layer, but also using our Wolfram Language, which I'd been building for so many years and which lots and lots of people use as a language for doing computation, as a way to represent one's thinking computationally. ChatGPT could do the same thing: it can write Wolfram Language code. So we built, at first, a plug-in to ChatGPT, and then lots and lots of other things, where the picture is that the large language model calls our system as a tool.
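Schematically, that tool-calling loop looks something like the sketch below, in Python. The call_llm stub is entirely hypothetical; the Wolfram|Alpha endpoint shown is my assumption of the public Short Answers API, so treat the details as illustrative and check the current documentation.

import requests

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for an LLM API call. In the plug-in picture, the
    # LLM decides it needs a computation and emits a query in plain English:
    # that's "English as the transport layer."
    return "distance from Earth to Mars"

def ask_wolfram(query: str, appid: str) -> str:
    # Assumed endpoint: Wolfram|Alpha Short Answers API (English in, English out).
    r = requests.get("https://api.wolframalpha.com/v1/result",
                     params={"appid": appid, "i": query}, timeout=10)
    return r.text

def answer(user_question: str, appid: str) -> str:
    tool_query = call_llm(user_question)     # the LLM formulates the computation
    result = ask_wolfram(tool_query, appid)  # the tool does the actual computing
    # The LLM then weaves the result back into fluent prose for the user.
    return call_llm(user_question + "\nTool result: " + result)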
By the way, a historical point that is perhaps of interest: why are they called large language models? It's because there were also plain "language models," which were the things people used to try to figure out what the next word would be, statistically, in order to disentangle speech-to-text and so on. There had been a long tradition of language models, often so-called Markov models, where you essentially have different states of a system, and from those states you decide what the next letter will be. Large language models were: let's do this with neural nets, and put billions of neural net weights in there. That's a large language model. So, LLMs, large language models, of which ChatGPT is one example; there are now many, many others. We have a nice little website where every week we do a kind of rating of how the various LLMs are doing, and it's kind of remarkable: it's an active enough field that practically every week there are changes at the top, so to speak, in terms of which LLM is winning that particular week. But the thing that has emerged is that LLMs are really good at this kind of linguistic interface, at dealing with language for language's sake. They're not good at doing things that require computation. That's not what they're intended for; that's not what their structure is. Actually, we had a big clue that they weren't going to be good at that, because humans aren't good at it either: we don't get
to run programs in our minds and things like that. So it's been nice for us, because I just spent four decades building this whole system for doing broad kinds of computation about the world. Programming languages are intended as a way to take what computers can do and give humans a way to tell the computer, in its terms, what to do. What I've been building for a long time is a computational language, which tries to describe the world computationally, where the world could be cities or movies or images or geography or whatever else, and to represent the world computationally so that one can compute about it. That's a higher-level thing than has been attempted with programming languages. I use the system we built all the time; so do a few million other people, mostly for research and development and those kinds of
things, although there are an increasing number of practical systems in the world that use our tech underneath. It's always strange when you've built a bunch of tech and then you're a consumer user of some big system in the world: you know your tech is underneath it, but that doesn't do you any good; the consumer system does what it does. In any case, it's been interesting for us, because we used to have primarily human users. Now we also have AI users, where LLMs interface with humans through natural language, but underneath they're doing these computational things using our tools. It's been an interesting thing to watch happen. What LLMs are readily able to do is quite different from what you can readily do with raw computation, and the combination is really powerful. One of the things that will eventually happen, although I
don't yet know how to do it, is a more fine-grained integration of those things, because right now the LLM goes along generating a bunch of text, then generates what it needs to call our system as a tool, gets the results back, keeps going, and weaves those results into the text it's writing.

What's the gap, then, between what we have right now and what people are now coining AGI, artificial general intelligence? What is the technological gap? And first, actually, maybe define what AGI means to you. Then: what is the gap between where we are today and where we need to be to get to AGI? And what is that going to look like from the consumer's perspective? Because most people don't really know the technical details, I guess.

Nobody knows what AGI really means. It's a buzzword. It's kind of like how, at first, "AI" was a
buzzword, and people didn't quite know what it meant, except that they thought it meant things that do stuff like what people do. Well, now we've got things that do lots of the stuff people do, whether that's generating language, driving cars, whatever else. But still, there's got to be something else that people do, and that must be AGI, some general intelligence thing. It's kind of ironic, and I just realized this as I'm talking to you, that the term "general intelligence" was coined, I think, in the 1930s, and it came in at the same time as the concept of IQ, in my opinion a very troublesome concept. It was a time when people were trying to figure out, for army recruits and things like this, whether, just as you can say this person is 5'9" tall, there was a way to give a number, a metric, for, quote, how smart they are. In my opinion that's a rather doomed concept, but it led to this thing called g, the coefficient of general intelligence, and then people started making tests. I think that's where this comes from, which is a rather doomed kind of place. But anyway, what does it mean? The thing is, you have to define what humans do, and how we make something that does more of what humans do. And one of the things that's difficult there is, if you say, well,
do I have an AI that can pick something up with its hand? Okay, that's a human-like thing. Do I have an AI that can have an emotional response? That's a human-like thing. Do I have an AI that feels its own mortality? All these things. In the end, the only thing that is going to check all the boxes for being like a human is a human. You'll be able to check more and more boxes with the box of electronics sitting on your desk: it will have certain things that are human-like, but other things that won't be. And in the end, if you say that what we mean by AGI is something human-like in all respects, that's a doomed idea, because the only thing that's human-like in all respects is a human. So now the question is: what does it look like as we continue on the trajectory we've been on so far? We've
got these neural nets with billions of weights in them, millions of neurons. What happens when we make that bigger? We have a kind of model for that, which is a little strange to think about. You start off with, say, a fruit fly: it has about 130,000 neurons in its brain. We have about 100 billion neurons in our brains. Cats and dogs have, I don't know how many, maybe a billion or so neurons in theirs. Well, we get to do a whole bunch of stuff that fruit flies don't get to do. In particular, one of the things we're proud of is human language. That's one of the things that made our civilization possible. And cats and dogs don't quite have that. They maybe have "sit," "fetch," and so on, but they don't have the compositional language we have, the ability to put words together in arbitrary combinations to make sentences. So a reasonable question to ask is: if we're going on the trajectory we're going on, and we've got these neural nets with human-brain-ish numbers of neurons, what would happen if there were even more than that? What is the next step, from cats and dogs, to humans, to the next level of minds? What does that look like? That's the kind of thing you might wonder about. We've
something that is humanlike at some level right now. we can go on checking the boxes of giving it, you know, letting it be, you know, my my guess is within, you know, the next big thing that will get solved in AI is robotics. Um, and you know, it's been super difficult to get, you know, robot hands to to pick things up and, you know, be able to pack boxes with whatever stuff you want from a warehouse, things like this. Um, it's it's something we humans manage to do fairly easily. It's something that it's been a
little difficult to get training data, you know, for the stuff that humans write while there's, you know, a trillion words on the web and things like this. Um, it's it's it's a little hard to get information. You know, you can watch videos and things like that to see what, you know, how one manipulates things in the world. I don't know exactly how that's going to work out, but my guess is that that will be another thing that people will be like, "Oh my gosh, this can now be done, you know, with AI and that will
have a whole bunch of consequences for the practical things that are possible in the world. But that's a different story.

But is that AGI, or is that just AI in a different form, a physical form, through hardware?

What I'm saying is that the limit of AI, if you define intelligence to be the thing that humans have, is pretty much humans. Now, the question is, going along that trajectory technologically, can you zoom right past the humans? Can you get to something with greater, I don't know whether to call it intelligence, greater capability than humans have? I think the answer is yes. But then the question is: what is that like? If we have a thing that is computing much more than humans compute, we have 100 billion neurons and it has 100 trillion, something doing not the number of computations we do in
our brains every second but zillions of times more than that, what is that like? Well, the fundamental thing is that it's not very human-like. And we have a really good model for what it's like, which is the natural world. The natural world is absolutely full of things that compute much faster than brains do. Look at a babbling brook or something like that: all that fluid turbulence happening in the water can be thought of as computation, just as the electrical signals in our brains can be thought of as computation. There's lots of computation going on in the babbling brook. It's computation that isn't very human-like, not the same kind of thing that happens in brains, but it's a lot of computation. So one of the things one sees is that getting more computation is something one can readily expect. In fact, one of my big discoveries and directions in science over a long time has been to understand
what computation in the wild looks like. When we do computation, we usually write programs; we set up programs to do the particular things we want done, so to speak. But a question I got interested in at the beginning of the 1980s was: what does a program that you just pick at random typically do? You might assume that a tiny little program would just do tiny, simple things. The huge surprise, which took me a while to really get used to, is that that's just not true. In the computational universe of possible programs, even very simple programs can do incredibly complicated things. And I realized that, from a science point of view, that's the secret nature uses to make all the complexity we see in nature. But it's also, in a sense, computation, even what you might think of as intelligence, happening in a very non-human way. There is just a lot going on. You can look at these things and say, "Wow, that's really complex and
intricate and interesting, but I don't really understand it." The mission of science, in a sense, is to take what exists in the natural world and make a bridge between that and what we can understand with our finite minds. It's like saying: we're not going to understand in our minds what every molecule in the river does, but we can talk roughly about certain laws of fluid mechanics that govern roughly what the river does. We're making a bridge between what's actually happening in nature and what fits in our finite minds. So now we have this picture of what's happening. And I'm realizing your podcast is called Growth Minds; the question of what higher minds are like seems like a highly appropriate topic for it.

Yes, hence the conversation, particularly this notion of what does
a greater mind look like. There's this question of: if we put in more neurons, if we make a bigger mind, so to speak, what kind of thing will it be like? I think the thing we have to realize is that it will be very non-human. It may be that it does all these things where we could try to make a bridge from what it does to what we understand, and we could say, "Wow, we're very impressed; that's scientifically interesting." But what it's doing is not like a human. There will be some things where it will be like a human running faster, but mostly I think it will be things that are extremely non-human, where the real question is: how do we take that non-human computation and lasso it into something we care about?

It's very analogous, yes. So the definition of AGI, of people saying AI is reaching human intelligence, or human-level intelligence: do you find that's the wrong
way of even looking at the progression of AI? Because from what you're saying, AI is heading toward being a very non-human intelligence; it'll just transcend it.

When I first started paying attention to AI, in the late 1970s, there was this kind of checklist of things: when one has this, we will have AI. One of the items was being able to do symbolic mathematics. Another was being able to do question answering. Okay, you go down the checklist: we've got those things. But still people say, "But there's something different about humans." Well, yes, there's something different about humans. Humans are humans: they eat and drink and die and those kinds of things. That's different from whatever this electronic device, this software system, does. And throughout history people have wanted to search for what is so incredibly special about us humans, fundamentally special. Not special in the particulars of being the humans that happen to be the way we are, but special as a data point on some chart that
you could make, a data point that's just way out there, abstractly. And I don't think that holds up. The lesson of the history of science has been that we keep getting humbled: no, we're not special in that way or this way or whatever. I think the way in which we're special is that we are precisely the way we are. We have all the particulars we have, two eyes and ears and all that kind of thing; we are the particular thing that we are. And yes, you can make an AI that more and more closely approximates the particular thing we are: the humanoid robot whose experience of the world is similar to ours because it's walking around the same way we are, the thing with two eyes that has a similar experience. If we had a thing with a million eyes, seeing all kinds of things from everywhere around the world, that wouldn't be a very human-like experience. The kinds of things the million-eyed AI
would think were useful to think about are probably very different from the ones we would think were useful to think about.

Yeah, it's interesting that there's so much fear brewing in society about how AI is going to replace humans, yet we keep putting up this goalpost of seeing how much we can replicate human characteristics in AI, as if we're actually trying to replace ourselves, even while that fear is brewing at the same time. It's kind of ironic.

If one is saying, what's our technological objective? Oh, it's to make this thing that's human-like. That's a goalpost you can see, type of thing.

But that's the wrong goalpost, you're saying. If the whole idea is that there's fear brewing that AI is going to replace us, shouldn't we try to make sure that humans exist as our individual selves, and make AI this completely different thing that can help us, rather than trying to create human-level intelligence?

We are negligible relative to nature, but we seem to have just a fine time in that position.
you know relative to the relative to the universe we are negligible relative to even the computation that happens, you know, around us on the earth, we're negligible. Um, yet we feel pretty pleased with ourselves regardless. And I think that's the that's the way to think about it is that, you know, it's like you could uh, you know, you could we exist in a certain niche that is the one that is the human niche. And we can say, well, gosh, we'd like to be, you know, on other planets. we'd like to be this that and the
other. Um it doesn't uh you know it's still the case that that the one the niche we care the most about is this human niche. Um now you could say abstractly for the universe it's more significant to the universe if our AIs are running around you know going to other stars or whatever else. I don't know what it means to say it's more significant for the universe because that's not like there's an abstract definition of you know you're kind of asking for the ethics of the universe which is something that doesn't really have a definition.
These kinds of questions of what something is significant for, what it's meaningful for, what it's right for: those are questions that in the end are anchored in us. You can't ask them abstractly in a meaningful way. I mean, one of the things to realize is: what is technology? Technology is an attempt to take what exists in the world and pick out pieces of it that we can use for human purposes. When people discovered magnetic rocks, I don't think they knew what they were good for at the beginning. When people discovered liquid crystals, they certainly didn't know what they were good for at the beginning; then it was realized you can make a display out of liquid crystals, and that was a use case. Similarly, there's all this stuff in the computational universe, all this computation in the wild, and most of it we don't know how to use for anything we humans care about. What happens in the progress of science and technology and society is that we gradually come to have more and more things we think we care about, and we probably forget other ones we don't care about anymore. There are lots of things where people would have said, "I can't imagine you could make a living streaming video games," or "I can't imagine people will find it interesting to play this little video game." There are these things that happen that cause us to decide there is a purpose to that. If we look at what we do today from a thousand-year-ago lens, a lot of it would seem absolutely pointless to somebody from a thousand years ago. One of my favorite examples is walking on
a treadmill. Explain to somebody from a thousand years ago why you walk on a treadmill. Well, it's to improve my health so I'll live longer. Why does that make any sense? A thousand years ago it might have been: we do what we do for the greater glory of God, and this is our brief time on earth. You could look at the different things people might have thought were significant a thousand years ago; none of them would explain why you would walk on a treadmill, walking and not getting anywhere. And that's something you see in the progress of society and civilization: the things that seem purposeful and meaningful gradually change. What does AI, or automation in general, do? It takes the things humans want to do and somehow makes them easier to do. You might say maybe there'll be no need for the humans, because everything's going to be easy. But then you still have the question: what do you actually want to do? And that's something where no AI is going to answer the question, because there is no answer. There's no abstract answer to what you should do. What should the universe do? The universe does what the universe does. The choice of what to do is ultimately the quintessentially human thing, because there are many things you could do and something has to decide. It could be cats and dogs deciding, it could be aliens deciding, but some arbitrary thing has to decide what it is that you want to do. So the way I see it, the arc of technology has always been to take things humans want to do and make them more automatic, easier to
do. And so what does that do for the humans? Well, a while ago I looked at what's happened to the jobs humans do. In the US, for example, there's data back to about 1850 on what jobs people do. Back in 1850, most people were doing agriculture; most people were actually plowing fields and things like that. Almost all of that got automated. So what happened to the economy? What was a big chunk of the pie fragmented into a zillion different areas. You see that typically as economies get more advanced: more different categories of jobs develop. That's the usual pattern: some category of thing was hard to do, a lot of people were on the ground doing it, then we automated it, and that very process of automation opened up lots of other things that were possible, and people found things to do. In the end, the front line usually ends up being things people have to make choices about; there's no abstract way to feed that in. Now, you can be in a situation where people say, "Let's just abdicate those choices to the AIs." That's a bad situation, because then in a sense you're saying: let's take society and civilization as it has been, as captured by the AIs, and run the same thing over and over again. The humans never get to do anything
new and different. Now, you might ask: why can't the AIs be creative? It's trivial for an AI to be creative; it just has to pick a random number, and it's doing something creative. The question is whether the random thing it picks is something a human will care about. An interesting frontier question right now, something I've looked at quite a bit: if you look at mathematical theorems, it's pretty easy to get a computational system to just go spewing out zillions and zillions of theorems, all of them true. You might say, "Wow, that's exciting, that's making progress in math." But it's not making progress in math that people care about, because with most of those theorems, people look at them and say, "Okay, I guess it's true, but so what?" What counts as math that humans care about tends to be a tower that gets built: humans care about this; given that, now they care about that; and you build it up that way.
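To make the "spewing out true theorems" point concrete, here's a toy sketch in Python. This is my illustration, not anything from the conversation: it mechanically enumerates tiny Boolean formulas and brute-force checks which pairs are equivalent. Every line it prints is a true theorem, and almost none of it is mathematics anyone would care about.

```python
# Toy "theorem spewer": enumerate small Boolean formulas over p and q,
# and report every pair that is logically equivalent. All output is true;
# essentially none of it is interesting.
import itertools

VARS = ("p", "q")

def formulas():
    """Variables, their negations, and all binary and/or combinations."""
    atoms = list(VARS) + [f"(not {v})" for v in VARS]
    combos = [f"({a} {op} {b})" for a in atoms for b in atoms for op in ("and", "or")]
    return atoms + combos

def equivalent(f, g):
    """True when f and g agree under every truth assignment."""
    return all(
        eval(f, {"p": p, "q": q}) == eval(g, {"p": p, "q": q})
        for p in (False, True)
        for q in (False, True)
    )

theorems = [(f, g) for f, g in itertools.combinations(formulas(), 2) if equivalent(f, g)]
for f, g in theorems[:5]:
    print(f"{f}  <=>  {g}")          # e.g.  p  <=>  (p and p)
print(len(theorems), "true 'theorems' found among a few dozen tiny formulas")
```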
You see the same kind of thing all over the place. A year or so ago I did a thing where I was curious about alien minds and the mental imagery of alien minds. What does that mean? Well, you can take an image-generation AI system, give it a description like "a cat in a party hat," and it will generate a picture; it will generate zillions of pictures that all match the description, having learned from humans who've labeled their pictures. So now you ask: what if you modify the internals of the AI, or alternatively take the internal description it has of "cat in a party hat" and start tweaking it? You move away from the concept of a cat in a party hat into what I was calling interconcept space: away from human-defined concepts, into concepts that exist in the mind of the AI but are not familiar to us humans. And what you see, I was calling it "cat island": there's this island in interconcept space around the cat concept, where we humans recognize the output as pictures of cats. Then you move off into interconcept space, which is in a sense just as meaningful to the AI mind as the cat point in concept space, but it's not very meaningful to us. You look at these pictures and it's like: I don't know, it's a picture with a bunch of stripes and dots and squares and circles; I don't know what it's of, I don't know why I care about it. But I was amused to see that somebody took some of the pictures I made in that post, and they're now in some art exhibit in Paris. So that style of making pictures is now, at least in one minor case (I think there have been some others as well), considered art. What was something in interconcept space might one day develop into the such-and-such style of art, which then becomes a concept that's part of concept space for us humans.
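As a hedged illustration of what "tweaking the internal description" might look like, here is a minimal sketch, assuming a system with an encoder and decoder; this is not Wolfram's actual experiment, and `embed_prompt` and `generate_image` are hypothetical stand-ins for a real image-generation model's internals. The idea is just to step progressively away from a concept vector:

```python
# Sketch: walk off "cat island" by moving the internal concept vector
# in some direction and decoding images along the way.
import numpy as np

def embed_prompt(prompt: str) -> np.ndarray:
    """Hypothetical stand-in: map a prompt to the model's internal vector."""
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return rng.standard_normal(512)

def generate_image(vec: np.ndarray) -> str:
    """Hypothetical stand-in: decode an internal vector into an image."""
    return f"<image decoded from vector, norm {np.linalg.norm(vec):.2f}>"

cat = embed_prompt("a cat in a party hat")   # a point on "cat island"
direction = np.random.default_rng(0).standard_normal(cat.shape)

# Step further and further into interconcept space.
for t in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"t={t}", generate_image(cat + t * direction))
```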
So the thing to realize is that the AIs are frolicking around in interconcept space; what happens inside an AI is full of these things from interconcept space. They're just extremely non-human, just like nature is doing lots of non-human stuff. If we open up an AI as it exists today and ask what's going on inside (something I've done a bunch of work on recently), it's interesting, because essentially there are these lumps of irreducible computation in there, and what's happened in the training of the AI is that it has fitted together these lumps of irreducible computation to do the things we want it to do. The analogy I've been using is that it's like building a wall out of rocks, a stone wall: it's taking lumps of computation that happen to more or less correspond to what you need, say to tell cats from dogs, and putting a bunch of them together so that you get something that achieves the objective you want, like distinguishing cats from dogs. What's happening inside is in some ways randomly picked, because it depends on which rock happened to be lying around, and in some ways incomprehensible, because it's full of these lumps of irreducible computation. And you can say: gosh, what will the world be like when it's dominated by AIs doing things that are incomprehensible to humans? I think that feeling is very much like the feeling of living in the natural world.
There's been this brief period in history where we've expected to be able to understand the technology we build. In the past, when people were getting transported around by riding horses, you could know something about how to get the horse to do what you want, but knowing how the horse works inside was not something you really cared about; you could use the horse for something useful to you without knowing mechanistically how it worked. Post-industrial revolution, for a brief time, we've been operating machines simple enough that we know what's going on inside. Now we're back to a situation where, to make systems that make use of computation as well as it can be made use of, we inevitably have to deal with irreducible computation that we can't readily understand with our minds; we can't have a narrative explanation of what's happening inside. So the question is: how do you deal with that? Well, it's the same thing we've done with technology forever: there are things that are in principle possible, in the natural world or in the computational world, and we ask how to use them for things that align with what we care about. Some part of what happens will be stuff we don't understand that doesn't align with what we want. In the natural world there are tornadoes, which don't align with what we want; we end up being able to predict them, and having tornado shelters, and we exist alongside them. No doubt there will be things like that that come out of the computational universe, so to speak; perhaps in some sense there already are. I think that's the view of how to conceptualize what it's like to coexist with something you could think of as a greater intelligence, certainly greater computationally: it's something extremely familiar from the natural world. As someone who has a very high level of intelligence, who probably understands things at a deep level that most people don't, even with AI
and how it's evolved, you have a very good understanding of it. Do you fear a world where AI evolves into something we don't understand? You know, I live in that world. I make these tiny programs where I look at the program and it's totally trivial, and then I run it and it does things I don't understand. I've lived that myself for 45 years or so. At the beginning it was very bizarre; it was very much "this can't be happening, it can't be that this tiny little program makes all this amazing stuff." But that's the way nature is, and one sort of gets used to it. Now, my general approach to things is: use any tool you can. One of the things that's funny, I was just realizing: people say, well, if the AIs get to be really smart, what will that be like for humans? I myself happen to have been in a situation I've created for myself, where I've been building tools that enhance my ability to think about things, for, I don't know, 45 years or so. The tools I've built allow me to take ideas and see their consequences pretty efficiently. Once I have an idea, once I know what direction I want to go in, it's type, type, type: I'll have a little Wolfram Language program, I'll run it, and it'll let me work out that idea. It's a very short process. It's actually gotten even shorter recently, because we built this Notebook Assistant system, based on LLMs and other technology we made, that lets one go even more efficiently from a thought you have to computational language code that can actually run. So it's a funny thing: I guess I've lived this ahead of where other people would have. Although there are plenty
of people who use our tech who are probably in a similar position to mine: you have an idea, you want to see its consequences, and that's a very short path. I've been building technology, partly for myself and partly for everybody else, that really shortens that path; it automates going from idea to reality, at least for things you can implement on a computer. It's not building rockets; it's doing the kinds of things I'm interested in from an intellectual point of view. So that's the thing: you imagine it and then you make it real. That's something I've been living for a long time, and I think people will increasingly come to expect it. And that's not a dehumanizing thing; in a sense it's a human-amplifying thing, because what gets more important is: what is it you want to do? What's the idea you have? As for the question of how it works inside: that ship sailed a long time ago. Who understands in detail what's going on inside their computer? Even, as I say, when I do science and set up these simple programs, they're always doing things I don't understand. Always. When I'm working on those kinds of things, practically every day I'll be humbled by the fact that I imagined what the thing was going to do, and then it does something I
didn't imagine. Right, but how do you answer that from a more nihilistic or doomsday level of questioning? You mentioned that we don't understand nature, but nature can certainly kill us. How do you think about that question when, as you've said, we don't understand how things are going to react from a software perspective; and as we get into hardware and robotics, things that can physically hurt us, what are your thoughts, as we become less necessary perhaps? Well, there are plenty of self-driving vehicles right now: planes, trains, some cars, things like that. We have already given many kinds of things over to various levels of AI. And again, there's this question of how we humanize it: we say, maybe there'll be this kind of AI that rises up and wants to kill us all. That's a very complicated concept. Look, from a practical point of view, there are stupid things one could set up to abdicate to AI. One could connect AIs to lots of systems in the world and say "everything's going to be fine," and it won't be. But one thing to realize is that it's not like there's one AI in the world. It could have been; in science fiction, and even in people's early conception of computers, it could have been that we just build this one giant computer and that's all there is. That would be a somewhat different situation. It would be like saying there was one organism and no competition between organisms; that would be different. But we're not in the one-big-AI situation. So now the question is what people imagine. People say things like: think about the IQ of an AI, and
think about the fact that it can improve itself, so its IQ is going to run off to infinity. Well, I can think of many people I know who can do IQ tests really well, and I can be quite sure that on its own, that isn't the ticket to taking over the world. And there are many physical constraints in the world. Yes, you can have an AI that's figured out all kinds of things, but it's still subject to the laws of physics. People might think it's going to figure everything out, but actually you have to try a bunch of things. The physical world, partly because it is doing all this computation, is such that you can't figure out in advance what's going to happen; you actually have to try experiments and see. That's another piece that slows down the "it's just going to figure everything out and take over" story. But I think the most significant thing is that people tend to project onto AIs the idea that the AI is going to want to do this or that thing, just as they project onto other people that those people are going to want to do this or that. The only thing we ever know for sure is how we're feeling internally ourselves; everything else is an assumption, a projection. And the concept that the AI with all this computational ability is going to want to do something is a very weird concept. Consider the generalization of wanting: for all these programs I've studied in the computational universe, I might as a human say, "Oh, it seems like it wants to fill this thing with black squares," but that's really a very weird
description. It's a very humanized description of something that really isn't very human. On this question, I was having a conversation with one of the leading AI-doom folks, a chap called Eliezer Yudkowsky. We had a long conversation, and I think I finally understood his view of the scenario of doom, and honestly, as I said to him, I just don't think it's right. His theory is this. First: AIs will be able to optimize the doing of almost anything. I agree with that, up to the constraints of the physical world. In other words, if there is a thing you can define that you want done, one will be able to optimize the path to doing it. As I say, that's what I've lived for the last four decades in the software I've built: can I automate that, can I optimize that? So I take that as reasonable. The second statement is that AIs will have a wide range of different objectives, if you can describe them as having objectives at all. I think that's kind of true, but I'm not sure what it means to say they have objectives. Then the next claim would be: most of those possible objectives don't leave room for humans. That's a much more bizarre claim, I think, because it's very difficult to define this notion of an objective for something whose thinking isn't humanlike. It's like saying: I've got this little program and it does what it does, but does it have an objective? Well, no, not really; it just does what it does. Now, do humans have objectives? At some level, we just do what we do. The nerve firings in our brains cause us to do what they cause us to do. From the outside, we'll have a description that says that human is
doing that because of this, that and the other. But that's a description from the outside; it's an imposed notion of an objective. Now, we may feel, in our own thinking about our thinking, that we describe it in terms of objectives. That's possible; I'm not sure. It's an interesting question to what extent describing what we do in terms of objectives is innate and natural, or whether it's something we learn, just as we learn language, so that we can describe what we do as "I'm doing that because of such-and-such." It might be that when we're babies we just do what we do, and the description in terms of objectives is a higher layer, just as describing things in terms of language is a higher layer. So this notion that we can say, "in the space of possible objectives, the AIs are going to pick this and this and this, then tighten the string, and when they tighten the string humans will be locked out of the picture": I just don't think that's the right picture of what it means to think about a space of objectives. I don't even know what it means to talk about an objective. One gets very quickly into a lot of complicated ethical questions. I think Eliezer has an idea
that we have a responsibility to keep the universe interesting, so to speak, but I have no idea what that means. In other words, spreading life through the universe: is there an ethical obligation to the universe to spread life through it? I simply don't get that at all. Ethics is a human thing; there is no abstract ethics. People get confused because they try to take ethical questions and couch them as scientific questions. A famous one is the trolley problem: you're trying to decide whether to (these days it would be a self-driving car) kill five llamas or one endangered lizard, or whatever. How do you decide? Here's the cheat in that problem: one of the things that makes science possible is that we can do controlled experiments. We can say we're going to do an experiment on this tiny piece of the world, ignoring everything else that's happening. But in ethics, I don't think that's possible. There is no answer to the question of the llamas versus the endangered lizard without knowing the whole story of the connections of the llamas: whether one of them was somebody's pet llama, whether there's a group that worships llamas, whether the endangered lizard was, you know, all kinds of things. It quickly entangles everything in the world; you no longer get to do the sciency thing of making the controlled, abstract experiment. I think you were asking earlier what a person like me feels about whether AI is just going to be able to do my job for me. My attitude is that I've spent my life trying to use the tools that exist to leverage the things I can do, and to get me on the fastest path from thinking of something to do to being able to
execute it. And for me, talking to the AI to have it help me write a piece of code just helps that process. Now, there's another thing I've just started to do, and I'm not quite sure how well it's going to work out, but it's this. We made big progress in fundamental physics about five years ago, and there's a big question about whether there are experimental consequences of this theory of physics that one can figure out. Maybe there are experimental consequences where the experiment was already done and people just didn't know how to interpret it. Well, there are millions of physics papers out there in the world. I haven't read all of them; I couldn't read all of them. So a question is: can I use AI to thematically analyze all of those papers? It's one thing to do statistics and say, I've got this whole pile of numbers, and seven out of ten giraffes have long necks, or whatever it is; with numbers you can do statistics. But there's a thing that's now become possible with AI, which is to take a million pieces of text and try to extract something thematic from all of it: not the average of the numbers, but the mood of the text, say.
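One plausible way to do that kind of thematic extraction, sketched here under assumptions (this is my illustration, not Wolfram's actual pipeline; `embed` is a hypothetical stand-in for a real sentence-embedding model), is to embed each abstract as a vector, cluster the vectors, and inspect representative members of each cluster:

```python
# Sketch: group a pile of texts into "themes" by clustering embeddings.
import numpy as np
from sklearn.cluster import KMeans  # assumes scikit-learn is installed

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.standard_normal(384)

# In reality this would be millions of physics abstracts.
abstracts = [f"abstract {i}: placeholder text about some phenomenon" for i in range(200)]

X = np.stack([embed(a) for a in abstracts])
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

for k in range(8):
    members = [a for a, lbl in zip(abstracts, labels) if lbl == k]
    if members:
        print(f"theme {k}: {len(members)} papers, e.g. {members[0][:50]!r}")
```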
It's something I'm just starting to try, and it'll be interesting to see how well it works; I suspect it's going to work fairly well. Is it a discontinuity from what we've had before? Not really, but it's another big step. Back in the 1970s I was already using online database services where people had uploaded the abstracts of all scientific papers; you could do a keyword search and find papers. That became a lot easier when the web and search engines came along, but it was already possible in the 1970s. The web made it easier to do full-text searching and so on; this is another step. I don't know how significant a step it will be. There are many interesting use cases. One I've been curious about is medical diagnosis, or really diagnosis of almost anything: diagnosis of problems with your computer, anywhere there's a body of knowledge about things that can happen, and you have certain symptoms, and you're trying to match those symptoms to what's known. I have a suspicion that the current round of AIs is going to do quite well at that, possibly in a quite superhuman way, because it's this business of thematic searching of what's out there in the world. And some parts of that thematic searching don't even use the most AI-ish parts of things like LLMs: they use things like taking a big piece of text and, instead of grinding it up into words, grinding it up into arrays of numbers that somehow represent the meanings of sentences, and then asking, do I have a sentence that's close in meaning, as revealed by being close in numbers, to something that was already there?
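That last step, "close in meaning as revealed by being close in numbers," is nearest-neighbor search over embedding vectors, roughly like the sketch below (again, `embed` is a hypothetical stand-in; with a real embedding model, the overheating-laptop sentence would come out closest to the query):

```python
# Sketch: semantic search by cosine similarity between embeddings.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.standard_normal(384)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Closeness in meaning, measured as closeness in numbers."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

corpus = [
    "the patient reports intermittent chest pain on exertion",
    "the laptop fan spins loudly and the machine shuts down",
    "crops failed after an unusually dry spring",
]
query = "my computer keeps overheating and powering off"

qv = embed(query)
ranked = sorted(corpus, key=lambda s: cosine(qv, embed(s)), reverse=True)
print("closest in meaning:", ranked[0])
```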
Yeah, you mentioned something interesting, which is the fundamental question: what do humans want to do? You talked about this idea of jobs. There was a quote, I think from an essay, by John Maynard Keynes: in the 1930s he said that in a hundred years, due to the advancement of technology, productivity would increase so much that humans would only need to work less than 15 hours per week. And you've mentioned that with the evolution of AI, AI is going to be able to do pretty much everything humans do. Yet here we are, pretty much a hundred years later, and humans are still working 40 to 50 hours a week; nothing's really changed there. What does that say about how we identify purpose, and what are the things we need to unlearn as a society if AI can really do everything we can? Well, in the time of Keynes, lots of people were doing agriculture; lots of people were working very hard at things that got automated. And as I was saying before, what we actually see in the data is that those people fragmented into a zillion other jobs. It's like a lot of these things; it's like the paperless-office myth: when you can make documents electronically, there won't be any paper. Now there actually isn't much paper, but that took a while; there was a big burst of more paper first. And I think what we see is that there will be more emphasis on what humans do: humans make choices, humans
interact with other humans. These are things which are uniquely human. Though even the interacting-with-other-humans part, I'm not sure how that's going to play out. We have a big project right now to build an AI tutor. People have been trying to do computerized education for 70 years, and it's basically always failed: one can get computers to help, but having them be the prime teacher has never worked. It looks promising this time around. I'm not saying for sure it will work; we'll probably release it in a few months and then we'll know whether it works or not. Like Khan Academy style, or what's the model? Well, this is literally you simply interacting with it. The particular thing we're targeting is algebra, because that seems to be a thing people have a lot of trouble with. It's frustrating for me, because usually we build products where I'm in the target market. This is practically the first product we've ever built where I'm not in the target market, and I have a pretty hard time internalizing what the interaction looks like. I'll be the better user then, because I'm terrible at algebra. Well, yeah. The center of something like Khan Academy is videos that explain things, and I know they've done some experiments with LLMs. The raw LLM does not do very well at this. People might say
the raw LLM is going to be able to be a tutor. It is true that if you upload the assignment or the notes from your class and tell the LLM, "ask me questions based on these notes," it'll do a reasonable job. But if you say, "lead me through this whole course and keep me on track," that's a thing that, at least in our observation, seems to need a whole lot of superstructure. An interesting statistic I just learned a couple of days ago: in our AI tutor, there's four times more AI work going on behind the scenes than in the AI that's actually interacting with the student. There's four times as much machinery keeping the whole thing on track and deciding what should happen than there is in the actual interaction with the student. And that gives you a sense of something I think is typical of what's happening with AI: there's a component of a task that the current generation of AI is really quite helpful at, and it enables things which were never possible before, but there's still a big question of how you fit that into a harness, into a bigger machine that can do the full task you want done.
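Here's a rough sketch of what such a harness might look like in code. This is purely hypothetical: `llm` is a stand-in for any chat-model API, and none of this reflects the actual architecture of Wolfram's tutor. It just illustrates the ratio of several behind-the-scenes model calls for every one message the student sees.

```python
# Hypothetical tutor harness: one visible reply, several hidden calls.
def llm(role: str, prompt: str) -> str:
    """Hypothetical stand-in for a call to some chat-model API."""
    return f"[{role} output for: {prompt[:40]}...]"

def tutor_turn(lesson_step: str, student_msg: str) -> str:
    """Produce one student-facing reply, backed by hidden machinery."""
    # Behind-the-scenes work keeping the session on track:
    on_topic = llm("classifier", f"Is this on-topic for '{lesson_step}'? {student_msg}")
    mastery = llm("grader", f"Does this show mastery of '{lesson_step}'? {student_msg}")
    plan = llm("planner", f"Given {on_topic} and {mastery}, what next?")
    draft = llm("tutor", f"Following plan {plan}, reply to: {student_msg}")
    check = llm("checker", f"Is this reply mathematically sound? {draft}")
    # Five model calls stand behind the single message the student sees.
    return draft if "unsound" not in check.lower() else llm("tutor", f"Fix and resend: {draft}")

print(tutor_turn("solving linear equations", "I got x = 7 by dividing both sides by 3"))
```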
But as far as I'm concerned, I'll know more in a few months about whether things like teaching, which seem to involve humans getting convinced of things by humans, humans getting motivated by interaction with humans, will turn out to be AI-able. I'm not sure. But specifically around jobs, and humans defining their identity around jobs when AI is replacing everything: does the nine-to-five need to change? What happens when no one needs to work anymore? Well, it depends: what does it mean to work? If you're playing video games and that's how you make your living, is that working? What I do for a living, so to speak, I don't particularly consider working. I do what I do because I like doing it; I find it interesting. It happens to be a commercially successful thing. Some of what I do is basic science that might be commercially successful in 200 years, but that's not really the point. I think the set of experiences people can have, and the set of things people can do, will be deeply leveraged by more technology, as they have been so far. People who spend their time on social media, or make a living interacting on social media, or doing podcasts or whatever else: these are things that have been enabled by technology. What we're doing right now couldn't have been done without a bunch of technology. I think what we'll see is more and more things that are possible for humans to do. Yes, it's conceivable that humans will all just become couch potatoes and be pure consumers, but what has tended to be the case is that there are certainly things for humans to do. If you ask the question, do humans need to work, in other words, can the world operate without humans
doing anything that anybody thinks of as work? It's possible. But I don't think that's how things will run. We could say we don't need all those television shows somebody had to invent; we don't need all those podcasts people are doing; the world could run without that. If we were back a hundred years, we would say: what we really need to do is automate agriculture; once we've done that, nobody's going to need to work, it's all good, we can just hang out and have food delivered to our table and never worry about anything else. But that hasn't been the history of our species. We end up finding that suddenly something becomes possible, and people do that. There's always a certain driver; there are certain kinds of scarcity that I think continue to drive things. You could say, well, it's going to be automatic to discover this thing in science. But if you decide to go in that direction, there's always going to be a first person who discovers it, and it's exciting to be the first person who does this or that thing. There's always a built-in scarcity to what's out there. And the thing to understand, perhaps, is that the computational universe of possibilities is infinite. You might have thought there'd come a time when we've made every invention that can be made; that time will never come. A century ago people were saying we'd almost made every invention that could be made; that turned out not to be true. And we now know, from a bunch of things I've done in theoretical science, that in no sense will we ever be able to say every mathematical theorem that could
be proved has been proved, or every invention that could be made has been made. That will never happen; there is an infinite frontier. And every time there's an invention that can be made, there's something new and different you can do in the world. Then the question is: humans might just say we don't care. If we look at human society, people have different beliefs about what we should be doing. You could say: I don't believe in anything invented in the last hundred years, or the last thousand years; I'm going to live my life without using those things. You can make a decision to lock out what is now possible, just as one could have the society of couch potatoes that has decided never to try to do anything. I don't think that's the way the human condition is going to play out. Though it is the case, depending on where you live in the world and how many people there are in the area, that if you're in a place with enough natural resources, you can just mine them out of the ground and make a living, so to speak, from the fact that those resources are there; there are some parts of the world where that's the case. And it's an interesting question: what do people do in that situation, when they don't need to do anything? The thing to realize is that the human condition seems to be such that we still seek out things that are, for example, scarce. And I kind of think it is somewhat on us. Yes, it is already the case, at least
for some small part of our species, that you don't really need to do anything, so to speak. But I think it's more a choice than it is "the AIs are going to take over and there's nothing for us to do." The fact that the AIs have enabled more things just puts us on a taller platform: there's more that we can then do. At least in my view of life, that's a thing that's kind of great, and it lets one go much further. It's not a thing where I say, "Oh gosh, I should just give up, the AI is going to do everything I can do." Again, that's a somewhat person-dependent thing; it's perhaps my rosy view of the human condition, that people always seek things to do. And whether they have to do it, in the sense that if they don't they will die or starve or whatever else, is a bit different from whether it's what they choose to do. I agree with you from the leisure perspective, once you've met those basic necessities. But every human needs to put food on the table; most humans need a roof over their heads and to be able to meet their basic necessities. And throughout history we've
always exchanged some form of value in the economy, and worked, to get currency, money in our case, so that we can actually pay to meet those basic necessities. But what happens when there are not enough jobs, and people don't have the ability to provide value to exchange for those basic necessities? Is that a problem you can solve through other economic means, like UBI? How do you think about that? Okay, so the first point is that a lot of what people pay for in the world today isn't basic necessities. Not in all countries, but there are many segments of the world in which what people are mostly paying for isn't basic necessities. So there is a thing people care about paying for that is much more ethereal than just getting enough food to eat. Then there's the question of what fraction of things has to be paid for at all. Here's something that surprised me. Back 40 or 45 years ago, I used to have access to computers that were much fancier than the typical person's. But consumer electronics became cheap enough that everybody has the same kinds of computers now; it's very flat. It could have been the case that consumer electronics was a big mountain where you only got to the top with a lot of effort, but it wasn't. Things like basic necessities will probably get cheaper through automation; they've gotten cheaper. People once thought the world was going to run out of food, because there weren't going to be enough crops produced to deal with the population increase. But then things like fertilizer and crop breeding were invented, and that problem
went away, because food effectively became cheaper and easier to produce. And this question of what happens when you've zeroed out certain basic necessities: well, people want other things. They want to watch that amazing special-effects television program; they want to have this or that experience. Again, that's been the experience so far. Now, it could be that at some moment our species, or some segment of it, just decides we've got enough: we don't have to invent anything new, we just hang out. It may have happened in the history of our species. I don't know the detailed anthropology, but people say there have been periods of thousands of years where there have been groups whose basic necessities were taken care of; they lived in a place where they could just pick berries off bushes to eat, and so on. And then you ask: what did the people do in that situation? A common anthropological statement (I haven't dug deep into this, so I don't know how much I believe it) is that people go into very ritualistic kinds of behaviors, which is to say, from the outside it doesn't look like much is going on; it's just people doing "ritualistic" things. I'm amused to realize the extent to which so many of the things we do today could be seen as ritualistic, from the point of view of someone who doesn't know why you're doing them. I'm sitting in front of a computer with weird pictures coming up on the screen; this seems like a devotional, ritualistic kind of activity if you don't have a thread of understanding of what the point is. In other words, if you can't connect that kind of activity to something you intrinsically know the point of, if you can't make that thread of connection, it just looks "ritualistic" to you. So the question of what the humans whose needs have been met will do: the answer might be that, from our view today, they look like they're just doing ritualistic
kinds of things, but in their internal view they're doing things that are tremendously significant. If I see some kid worrying about some social media interaction they're having, it's like: I don't know why you care about this. But to them, it's very important. From the outside, or from a different time in history, it might not look important, but in the moment, in the internal experience, it can be significant. And my guess is, you know, there's a view of the future which says we'll figure out brain uploading and all that, and pretty soon the future of humanity will be a trillion souls in a box playing video games for the rest of eternity. You might say, "Gosh, that's a terrible outcome." From our point of view today, from our experiences and the things we care about today, it seems like a terrible outcome. But my guess is that in the internal experience of that disembodied soul, playing "video games" will be perfectly meaningful, and they'll look back at our time and say, "Gosh, they couldn't do all these amazing things we can now do; what a boring, meaningless existence." Yeah, just like to someone from a thousand years ago, us running on treadmills seems crazy, like you mentioned. So, as the trajectory of technological advancement makes luxury items more accessible: now everybody has
a private car through Uber; you can rent an Airbnb wherever you want in the world and just go into someone's home; soon you'll be able to have a robot in your house, tens of robot assistants that can do pretty much anything for you. If that's the case, and you're saying basic necessities are going to be met, what's going to be the perspective on money and wealth in general? Are people going to want to be wealthy in the future like we do today, or is that relationship with money going to change? There'll always be scarcity. There'll always be the person who lives at the top of the mountain, and there's only one top of the mountain; maybe you can build another mountain eventually. There'll always be the first person who does X. So there'll always be things that are not genericizable. You may not care about them, but there will be things that somebody might think are worth bidding up, so to speak. So that's one thing to realize. As for one's attitude about it: I've lived a life where I like to do interesting things, things I find fulfilling. I'm practical enough that that activity has made me a very decent amount of money, but there have been a vast number of forks in the road of the kind: do the more interesting thing, or do the thing that makes more money. I'll always pick the more interesting thing, to some people's horror, in a sense. But I think there's a lot of currency in the world that isn't money. There's social currency: do you have friends you like? There's fame currency. There are lots of things that are not money-related; there are already lots of different kinds
of things. And the question of money as the thing that buys you food, and so on: the very concept that there could be a single store of value, and that it's money, is an interesting concept that's worked pretty well in thinking about economics. I'm not sure it holds up; it's a thing I'm hoping to think about, and I haven't figured it out. People sometimes say you can buy anything with money, but that's actually not true in the world today. Take the things I've done in figuring out stuff in science, for example; I've had a really good time doing some of that. You could pay as much money as you want; it's not going to get you that experience. There are things you can do that on-ramp to it, but it's not something you can buy with money. And that's one thing to realize about what you need for the basic necessities and the economic model that exists. This whole question of what value really is in the world is an interesting one. Actually, some science that I've done looks like it begins to tell you a bunch of things about how to think about economics. That's a very different topic, which we're not going to have a chance to really dive into here; and also, I haven't figured it out. It's a very strange thing that this physics project that got launched five years ago has led to a formalism for thinking about things that, in the last year, has let me understand a bunch more about biology and biological evolution, and a bunch more
about machine learning. I'm pretty sure it has a bunch of things to say about economics, but I don't yet know exactly what those are. Exercises for that kind of economics are things like: if you take cryptocurrency, is it really worth anything? Which, by the way, relates to the question you're asking, because people might say cryptocurrency isn't really worth anything, because you can't practically go buy food with it, and therefore it isn't worth anything. I think that argument is not correct. In a sense, it's worth something because there's this whole network of things that depend on its existence. And that's again an example where you could say all the value in the world is the fact that we have houses and food and things like that, and yet there are these other kinds of value that seem to exist. In times past, having enough food to eat was the problem; it's still a problem in some parts of the world, but in many parts of the world the problem is usually that you eat far too much, whereas at a different time in history getting enough food would have been the big stretch goal. Those portraits of Henry VIII looking extremely rotund: that was a sign of success, that you could be extremely rotund, because most people couldn't get enough food to eat at the time. So this question of what people will be doing with themselves in the future: will
Work has changed a lot. Before the industrial revolution, work didn't look anything like the nine-to-five of today: mostly people were working for themselves, tending crops and things for their own use. Then things got streamlined and centralized, and now there's a little bit more of people working for themselves again, so to speak. Another thing I've noticed — I don't really have data about it — is that more people have more different things that they do. There was a period when, if you asked someone "what do you do?", you'd get one answer. Now it's more like: well, this is my day job, but then I do podcasting, and then I do competitive bicycle racing, and then I do whatever else.
I think, even independent of the economy fragmenting into many different job categories, people's individual lives are ending up with more tracks in what they do. Maybe part of the reason that's possible is that the cost of getting into those tracks has gone down. Take podcasting, for example: if you wanted to be a person who broadcast things to the world, you had to build a whole stack of things, or go work for a radio station or whatever else. But because of technology, the cost of getting into podcasting went way down, so we can do it as a gig, so to speak; we don't have to make it our whole life. And my guess is that that's probably a trend — people doing a bunch of different things. Another interesting question: there was a period of time when people would do one job for their whole life.
That's often still a great thing, and sometimes you can be working in one place while what you do changes completely. But this fragmentation relates, again, to the point that it's more about the choice than about the mechanics of what's done. The mechanics getting automated just means there are more choices, and maybe that's manifest both in more kinds of jobs and in more jobs done by individual people. That's my guess. As I say, there could be a choice to just sit around as a couch potato, and maybe there'll be a segment of society that does that. Maybe from our point of view today that looks like a pretty bad outcome — just like the trillion uploaded souls looked like a bad outcome — but I don't know.
For me, I'm not a very good couch potato, so to speak. There are plenty of things that lots of people find interesting that I don't: I don't watch television, I don't play games. Those are just things I personally don't happen to find interesting, but other people find them absolutely fulfilling and interesting, and I wouldn't make any claim that there's anything about my particular interests that's in any way more wonderful than other people's — it's just the particulars of what I'm interested in. I think that's probably another thing that will be true as more gets automated: each of us, with our particular interests and foibles, is capable of pursuing those things in a way that wasn't possible before, because the barrier to entry — the amount of mechanical stuff that had to be done — was so great. Podcasting really is a good example of something you just couldn't have done without the levels of automation that exist today.
Perhaps this is an overly optimistic view of things, but we're all forged with different interests and objectives and things we care about — a pretty arbitrary mixture of genetics and physiology together with our experiences. We're all in this position where there are things we care about doing. And one could argue that as more gets automated, the world becomes more and more ergonomic for us to do what we want to do. In other words, in the past it was like: well, there's this and this and this thing I want to do, but gosh, I'm never going to be able to do that because it's just too hard. But as more and more gets easy, it's like: well, there's this and this and this thing that I want to do — okay, great, now I can actually do it.
Again, the story of my life has been that for the things I've been really keen to do, I've built a big tower of automation that makes those things reachable; without that, I wouldn't have been able to do them. Even in recent times I've been doing a lot of science that's quite diverse in the kinds of things it covers, and without the technology tower I'd built, and without a certain amount of scientific knowledge I've accumulated, it would be completely inconceivable to go across all those different areas. You can't write something about biology one month, about machine learning another month, and about foundations of mathematics another month — that's been made possible because I automated a whole bunch of stuff.
As I'm talking to you about it, I'm realizing the picture really is this: we all have lots of things we want to do, and the barrier to entry to most of those things has been too high for us to actually do them. As more gets automated, we'll be able to do them. Now, what do we get for that? Well, we might get money. We might get some other kind of currency. We might just get internal fulfillment, which is perhaps its own currency — the personal currency, so to speak. And in some respects, just as putting food on the table became less of a stretch over time, it may very well be the case that certain other kinds of things become less of a stretch too — as I say, this whole question about things like consumer electronics.
As a practical person who makes choices in their life, I've thought at a very practical level about what you get to do at this level of money versus that level of money, and it's interesting that there are these different levels of what's possible. I've been fortunate enough that I'm not dealing with the make-the-rent-payment level, but it's still interesting that there are things at different levels. One could get the money to buy a yacht, but I don't care — that's not something I'm interested in. And plenty of the things that I do, for example, are things where it's just not a question of money; being able to do them is a question of other kinds of things that aren't directly money.
What do you think is the right amount of money for someone to make, where people can just stop caring about money and pursue what they want to do — let's say in the US? It's an interesting question. I don't know. There was an image in the 1950s or so of the little house with the white picket fence, and that was kind of the image of the normal thing people would want, so to speak. But sometimes it's a complicated "be careful what you wish for": my main house is a great house that we built years ago that's really big, and our kids have all moved out now, so now it's kind of too big and a big pain in the neck. So I don't know a number, because it depends.
If I look at my own case, it depends on the level of what you want to do. For me: I do projects where I might burn a few million dollars and where the project might not work out — and you're like, oh well, there it is. But if somebody says, "here's a project where it would cost $100 million," I'm like, "I can't do that." I suppose the things that I'm interested in doing are somewhat titrated by the resources that I have to do them. I don't happen to be interested in building rockets to go to Mars or whatever — a very expensive endeavor. But it's an interesting thing: perhaps in this physics project of mine I might suddenly become really interested in doing a physics experiment that costs $100 million. So what happens then? Are you going to care more about making money then, just so you can pursue those paths?
I don't know. That particular case is actually a funny situation, because I'm convinced enough that this theory of physics we have is right that seeing the experiments happen will be cool, but it'll mostly be an "okay, world, now you can believe me" type thing, which is not something I care that much about. So I don't know; we'll have to see. It's always very hard to predict how one will feel about things until they actually happen. But in that case I think it's kind of like: okay, you can do this; it's going to cost $100 million; I think it's worth doing; if you care about finding out whether I'm right or not, go do it — not on my dime, so to speak. But I might change my mind.
It's always a complicated thing. There are plenty of things that I've done — for example, the science that I do is mostly not very expensive. The people who help me with it and so on cost a certain amount, but the theoretical science is not terribly expensive. In fact, sometimes these things defeat themselves by being too big. It happens with lots of things: if you've got a small number of people working intensely on something, with a lot of flexibility in what's going on, and you say, "well, actually, I'm going to have a thousand people work on this," you end up with necessary structure — otherwise it just becomes a total mess. And that structure reduces the level of flexibility and innovation that's possible. It then becomes a complicated management issue to carve off the piece that's going to be the innovative piece from the mechanical piece that actually gets the thing done. But I'd have to go look at the cost-of-living data that we have in Wolfram|Alpha to have anything meaningful to say about a number. I'm sure you'd embarrass me if you started quizzing me about how much different grocery items cost.
I have no idea. For everybody, I think, there's a level of "I don't really care about the pennies," and where that set point is depends on lots of things about how you lead your life. But this question of when it will be the case that most people say "I've got enough money, I don't need any more" — that will never happen, because there'll always be things that are scarce, and some people will say "I want to make more money so I can get that scarce thing." For myself, I've been fortunate in that I've generally been in a position where I can do the things I want to do; I have the resources to do the things that I want to do.
Now, maybe I'm kidding myself, because really, if I had more resources, I'd think of a lot more things that I could do — but I don't think so. The other thing I see happen a lot: you see people saying "I can't do that because I don't have enough money to do it." Sometimes that's true, but a lot of the time it's just not true. A lot of the time it's just a matter of deciding you're going to do it, and there'll be a way to do it that doesn't really have anything to do with the money. It's just an excuse, right? An excuse for "I don't really have the initiative to do that thing — oh, if I had more money, my life would be cushier, and then I'd have the initiative to do that thing." Maybe it happens that way for some people.
Look, I know that, for example, there have been times when our company has done particularly well, and at least one of them is correlated with a time when I put more effort into basic science. I can't say that was really a causal connection, but maybe there was a psychological connection there: "oh, I don't have to put so much effort into technology development, because we did really well recently on that" — and so that tips me into "oh, I can put effort into basic science." I'm not sure. It's hard to tell, even from the inside, what exactly is leading you to different kinds of motivations. But I have to say I've seen an awful lot of cases where people explain, "oh, I can't do that because I need more money," and it's just not true — it's a matter of initiative.
I had a charming case, actually, many years ago now. I kind of have a hobby of doing CEO counseling, advising companies and so on, and I've also always had an interest in mentoring kids. At some point I noticed that those two categories, both of which I found interesting — kids and CEOs — are types of people who believe that anything is possible. There are lots of other people who don't believe that anything is possible; they feel like "we're stuck in this particular track we're in." But those two groups are more in the anything's-possible camp. Now, I try to make my life efficient, so when I'm driving from here to there I'll make phone calls and things, and one day I had a phone call with a kid and then one with a CEO.
The kid was explaining that he couldn't do this or that because he didn't have enough money, etc. And the CEO I was about to talk to was about to sell their company and make about $50 million. I told the kid: this person on my next phone call — and I already knew what some of the issues were — is going to tell me they can't do this and that and the other thing, but clearly money is not their issue. So the reason they can't do these things is not that they don't have enough money, which is what the kid was saying about himself, but something else — some internal block of "I just can't do this because I don't feel confident doing it, I don't have the initiative to do it," etc. The kid told me that was a useful conversation, and he went and did the thing he was thinking about doing, and it worked out pretty well.
So it's interesting to see those cases — but I'm not answering the question. If you ask me: if we could distribute UBI from AI to everybody, would the world be a better place? I doubt it. The experiments people have done, the situations people have had where you reset the base level and everybody has some amount: still, some people are going to seek the scarce things. And there's also what I'd call the negative-value-of-money story. I always notice there's positive value of money and there's negative value of money, and I know plenty of people who've been bitten in so many ways by the negative value of money, so to speak.
There are more such scenarios than we could enumerate, but there are typical ones that everybody knows about: "oh, I inherited a ton of money — now what am I supposed to do with my life?" Or there's a pot of money, and people who would otherwise be friends are arguing to the death over it, so to speak, even though if the pot of money wasn't there, they'd just be happily friends with each other. There are all these different scenarios, and it is complicated. Just as I don't think it's true that you can buy anything with money, I don't think injecting some amount of money into the system solves everything, so to speak. But this is getting far out of my usual territory — this is me as a common-sense observer of the world.
Another thing to say, I suppose, beyond "AI makes it easier to do things," is that people have different skills and different interests, and one can be lucky or unlucky in the period of history in which one lives. For example, I consider myself lucky to have been in a period of history when computers started being usable things, because they're a good fit for a lot of stuff that I like doing. If I'd been obsessed with exploring the surface of the Earth or something, I'd have lived in the wrong time in history, because we got satellites that took pictures of all of it. And there are times in history when, if you were an intellectual with a bunch of "I really want to think about ideas," it was: sorry, you have to plow the fields to get the food, or you have to fight in some army so your world doesn't collapse.
I've been lucky enough to live in a time, and to be in places, where it's been more or less peacetime everywhere I've been. Another example: there was a time when, if you were a techie type of person doing things in business, it was "oh, you're just a techie — you're off in the back room, so to speak." Then at some point the techies — the nerds — took over, so to speak, and it became pretty cool to be a techie doing business.
You could have been very frustrated being the techie who really wants to be calling the shots in the business while everybody tells you, "no, no, the professional managers will do that." And I do think that what's coming is probably a time when, if you like thinking, this is the time for you, so to speak. If you like having ideas, this is the time for you. If you like doing the mechanics of things, maybe it's less the time for you, because the things which are more mechanical are getting automated and will get more automated. And that relates to what people learn in education, because a lot of education in the last hundred years or so has been about the mechanics of doing things.
It should be more about learning the facts you need to be able to think broadly about things, and then learning how to think broadly about things. But that's not been the industrialized form of education that we've tended to have, and I suspect it will become more and more important. I tend to think the two extremes — philosophy as a way of thinking about things, and computational thinking as a formalized way of thinking about things — are two good forms of thinking worth learning, so to speak. In the coming world, where lots of the mechanics have been automated, those become very significant things for people to be able to do. And if you like doing those things, now's the time for you, so to speak. Based on this idea of what's coming and things getting automated: we're entering this time now, particularly in the world of software, where you can put in a prompt and create software — there's Replit, there's Windsurf, there are programs out there now where you can create an app or a website within minutes that would normally have taken weeks or months or even years. So people are saying this idea of creating software is becoming more of a commodity, and you can kind of see this world where, just the way we can publish podcasts in such a seamless way — distributed around the world, published on YouTube or Spotify — you're going to be able to publish apps on the app store, and it's going to be this kind of hybrid of YouTube meets app store, people are saying.
I don't quite think so. You've got to understand — and actually I need to go soon, but — this whole business about automating software is what I just spent the last four decades doing. People who use Wolfram Language go from ideas to stuff that runs in amazingly short amounts of time. The fact that there is this whole ecosystem of people doing manual-labor software development is just bizarre. It's happened because of a bunch of the economics of labor and a bunch of the ways that technology has developed. But the fact is, you say you can create an app quickly — that's what I do many times a day, writing tiny amounts of code. That's been my objective: to automate those things. What's happened is that there is a craft of software engineering where you get a spec and you spend two weeks implementing the spec. It's often a funny thing to see in companies: the people who run them — the CTOs, the CEOs and so on — actually use our tech and build prototypes of things in a very short time, and then they go on and figure out the next thing they want to do and build that as well. But the people who are in the trenches doing software engineering are like, "well, we just got a spec; we build to that spec." Okay, now if they could do that much more quickly — which they can with our tech — then they're like, "well, now I've got something
very difficult to do: I've got to make a new spec. That's not what my job is; my job is to grind out code." And yes, this is exactly an example of where the mechanical stuff is getting automated. That particular one I know very well, because I just spent the last four decades doing the automation of that one. The fact that you can go from something more like a natural-language description to code is something that plays very well into our tech stack. But some part of that is: you can write a blob of Python code that would just be one function in our language anyway. You wouldn't be writing that — though the AI can write it.
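(To make that contrast concrete — a hypothetical illustration, not an example from the conversation: computing the n-th prime takes a small blob of Python, while in Wolfram Language the whole computation below is just the single built-in Prime[n].)

```python
# Hypothetical illustration: the kind of "blob of Python" that collapses
# to one built-in function in a higher-level computational language
# (Prime[n] in Wolfram Language).

def nth_prime(n: int) -> int:
    """Return the n-th prime (1-indexed) using a simple growing sieve."""
    if n < 1:
        raise ValueError("n must be >= 1")
    limit = 16
    while True:
        # Sieve of Eratosthenes up to `limit`.
        is_prime = [True] * (limit + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(limit**0.5) + 1):
            if is_prime[p]:
                for m in range(p * p, limit + 1, p):
                    is_prime[m] = False
        primes = [i for i, flag in enumerate(is_prime) if flag]
        if len(primes) >= n:
            return primes[n - 1]
        limit *= 2  # not enough primes found yet; enlarge the sieve

print(nth_prime(100))  # -> 541
```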
But the thing to understand about things like software — and it relates very much to what we've been talking about in this conversation — is: you say you snap your fingers and then there's an app. Well, what is that app supposed to do? You have to describe what it does, and that is making a choice, so to speak. The description of what it does is the thing that's going to be of value; it's not the mechanics. With our tech stack, for example, for all the people who use it, the mechanics of doing a lot of things have been zeroed out for years — which is great: it means one's been able to go further in lots of kinds of development and science and so on. So it isn't just "okay, make me an app" — what the heck is the app supposed to do, right? You have to conceptualize: I want an app that's going to do this. And then you start to dig into: okay, how are you going to build that, etc. So that's actually a really good example of the things I've been talking about: what becomes the human act is deciding what you want it to do. As I say, in that particular case I've spent the last four decades trying to do that automation; AIs add another level of automation to it, but it doesn't get you around the fact that somewhere you still have to decide what the app is supposed to do. That's what I've been living for many decades now: getting to the point where it's mostly about imagining what the app is supposed to do, not about the mechanics of actually writing it. Wouldn't that still increase the supply of software significantly, just as the supply of podcasts has significantly increased? And if that's the case, what's the moat for a software entrepreneur you're advising, if software used to be hard and now it's easier and there's more supply than ever?
Well, it's a question of what it does, right? Of what I've done in my life, probably the thing of highest value is the design of our computational language. The implementation — sure, that's valuable; it cost a huge amount of money to do, and people use it every day, all the time. But it's actually interesting you should say that, because for me the thing that is really of the highest value is what I've spent lots of effort on, which is the functional design of the language. We actually live-stream many of our software design meetings, so there are now, I don't know, something like a thousand hours out there in the world of what's actually involved in doing that. It's a very interesting intellectual activity. And I suppose, in a sense, you could take that spec and reimplement it — good luck with that. There are so many different kinds of moats. There are moats of the form "I've got this social media platform and all your friends are on it, so you should be on it too." And there's "we've got this unique intellectual property that we've been able to build" — that's the main tower we have. For example, in our tech stack we have been using various forms of machine learning
and AI to help with software development for a long time. A lot of what we have to do is make meta-algorithms that select between algorithms, and that's a thing for which we've used machine learning for a long time. But again, the strategy of what's happening inside — which is a lot of human choices — is still a thing that you don't get to zero out. My company is about 800 people, which is really tiny relative to what we've been able to produce.
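(To make the meta-algorithm idea concrete, here is a hypothetical sketch of the general pattern — not Wolfram Research's actual internals: cheap features of the input feed a selector, stubbed here with hand-written rules standing in for a trained model, which picks the concrete algorithm to run.)

```python
# Hypothetical sketch of a meta-algorithm: inspect cheap features of the
# input, then dispatch to whichever concrete algorithm those features favor.
# A real system might train a classifier here; fixed rules stand in for it.

def features(xs):
    """Cheap features a selector model might look at (prefix keeps order)."""
    sample = xs[:64]
    pairs = list(zip(sample, sample[1:]))
    sortedness = sum(a <= b for a, b in pairs) / max(len(pairs), 1)
    return {"n": len(xs), "sortedness": sortedness}

def insertion_sort(xs):
    out = list(xs)
    for i in range(1, len(out)):
        key, j = out[i], i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def select_sort(xs):
    """The meta-level: choose an algorithm from features of the input."""
    f = features(xs)
    if f["n"] < 32 or f["sortedness"] > 0.9:
        return insertion_sort  # small or nearly-sorted: low overhead wins
    return merge_sort          # general case: guaranteed O(n log n)

data = [5, 3, 8, 1, 9, 2]
print(select_sort(data)(data))  # the selector picks, then runs, an algorithm
```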
Relative to the amount of software we've produced, how has that been possible? Because we've automated the heck out of things. We built this tower where we're recursively able to do more because we've automated the last thing we were able to do. So it's an interesting question. The space of possible software is like these little programs that I study in the computational universe: every one is a piece of software in a sense; every one does something. Most of them are not things that anybody would care about, but some of them make really pretty pictures, some of them make good cryptographic systems, some of them make good image-processing filters. Those are ones where we've been able to mine what's out there and turn it into something that we find useful.
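(As a concrete taste of those little programs: an elementary cellular automaton is fully specified by a single 8-bit rule number, and Rule 30 is the one Wolfram has long pointed to as a trivially simple program with complex, random-looking behavior. A minimal sketch, with periodic boundary conditions assumed for simplicity:)

```python
# Minimal elementary cellular automaton, printed as text. Each cell's next
# state is the bit of `rule` indexed by its 3-cell neighborhood (left,
# center, right) read as a binary number. Rule 30 yields famously complex,
# random-looking output from this one-line update rule.

def ca_step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)  # (i +/- 1) % n gives wraparound boundaries
    ]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1  # start from a single "on" cell in the middle
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = ca_step(row)
```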
But the leading piece of that has to be: well, what do you want it to do? What is the thing you want? So you could ask: is there going to be a broader set of things that people want software to do? Maybe as people's activities broaden out, there will be. Take podcasting again: there's a bunch of software around podcasting that didn't need to exist until podcasting existed. In terms of the expansion opportunities, I always see this in terms of technology that comes over the horizon and enables things. One that has almost come over the horizon a bunch of times, but is still firmly sitting at the end of the rainbow, is VR — or XR in general. One day that will really come over the horizon, and it will enable a lot of new things — you know, "how do I manage the virtual post-it notes that I put in my environment?" — there'll be an app for that, so to speak. And as the general tide of technology rises, there start to be more and more things where we now have to figure out: well, what are we going to do, what particular direction are we going to take? I really think it's very much the same story as the story of AI: more becomes possible, more becomes automated, and now it's a question of what we humans choose as the next step that we want to take. And talking of next steps, I need to go,
and I think that's a great closing question for someone listening to ask: what are we supposed to do? Stephen, thank you so much for coming back on the show. This was such an intellectually interesting and thought-provoking conversation, as usual. Where can people find you online, if that's something you even care about? stephenwolfram.com is a good place to start. You can also find me on all the usual social media platforms, and I do — I guess it's a podcast — live streams a couple of times a week, answering questions and so on. Those are places people can find me. Beautiful. Anyway, thank you, Stephen. Well, thanks for lots of interesting questions. That's been fun.