Thanks for doing this.

My pleasure. Excited to chat.

I wish we had days, but we have about 40 minutes, so we'll get through as much as we can in this time. This is a moment of a lot of public-facing progress, a lot of hype, a lot of concern. How would you describe this moment in AI?

A combination of excitement and too many things happening at once; we can't follow everything. It's hard to keep up, even for me. And a lot of perhaps ideological debates that are at once scientific, technological, even political, and even moral in some way.
Boy, I want to dig into that, but first a brief background on your journey to get here. Is it right that you got into this by reading a book about the origins of language? Was that how it started?

It was a debate between Noam Chomsky, the famous linguist, and Jean Piaget, the developmental psychologist, about whether language is learned or innate. Chomsky said it's innate, and Piaget, on the other side, said yes, there is a need for structure, but it's mostly learned. There were interesting articles by various people at this conference debate, which took place in France, and one of them was by Seymour Papert from MIT, who was describing the perceptron, one of the early machine learning models. I read this when I was maybe 20 years old, and I got fascinated by this idea that a machine could learn. That's what got me into it.
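For context on what Papert was describing: below is a minimal sketch of the classic perceptron learning rule. This is an editorial illustration, not something from the conversation; the toy dataset, learning rate, and epoch count are made up for the example.

```python
# Minimal perceptron sketch: learn a linear decision boundary by
# nudging the weights whenever the current prediction is wrong.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (features, label) pairs with label in {-1, +1}."""
    n = len(samples[0][0])
    w = [0.0] * n  # weights
    b = 0.0        # bias
    for _ in range(epochs):
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Toy, linearly separable data: positive only when both inputs are on.
data = [((0, 0), -1), ((1, 0), -1), ((0, 1), -1), ((1, 1), 1)]
weights, bias = train_perceptron(data)
```

The whole algorithm is a dozen lines, which is part of what made the idea that "a machine could learn" so striking at the time.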
So you got interested in neural nets, but the broader community was not interested in neural nets?

No. We're talking about the 1980s, so essentially very, very few people were working on neural nets then, and they were not really being published in the main venues or anything like that. There were a few cognitive scientists in San Diego, for example, working on this, David Rumelhart and Jay McClelland, and then Geoffrey Hinton, who I ended up working with after my PhD, who was interested in this. But it was really a bit lonely. There were a few isolated people in Japan and Germany working on this kind of stuff, but it was not a field. It started being a field again around 1986 or something like that.

And then there's another big AI winter. What's the phrase you used? You and Geoffrey Hinton and Yoshua Bengio had a type of conspiracy, you said, to bring neural nets back. Was it that desperate? Was it that hard to do this work at that point?
Okay, well, the notion of an AI winter is complicated, because what's happened since the '50s is that there have been waves of interest in one particular technique, excitement, people working on it, and then people realizing that this new set of techniques was limited; then interest wanes, or people start using it for other things and lose the ambition of building intelligent machines. There have been a lot of waves like this: with the perceptron, things like that, and with more classical computer science, logic-based AI. There was a big wave of excitement in the '80s about logic-based AI, what we call rule-based systems, expert systems, and then in the late '80s about neural nets, and then that died in the mid-'90s. So that's the winter where I was out in the cold.

What happened in the early 2000s is that Geoff, Yoshua, and I got together and said: we have to rekindle the interest of the community in those methods, because we know they work. We just have to show experimentally that they work, and perhaps come up with new techniques applicable to the new world.
In the meantime, what happened is that the internet took off, and now we had sources of data that we didn't have before, and the computers got much faster. All of that converged toward the end of the 2000s and the early 2010s, when we started having really good results in speech recognition, image recognition, and then, a bit later, natural language understanding, and that really sparked a new wave of interest in machine-learning-based AI. We call that deep learning. We didn't want to use the words "neural nets" because they had a bad reputation, so we changed the name to deep learning.

It must be strange, I imagine, having been on the outside, even of computer science, for decades, to now be at the center not just of tech but in some ways of the global conversation. It's quite a journey.

It is, but I would have expected the progress to be more continuous, if you want, instead of those waves.
Yeah, I wasn't at all prepared for what happened, neither for the loss of interest by the community in those methods, nor for the incredibly fast explosion of the renewed field over the last 10 or 12 years.

And now there's been this huge, at least public-facing, explosion in the last 18 months or couple of years, and there's been a big push for government regulation that you have had concerns about. What are your concerns?

Okay, so first of all, there's been a lot of progress in AI and deep learning applications over the last decade, a little more than a decade, but a lot of it has been a bit behind the scenes. On social networks it's content moderation, protection against all kinds of attacks, things like that; that uses AI massively.

When Facebook knows it's my friend in the photo, that's you?

Yes, but no, not anymore.

Oh, not anymore?

There is no face recognition on Facebook anymore.

Isn't there?

No, it was turned off several years ago.

Oh my gosh, I feel so dated. But the point being that a lot of your work is integrated in different ways into these products.

Oh, if you tried to rip deep learning out of Meta today, the entire company would crumble; it's literally built around it. So there are a lot of things behind the scenes, and things that are a little more visible, like translation, for example, which uses AI massively.
Or generating subtitles for videos so you can watch them silently: that's speech recognition, and some of it is translated. So that part is visible, but most of it is behind the scenes, and in the rest of society it's also largely behind the scenes. You buy a car now, and most cars have a little camera looking through the windshield, and the car will brake automatically if there is an obstacle in front. That's called an automatic emergency braking system, and it's actually a required feature in Europe; a car cannot be sold unless it has it, and almost every American car has it as well. And that uses deep learning, a convolutional net in fact, my invention. So that saves lives. Same for medical applications and things like that. So that's a little more visible, but still kind of behind the scenes.

What has changed in the last year or two is that now there are AI-first products in the hands of the public, and the fact that the public got so enthusiastic about them was a complete surprise to all of us, including OpenAI and Google and us.
Okay, but let me get your take on the regulation, because even some big players, you've got Sam Altman at OpenAI, you've got everyone at least saying publicly: regulation, we think it makes sense.

Okay, so there are several types of regulation. There is regulation of products: if you put one of those emergency braking systems in your car, of course it's been checked by a government agency that makes sure it's safe. That has to happen, right? So you need to regulate products, certainly the ones that are life-critical, in healthcare and transportation and things like that, and probably in other areas as well. The debate is about whether research and development should be regulated, and there I'm very clearly, very strongly of the opinion that it should not. The people who believe it should are people who claim there is an intrinsic danger in putting the technology in the hands of essentially everyone, or every technologist, and I think exactly the opposite: that it actually has a hugely beneficial effect.
What's the benefit?

Well, the benefit is that we need to get AI technology to disseminate into all corners of society and the economy, because it makes people smarter and more creative. It helps people who don't necessarily have the technique to put together a nice piece of text, or a picture, or video, or music, to be more creative; the tools are essentially creation aids. It may facilitate a lot of businesses; a lot of boring jobs can be automated. So it has a lot of beneficial effects on the economy, on entertainment, on all kinds of things. Making people smarter is intrinsically good. You could think of it this way: in the long term it may have an effect similar to the invention of the printing press, which had the effect of making people literate, smarter, and more informed.

And some people tried to regulate that too.

Well, that's true. Actually, the printing press was banned in the Ottoman Empire, at least for Arabic, and some people, like the minister of AI of the UAE, say that this contributed to the decline of the Ottoman Empire. So yes, if you want to ban technological progress, you're taking a much bigger risk than if you favor it. You have to do it right, obviously; there are side effects of technology that you have to mitigate as much as you can, but the benefits far outweigh the dangers.
The EU has some proposed regulation. Do you think that's the right kind?

Well, there are good things in that proposed regulation, and there are things, again, when it comes to regulating research and development and essentially making it very difficult for companies to open-source their platforms, that I think are very counterproductive. In fact, the French, German, and Italian governments have basically blocked the legislation in front of the EU Parliament for that reason. They really want open source, and the reason they want open source is this: imagine a future where everyone's interaction with the digital world is mediated by an AI system. That's where we're heading. Every one of us will have an AI assistant. Within a few months you will have that in your smart glasses: you can get smart glasses from Meta, you can talk to them, and there's an AI assistant behind them, and you can ask it questions. Eventually they will have displays, so I could speak French to you and it would be automatically translated in your glasses; you'd have subtitles, or you would hear my voice but in English. You'd be in a place and it would indicate where you should go, or give you information about the building you're looking at, or whatever. So we'll have intelligent assistants living with us at all times. This will amplify our intelligence. It would be like having a human staff working for you, except they're not human.
And they might even be smarter than you, but that's fine; I work with people who are smarter than me. So that's the future. Now, if you imagine this kind of future, where all of our information diet is mediated by those AI systems, you do not want those things to be controlled by a small number of companies on the West Coast of the US. It has to be an open platform, kind of like the internet. All the software infrastructure of the internet is completely open source, and it's not by design; it's just that that's the most efficient way to have a platform that is safe, customizable, and so on. And for assistants, those systems will constitute the repository of all human knowledge and culture. You can't have that centralized; everybody has to contribute to it, so it needs to be open.

You said at the FAIR 10th-anniversary event that you wouldn't work for a company that didn't do it the open way. Why is it so important to you?

Two reasons.
The first is that science and technology progress through the quick exchange of information, of scientific information. One problem we have to solve with AI is not the technological problem of what product to build; that's a problem too, of course, but the main problem we have to solve is how to make machines more intelligent. That's a scientific question, and we don't have a monopoly on good ideas. A lot of good ideas come from academia; they come from other research labs, public or private. If there is a fast exchange of information, the field progresses faster, and if you become secretive, you fall behind, because people don't want to talk to you anymore.

Let's talk about what you see for the future. It seems like one of the big things you're trying to do is shift from these large language models that are trained on text to looking much more at images. Why is that so important?

Okay, so ask yourself the question: we have those LLMs, and it's amazing what they can do.
They can pass the bar exam, but we still don't have self-driving cars. We still don't have domestic robots. Where is the domestic robot that can do what a 10-year-old can do, clear the dinner table and fill up the dishwasher? Where is the robot that can learn to do this in one shot, like any 10-year-old? Where is the robot that can learn to drive a car in 20 hours of practice, like any 17-year-old? We don't have that. That tells you we're missing something really big.

We're training the wrong way?

We're not training the wrong way, but we're missing essential components to reach human-level intelligence. We have systems that can absorb an enormous amount of training data from text, and the problem with text is that it only represents a tiny portion of human knowledge. This sounds surprising, but in fact most human knowledge is things we learn when we're babies, and it has nothing to do with language. We learn how the world works, we learn intuitive physics, we learn how people interact with each other, we learn all kinds of stuff that really doesn't have anything to do with language.
And think about animals. A lot of animals are super smart; in some domains they're actually smarter than humans. They don't have language, and they seem to do pretty well. So what type of learning is taking place in human babies and in animals that allows them to understand how the world works, become really smart, and have the common sense that no AI system today has? The joke I make very often is that the smartest AI systems we have today are stupider than a house cat, because a cat can navigate the world in a way that a chatbot certainly cannot. A cat understands how the world works, understands causality, understands that if it does something, something else will happen, so it can plan sequences of actions. Have you ever seen a cat sitting at the bottom of a pile of furniture, looking around, moving its head, and then going jump, jump, jump, jump? That's amazing planning. No robot can do this today. So we have a lot of work to do. It's not a solved problem. We're not going to get human-level AI systems before we make significant progress in training systems to understand the world, basically by watching video and by acting in the world.

Another thing you seem focused on is what I think you call an objective-based model.

Objective-driven.

Objective-driven. Explain why you think that's important. And I haven't been clear, just hearing you talk about it, whether safety is an important component of that, or whether safety is separate, or alongside it.
It's part of it. So, the idea of objective-driven: okay, first let me tell you what the problem with current systems is. LLMs really should be called autoregressive LLMs. The reason we should call them that is that they just produce one word, or one token, which is a sub-word unit, it doesn't matter, one word after the other, without really planning what they're going to say. You give them a prompt and ask what word comes next; they produce one word, then you shift that word into their input and ask what word comes next, and so on. That's called autoregressive prediction. It's a very old concept. Geoff did it, like, 30 years ago or something; actually Geoff had some work on this with Ilya Sutskever when Ilya was his student a while back, though that wasn't very long ago. Yoshua Bengio had a very interesting paper on this in the 2000s, using neural nets to do it, probably one of the first.

Anyway, I got you distracted here.

Yes, so you produce words one after the other, without really thinking about it beforehand. The system doesn't know in advance what it's going to say; it just produces those words. The problem with this is that it can hallucinate, in the sense that sometimes it will produce a word that is really not part of a correct answer, and then that's it.
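To make the mechanism concrete, here is a minimal sketch of an autoregressive decoding loop. This is an editorial illustration, not LeCun's code: the "model" is a toy bigram table and every name in it is hypothetical, but the structure, predict one token, shift it into the input, repeat, is exactly what he's describing.

```python
# Toy autoregressive generation: predict one token at a time,
# append it to the context, and repeat. A real LLM replaces
# `next_token_distribution` with a neural network over a huge vocabulary.
import random

# Hypothetical stand-in for a trained model: a bigram table
# mapping the previous token to candidate next tokens.
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "sat": ["down", "<eos>"],
    "dog": ["ran"],
    "ran": ["away", "<eos>"],
    "down": ["<eos>"],
    "away": ["<eos>"],
}

def next_token_distribution(context):
    """Return candidate next tokens given the context so far."""
    return BIGRAMS.get(context[-1], ["<eos>"])

def generate(prompt_tokens, max_len=10):
    tokens = list(prompt_tokens)
    for _ in range(max_len):
        # The model commits to ONE next token; it has no plan
        # for the rest of the answer.
        token = random.choice(next_token_distribution(tokens))
        if token == "<eos>":
            break
        tokens.append(token)  # shift the new token into the input
    return tokens

print(" ".join(generate(["the"])))
```

The key point is the loop: each step commits to a single token with no lookahead, which is why one bad token, once emitted, can't be retracted.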
The second problem is that you can't control them. You can't tell the system: okay, you're talking to a 12-year-old, so only produce words a 12-year-old can understand. Well, you can put that in the prompt, but it has limited effect unless the system has been fine-tuned for it. So it's very difficult, in fact, to control those systems, and you can never guarantee that whatever they produce won't escape the conditioning, if you want, the training they've gone through, to produce not just useful answers but answers that are non-toxic and non-biased and so on.
Right now that's done by fine-tuning the system, training it with lots of people answering questions and rating answers; it's called reinforcement learning from human feedback. There's an alternative to this, and the alternative is that you give the system an objective. The objective is a mathematical function that measures to what extent the answer produced by the system conforms to a bunch of constraints that you want it to satisfy: is this understandable by a 12-year-old, is this toxic in this particular culture, does this answer the question in the way that I want, is this consistent with what my favorite newspaper was saying yesterday, or whatever. A bunch of things like that, constraints that could be safety guardrails or just the task itself. Then what the system does, instead of just blindly producing one word after the other, is plan an answer that satisfies all of those criteria, and then it produces that answer. That's objective-driven AI. That's the future, in my opinion.
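As a very rough sketch of the idea, again an editorial illustration rather than LeCun's actual architecture: imagine scoring whole candidate answers against explicit constraint functions and picking the best one. Every function, weight, and threshold here is hypothetical; a real objective-driven system would optimize in a learned representation space rather than enumerate finished answers.

```python
# Sketch of objective-driven selection: score whole candidate
# answers against explicit constraints, then pick the best one,
# instead of committing to tokens one at a time.

def readability_cost(answer: str) -> float:
    """Hypothetical proxy: penalize long words (harder for a 12-year-old)."""
    words = answer.split()
    return sum(len(w) > 8 for w in words) / max(len(words), 1)

def toxicity_cost(answer: str) -> float:
    """Hypothetical proxy: penalize words from a small blocklist."""
    blocklist = {"stupid", "idiot"}
    return sum(w.lower().strip(".,") in blocklist for w in answer.split())

def objective(answer: str) -> float:
    # Weighted sum of constraint costs; lower is better.
    # The weights are made up for the example.
    return 1.0 * readability_cost(answer) + 5.0 * toxicity_cost(answer)

def choose_answer(candidates):
    # Plan by optimizing over whole answers, not word by word.
    return min(candidates, key=objective)

candidates = [
    "You are an idiot, figure it out yourself.",
    "Water freezes because its molecules slow down and lock together.",
    "The crystallization thermodynamics of dihydrogen monoxide are nontrivial.",
]
print(choose_answer(candidates))
```

The design point is that the constraints are evaluated on the complete answer before anything is emitted, which is what distinguishes this from the autoregressive loop above.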
We haven't made this work yet, or at least not in the settings we want. People have been working on this kind of thing in robotics for a long time; it's called model predictive control, or motion planning.

There's obviously been so much attention to Geoffrey Hinton and Yoshua Bengio having these concerns about what the technology could do. How do you explain the three of you reaching different conclusions?

Okay, so it's a bit difficult to explain. Geoff had a bit of an epiphany in April, when he realized that the systems we have now are a lot smarter than he expected them to be, and he thought: oh my God, we're close to having systems with human-level intelligence. I disagree with this completely; they're not as smart as he thinks they are. He's thinking in very long-term and abstract terms, so I can understand why he's saying what he's saying, but I just think he's wrong. We've disagreed on things before; we're good friends, but we've disagreed on these kinds of questions before, on technical questions among other things. I don't think he had thought about the problem of existential risk for very long, basically only since April, whereas I've been thinking about this from a philosophical, moral point of view for a long time. For Yoshua, it's different.
Yoshua, I think, is more concerned about short-term risks that would be due to misuse of the technology by terrorist groups or people with bad intentions, and also about the motivations of the industry developing AI, which he sees as not necessarily aligned with the common good, because he claims it's motivated by profit. So there may be a bit of a political slant there: perhaps he has less trust in democratic institutions to do the right thing than I have.

I've heard you say that that is the distinction, that you have more faith in democracy and in institutions than they do.

I think that's the case, yeah. I mean, I don't want to put words in their mouths, and I don't want to misrepresent them. Ultimately, I think we have the same goal. We know there are going to be a lot of benefits to AI technology; otherwise we wouldn't be working on it. The question is how you do it right. Do we have to have, as Yoshua advocates, some overarching multinational regulatory agency to make sure everything is safe? Should we ban open-sourcing models that are potentially dangerous, and run the risk of basically slowing down progress, slowing the dissemination of the technology into the economy and society? Those are trade-offs, and reasonable people can disagree on them. In my opinion, the criterion, the real reason I'm very much in favor of open platforms, is that AI systems are going to constitute a very basic infrastructure in the future, and there has to be some way of ensuring that those systems are diverse, culturally and in terms of knowledge.
A bit like Wikipedia: you can't have a Wikipedia in just one language; it has to cover all languages, all cultures, everything. Same story here.

It's obviously not just the two of them. There's a growing number of people who say, not that it's likely, but that there's a real chance, like a 10, 20, 30, 40 percent chance, of literally wiping out humanity, which is kind of terrifying. Why are so many, in your view, getting it wrong?

It's a tiny, tiny number of people.

Ask the 40 percent of researchers in one poll.

No, but that's a self-selected online poll; people select themselves to answer those polls. The vast majority of people in AI research, particularly in academia or in startups, but also in large labs like ours, don't believe in this at all. They don't believe there is a significant existential risk to humanity. All of us believe there are proper ways to deploy the technology and bad ways to deploy it, and that we need to work on the proper way to do it.
The analogy I draw is that the people who are really afraid of this today are a bit like people in 1920 or 1925 saying: oh, we have to ban airplanes, because they can be misused, someone can fly over a city and drop a bomb; and they can be dangerous because they can crash, so we're never going to have planes that cross the Atlantic, it's just too dangerous, a lot of people will die. And then they would ask to regulate the technology, to ban the invention of the turbojet, or to regulate turbojets, in 1920. Turbojets had not been invented yet in 1920, and in 2023, human-level AI has not been invented yet. So discussing how to make this technology safe, how to make superhuman intelligence safe, is the same as asking a 1920 engineer how you're going to make turbojets safe. They're not invented yet. And the way to make them safe is going to be like turbojets: years and decades of iterative refinement and careful engineering to make those things work properly, and they're not going to be deployed unless they're safe. So again, you have to trust the institutions of society to make that happen.

And just so I understand your view on the existential risk: I don't think you're saying it's zero, but you're saying it's quite small, like below one percent?
It's below the chances of an asteroid hitting the Earth, or global nuclear war, things of that type; it's on the same order. There are things you should worry about and things you can't do anything about. With natural phenomena, there's not much you can do about them, but with things like deploying AI, we have agency. We can decide not to deploy if we think there is a danger. So attributing a probability to this makes no sense, because we have agency.

Last thing on this topic: autonomous weapons. How will we make those safe, and not have at least the possibility of really bad outcomes with them?

So, autonomous weapons already exist, but not in the form they will take in the future.

We're talking about missiles that are self-guided, but that's a lot different from a soldier that's sent into battle.

Okay, the first example of an autonomous weapon is the land mine, and some countries, not the US, but some countries, banned its use.
There are international agreements banning them that neither the US nor Russia nor China has signed. And the reason for banning land mines is not that they're smart; it's that they're stupid. They're autonomous and stupid, so they kill anybody. With a guided missile, the more guided the missile is, the less collateral damage it does. So then there's a moral debate: is it better to have smarter weapons that only destroy what you need to destroy and don't kill hundreds of civilians next to the target? Can that technology be used to protect democracy, like in Ukraine? Ukraine makes massive use of drones, and they're starting to put AI into them. Is that good or is it bad? I think it's necessary.

Regardless of whether you think it's good or bad, autonomous weapons are necessary?

Well, for the protection of democracy, in that case, yes.

But obviously the concern is: what if it's Hitler who has them, rather than Roosevelt?

Well, then it's the history of the world: who has the better technology, the good guys or the bad guys? So the good guys should be doing everything they can. It's, again, a complicated moral issue. It's not my specialty; I don't work on weapons.

Okay, but you're a prominent voice saying, hey guys, don't be worried, let's go forward, and this is, I think, one of the main concerns people have.

Okay, so I'm not a pacifist, like some of my colleagues, and I think you have to be realistic about the fact that this technology is being deployed in defense, and for good things.
The Ukrainian conflict has actually made it quite obvious that progress in technology can help protect democracy.

We talk generally about all the good things AI can do. I'd love, to the extent you can, for you to talk really specifically about things that people, let's say middle-aged or younger, can hope in their lifetimes that AI will do to make their lives better.

So, in the short term, there are safety systems for transportation, and for medical diagnosis, detecting tumors and things like that, which are improved with AI. In the medium term, there's understanding more about how life works, which would allow us to do things like drug design more efficiently: all the work on protein folding, the design of proteins, the synthesis of new chemical compounds, and things like that. There's a lot of activity on this.
There hasn't been a huge revolutionary outcome from it yet, but there are a few techniques that have been developed with the help of AI to treat rare genetic diseases, for example, and things of that type. That's going to make a lot of progress over the next few years: making people's lives more enjoyable, longer perhaps, and so on. And beyond that, imagine all of us being like a leader in science, business, politics, whatever, with a staff of people assisting us, except they won't be people, they'll be virtual. Everybody is going to be a boss, essentially, and everybody is going to be smarter as a consequence. Not individually smarter, perhaps, although they will learn from those systems, but smarter in the sense that they will have a system that makes it easier for them to learn the right thing, to access the right knowledge, to make the proper decisions. We'll be in charge of AI systems. We'll control them. They'll be subservient to us; we set their goals, and they can be very smart in fulfilling those goals. As the leader of a research lab: a lot of the people at FAIR are smarter than me, and that's why we hire them. There's an interesting interaction between people, particularly in politics: the politician, the visible persona, makes decisions, essentially setting goals for other people to fulfill. That's the interaction we'll have with AI systems: we set goals for them, and they fulfill them.
I think you've said AGI is at least a decade away, maybe farther. Is that something you're working toward, or are you leaving it to the other guys? Is that your goal?

Oh, it's our goal, of course. It's always been our goal. But I guess in the last 10 years there were so many useful things we could do in the short term that part of the lab ended up being devoted to those useful things: content moderation, translation, computer vision, robotics, a lot of application areas of that type. What has changed in the last year or two is that now we have products that are AI-first, assistants built on top of Llama and things like that, services that Meta is deploying, or will be deploying, not just on mobile devices but also on smart glasses and AR/VR devices and so on, that are AI-first. So now there is a product pipeline where there is a need for a system that has essentially human-level AI. We don't call it AGI, because human intelligence is actually very specialized; it's not general. So we call it AMI, advanced machine intelligence.

But when you say AMI, you basically mean what people mean by AGI?

Basically, yes, it's the same as what people mean by AGI. Joelle and I like the name because we speak French, and "ami" means "friend": mon ami. So yes, we're totally focused on that. That's the mission of FAIR, really.

Whenever AGI happens, it's going to change the relationship between people and machines.
Do you worry at all about having to hand over control, as corporations or governments, to these smarter entities?

We don't hand over control; we hand over the execution. We keep control. We set the goals, as I said before, and they execute the goals. It's very much like being the leader of a team of people: you set the goal.

This one is wild, but I find it fascinating. There are some people who think that even if humanity got wiped out by these machines, it wouldn't be a bad outcome, because it would just be the natural progression of intelligence. Larry Page is apparently a famous proponent of this, according to Elon Musk. Would it be terrible if we got wiped out, or would there be some benefits, because it's a form of progress?

I don't think this is something we should think about right now, because predictions of this type, more than, let's say, 10 years ahead, are complete speculation. How our descendants will see progress, or their future, is not for us to decide.
We have to give them the tools to do whatever they want, but I don't think it's for us to decide. We don't have the legitimacy for that; we don't know what it's going to be.

That's so interesting, though. You don't think humans should necessarily worry about humanity continuing?

I don't think it's a worry people should have at the moment, no. I mean, how long has humanity existed? About 300,000 years; that's very short. If you project 300,000 years into the future, what will humans look like then, given the progress of technology? We can't figure it out, and probably the biggest changes will not come through AI; they'll probably come through genetic engineering or something like that, which is currently banned, probably for good reason, because we don't know its potential dangers.

Last thing, because I know our time is running out: do you see a middle path, one that acknowledges more of the concerns, that at least considers that maybe you're wrong and to an extent this other group is right, and that still maintains the things that are important to you around the open use of AI? Is there a compromise?
So there are certainly potential dangers in the medium term that are essentially due to potential misuse of the technology, and the more available you make the technology, the more people you make it accessible to, so you have a higher chance of people with bad intentions being able to use it. The question is what countermeasures you use. Some people are worried about things like a massive flood of misinformation generated by AI, for example. What measures can you take against that? One thing we're working on is watermarking, so that you can tell when a piece of data has been generated by a system.
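As a hedged illustration of one published family of text-watermarking schemes ("green-list" watermarks), not necessarily what Meta ships: a generator biases its sampling toward a pseudorandom "green" half of the vocabulary keyed to the previous token, and a detector checks whether a text contains far more green tokens than the roughly 50 percent that unwatermarked text would show. All names and thresholds below are made up.

```python
# Toy "green-list" text watermark detector. A watermarking
# generator would preferentially sample tokens that fall in the
# green half of the vocabulary, seeded by the previous token;
# the detector just measures the green fraction.
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Deterministic pseudorandom split of tokens into green/red,
    keyed on the previous token (hypothetical scheme)."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half the vocabulary is green

def green_fraction(tokens):
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)

def looks_watermarked(tokens, threshold=0.75):
    # Unwatermarked text should be green ~50% of the time;
    # a watermarking generator pushes this fraction much higher.
    return green_fraction(tokens) > threshold

text = "the model wrote this sentence token by token".split()
print(green_fraction(text), looks_watermarked(text))
```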
Another thing we're extremely familiar with at Meta is detecting fake accounts. And then there is divisive speech, which is sometimes generated and sometimes just typed by people with bad intentions, hate speech, dangerous misinformation. We already have systems in place to protect against this on social networks, and the thing people should understand is that those systems make massive use of AI. Hate-speech detection and takedown in all the languages of the world was not possible five years ago, because the technology was just not there, and now it's much, much better because of the progress in AI. Same for cybersecurity: you can use AI systems to try to attack computer systems, but that means you can also use them to protect. Every attack has a countermeasure, and both make use of AI. It's a cat-and-mouse game, as it's always been; nothing new there. So that's the short-to-medium-term dangers.

Then there is the long-term danger, the risk of existential risk, and I just do not believe in it at all, because we have agency. It's not a natural phenomenon that we can't stop; this is something that we do. We're not going to extinguish ourselves by accident.
The reason people think this, among other things, is a scenario that has been popularized by science fiction and goes by the name of "foom": the idea that one day someone is going to discover the secret of AGI, or superhuman intelligence, whatever you want to call it, turn on the system, and two minutes later the system takes over the entire world, destroys humanity, and makes such fast progress in technology and science that we're all dead. Some people are actually predicting this will happen in the next three months, which is insane. It's not happening, and that scenario is completely unrealistic. This is not the way things work. The progress toward human-level AI is going to be slow and incremental. We're going to start by having systems that may have the ability to potentially reach human-level AI, but at first they're going to be as smart as a rat or a cat, something like that.
Then we're going to crank them up, put in more guardrails to make sure they're safe, and work our way up to smarter and smarter systems that are more and more controllable. It's going to be the same process we used to make turbojets safe. It took decades, and now you can fly across the Pacific on a two-engine airplane. You couldn't do that 10 years ago; you had to have three or four engines, because the reliability of turbojets was not that high. It's going to be the same thing: a lot of engineering, a lot of really complicated engineering.

We're out of time for today, but if we're all still here in three months, maybe we'll do it again.

My pleasure. Thanks a lot.