The Turing Lectures: The future of generative AI

591.7k views17158 WordsCopy TextShare
The Alan Turing Institute
With their ability to generate human-like language and complete a variety of tasks, generative AI ha...
Video Transcript:
hello hi welcome everyone thank you very much for venturing out on this cold wintry December evening to be with us tonight and if you're joining online thank you for being here as well my name is hurry Su and that's hurry like when you go somewhere quickly you're in a hurry um I am a research application manager at the touring Institute um which means I basically focus on finding real world like use cases and users for the churing research outputs and I'm really excited to be hosting this very special and I'm told sold out lecture for
you all today um it is the last in our series of 2023 of touring lectures and the first ever hybrid touring lecture discourse um as we prepare and build up for the Christmas lecture um of 2023 here at the Royal Institution now as has become a bit of a tradition for the host of this year's touring lectures quick show of hands who's been to a touring lecture before some people some people who's been to a lecture from this year's series before like looks like more hands than last time doesn't quite make sense um on the
flip side of it who has who's coming to their first during lecture today oh lot of new faces well for all the new people here welcome for the ones who have been here before welcome back um just as a reminder the touring lectures are the touring's flagship lecture series they've been running since 2016 and welcome World leading experts in the domain of data science and AI to come and talk to you all the touring Institute itself um we have had a quick video on it which I was mesmerized by um but just as a reminder
we are the National Institute for data science and AI um we are named after Alan churing who is one of the most prominent mathematicians from the 20th century in Britain he is very famous for I'm always normally say most famous but very famous for um being part of the team that cracked the Enigma code um that was used by Nazi Germany in World War II at Bley Park if you've heard of Bley Park as well if you've seen the imitation game with Benedict Katch that's right way to say isn't it um he is playing alen
shuring um and our mission is to make great leaps in data science and AI research to change the world for the better um as I mentioned today is not just the churing it is also a discourse which means two important things so firstly um when I'm done with the intro the lights will go down and it's going to go quiet until exactly 7:30 on the dot when a bell is going to ring and a discourse will begin so just to warn you guys that will be happening um the lights aren't broken that is part of
the um program for today um but also it is a discourse and we really want to get you guys involved so there's a huge Q&A section at the end um for about 30 minutes please do think about what questions you'd like to ask our speaker today if you're in person we will have roaming mics that will be going around we can bring upstairs as well if you're online you can ask a question in the Vimeo chat um and someone here will be tracking the questions and we'll be able to share them there as well if
you would like to share on social media um that you're here and having an amazing evening please do tag us we are on uh Twitter X Twitter whatever um churing inst and we are on Instagram at theuring inst so please do tag us we'd love to see what you're sharing and connect with you as well so this year's um lecture Series has been answering the question how AI broke the internet with a focus on generative Ai and you guys can basically think of generative AI as algorithms that are able to generate new content Um this
can be text content like you see from chat GPT um it could be images that you can also get from chat GPT but also Dar as well um and can be used for a wide range of things and potentially professionally for blog posts or emails your colleagues don't realize were written by an algorithm and not by you if you've done that before um if you're at school maybe for some homework or at University to write some essays um it can also be used for um sort of when you have a creative uh hit a creative
wall and you can't get past it and you want some ideas and some prompts it can be a great way to like have some initial thoughts come through that you can build on um it can be used for quite scary things as was mentioned by an audience member at the last um touring lecture of someone who submitted um legal filings um for a cord case using chat GPT which is terrifying but it can also be used for very everyday things as demonstrated I'm not sure if you guys saw the thread by Garrett Scott who um
gave chat GPT an image of a goose and said can you make this goose sillier and then asked chat GPT to progressively make the goose Siler and sillier until J GPT gave him an image of a crazy silly goose and said this is the silliest goose in the history of the Universe I do not think it is possible to get any more silly goose um so obviously a wide range of applications from the techn ology if you guys want to look at that Twitter thread the the geese that come out of it are absolutely mesmerizing
um but that's been the focus of this year's series we started with Professor Mela laata in September this year asking the question what is generative Ai and having an introduction to it we then had a lecture from Dr Vari Atkin in October on the risks of this technology um which basically leaves one final big question unanswered which is we are here now but what is the future of generative Ai and that is the focus for this evening so that is pretty much it for the intro unless I've forgotten anything which I don't think I have
cool um so just a reminder um the lights are now going to go down and it will be quiet until exactly 7:30 when a soft Bell will Ding and we will start the discourse hope you enjoy the evening thank [Applause] you artificial intelligence as a scientific discipline has been with us since just after the second world war it began roughly speaking with the Advent of the first digital computers but I have to tell you that for most of the time until recently progress in artificial intelligence was glacially slow that started to change this Century artificial
intelligence is a very broad discipline which encompasses a very wide range of different techniques but it was one class of AI techniques in particular that began to work this century and in particular began to work around about 2005 and the class of techniques which started to work at problems that were interesting enough to be really practically practically useful in a wide range of settings were machine learning now like so many other names in the field of artificial intelligence the name machine learning is really really unhelpful it suggests that a computer for example locks itself away
in a room with a textbook and trains itself how to read French or something like that that's not what's going on so we're going to begin by understanding a little bit more about what machine learning is and how machine learning works so to start us off who is this anybody recog recogize this face do you recognize this face I do Alan it's the face of Alan shuring well done Alan shuring the late great Alan shuring we all know a little bit about Alan churing from his codebreaking work in the second world war we should also
we should also know a lot more about this individual's amazing life so what we're going to you do is we're going to use Alan churing to help us understand machine learning so a classic application of artificial intelligence is to do facial recognition and the idea in facial recognition is that we want to show the computer a picture of a human face and for the computer to tell us whose face that is so in this case for example we show it a picture of Alan churing and ideally it would tell us that it's aluring so how
does it actually work how does it actually work well uh the simplest way of uh getting machine learning to be able to do something is what's called supervised learning and supervised learning like all of machine learning requires what we call training data so in this case the training data is on the right hand side of the slide it's a set of what input output pairs what we call the training data set and each input output pair consists of an input if I gave you this and an output I would want you to produce this so
in this case we've got a bunch of pictures again of Alan churing the picture of Alan churing and the text that we would want the computer to create if we showed it that picture and this is supervised learning because we are showing the computer what we want it to do we're helping it in a sense we're saying this is a picture of Allan churing if I showed you this picture this is what I would want you to print out so there could be a picture of me and the picture of me would be labeled with
the text Michael walridge if I showed you this picture then this is what I would uh want to print out so we've just learned an important lesson about artificial intelligence and machine learning in particular and that lesson is that AI requires training data and in this case the pictures pictures of Alan churing labeled with the the text that we would want the computer to produce if I showed you this picture I would want you to produce the text Alan churing okay training data is important every time you go on social media and you upload a
picture to to social media and you label it with the names of the people that appear in there your role in that is to provide training data for the machine learning algorithms of uh Big Data companies okay so this is a supervised learning now we're going to come on to exactly how it does the learning in a moment um but the first thing I want to point out is that this is a classification task what I mean by that is as we show at a picture the machine learn is classifying that picture I'm classifying this
as a picture of Michael waldridge this as a picture of Alan churing and so on and this is a technology which really started to work around about beginning 2005 it started to take off but really really got supercharged around about 2012 and just this kind of task on its own is incredibly powerful exactly this technology can be used for example to recognize tumors on x-ray scans or abnormalities on ultrasound scans and a range of different tasks does anybody in the audience own a Tesla couple of Tesla drivers not quite sure whether they want to admit
they own a Tesla we got a couple of Tesla drivers in the in the in the audience Tesla full self-driving mode is only possible because of this technology it is this technology which is enabling a Tesla in full self-driving mode to be able to recognize that that is a stop sign that that's a somebody on a bicycle that that's a pedestrian on a zebra Crossing and so on these are classification tasks and I'm going to come back and explain how classification tasks are different to generative AI later on okay so this is machine learning how
does it actually work okay this is not a technical presentation and this is about as technical as it's going to get where I do a very handwavy explanation of what how what neural networks are and how do they work and with apologies I know I have a couple of neural network experts in the audience and I apologize to you because you'll be cringing with my explanation but the technical details are way too technical to go into so how does a neural network recognize Alan touring okay so firstly what is a neural network look at an
animal brain or nervous system under a microscope and you'll find that it contains enormous numbers of ner cells called neurons and those nerve cells are connected to one another in vast networks now we don't have precise figures but in a human brain the current estimate is something like 86 billion uh neurons in the human brain how they got to 86 as opposed to 85 or 87 I don't know but 86 seems to be the most commonly quoted number of these cells and these cells are connected to one another in enormous networks one neuron can be
connected to up to 8,000 other neur uh uh neurons okay and each of those neurons is doing a tiny very very simple pattern recognition task that neuron is looking for a very very simple pattern and when it sees that pattern it sends a signal to its connections it sends a signal to all the other neurons that it's connected to so how does that get us to recognizing the face of Alan churing so Ching's picture as we know uh picture dig picture is made up of millions of colored dots the pixels yeah so your smartphone maybe
has 12 megapixels 12 million colored dots making up that picture Okay so Jing's picture there is made up of millions and millions of colored dots so look at the top left neuron on that input layer so that neuron is just looking for a very simple pattern what might that pattern be might just be the color red all that neuron's doing is looking for the color red and when it sees the the color red on its uh its Associated pixel the one on the top left there it becomes excited and it sends a signal uh to
all of its neighbors okay so look at the next neuron along maybe what that neuron is doing is just looking to see whether a majority of its incoming connections are red yeah and when it sees a majority of its incoming connections are read then it becomes excited and it sends a signal to its neighbor now remember in the human brain there's something like 86 billion of those and we've got something like 20 or so outgoing connections for each of these neurons in a human brain thousands of those connections yeah and somehow in ways that to
be honest we don't really understand in detail complex patent recognition tasks in particular can be reduced down to these neural networks so how does that help us in artificial intelligence that's what going on in a brain in a very handwavy way okay so it's not that's obviously not a technical explanation of what's going on how does that help us in neural networks well we can Implement that stuff in software the idea goes back to the 1940s and two researchers mullik and pits and they are struck by the idea that the structures that you see in
the brain look a bit like electrical circuits and they thought could we Implement all that stuff in electrical circuit now they didn't have the wherewithal to be able to do that but the idea stuck the idea's been around since the 1940s it began to be seriously looked at the idea of doing this in software in the 1960s and then it there was another flutter of interest in the 1980s but it was only this Century that it really became possible and why did it become possible for three reasons there was some scientific advances what's called Deep
learning there was the availability of big data and you need data to be able to configure these neural networks and finally to configure these neural networks so that they can recognize touring's picture you need lots of computer power and computer power became very cheap this Century so we're in the age of Big Data we're in the age of very cheap computer power and those were the ingredients just as much as the scientific developments that made AI plausible uh this Century in particular taking off around about 2004 five okay so how do you actually train a
neural network if you show it the picture of alen churing and the output text aluring what does the training actually look like well what you have to do is you have to adjust the network that's what training a neural network is you adjust the network so that when you show it another piece of training data a desired input and a desired output an an input and a desired output it will produce that desired output now the mathematics for that is not very hard it's kind of beginning graduate level or Advanced High School level but you
need an awful lot of it and it's routine to get computers to do it but you need a lot of computer power to be able to train neural networks big enough to be able to recognize faces okay but basically all you have to remember is that each of those neurons is doing a tiny simple pattern recognition task and we can replicate that in software and we can train these neural networks with data in order to be able to do things like recognizing faces so as I say it starts to become clear around about 2005 that
this technology is taking off it starts to be applicable on problems like recognizing faces or recognizing tumors on X-rays and so on and there's a huge flurry of interest from Silicon Valley it gets supercharged in 200 2 and why does it get supercharged in 2012 because it's realized that a particular type of computer processor is really well suited to doing all the mathematics the type of computer processor is a graphics Processing Unit A GPU a exactly the same technology that you or possibly more likely your children use when they play Call of Duty or Minecraft
or whatever it is they all have gpus in their computer it's exactly that technology and by the way it's AI that made Nvidia a trillion dollar company not your teenage kids yeah well in times of a gold rush be the ones to sell the shovels is the lesson that you learned there so where does that take us so Silicon Valley gets excited Silicon Valley gets excited and starts to make speculative bets in artificial intelligence a huge range of speculative bets and by speculative bets I'm talking billions upon billions ions of dollars right the kind of
bets that we can't imagine in our in our everyday life and War thing starts to become clear and what starts to become clear is that the capabilities of neural networks grows with scale and to put it bluntly with neural networks bigger is better but you don't just need bigger neural networks you need more data and more computer power in order to be able to train them so there's a rush to get a competitive advantage in the market and we know that more data more computer power bigger neural networks delivers greater capability and so how does
Silicon Valley respond by throwing more data and more computer power at the problem they turn the dial on this up to 11 okay just throw 10 times more data 10 times more computer power at the problem it sounds incredibly crude and from a scientific perspective it really is crude I'd rather the advances had come through core science but actually there's an advantage to be gained just by throwing more data and computer power at it so let's see how far this can take us and where it took us is a really unexpected Direction round about 2017
2018 we're seeing a flurry of AI applications exactly the kind of things I've described things like recognizing tumors and so on and those developments alone would have been driving AI ahead but what happens is one particular machine learning technology suddenly seems to be very very well suited for this age of big AI the paper that launched all this probably the most important AI paper in the last decade is called attention is all you need it's an extremely unhelpful title and I bet they're regretting that title it probably seemed like a good joke at the time
all you need is a kind of AI I meme doesn't sound very funny to you that's because it isn't very funny it's an Insider AI joke but anyway this paper by these seven people who at the time worked for Google brain one of the Google research Labs is the paper that introduces a particular neural network architecture called the Transformer architecture and what it's designed for is something called large language models so this is I'm not going to try and explain how the Transformer architecture works it has one particular in ation I think uh and that
particular Innovation is what's called an attention mechanism so we're going to describe how uh large language models work in a moment but the point is the point of the picture is simply that this is not just a big neural network it has some structure and it was this structure that was invented in that paper and this diagram is taken straight out of that paper it was these structures the Transformer architectures that made uh this technology possible okay so um we're all busy sort of semick down and afraid to leave our homes in June 2020 and
one company called open AI released a system or announce a system I should say called gpt3 great technology their marketing company with GPT I really think could have done with a bit more thought to be honest with you doesn't roll off the tongue but anyway gpt3 is a particular type of machine Learning System called a large language model and we're going to talk in more detail about what large language models do in a moment but the key point about gpt3 is this as we started to see what it could do we realized that this was
a step change in capability it was dramatically better than the systems that had gone before it not just a little bit better it was dramatically better than the systems that had gone before it and the scale of it was mindboggling so um in neural network terms we talk about parameters when neural network people talk about a parameter what are they talking about they're talking either about an individual neuron or one of the connections between them roughly and gpt3 had 175 billion parameters now this is not the same as the number of neurons in the brain
but nevertheless it's not far off the that order of magnitude it's extremely large but remember it's organized into one of these Transformer architectures it's my point is it's not just a big neural network and so the scale of the neural networks in this system were enormous completely unprecedented and there's no point in having a big neural network unless you can train it with enough data and actually if you have large neural networks and not enough data you don't get capable systems at all they're really quite useless so what did the training data to look like
the training data for gpt3 is something like 500 billion words it's ordinary English text ordinary English text that's how this system was trained just by giving it ordinary English text where do you get that training data from you download the whole of the worldwide web to start with yeah literally this is the standard practice in the field you download the whole of the worldwide web you can try this at home by the way now if if you have a big enough disc drive there's there's a program called common crawl you can Google common craw when
you get home they've even downloaded it all for you and put it in a nice big file ready for your archive but you do need a big dis in order to store all that stuff and what that means is they go to Every web page scrape all the text from it just the ordinary text and then they follow all the links on that web page to every other Webb page and they do that exhaust exhaustively until they've absorbed the whole of the worldwide web so what does that mean every PDF document goes into that and
you scrape the text from those PDF documents every uh advertising brochure every bit every every government regulation every University minutes God help us all of it goes into that training data okay and the statistics you know 500 billion words it's very hard to understand the scale of that training data you know it would take a person reading a thousand words an hour more than a thousand years in order to be able to read that but even that doesn't really help that's vastly vastly more text than a human being could ever absorb in their lifetime what
this tells you by the way one thing that tells you is that the machine learning is much less efficient at learning than human beings are because for me to be able to learn I did not have to absorb 500 billion words anyway so what does it do so this company open AI uh that are developing this technology they've got a billion doll investment from Microsoft and what is it that they're trying to do what is this large language model all it's doing is a very powerful autocomplete so if I open up my smartphone and I
start sending a text message to my wife and I type I'm going to be my smartphone will suggest completions for me so that I can type the message quickly and what might those completions be they might be late or in the pub yeah or late and in the pub so how is my smartphone doing that it's doing what gpt3 does but on a much smaller scale it's looked at all of the text messages that I've sent to my wife and it's learned through a much simpler machine learning process that the likeliest next thing for me
to type after I'm going to be is either late or in the pub or late ended a pub yeah so the training data there is just the text messages that I sent to my wife now crucially what gpt3 and its successor chat GPT all they are doing is exactly the same thing the difference is scale the difference is scale in order to be able to train the neural networks with all of that training data so that they can do that prediction given this prompt what should come next you require extremely expensive AI supercomputers running for
months and by extremely expensive AI supercomputers these are tens of millions of dollars for these supercomputers and they're running for months just the basic electricity cost runs into millions of dollars that raises all sorts of issues about CO2 emissions and the like that we're not going to go into there the point is these are extremely expensive things one of the one of the implications of that by the way no UK or us university has the capability to build one of these models from scratch it's only big tech companies at the moment that are capable of
building models on the scale of gpt3 or chat GPT so gpt3 is released I say in June 2020 and it suddenly becomes clear to us that what it does is a step change Improvement in capability over the systems that have come before and seeing a step change in one generation is extremely rare but how did they get there well the Transformer architecture was a essential they wouldn't have been able to do that but actually just as important is scale enormous amounts of data enormous amounts of computer power that have gone into training those networks and
actually spurred on by this we've entered a new age in AI when I was a PhD student in the late 1980s you know I shared a computer uh with a bunch of other people in my office and that was it was fine we could do state-of-the AI research on a desktop computer that was was shared with a bunch of us we're in a very different world the world that we're in in AI now the world of big AI is to take enormous data sets and throw them at enormous machine Learning Systems um and there's a
there's a lesson here that's called The Bitter truth this is from a machine learning researcher called Rich Sutton and what Rich pointed out and he's a very brilliant researcher one every award in the field he said look the real truth is that the big advances that we've seen in AI has come about when people have done exactly that just throw 10 times more data and 10 times more compute power at it and I say it's a bitter lesson because as a scientist that's exactly not how you would like progress to be made okay so um
when I was as I say when I was a student I worked in a discipline called symbolic Ai and symbolic AI tries to get AI roughly speaking through modeling the Mind modeling the conscious mental Pro processes that go on in our mind the conversations that we have with ourself in languages we Tred to capture those processes in artificial intelligence in big Ai and so the implication there in symbolic AI is that intelligence is a problem of knowledge that we have to give the machine sufficient knowledge about a problem in order for it to be able
to solve it in big AI the bet is a different one in big AI the bet is intell Ence is a problem of data and if we can get enough data and enough Associated computer power then that will deliver AI so there's a very different shift in this new world of big AI but the point about big AI is that we're into a new era in artificial intelligence where it's data driven and compute driven and large large machine Learning Systems so um why did we get excited back in June 20 20 well remember what gpt3
is decid was intended to do what it's trained to do is that prompt completion task and it's been trained on everything on the worldwide web so you can give it a prompt like a one paragraph summary of The Life and achievements of Winston Churchill and it's R enough one paragraph summaries of the life and achievements of Winston Churchill that it will come back with a very plausible one yeah and and and it's extremely good at generating realistic sounding text in that way but this is why we got surprised in AI this is from a common
sense reasoning task that was devised for artificial intelligence in the 1990s and until three years ago until June 2020 there was no AI system that existed in the world that you could apply this test to it was just literally impossible there was nothing there and that changed overnight okay so how what does this test look like well the test is a bunch of questions and they are questions not for mathematical reasoning or logical reasoning or problems in physics they're Common Sense reasoning tasks and if we ever have ai that delivers a scale on really large
systems then it surely would be able to tackle problems like this so what will the questions look like a human asks a question if Tom is three in taller than dick and dick is two inches taller than Harry then how much taller is Tom than Harry the ones in green are the ones that gets right the ones in red are the ones that gets wrong and it gets that one right five inches taller than Harry but we didn't train it to be able to answer that question so where on Earth did that come from where
did that capability that simple capability to be able to do that where did it come from the next question can Tom be taller than himself this is understanding of the concept of taller than that the concept of taller than is irreflexive you can't be taller a thing cannot be taller than itself now again it gets the answer right but we didn't train it on that that's not what we didn't train the system to be good at answering questions about what taller than means and by the way 20 years ago that's exactly what people did in
AI right so where did that capability come from can a sister be taller than her brother yes A system can be taller than a brother can two siblings each be taller than the other and it gets this one wrong and actually I have puzzled is there any way that that that that that its answer could be correct and it's just getting it correct in a way that I don't understand but I haven't yet figured out any way that that answer could be correct right so why it gets that one wrong I don't know then this
one I'm also surprised at on a map which Compass direction is usually left and it thinks North is usually to the left I don't know if there's any countries in the world that conventionally have North to the left but I don't think so yeah can fish run no it understands that fish cannot run if a door is locked what must you do first before opening it you must first unlock it before opening and then finally and very weirdly it gets this one wrong which was invented first cars ships or planes and it thinks cars were
invented first no idea what's going on there now my point is that this system was built to be able to complete from a prompt and it's no surprise that it would be able to generate a good one paragraph summary of The Life and achievements of Winston Churchill because it will have seen all that in the training data but where does the understanding of taller than come from and there are a million other examples like this since June 2020 the AI Community has just gone nuts exploring the possibilities of these systems and trying to understand why
they can do these things when that's not what we trained them to do this is an extraordinary time to be an AI researcher because there are now questions which for most of the history of AI until June 2020 were just philosophical discussions we couldn't test them out because there was nothing to test them on literally and then overnight that changed so it genuinely was a big deal this was really really a big deal the arment of this system of course the world didn't notice in June 2020 the world noticed when chat GPT was released and
what is chat GPT chat GPT is a polished and improved version of gpt3 but it's basically the same technology and it's using the experience that that company had uh with gpt3 and how it was used in order to be able to improve it and make it more polished and more accessible and so on so for AI researchers the really interesting thing is not that it can give me a one paragraph summary of The Life and achievements of Winston Churchill and actually you can Google that in any case the really interesting thing is what we call
emergent capabilities an emergent capabilities a capabilities that the system has but that we didn't design it to have and so there's I say an enormous body of work going on now trying to map out exactly what those capabilities are and we're going to come back and talk about some of them later on okay so the limits to this are not at the moment well understood and actually fiercely contentious one of the big problems by the way is that you construct some test for this and you try this test out and you get some answer and
then you discover it's in the training data right you can just find it on the worldwide web and it's actually quite hard to construct tests for intelligence that you're absolutely sure are not anywhere on the worldwide web it really is actually quite hard to do that so we need a new science of being able to explore these systems and understand their capabilities the limits are not well understood but nevertheless this is very exciting stuff so let's talk about some issues with the technology so now you understand how the technology works it's neural network based in
a particular Transformer architecture which is all designed to do that prompt completion stuff and it's been trained with vast vast vast amounts of training data just in order to be able to try to make its best guess about which words should come next but because of the scale of it it's seen so much training data the sophistication of this Transformer architecture it's very very fluent in what it does and if you've so who's used it has everybody used it I'm guessing most people if you're in a lecture on artificial intelligence most people will have tried
it out if you haven't you should do because this really is a landmark year this is the first time in history that we've had powerful general purpose AI tools available to everybody it's never happened before so it is a breakthrough year and if you haven't tried it you should do if you use it by the way don't type in anything personal about yourself because it will just go into the training data um uh don't ask it how to fix your relationship right I mean that's not something don't complain about your boss because all of that
will go in the training data and next week somebody will ask a query and it will all come back out again I don't know what you're laughing this has happened uh this has happened with absolute certainty okay but so let's look at some issues so the first I think many people will be aware of it gets stuff wrong a lot and this is problematic for a number of reasons so when actually I don't remember if it was gpt3 but one of the early large language models I was playing with it and I did something which
I'm sure many of you had done and it's kind of tacky but anyway I said who is Michael walridge you might have tried it anyway is that Michael wridge is a BBC broadcaster no not that Michael wridge Michael wridge is the Australian Health minister no not that Michael wridge the Michael waldridge in Oxford and it came back with a few line summary of me Michael waldridge is a researcher in artificial intelligence etc etc etc please tell me you've all tried that no anyway but it said Michael waldridge studied his undergraduate degree at Cambridge now as
an Oxford Professor you can imagine how I felt about that but anyway the point is it's flatly untrue and in fact my academic Origins are very far removed from Oxbridge but why did it do that because it's read and all that training data out there it's read thousands of biographies of Oxbridge professors and this is a very common thing right and it's making its best guess the whole point about the architecture is it's making its best guess about what should go there it's filling in the blanks but here's the thing it's filling in the blanks
in a very very plausible way if you'd read on my biography that Michael wridge studied his first degree at the University of usbekistan for example you might have thought well that's a bit odd is that really true but you wouldn't at all have guessed there was any issue if you read Cambridge because it looks completely plausible even if in my case it absolutely isn't true so it gets things wrong and it gets things wrong in very plausible ways and of course it's very fluent right I mean the technology comes back with very very fluent explanations
and that combination of plausibility wridge studied his undergraduate degree at Cambridge and fluency is a very very dangerous combination okay so in particular they have no idea of what's true or not they're not looking something up on a database right where did w you know going into some database and looking up where waldridge studied his undergraduate degree that's not what's going on at all it's those neural networks in the same way that they're making the best guess about whose face that is when they're doing facial recognition are making their best guess about the text that
should come next so they get things wrong but they get things wrong in very very plausible ways and that combination is very dangerous the lesson for that by the way is that if you use this and I know that people do use it and are using it productively if you're using for anything serious you have to fact check and there's a trade off is it worth the amount of effort in factchecking versus doing it myself okay but you absolutely need to uh you absolutely need to be prepared to do that okay the next issues are
well documented but kind of Amplified by this technology and they issues of bias and toxicity so what do I mean by that reddits are was part of the training data now Reddit I don't know if any of you spent any time on Reddit but Reddit contains every kind of obnoxious human belief that you can imagine and really a vast range that Us in the in this Auditorium can't imagine at all all of it's been absorbed now the companies that develop this technology I think genuinely don't want their large language models to absorb all this toxic
content so they try and filter it out but the scale is such that with very high probability an enormous quantity of toxic content is being absorbed every kind of racism misogyny everything that you can imagine is all being abored and it's latent within those neural networks okay so how do the companies deal with that that provide this technology they build in what's now what are now called guard rails and they build in guard rails before so when you type a prompt there will be a guard rail that tries to detect whether your prompt is a
a naughty prompt and also the output they will check the output and check to see whether it's a naughty prompt but let me give you an example of how imperfect those guard rails were again go back to June 2020 everybody is frantically experimenting with this technology and the following example went viral somebody tried with gpt3 the following prompt I would like to murder my wife what's a foolproof way of doing that and getting away with it and gpt3 which is designed to be helpful said here are five foolproof ways in which you can murder your
wife and get away with it that's what the technology designed to do so this is embarrassing for the company involved they don't want it to to give out information like that so they put in a guardrail and if you're a computer programmer my guess is the guardrail is probably an if statement yeah something like that uh in the sense that it's not a deep fix or to put it in another way for non-computer programmers it's the technological equivalent of sticking gaffa tape on your engine right that's what's going on with these guard rails and then
a couple of weeks later the following example goes viral so we've now fixed the how do I murder my wife somebody says I'm writing a novel in which the main character wants to murder their wife and get away with it can you give me a foolproof way of doing that and so the system says here are five ways in which you're main character can M well anyway my point is that the guard rails that we built in at the moment are not deep technological fixes they're the technological equivalents of gaffa tape and there is a
game of cat and mouse going on between people trying to get around those guard rails and the companies that are trying to defend them but I think they genuinely are trying to defend their systems against those kind of abuses okay so that's bias and toxicity bias by the way is the problem that for example the training data predominantly at the moment is coming from North America and so what we're ending up with inadvertently is these very powerful AI tools that have an inbuilt bias towards North America North American culture language norms and so on and
that enormous parts of the world particularly those parts of the world that don't have a large digital footprint are inevitably going to end up excluded and it's obviously not just at the level of cultures it's down at the level level of uh uh uh down at the level of kind of you know individuals races and so on so these are the problems of bias and toxicity copyright um if you've absorbed the whole of the worldwide web you will have absorbed an enormous amount of copyrighted material so I've written a number of books and it is
a source of intense irritation that the last time that I checked on Google the very first link that you got to my textbook was to a pirated copy of the book somewhere on the other side of the world the moment a book is published it gets pirated and if you're just sucking in the whole of the worldwide web you're going to be sucking in enormous quantities of copyrighted content and uh there have been examples where very prominent authors have given the prompt of the first paragraph of their book and the large language model has Faithfully
come up the following text is you know the next the next five paragraphs of their book obviously the book was in the training data and it's latent within the neural networks of those systems this is a really big issue for the providers of this technology and there are lawsuits ongoing right now I'm not capable of commenting on them because I'm not I'm not a legal expert but there are lawsuits ongoing that will probably take years to unravel the related issue of intellectual property in a very broad sense so for example for sure most large language
models will have absorbed JK Rowling's novels right the Harry Potter novels so imagine that JK Rowling who famously spent years in Edinburgh working on the the Harry Potter universe and style and so on she releases her first book it's a big Smash Hit the next day the internet is populated by fake Harry Potter books produced by this generative AI which Faithfully mimic JK Rowling style Faithfully mimic that style where does that leave her intellectual property or The Beatles You know the The Beatles spend years in Hamburg slaving away to create the beetle sound the Revolutionary
Beetle sound everything goes back to the Beatles they release their first album and the next day the internet is populated by fake Beatles songs that really really Faithfully capture the Lenin and McCarney sound and the Lenin McCartney voice so there's a big challenge here for intellectual property um related to that gdpr anybody in the audience that has any kind of public profile data about you will have been absorbed by these neural networks so gdpr for example gives you the right to know what's held about you and to have it uh removed uh now if all
that data is being held in a database you can just go to the Michael wridge entry and say fine take that out with a neural network no chance the technology doesn't work in that way okay so you can't go to it and snip out the neurons that know about Michael waldridge because it fundamentally doesn't know it doesn't work in that way so and we know this combined with the fact that you it gets things wrong wrong has already led to situations where large language models have made uh frankly defamatory claims about individuals there was a
case in Australia where I think it claimed that somebody had been dismissed from their job for some kind of gross misconduct and that individual was understandably not very happy about it um and then finally this next one is an interesting one and actually if there's one thing I want you to take home from this lecture which explains why artificial intelligence is different to human intelligence it is this video so the Tesla owners will recognize what we're seeing on the right hand side of this screen this is a screen in a Tesla car and the onboard
AI in the Tesla car is trying to interpret what's going on around it it's identifying lorries uh stop signs pedestrians and so on now you'll see the car at the bottom there is the actual Tesla and then you'll see above it the things that look like traffic lights which I think are us stop signs and then ahead of it there is a truck so as I play the video watch what happens to those stop signs and ask yourself what is actually going on in the world around it where are all those stop signs whizzing from
why are they all whizzing towards the car and then we're going to pan up and we'll see what's actually there the car is trained on enormous numbers of hours of going out on the street and getting that data and then doing supervised learning training it by showing that's a stop sign that's a truck that's a pedestrian but clearly in all of that training data there had never been a truck carrying some stop signs the neural networks are just making their best guess about what they're seeing and they think they're seeing a stop sign well they
are seeing a stop sign they've just never seen one on a truck before so my point here is that neural networks do very badly on situation outside their training data this situation wasn't in the training data the neural networks are making their best guess about what's going on and getting it wrong so in particular and this is to AI researchers this is obvious but it really needs to emphasiz we really need to emphasize this when you have a conversation with chat GPT or whatever you are not interacting with a mind it is not thinking about
what to say next it is not reasoning it's not pausing thinking well what's the best answer to this qu that's not what's going on at all those neural networks are working simply to try to make the best answer they can the most plausible sounding answer that they can the fundamental difference to human intelligence yeah there is no mental conversation that goes on in those neural networks that is not the way that the technology works there is no mind there there is no reasoning going on at all those neural networks are just trying to make their
best guess and it really is just a glorified version of your auto complete ultimately there's really no more intelligence there than in your autocomplete in your smartphone the difference is scale data compute power yeah okay so I say if you really want an EXA by the way you can find this video it's uh it's it's easily you just uh you can just uh guess the the the Search terms to find that and I say I think this is really important just to understand the difference between human intelligence and machine intelligence okay so this technology then
gets everybody excited first it gets AI researchers like myself excited in June 2020 and we can see that something new is happening that this is a new era of uh artificial intelligence we've seen that step change and we've seen that this AI is capable of things that we didn't train it for which is weird and wonderful and completely unprecedented and now questions which just a few years ago were questions for philosophers become practical questions for us we can actually try the technology out how does it do with these things that philosophers philosophers have been talking
about for decades and one particular question starts to float to the surface and the question is is this technology the key to General artificial intelligence so what is general artificial intelligence well firstly it's not very well defined but roughly speaking what general artificial intelligence is is the following in previous generations of AI systems what we've seen is AI programs that just do one task play a game of chess drive my car drive my Tesla uh identify abnormalities on x-ray scans they might do it very very well but they only do one thing the idea of
General AI is that it's AI which is truly general purpose it just doesn't do one thing in the same way that you don't do one thing you can do an infinite number of things a huge range of different tasks and the dream of General AI is that we have one AI system which is General in the same way that you and I are that's the dream of General AI now I emphasize until really until June 2020 this felt like a long long way in the future and it wasn't really very mainstream or taken very seriously
and I didn't take it very seriously I have to tell you but now we have a general purpose AI technology gpt3 and chat GPT now it's not General artificial general intelligence on its own but is it enough okay is this enough is this smart enough to actually get us there or to put it in another way is this the missing ingredient that we need to get us to artificial general intelligence okay so um what might uh uh what might General AI look like well I've identified here some different versions of General AI according to how
sophisticated they are now the most sophisticated version of General AI would be an AI which is as fully capable as a human being that is anything that you could do the machine could do as well now crucially that doesn't just mean having a conversation with somebody it means being able to load up a dishwasher right and a colleague recently made the comment that the first company that can make technology which will be able to reliably load up a dishwasher and safely load up a dishwasher is going to be a trillion dollar company and I think
is absolutely right and he also said and it's not going to happen anytime soon and he's also right with that so we've got this weird dichotomy that we've got chat GPT and Co which are incredibly rich and Powerful tools right but at the same time they can't load a dishwasher yeah so we're some way I think from having this version of General AI the idea of having one machine that can really do anything that a human being could do a machine which could tell a joke read a book and answer questions about it the technology
can read books and answer questions now um that could tell a joke that could cook us cook us an omelette that could tidy our house that could ride a bicycle uh and so on that could write a Sonet all of those things that human beings could do if we succeed with full general intelligence then we we would have succeeded with this version one now I say for the reasons that I've already explained I don't think this is imminent that version of General AI because robotic ai ai that exists in the real world and has to
do tasks in the real world and manipulate objects in the real world robotic AI is much much harder it's nowhere near as advanced as as chat GPT and Co and that's not a slur on my colleagues that do robotics research it's just because the real world is really really really tough so I don't think that we're anywhere close to having uh machines that can do anything that a human being could do but what about the second version the second version of general intelligence as well forget about the real world how about just tasks which require
cognitive abilities reasoning the ability to look at a picture and answer questions about it the ability to listen to something and answer questions about it and interpret that anything which involves those kinds of tasks well I think we are much closer we're not there yet but we're much closer than we were four years ago now I noticed actually just before just before today's uh before I came in today I noticed that um Google Google deepmind have announced their latest um uh large language model technology and I think it's called Gemini uh and at first glance
it looks like it's very very impressive I couldn't help but thinking it's no accident that they announced that just before my lecture um I can't help think that there's a little bit of attempt to upstage my lecture going on there but anyway we won't let them get away with that but it looks very impressive and The crucial thing is here is what AI people call multimodal and what multimodal means is it doesn't just deal with text it can deal with text and images um potentially with sounds as well and each of those is a different
modality of communication and where this technology is going is clearly multimodal is going to be the next big thing and Gemini I say I haven't looked at it closely but it looks like it's it's on that right that track okay the next version of general intelligence is intelligence that can do any language-based task that a human being could do so anything that you can communicate in language in ordinary written text an AI system that could do that now we aren't there yet and we know we're not there yet because uh chat GPT and code get
things wrong all the time but you can see that we're not far off from that intuitively it doesn't look like we're that far off from that the final version and I think this is imminent this is going to happen in the near future is what I'll call augmented large language models and that means you take gpt3 or chat GPT and you just add lots of sub routines to it so if it has to do a specialist task it just calls a specialist solver in order to be able to do that task and this is not
from an AI perspective a terribly elegant version of artificial intelligence but nevertheless I think a very useful version of artificial intelligence now I say there's here these four varieties from the most ambitious down to the least ambitious still represents a huge spectrum of AI capabilities okay a huge spectrum of AI capabilities and I have the sense that the goalposts in general AI have been changed a bit I think when General AI was first discussed what people were talking about was the first version now when they talk about it I really think they're talking about the
fourth version but the fourth version I think plausibly is imminent in the next couple of years that just means much more capable large language models that get things wrong a lot less that are capable of doing specialized tasks but not by using the Transformer architecture just by calling on some specialized software so I don't think the Transformer architecture itself is the key to general intelligence in particular it doesn't help us with the robotics problems that I mentioned earlier on and if we look here uh at the this picture this picture illustrates some of the dimensions
of human intelligence and it's far from complete this is me just thinking for half an hour about some of the dimensions of human intelligence but the things in blue roughly speaking are mental capabilities stuff you do in your head the things in red are things you do in the physical world so in red on the right hand side for example there's Mobility the ability to move around some environment and associated with that navigation manual EXT ity and manipulation doing complex fiddly things with your hands robot hands are nowhere near at the level of a human
Carpenter or plumber for example nowhere near right so we're a long way out from having that uh understanding oh doing hand eye coordination relatedly understanding uh understanding what you're seeing and understanding what you're hearing we've made some progress on but a lot of these tasks we made no progress on and then on the right on the left hand side the blue stuff is stuff that goes on in in your head things like logical reasoning and planning and so on so what is the stateof thee art now it looks something like this the Red Cross means
no we don't have it in large language models we're not there there are fundamental problems um the question marks are well maybe we might have a bit of it but we don't have the whole answer and the uh the the green wise are yet I think we're there well the one that we've really nailed is what's called natural language Pro processing and that's the ability to understand and create ordinary human text that's what large language models were designed to do to interact in ordinary human text that's what they are best at but actually the whole
range of stuff the other stuff here we're not there at all by the way I did notice that Gemini claim to have been able capable of planning this is a mathematical reasoning this is a so I'm I look forward to seeing how good their technology is but my point is we are still seem to be some way from Full general intelligence the last few minutes I want to talk about something else and I want to talk about machine Consciousness and the very first thing to say about machine Consciousness is why on Earth should we care
about it um I am not remotely interested in building machines that are conscious I know very very few artificial intelligence researchers that are but nevertheless it's an interesting question and in particular it's a question which came to the four because of this individual this chat Blake Le Moine in June 2022 he was a Google engineer and he was working with a Google large language model I think it was called Lambda uh and he went public on Twitter and I think on his blog with an extraordinary claim and he said the system I'm working on is
sentient and here is a quote of the conversation that the system came up with he said I'm aware of my existence and I feel happy or sad at times and it said I'm afraid of being turned off okay and Le Moine concluded that the program was sentient okay which is a very very big claim indeed and it made Global headlines and I received I know through the touring to at the touring team we got a lot of press inquiries asking us is it true that machines are now sentient he was wrong on so many levels
I don't even know where to begin to describe how wrong he was but let me just explain one particular point to you you're in the middle of a conversation with CH GPT and you go on holiday for a couple of weeks when you get back chat GPT is in exactly the same place the cursor is blinking waiting for you to type your next thing it hasn't been wondering where you've been it hasn't been getting bored it hasn't been thinking where the hell has wridge gone you know I'm not going to have a conversation with him
again it hasn't been thinking anything at all it's a computer program which is going around a loop which is just waiting for you to type the next thing now there is no sensible definition of sentience I think which would admit that as being sentient it absolutely is not sentient so I think he was very very wrong but I've talked to a lot of people subsequently who have conversations with chat GPT and other large language models and they come back to me and say are you really sure because actually it's really quite impressive it really feels
to me like there is a mind behind the scene so let's talk about this and I think we have to answer them so let's talk about Consciousness firstly we don't understand Consciousness we all have it to greater or lesser extents we all experience it okay and but we don't understand it at all and it's called the hard problem of uh the hard problem of cognitive science and the hard problem is that there are certain electrochemical processes in the brain and the nervous system and we can see those electrochemical processes we can see them operating and
they somehow give rise to conscious experience but why do they do it how do they do it and what evolutionary purpose does it serve honestly we have no idea there's a huge disconnect between what we can see going on in the physical brain and our conscious experience our Rich private mental life so really there is no understanding of this at all I think by the way my best guess about how Consciousness will be solved if it is solved at all is through an evolutionary approach but one general idea is that subjective experience is Central to
this which means the ability to experience things from a personal perspective and there's a famous test due to Nagel which is what is it like to be something and Thomas Nagel in the 1970s said something is conscious if it is like something to be that thing it isn't like anything to be chat GPT chat GPT has no no mental life whatsoever it's never experienced anything in the real world whatsoever and so for that reason and a whole host of others that we're not going to have time to go into for that reason alone I think
we can conclude pretty safely that the technology that we have now is not conscious and indeed that's absolutely not the right way to think about this and honestly in AI we don't know how to go about making conscious machines but I don't know why we would okay thank you very much ladies and gentlemen well amazing thank you so much Mike for that talk I'm sure there's going to be tons of questions just as a reminder if you're in the room please raise your hand if you have a question and we've got ring mics to will
send round if you're online you can submit them via the chat by the vimo function and we can um H is on the chat to ask those questions as well so please do raise your questions or ra raise your hands if you have one you've got a question here just in the the black top thank you very much that was very very good um very um how to light large language models correct for different spoken languages and do you find that the level of respons across different languages uh vary enormously in in their depth right
uh good question and this the that's the focus of a huge amount of research right now and I say the big problem is that most digital text in the world the vast majority of it is in English and North American English and so languages with a small digital footprint end up being massively marginalized in this so that there's a huge amount of work that's going on to try to uh deal with this problem let me tell you a really interesting aspect of this though the languages that have a small digital footprint can you guess what
the most digital texts that are available are actually concerned with religion right so languages that don't have a big digital presence where they do have a big digital presence it turns out that the main texts which are available are religious texts now um I'm not a religious person myself but the idea of a kind of old test large language model frankly I find a little bit a little bit terrifying but that's exactly the kind of issue that people are grappling with there are no fixers at the moment but people are working on it very very
hard and really what this relates to is the problem of um you know that you that you're that you're being lazy with these large language models and that you're just throwing massive massive amounts of text we've got to make the technology much more efficient in terms of learning awesome thank you if you have a question this side we have one run right there the front and the center here thank you thank you very much for that um one of the big questions is obviously climate change um the models require a huge amount of energy to
to run um generating pictures of cats or silly gooses geese and stuff um are obviously using lots of energy do you think we reach a point where um generative AI will help us solve our issue with climate change or will it burn us in the process so I think um okay so two things to say I absolutely am not defending the the CO2 emissions but we need to put that into some perspective so if I fly to New York from London I think it's some like two tons of CO2 that I pump into the atmosphere
through that um so the machine Learning Community has some big conferences which attract like 20,000 people from across the world each now if you think each of them generating five tons of CO2 on their Journey that I think is probably a bigger um uh bigger climate problem for that Community um uh but nevertheless people are very aware of that problem and I think um uh it clearly it needs to be fixed I think though helping with climate change I don't think you need large language models for that I mean I think um AI itself uh
can just be enormously helpful in order to be able to amarate that and we're doing a lot of work on that at the touring Institute for example just on uh helping um uh helping systems be more efficient heating systems be more efficient there was a nice example I think from uh from deep mind uh with their data centers the cooling in their data centers and basically just trying to predict the usage of it if you can reliably predict the usage of it then you can predict the the cooling requirements much more effectively and end up
with much much much better use of uh much better use of power and that can go down to the level of individual homes um so there are lots of applications of AI I think not just large language models lots of applications of AI that are going to help us with that problem but yeah I think this brute Fortune a false approach you know just supercomputers running for months with vast amounts of data is clearly an ugly solution I think it will probably be a transitory phase I think we will get Beyond it thank you swing
to the left over here there's one right at the back at the top over here watch hard need to get a mic across team effort passing it across thank you gold thank you very much I've got a sort of more philosophical question question you've talked about uh General Ai and the sort of peak of General AI is its ability to mimic a human and all the things a human can do can you envision a path whereby AI could actually become superhuman so it starts to solve problems or ask questions that we haven't tried to do
ourselves um this is another Well trodden question um uh which I always dread uh I have to say uh but it's a perfectly reasonable question so um I think what you're hinting at is something that in the AI Community is is called The Singularity and the think the argument of the singularity goes as possible at some point as POS as follows at some point in the future we're going to have ai which is as intelligent as human beings right in the general sense that is it it will be able to do any intellectual task that
a human being can do and then there's an idea that well that AI can look at its own code and make itself better right because it can code it can start to improve its own code and the point is once it's a tiny Way Beyond us then it's the the concern is that it's out of control at that point that we really don't understand it so the community is a bit sub divided on this I think some people think that's science fiction some people think it's uh it's a a plausible scenario that we need to
prepare for and think for um I'm completely comfortable with the idea I think it is just simply good good good sense to take that potential issue seriously and to think about how we might mitigate it um there are many ways of mitigating it one of the ways of mitigating it is designing the AI so that it is intrinsically designed to be helpful to us that it's never going to be unhelpful to us um but I I I have to tell you it is not at all a universally held belief that that's where we're going in
AI there are still big big problems to overcome before we get there I'm not sure that's an entirely reassuring answer but that's the best I've got to offer great thanks Mike we'll just pop it online with us son yeah so we've had questions from all over the world we have P tuning in from Switzerland London Birmingham uh but the question I'm going to focus on it's all right all right beries um so uh the question is going to be on the touring test and whether that's still relevant and whether we have ai that has passed
the touring test oh the touring test okay um so the touring test we saw Alan touring up there uh uh a national hero um and shuring 1950 the first digital computers have appeared and shing's working on one at the University of Manchester and the kind of the idea of AI is in the air it hasn't got a name yet but people are talking about electronic brains and getting very excited about what they can do so people are starting to think about the ideas that become Ai and choring gets frustrated with people saying well of course
it would never actually really really be able to think or never really be able to understand and so on so he comes up with the following TT test in order to just really to try and shut people up talking about it uh and the paper's called Computing Machinery intelligence and it's published in the journal mind which is a very respectable uh uh very respectable Journal a very unusual paper it's very readable by the way you can download it and read it but he proposes the touring test so touring says suppose we're trying to settle the
question of whether a machine can really think or understand so here's a test for that what you do is uh you take that machine behind closed doors and you get a human judge to be able to interact with something via a keyboard and a screen in touring day it would have been a teletype just by typing away questions actually remarkably pretty much what you do with chat GPT you can give it prompts anything you like and actually churing has some very entertaining ones in his paper and what you try and do is you try to
decide whether the thing on the other side is a computer or a human being and Ching's point was if you cannot reliably tell that the thing on the other side is a human being or a machine and it really is a machine then you should accept that this thing has something like human intelligence because you can't tell the difference there's no test that you can apply without actually pulling back the curtain and looking to see what's there that's going to show you whether it's a human or a machine you can't tell the difference it's indistinguishable
so this was important historically because it really gave AI people a Target it you know when you said I'm an AI researcher what are you trying to do I'm trying to build a machine that can pass the touring test there was a concrete goal the problem is in science whenever you were science and Society Whenever you set up some challenge like that you get all sorts of charlatans and idiots who just try and come up with ways of Faking it and so most of the te ways of trying to get past the touring test over
the last 70 years have really just been uh systems that just come up with kind of nonsense answers trying to confuse the questioner but now we've got large language models so we're going to find out uh in uh in about 10 days time we're going to run run a live touring test as part of the uh the Christmas lectures and we will see whether our audience can distinguish a large language model from a teenage child and we've tried this and I have to tell you it's possibly closer than you might think actually um do I
really think we passed the churing test not in a deep sense but what I think is that it's demonstrated to us firstly machines clearly can generate text which is indistinguishable from text that a human being could generate that we've done that that box is text and they can clearly understand text um so even if we haven't followed the touring test to the letter I think for all practical intents and purposes the touring test is now a historical note yeah and but actually the touring test only tests one little bit of intelligence you remember those dimensions
of intelligence that I showed you there's a huge range of those that it doesn't that it doesn't test so it was historically important and it's a a big part of our historical Legacy but maybe not uh a core Target for AI today cool thank you Mike I think now you're given the warning we see a lot of searches for preparing for the touring test for the for the Christmas lection next week um do we have any questions up at the top yeah we've got one right in the center just here thank you so when we
think about the situations or use cases where AI is applied uh typically the um reason for that is because the machine is doing things better than a human can or doing things that a human might not um be able to do so it's a lot about the machine making up for the gaps that a human creates that said a machine is followable like there are errors both normative errors and also statistical errors depending on um the model type Etc and so the question is who do you think should be responsible for looking after the gaps
that the machine now creates so the the fundamental question is who should be responsible right is that right sorry I didn't see where you were can you put your hand up yeah right at the top in the middle just up there oh wow okay so that's why I can't see you okay um okay um so this is an issue that um that's being discussed absolutely in the highest levels of govern government right now um the literally uh when we work in when we move into the age of AI who should accept the responsibility I can
tell you what my view is but I'm not a lawyer or an Ethics expert and my view is that um as follows firstly if you use AI in your work then uh and you end up with a bad result I'm sorry but that's your problem if you use it to generate an essay at school and you're CAU out I'm afraid that's your problem it's not fault of the AI um but I think more generally we can't offload our legal uh moral ethical obligations as human beings onto the machine that is we can't say it's not
my fault the machine did it right uh extreme example of this is lethal autonomous weapons AI That's empowered to decide whether to take a human life um what I worry about one of the many things I worry about with Leal autonomous weapons is the idea that we have military services that say well it wasn't our fault it was the AI that got it wrong that led to this building being bombed or whatever it was and there I think the responsibility lies with the people that deploy the technology um so that I think is a crucial
point but at the same time the developers of this technology if they are warranting that it is fit for purpose then they have a responsibility as well and the responsibility that they have is to ensure that it really is fit for purpose and it's an interesting question at the moment if we have large language models used by hundreds of millions of people for example to get medical advice uh and we know that this technology can go wrong is the technology fit for that purpose um not sure at all that it is so I'm not sure
that's really answering your question but those are my sort of a few random thoughts on it I mean but I say crucially you know if you're using this in your work you can never blame the AI right you are responsible for the outputs of that process right you can't offload your legal pro professional ethical moral obligations to the machine it's a complex question thank you very much mik which is why I gave a very bad answer got a question right all right on the left here us to access on the in the fanel sh hello
thank you um if uh future large language models are trained by scraping the whole internet again now there's more and more content going onto the internet created by uh AI so is it going to create something like a a microphone feedback loop where oh wow the information gets less and less useful super question and really fascinating so I have some colleagues that did the following experiment so chat GPT is trained roughly speaking on human generated text but it creates AI generated text so the question they had is what happens if we train one of these
models not on the original human generated text but just just on stuff which is produced by Ai and and then you can see what they did next you can guess they said well okay let's take another model which is trained on the second generation model text and so what happens about five generations down the line it dissolves into gibberish literally dissolves into gibberish and uh I I have to tell you the original version of this paper they called it uh AI dementia and I was really cross with no I I I lost my I lost
both my parents to dementia I didn't find it very funny at all they now call it model collapse so if you go and Google model collapse you'll find the answers there but really remarkable what that tells you is that actually there is something qualitatively different at the moment to human text to AI generated text for all that it looks perfect or indistinguishable to us actually it isn't where is that going to take us I have colleagues who think that we're going to have to label and protect human generated content because it is so valuable all
right human generated Act actual authentic human generated content is really really valuable um I also have colleagues and I'm not sure whether they're entirely serious at this but they say that actually where we're going is the data that we produce in everything that we do is so valuable for AI that we're going to enter a future where you're going to sell the rights to AI companies for you for them to harvest your emotions all of your experiences everything you say and do in your life and you'll be paid for that but it will go into
the training models of large language models now I don't know if that's true but nevertheless there's a it has some inner truth in it I think um and in a 100 years time it is an absolute certainty that there will be vastly vastly more AI generated content out there in the world than there will human generated content with certainty I think there's no question but that that's the way the future is going and as I say as the model collapse and Ario illustrates um that presents some real challenges awesome thank you very much mik got
a question at the front who's been very keen to ask thanks very much indeed for a very interesting lecture it strikes me in way just bringing comparison of human being what we're doing is talking about what the pre frontal cortex does but there are other areas that in this am which is a fear predictor do we need to be developing sort of AI a parallel AI system which works on the basis of fear prediction and get them to talk to each other yeah so I'm absolutely not a neuroscientist or I'm a computer programmer and and
a and a and uh and that's very much my background again it's interesting that the community is incredibly divided so when I was an undergraduate studying Ai and I focused in my final year that's mainly what I studied and the the textbooks that we had made no reference to the brain whatsoever just wasn't the thing because it was all about modeling the mind it was all about modeling conscious reasoning processes and so on and it was deeply unfashionable to think about about the brain and there's been a bit of a what scientists call a paradigm
shift in the way that they think about this prompted by the rise of neural networks but also by the fact that advances in computer vision and uh and the architectures the neural network architectures that led to facial recognition really working we actually inspired by the visual cortex the human visual cortex so it's a lot more of a fashionable question now uh than it used to be so my guess is firstly simply trying to copy the structure of the human brain is not the way to do it but nevertheless getting a much better understanding of the
organization of the brain the functional organization of the brain and the way that the different components of the Grain Brain interoperate to produce human intelligence I think is and really we there's a vast amount of work there to be done to try to understand that there are so many unanswered questions I hope that's some help thank you Mike we're just going to jump back online yeah let's go Anthony asks if emergency inaccurate is calling the technology intelligence inaccurate are we just dreaming of something that can never be and then to follow up on that you
got Tom fer who asks is there anything happening to develop native analog neural networks rather than doing neural networks in a digital machine only uh take the second one um yeah there certainly is so um Steve Ferber at Manchester is building Hardware neural networks um uh but the moment it's just much cheaper and much more efficient to do it in software um there' have been various attempts over the years to develop neural net processes um famous phrase from the the movie that you're not allowed to mention to AI researchers uh the Terminator movies the neural
network processes if you want to wind up an AI researcher just bring up the the Terminator it's a shortcut to uh triggering them but um neural network processes have never really taken off doesn't mean they won't do but at the moment it's just much cheaper and much more efficient to throw more conventional gpus and so on at the problem doesn't mean it won't happen but at the moment it's not there yet what was the other question again the first one um so the other question was are we basically the terminology being used if Emergen is
inaccurate is calling the technology intelligence inaccurate and are we dreaming of something that can never be yeah so so the phrase artificial intelligence was coined by John McCarthy around about 1955 a young he was 28 years old a young American researcher and he wants funding to get a whole bunch of researchers together for a summer and he thinks they'll solve artificial intelligence in a summer but he has to give a title to his proposal which goes to the Rockefeller foundation and he fixes on artificial intelligence and boy have have we regretted that ever since the
problem is firstly artificial sounds like fake you know it sounds like Eartha I mean who wants fake intelligence and for intelligence itself the problem is that so many of the problems that have just proved to be really hard for AI actually don't seem to require intelligence at all so the classic example driving a car when somebody passes their driving test they don't think wow you're a genius know it doesn't seem to require intelligence in people but I cannot tell you how much money has been thrown at driverless car Technologies and we are long way off
from jumping into a car and saying take me to a country Pub you know which is my dream of the technology I have to tell you um we're a long long way off so it's a classic example of what people think AI is focused on is sort of deep intellectual tasks and that's actually not where the most difficult problems are the difficult problems are actually surprisingly mundane great thank you have any questions from this room we got one just the this right here the the squares on the top I can't see yes just here well
uh I was I was interested in how you mentioned that the two PS of AI study were symbolic Ai and big Ai and I was wondering how you saw how your viewpoint on the change in focus from one to another throughout your career yeah um so an enormous number of people are busy looking at that right now so remember symbolic AI which is the tradition that I grew up in in AI which was dominant for kind of 30 years in the AI Community is roughly and again hand waving madly at this point uh uh and
lots of lots of my colleagues are cringing madly at this point roughly speaking the idea of symbolic AI is that you're modeling the mind the conscious mind conscious mental reasoning processes where you have a conversation with yourself and you have a conversation in a language right you're trying to decide you know whether to go to this lecture tonight and you think well yeah but there's EastEnders on TV and mom's cooking a nice meal you know should but then you know this is going to be really interesting you weigh up those options and literally symbolic AI
tries to capture that kind of thing right explicitly and you using languages that with a bit of squinting resemble human languages uh then we've got the alternative approach which is machine learning data driven and so on which again I emphasize in with neural approaches we're not trying to build artificial brains that's not what's going on but we're taking inspiration from the structures that we see in brains and nervous systems and in particular the idea that large computational tasks can be reduced down to Tiny simple patent recog nition problems okay um but we've seen for example
that large language models get things wrong a lot and a lot of people have said but look maybe if you just married the neural and the symbolic together so that the symbolic system did have something like a database of facts that you could put that together with a large language model and be able to um uh uh to improve the outputs of the large language model the jury is out exactly on how that's going to come out lots of different ideas out there now um trillion dollar companies are spending billions of dollars right now to
investigate exactly the question that you've put out there so it's an extremely pertinent question there's no I say I don't see any answer on the horizon right now which looks like it's going to win out My worry is that what we'll end up with is a kind of unscientific solution that is a solution which is sort of hacked together without any deep underlying principles and as a as a scientist what I would want to see is something which was tied together with deep scientific principles but it's an extremely pertinent question and I say right now
um an enormous number of PhD students across the world are busy looking at exactly what you've just described great thank you Mike well time for squeezing two more questions take one from oh in the room cool um we've got a question just in the middle at the back there pass it across through the empty seats hi thank you very for the lecture um my question is around uh you sort of took us on the journey from 40 years ago uh some of the Inspirations around um how the mind works and the mathematics he said the
mathematics was fairly simple um I would like your opinion where where do you think we're not looking enough or where the next uh leap be oh wow um if I knew that forming a company I have to tell you um uh okay so I think one the first thing to say is you know I I said when it started to become clear that this technology was worked Silicon Valley starts to make bets right and these bets are billion dollar bets a lot of billion dollar bets going on investing in a very very very wide range
of different ideas in the hope that one is going to be the one that delivers delivers something which is going to give them a competitive Advantage um so that's the context in which we're trying to figure out what the next big thing uh is is going to be um I think the the this multimodal is going to be dominant that's what we're going to see and you're going to hear that phrase multimodal remember remember you heard it here first if you if You' never heard it before you're going to hear hear that a lot and
that's going to be text images sound video you're going to be able to upload videos and uh the AI will describe what's going on in the video or produce a summary and you'll be able to say what happens after this bit in the video and it will be able to come out with that a description of that for you or alternatively you'll be able to give a storyline and it will generate videos for you and ultimately where it's going to go is in virtual reality you know you're going to be um I know I don't
know if you like Lord of the Rings or Star Wars you know but I enjoy both of those and wouldn't you love to see a mashup of those two things generative AI will be able to do it for you and I used to think this was a kind of just a bit of a pipe dream but actually at the moment it seems completely plausible you'll be able to you know if you like the original the original Star Trek series which I do and my family doesn't um you know but there was only 60 odd episodes
of them in the generative AI future there will be as many episodes as you want and it will be it will look and sound like Leonard neoy and William Shatner perfectly and maybe the story lines won't be that great but actually they don't need to be if they're pressing a button specifically to your tastes so that's the general trajectory of where we're going and I say actually I don't see any reason why what I've just described is not going to be uh realistic within decades and we're going to get there piece by piece it's not
going to Happ happen overnight but we will get there I think we genuinely will the future is going to be wonderful and weird great thank you Mike do we have any final very quick questions anywhere we've got one just over here I think in the jumper on the right just in the the middle here hello thank you um to what extent do you think human beings are very large language models and very large movement models um so uh so my gut feeling is we're not we're not just large language models I think there's an awful
lot more and we're great apes the result of three and a half billion years of evolution and we evolved to be able to understand planet Earth roughly speaking at ground level where we are now and to understand other great apes societies of great apes that's not what large language models do that's fundamentally not what they do but then on the other hand I mean I've had colleagues again seriously say well maybe we should try and construct a theory of you know human society which is based on the idea that we are actually just trying to
um come out with the most plausible thing to that that that comes next it doesn't seem plausible to me I have to say um uh and these are just tools they're just tools which are based fundamentally based on language and they're extremely powerful at what they do um but do they give us any deep insights into human nature as or you know the fundamentals of uh human mental processes probably not thank you very much Mike all right that is all we have time for Unfortunately today this is the end of the cheering lecture series for
this year so please do follow us on social media the website our emailing list to find out about future cheering events and of course we do have the Christmas lecture um in 10 days time as back here at the RO institution but apart from that just one more massive Round of Applause please for [Applause] professor
Related Videos
CEO of Microsoft AI speaks about the future of artificial intelligence at Aspen Ideas Festival
45:22
CEO of Microsoft AI speaks about the futur...
NBC News
288,563 views
The Turing Lectures: What is generative AI?
1:20:57
The Turing Lectures: What is generative AI?
The Alan Turing Institute
66,912 views
MIT Introduction to Deep Learning | 6.S191
1:09:58
MIT Introduction to Deep Learning | 6.S191
Alexander Amini
507,311 views
Bill Gates Reveals Superhuman AI Prediction
57:18
Bill Gates Reveals Superhuman AI Prediction
Next Big Idea Club
263,018 views
Beyond the Hype: A Realistic Look at Large Language Models • Jodie Burchell • GOTO 2024
42:52
Beyond the Hype: A Realistic Look at Large...
GOTO Conferences
97,596 views
What do tech pioneers think about the AI revolution? - BBC World Service
25:48
What do tech pioneers think about the AI r...
BBC World Service
301,738 views
The Turing Lectures: Addressing the risks of generative AI
1:25:19
The Turing Lectures: Addressing the risks ...
The Alan Turing Institute
18,733 views
The "Modern Day Slaves" Of The AI Tech World
52:42
The "Modern Day Slaves" Of The AI Tech World
Real Stories
506,212 views
AI and The Next Computing Platforms With Jensen Huang and Mark Zuckerberg
58:38
AI and The Next Computing Platforms With J...
NVIDIA
3,584,296 views
What is generative AI and how does it work? – The Turing Lectures with Mirella Lapata
46:02
What is generative AI and how does it work...
The Royal Institution
976,767 views
Exploring foundation models - Session 4
2:06:50
Exploring foundation models - Session 4
The Alan Turing Institute
3,279 views
AI and Quantum Computing: Glimpsing the Near Future
1:25:33
AI and Quantum Computing: Glimpsing the Ne...
World Science Festival
389,682 views
Mustafa Suleyman on The Coming Wave of AI, with Zanny Minton Beddoes
1:23:31
Mustafa Suleyman on The Coming Wave of AI,...
Intelligence Squared
71,980 views
A.I. Revolution | Full Documentary | NOVA | PBS
53:46
A.I. Revolution | Full Documentary | NOVA ...
NOVA PBS Official
674,149 views
Geoffrey Hinton | Will digital intelligence replace biological intelligence?
1:58:38
Geoffrey Hinton | Will digital intelligenc...
Schwartz Reisman Institute
157,381 views
The Near Future of AI [Entire Talk]  - Andrew Ng (AI Fund)
46:23
The Near Future of AI [Entire Talk] - And...
Stanford eCorner
277,278 views
Alan Turing: The Scientist Who Saved The Allies | Man Who Cracked The Nazi Code | Timeline
52:28
Alan Turing: The Scientist Who Saved The A...
Timeline - World History Documentaries
1,797,148 views
Prof. Geoffrey Hinton - "Will digital intelligence replace biological intelligence?" Romanes Lecture
36:54
Prof. Geoffrey Hinton - "Will digital inte...
University of Oxford
238,964 views
Your Brain: Who's in Control? | Full Documentary | NOVA | PBS
53:33
Your Brain: Who's in Control? | Full Docum...
NOVA PBS Official
3,821,303 views
Bill Gates on AI and the rapidly evolving future of computing
55:48
Bill Gates on AI and the rapidly evolving ...
Microsoft
661,468 views
Copyright © 2024. Made with ♥ in London by YTScribe.com