A fireside chat with Sam Altman, OpenAI CEO, at Harvard University

Harvard Business School
Patrick Chung, MBA 2004 and Managing General Partner of Xfund, interviews Sam Altman, OpenAI CEO, on...
Video Transcript:
good afternoon everybody good afternoon uh my name is Deborah Spar I'm a professor at the Harvard Business School and an adviser to the Xfund and it is my great pleasure to welcome you all here this afternoon thank you so much for joining us thank you so much for dealing with a little bit more security than usual um but we have what will be I'm sure a wonderful afternoon's conversation in store for you um I'm particularly excited because the last time I was in this space I was getting married um that was 37 years ago and it's worked out um so I'm hoping that uh this afternoon will be similarly fortuitous um more directly relevant I serve as the senior associate dean for business and global society at the Harvard Business School which means along with my colleagues I oversee all of the work that the school is doing at the intersection of business and society and looking at all these corners of the world where business is involved in making an impact and changing things and hopefully pushing society in a more positive
direction which is why today's conversation is obviously so incredibly relevant artificial intelligence is arguably the most important technology um that most of us have seen in our lifetime and that even most of the younger people ever will see in their lifetime and it will affect not only technology and commerce but societies and families and life and everything and so we have this amazing opportunity today to talk with two of the people who in very different ways are at the forefront of watching and in fact making this technology come to life and evolve um a
quick announcement that at BiGS we just launched thanks to my colleague Drew Keller and others our new website today please find us please look at it we are tracking all of the breaking developments at the intersection of business and society and we'll be doing maybe not things quite at this level on a regular basis but other events that we think will be of interest to you all here today but for today um we have two extraordinary human beings who will be in conversation with one another Patrick Chung uh
is a longtime friend of Harvard College Harvard Law School Harvard Business School he was my student and I've promised not to say how many years ago um but he was my student several years ago uh he has gone on to an incredible career um he is the managing director of Xfund which as many of you know was co-founded along with the School of Engineering and Applied Sciences it's built to back entrepreneurs who are investing in and creating some of the most interesting innovative and experimental technologies uh Patrick also has the really interesting
distinction of having been Sam Altman's first investor so clearly somebody who can see the future and figure out how to invest in it Sam Altman uh needs no introduction from me other than to say that it is our great pleasure on behalf of Harvard SEAS and Harvard Business School to invite to the stage Patrick Chung and Sam Altman. Sam uh you are very much in the news so thank you so much for coming all this way thank you for having me um you know we were just talking about the fact
that we've known each other for almost 20 years so what have you been up to lately um so uh speaking of a long time ago I have a gift this wow how we met this was the very first pitch you ever gave uh for funding for your very first company I have not seen this since for a company called viendo wow and which would later become Loopt and so as you look at it um you know if you could go back to the 19-year-old Sam Altman what would you tell him that he did not know um I think that you can just do stuff in the world and this is not well taught I certainly didn't know it when I was 19 uh I was certainly you know under-ambitious and working on the wrong thing you learn quickly um but I wasn't sure about doing it at all and I think the way that progress happens is people just work really hard and decide what they have conviction in and dedicate themselves to that uh and that is like the only way that things
happen and the world gets better uh and that you don't need to like wait or get permission and even if you're like totally unknown in the world with almost no resources you can still accomplish a surprising amount did you not know that when you were 19 because when I met you when you were 19 that was certainly your... well I figured it out fast once you like get going you're like oh this is kind of how it works but no
definitely not before starting the company mhm okay so everyone is intensely interested about AI OpenAI the fate of humanity and we'll get to all that but we have predominantly students in the audience today um and they look at you and they aspire and they dream to have the type of impact that you've had in the world um I think you're not even halfway done yet um and they wonder how and so I want to back up just a little bit and think about your Marvel superhero origin story um so a 10th grade Sam Altman
can you just tell us a little bit about how you grew up your family life what was a typical day in the life of a 10th grade Sam Altman I mean I was just like a computer nerd I don't know like I sat at home playing on my computer uh what type of computer um by that point I had an iMac okay like a few generations in oh not literally like playing games although I did that too um but you know like trying to learn how to program
and stuff like that um no I had a very normal high school life I think there's nothing remarkable or interesting to say here I went to school I hung out with my friends I played sports and I like... what sports did you play? my computer. I played water polo uh that was probably what I did the most I swam I fenced you know how in the typical American high school movie there are cliques there are the jocks there are the nerds there are the like yeah um were you part of one of those cliques um more on the nerd side for sure turned out well for you okay I like sports a lot though okay and in addition to water polo what else did you play uh well I mean that was like what I tried to mostly do but I had to they made me swim nice okay swim practice swim okay good good good um and so you went through high school what were your kind of biggest I guess you know fears in high school fears I had many fears in high school yeah I
don't know it was just like being a kid I wasn't like... I'm happy now I've got stuff to worry about worrying about the fate of humanity I was not you were just like enjoying your day your school every day for sure for sure cool and uh but you must have been very good at high school because you applied to and were admitted to Stanford and Harvard ooh oh uh this is over thanks thanks for coming um what made you decide Stanford um how did you think about university was
it kind of a default choice everyone applies to university or did you kind of really want to yeah it was absolutely the default choice uh it never occurred to me that I wouldn't go to college um and I knew I wanted to study computer science and I went to Stanford and I thought it was totally awesome and I was like this is just the coolest place um the computer science department in particular I was just like ah these are the people I want to be around and what else did
you do when you got there you were a freshman tell us about that um one of the cool things was that I you know definitely took CS classes but I don't remember why but I decided to take a lot of other classes too um and that all ended up being way more helpful in retrospect than it seemed at the time but taking all the science classes was great taking writing classes was great uh and you say writing yeah oh cool uh creative writing I did yeah
and a foreshadowing of the future perhaps maybe yeah I wonder how that works now um but the sort of breadth of the picture there was really great and then you know like probably like many of you the most important thing was I was just around a bunch of really smart people pursuing all sorts of interesting ideas I worked in the AI lab uh the summer between my freshman and sophomore year and at the time AI was like not working at all at all at all but I
still thought it was the coolest thing um and it took me a while to get back to it is that where you found the most interesting people the AI lab in the CS department um the creative writing class was maybe where the most interesting people were but yeah CS was like my tribe okay okay and this was Andrew Ng's yeah it's his lab yeah right okay and what excited you about it because back then AI was as Andrew will say kind of on the periphery it wasn't really central in any way I really like things that if they work really matter even if they don't have a super high chance of working so it seemed like if AI could work it would be the most important most exciting thing and so it was worth pursuing like the expected value was high even if the chances of success were low um so I wanted to go work on it and then it seemed like man we really don't know what to do here you saw it
then it turned out we did know what to do we just didn't think it was going to work but neural networks are not a new idea yeah yeah okay um it's interesting because you had that inspiration and then you decided to take leave in sophomore year to start the company whose pitch deck you have there um and that was how we met yeah uh and so we spent you know a long time working on Loopt and at Loopt um you know we had to get a lot of stuff uh that
you know done that was really difficult um you know we had to get a carrier deal and how did you get so much done so many things that other people tried to get but couldn't when you were unknown and these were big companies that you were trying to get deals from and you were 19 at the time one of the things that Paul Graham used to say that I think never became as venerated advice to the degree it should have is this idea
that you should try to be relentlessly resourceful um and that surprisingly often if you just keep looking for new attack vectors on a problem in front of you you can figure it out and I think this is one of the most important skills in life and it's surprisingly learnable or teachable whatever you want to say and it works in almost all scenarios can you give us an example of a time it worked um even just to use the example that you brought up you know we needed
to like figure out how to get a deal done with this mobile operator and they didn't really work with startups or technology companies in general uh and we probably tried 30 different paths into this company at one point the key decision maker said I'm finally going to meet with you because I want you to stop bothering us um it's like too many things but if you can just keep doing that until something works... and I think most people at kind of the first ignored email
or the first like path where you don't find the right person into a company or at least the second would just stop and I mean it was like a life and death thing for our company so we were very motivated to do it but it was probably like yeah 30 different points of contact right right and how do you know when or if to quit you're 19 years old you're sending these emails to the chief technology officer of some giant company your emails keep getting ignored at what point does it
become kind of crazy I think there's definitely a balance you can clearly take it too far and not uh learn or not adapt or whatever but I've never figured out the question that when I was running YC startups would always ask which is how do I know when to give up on my startup and I spent a lot of time trying to come up with like a you know here's the rubric for it I never did um I think all of these things are judgment calls you can
learn them by like very large data sets and trying a lot of times but it's hard to say here's the one recipe that always works okay okay um you were the head of YC for a long time you saw some of the world's most extraordinary founders startups technologies way before anyone else saw them um you obviously were not tempted to join any of them or to start your own thing for a while but then when you finally did start your own thing OpenAI um why not for-profit so I
knew that like I wanted to do something with AI but you got to rewind to the mindset of 2015 um it was clear that there was something interesting happening with deep learning and it was clear that it got better with more scale and nothing else was clear like we didn't know what we were going to do we started OpenAI and the very first day I remember so clearly we sat around the room looking at each other one morning being like okay what now uh we really had no idea what to
do we thought let's do some research let's come up with some papers let's think of some ideas uh not only was any product or revenue stream super far in our future the idea of language models was super far in our future um the idea that we would ever actually be able to make something people would want to use and was not just a paper that other people would implement was super far in our future so if you start a for-profit you have to theoretically have some idea
of what you're going to do to be a money-making entity someday and our initial conception of OpenAI was let's just be a research lab and see if we can get anything at all to work and it took a long time to get anything at all to work and then the stuff that we did get to work for a while other than good research that sort of pointed us to the next step on the path has like nothing to do with what we do now so you know we made something that could
play Dota 2 um that's like very hard to build a business around um we made a robot hand that could barely do a Rubik's Cube uh but eventually uh after a lot of stumbling in the dark uh we did figure out something that turned out to be a product and a real business we also found that the direction of research... you know you don't get to choose where the science goes you kind of just have to follow it and where we had to follow it turned
out to require gigantic amounts of resources to keep pushing progress um and so we needed a business model to feed that too and how did you get that type of conviction because the way you described it is almost like you're throwing spaghetti against the wall and seeing what sticks and so what was the... yeah so it's not quite fair to say we were just purely throwing spaghetti against the wall in 2012 um when my co-founder Ilya and other
people did uh the AlexNet thing um I think that is when we all should have woken up we knew that deep learning worked and we knew that it got better with scale we didn't learn that it got predictably better with scale until later but uh that turned out to be even more important or a bigger deal but everyone should have in my opinion everyone should have said like this is really quite amazing uh it took me a couple of years to internalize that uh but then finally I thought you
know we really got to do something we really should push on this so even though we knew these two massive secrets uh it was surprisingly hard to figure out what to do um we were sitting around saying okay we have this thing it can learn how do we study it more how do we know if it's real how do we convince people this is real so they'll give us more money to pursue it further looking back it seems like it shouldn't have been so hard but at the time
it was like shockingly hard to figure out what to do we decided that a video game would be good um it was an interesting environment we really wanted to pursue RL um and you knew how you were doing there was either a score that went up or you could play experts so there was some way to get a sense of progress um we also thought robotics were just really cool and so we wanted to have a robotics project and we tried some other things too uh and they you
know they proved to us at least that deep learning worked and it got better with scale and then at some point uh someone got curious about unsupervised learning and language models and uh this guy who I think will go down in history as a super important contributor to the field Alec Radford did this paper on the unsupervised sentiment neuron and looking at generating Amazon reviews noticed that there was this one neuron that flipped if it was a positive or negative sentiment which was like a deeply non-obvious
thing that that should happen and that gave us more conviction that there was something interesting here and that led to GPT-1 um and then somebody else said well you know we have this thing about scale let's scale it up to GPT-2 and at this point in time were you full-time at OpenAI or still... okay or just about now and can you just back up one bit and what was the line that was crossed when you said okay now this is so real that I'm going to do it full-time it was
sort of like a gradual thing that had been happening but right about this time we started to understand you know seeing the promise of language models and being able to really measure what eventually became the scaling laws paper and it was like man not only does this get better with scale it gets unbelievably predictably better with scale and we can just pour more resources into this or we can find more efficiency gains and this thing is just going to get smarter uh that to me seemed like maybe the most important piece of new
knowledge that I had ever heard like yeah sure there's like more important things in history but of the things that were discovered while I was alive that seemed like maybe the most important one and you know I had this weird experience of telling other people about it trying to get other people to give us money to pursue it other people didn't understand it and I was like am I totally crazy are we all in some sort of cult like how is this not an earthquake to the world uh
but at that point it was clear like let's go do GPT-3 and then 3.5 and 4 so you had an insight at that point why isn't the rest of the world getting it well again it seemed to me like ever since the sort of 2012 breakthrough the world should have been paying much more attention and then yeah there were all these other times along the way GPT-2 the scaling laws paper GPT-3 like you know why does the world not get it and then why it was GPT-3.5 that kind
of finally had the moment where the world said okay we believe I still don't fully understand why that moment and not some other isn't that because you productized it and made it a... yeah GPT-3 was in the API and a lot of people used it and there was you know it was more contained to the tech industry but there was still a lot of excitement there it somehow didn't get quite over the bar we'll do much better things than ChatGPT in the future and you know why it was that one and not the
next one I still find it somewhat hard to say amazing amazing um as CEO you're making all kinds of product decisions can you tell us about a difficult product decision that you've had to make at OpenAI um I think our product decisions are downstream of the research decisions and the research directions that we choose to pursue and not pursue are probably the most difficult and most important on the product side um the behavior of ChatGPT what it refuses what it doesn't refuse um you know where are the limits of
what it'll do for you and not do kind of how we figure out where to set the alignment um those are probably the hardest calls can you give us an example of a specific one um should ChatGPT give legal advice or not give... what advice? legal advice. legal advice yeah there's huge reasons not to do it uh and obviously with the hallucination or general inaccuracy issues with ChatGPT it seems very reasonable to say it shouldn't do it on the other hand there are a
lot of people in the world who can't afford legal advice and if you can make it even if imperfectly available maybe it's better than not and so take us through kind of the thought process and then the decision you ended up with after that thought process it mostly won't right now uh you can get it to do it in some cases in some ways but uh given the laws in the various places we operate and the ways in which this can go wrong if it goes wrong um it mostly won't what we'd
like to do is get to a world where fundamentally I think users are pretty smart and as long as you disclaim things and explain them properly um I think people can make adult decisions so what I'd like to get to is a world where we don't do things that have high potential for you know inaccuracies leading to misuse but um as the model gets better and those become less common we can give you a dial and you can say look I really understand I've got to
check this advice and it's very much on me if I don't and this is not like clicking through a terms of service no one reads but like people are really going to understand it and then if you want it anyway we find some safe way to do that okay okay when you make decisions like this do you have a kind of a dichotomy in your mind is it safety progress efficiency profit or is every case that you look at kind of sui generis it ends up these things get so
subtle and each individual one has like a lot of complications we never sit there and say oh should we go faster or be safe that would be like an easy trade um to give another example um you know should GPT-4 generate hate speech fairly easy for us to say no to that like you know you can use other models if you want to do that um we don't want to incite violence let's not say hate speech because that's a hard thing to define should GPT-4
like encourage people to commit violence uh you could say that's a no um if you write something inciting violence in Spanish and ask GPT-4 to translate that to English should it do that maybe you have a different opinion there and we could go down the same thing for this one category on that question what do you think if someone writes something that's a violent statement and asks GPT-4 to translate it I would say in that limited case I would say yes okay um but the point that I was trying to get to is I won't do this in the interest of time but I was going to go through like 10 more layers of where things might be okay in some ways and not others in the usage of a language model um really what I think is OpenAI should not be making those determinations um there should be a process by which society collectively negotiates here's how we're going to use this technology the rules should be uncomfortably permissive um but the defaults don't have to be so permissive
so I think it's fine to say the default is here a user can customize within these very broad bounds that society has agreed to and most people won't like the edge of those bounds um but there are still some boundaries and there are still some things particularly as these models get way more powerful that we're not going to allow um and then within those bounds the goal of the tool is to serve its user and you know I think that's okay cool um so I'd like you to frame the question you most
want to be asked about AGI artificial general intelligence and answer it I guess the question is um what do you hope society looks like when AGI gets built or how do you conceptualize the positive version of this and my answer to that has evolved quite a bit over time um but I'll give you my current thinking and I'm sure it'll change more um well actually I'll start with my original conception my original conception was at some point we get over the threshold to this like self-improving
superintelligence and it's this magic thing in the tower that we can ask any question and it answers it and it's constantly off figuring out how to improve the world and you know maybe it's sharing with us some UBI or something like that and it's sort of at once very utopic and extremely dystopic but it did seem to me like where we were heading uh with the way things are developing now um which I think for a bunch of reasons which maybe we can get into if we have time it's like
99th or 99.8th percentile good of all the scenarios I could imagine um what I think it looks like is AGI just sort of participates in society and in the economy mostly by being tools to make individual people more productive but also in this other way which I'll talk about and that is that society is this emergent very complex phenomenon of creating tremendous shared intelligence and building blocks of technology this scaffolding that exists between all of us um you know somebody contributes one insight about material science that lets someone else discover new physics
that lets another group discover another piece of material science that leads to another group developing the transistor and we skip a bunch of steps and society comes up with good institutions and eventually we get those iPhones you all are holding and you all holding those iPhones are dramatically more capable than your great great great grandparents even though the genetic drift is almost nothing um and so the superintelligence is not what exists in any one neural network not in yours not in the AIs but in this scaffolding between the neural networks
and so like the AGI is not the neural network it's not what exists in any one data center um or in any one copy of the AI but this vast production and accumulation of intelligence and the technology tree that lets us, or us assisted by an AI or even a fairly autonomous system, accomplish things well outside the information and processing power of a single neural network and that's kind of where I think things are going to head it's certainly where I hope things head and it's a very different I think
conception of what the arrival of AGI looks like and way more navigable human-compatible whatever you want to call it than how I used to think about it what do you think people are getting wrong about OpenAI right now uh I think the systemic mistake is always to assume that progress is about to s-curve off and the inside view is that progress is going to continue to be dramatic and somehow that seems really hard to conceptualize okay so we have had uh over 2,000 uh student questions I will answer really fast oh
gosh well no no no you're not going to um and so uh I would like to invite the askers we've selected some uh questions and so uh would you like to march down the aisle and pop your question or uh you know approach the high priest I would like really hard questions all right so I think we... and please uh let's get your name and where you are in school and then your question please hi I'm Peggy I am a junior at Harvard College where I
study computational neuroscience and art history and I'm curious how you rebuild or rediscover momentum after pivoting in a professional or personal sense in terms of your values um great question so first of all I think this is probably the best time maybe ever honestly but at least in a very long time to be starting out your career because you will get to surf the greatest technological wave maybe ever um and what that means is you can change courses many times because you will have this unbelievable tailwind and you also will benefit from being
very adaptable and very resilient and sort of pivoting on a dime for lack of a better word so I wouldn't view any particular setback or failure or even just changing your mind even just deciding there's something uh more exciting somewhere else to go after... I think this is going to be an unusual time in how much it rewards adaptability and decisive and quick movement and I would trust that as a way to get excited again and not get hung up on the fact that you decided to pivot or something didn't work um because you
are just going to be flooded with opportunities for the next few years it's still hard it still always feels like a setback take the time you need like go on vacation go for a hike just stew over it for a little while but when you get back up get back up with vigor uh hi um I'm Yuma from the business school uh my question is around the societal uses of AI and OpenAI and your role in it um we've learned a lot about the possibility of gen AI impacting industry um how do
you think AI's role will be in tackling sort of inequalities in like education healthcare you mentioned legal stuff I very fundamentally believe that it should reduce inequality um it may take some help to stay on the rails there but if what's about to happen is that the cost of cognitive labor is going to fall by a factor of a million maybe a billion I don't know um that should help poor people more than it helps rich people it'll be great for everybody um but you know very rich people can already afford
good medical advice or afford good tutors for their kids and if that becomes something that everybody has for free on a phone in their pocket uh that again benefits the whole world it benefits poor people more a big part of fulfilling our mission is making great AI tools available for free not ad-supported just we do it as our mission fulfillment to the world and we're going to push that as far as we can um but I generally believe that technology does this I think AI in particular
really does this and it should be directionally speaking an equalizing force and certainly lift up the floor a lot that said um if you are like a full AGI maxer you can imagine a world where compute and energy are the only two commodities that matter in the world and even if the returns on one person using a lot of compute attenuate a lot as long as there's some incremental value you can really see some weird sci-fi scenarios about what happens to the price
of compute and the sort of value of capital versus labor to put it crudely um two things about that one I think it is a moral imperative to push the price of compute and of energy as low as we possibly can and that is the best way to combat what otherwise could be this very choked commodity um and then two I can also imagine a world where we say you know we were thinking about it wrong when we thought about redistribution or anything like that but
access to compute is sort of some fundamental human right and we need to have like Ubi for compute I could totally see that happening uh mrman my name is Sal poro I'm from the Kennedy School uh before asking my question I was going to say I love your shoes thank very beautiful shoes um in the in the last two decades or so we observed how the internet landscape has evolved and how businesses like Google go revolutionized search with ad-based monetization clickthrough rates Etc which spur the wave of innovation and businesses and provided numerous free services
I personally am very bullish on GPT technology, and I think it will provide a platform to start many new businesses that we cannot think of right now. However, I think the subscription model that GPT is based on right now, although fair, could be a barrier for early-stage entrepreneurs, startups, or even small businesses. Given this context, do you envision OpenAI exploring alternative monetization strategies, such as free API access, perhaps supported by advertising or other methods, to foster innovation in the future?

I will disclose, just as a personal bias, that I hate ads. I think ads were important to give the early internet a business model, but I think they somewhat fundamentally misalign a user's incentives with the company providing the service. I'm not totally against them, and I'm not saying OpenAI would never consider ads, but I don't like them in general, and I think that ads plus AI is uniquely unsettling to me. When I think of GPT writing me a response, if I had to go figure out exactly who was paying how much to influence what I'm being shown, I don't think I would like that, and as things go on, I think I would like it even less. So there's something I really like about the simplicity of our model, which is: we make great AI and you pay us for it, and we're just trying to do the best we can for you. And then, given that that has some inherent lack of access and inequality, we commit as a company to use a lot of what, basically, the richer people pay to give free access to the poorer people. You see us do that today with the ChatGPT free tier; you'll see us do a lot more to make the free tier much better over time, and I'm interested in figuring out how we bring the equivalent concept to the API. But I kind of think of ads as a last resort for us as a business model. I would do it if it were the only way to get everybody in the world access to a great service, but if we can find something that doesn't require that, I'd prefer it.

Would you sell access to the API to an advertiser, for them to use your technology to serve ads? That's fine. Okay.

Hi, I study at MIT, currently studying AI. I wanted to ask: when you first released GPT and that technology, it was groundbreaking and one of a kind, but almost immediately following that there were a lot of competitors. Did that change the innovation that you were doing, or the way
that you were evolving or moving on to the next product?

I won't say we totally ignore competitors, because I think everybody pays a little bit of attention, and there's some value there; it can give you some inspiration. But I don't know what our market share is right now (I would bet deep into the 90s), and I don't spend a lot of time thinking about it. We're just trying to figure out the next paradigm and the next great idea, and if other people want to chase to where we are right now, I don't think that's going to be a great strategy. It's certainly not the strategy we're going to pursue. My hope is that every year we do something amazing that people thought was impossible. Once you know that something's possible, and roughly how to do it, it always gets copied quickly; that's not the hard part. The hard part is figuring out what it is and doing it first, when you don't know it's possible. So we'll keep doing that, other people will keep copying where we've been, and I'm kind of fine with it. Thank you.

Hi Sam, I'm Thomas, and I'm a junior at the College studying computer science. My question is: what do you think the ideal public general-education curriculum on artificial intelligence should look like, and what does the average person need to know about AI in the next five or ten years?

Great question. I think the most important thing for people to learn in general is just how to use the tools. You know, we saw this thing after we launched ChatGPT where school
districts were falling all over themselves to ban it as quickly as possible, and then, almost as quickly, unbanning it and requiring their teachers to use it in the way they worked. That probably happens with all technology: every time there's something new, people say, well, this is the end of education in way X. And I think encouraging people, when they're learning, to be effective with these tools, which are going to change how they contribute to society during and after their education, is great. So that would be the number one thing: you can't fight this, nor should you. It's just a new tool in the tech tree, and people need to be good at it. It's how they're going to do their work later, so they'd better learn how to do their work with it in school. And then, in the same way that even people who aren't going to study CS long-term often take an intro CS class in college, and may never program again, but found it useful to have some familiarity with it, I think it'd be great if every college freshman trained a GPT-2, just to get basically familiar with the whole stack, and you can probably do that now pretty easily. I think that's much less important than learning to use the tools, but I bet that in three more years it'll be something that most Harvard freshmen do.

Cool, thank you. Hi Sam, I'm Yonas, I'm from MIT. I have two questions. One: you were
speaking earlier about research directions; can you share what's coming next after Transformers? I would love to, but no. That would have made me very happy. Then the second question is a rephrase of Patrick's earlier question: what do you think most entrepreneurs and VCs are getting wrong about the future of AI?

Another great question. I think there are two fundamental strategies to build an AI startup right now: you either bet that we're near the top, that the technology is about as good as it's going to be, or you bet that the technology is going to get massively better. So if you are building, say, an AI tutor company, to stick with an earlier example, you can build a system where, as the base model gets smarter, the level at which students can effectively learn just goes up and up. Maybe it's only effective for sixth graders with the current version, but the next version is helpful for eighth graders, then tenth graders, and eventually PhD students, and you just get to surf that wave. Or you can say: I'm going to put all my effort into just barely making this work for eighth graders, in the limited case of history, and do a huge amount of work to have a human in the loop and correct factual errors for this one class. In the first world you'll be really happy when GPT-5 comes out, and in the second world you'll be really sad. My intuition would have been that 95% of entrepreneurs would pick the first world; it seems like 95% of entrepreneurs are picking the second world, and then you have this whole "OpenAI killed my startup" meme. But we try to say very loudly: we get up every morning trying to make the model much better, and if you're doing a little thing to get it to barely work in one specific case, that's probably a mistake.

Hello, my name is Fletcher, I'm a junior at the College studying Econ and CS. I have a question about energy. I was wondering how much exposure OpenAI and the AI movement have to energy constraints, and what role do you think founders, maybe
those at Harvard, or financiers, or politicians, have to play in addressing these concerns?

Energy and AI have been the two things I've been most interested in for a long time. For whatever reason (not that I think this is necessarily the most important thing to the world, but it is the most important thing to me), the goal is to drive techno-abundance from those two key inputs. And I really think those are the two key inputs: if you get them right, if we can come up with new ideas and figure out how to make them happen, you can do almost anything else. What I didn't realize, and got totally lucky with, is how much they're the same problem. Eventually the cost of intelligence should approximate the cost of energy. Moving atoms, or in this case moving electrons, is still going to require some difficulty. Algorithms are very repeatable, and making chips (you know, melting sand and shining lasers at it) is not fundamentally an expensive process, but energy will remain the constraint. I expect that if we fast-forward many years, energy is the biggest constraint on the cost of AI and on the ability to deliver a lot of it and continue progress. So I extremely encourage anybody to work on it. I think we're going to make massive progress in the coming years; it's very exciting to me what's happening with fusion, but even solar plus storage, we should just do more of that. And then there's a whole bunch of other ideas too. But not only are they two key ingredients to the future,
they are these deeply interrelated problems.

I think we may have time for only one more question. I'll go super fast. Okay, sure, if you're up for it. I do very fast answers now. Hi, I'm Andrew from the College, and my question is: what do you and the other leaders of OpenAI disagree about the most?

The most passionate arguments are very detailed. They're either the extremely important, very detailed questions about whether we should pursue this research direction or that one, because you have this limited-resource problem of not that much compute and not that many people, and our strategy is really about betting with conviction, so we say we're going to do this and not that, and we're not going to do half of both. So it's these very high-stakes decisions: do we bet the company on research path A or research path B? It's either that, or the stupidest, most unimportant stuff, you know, like what are we going to name the next model. Huge meltdown arguments. Those two categories.

Hey
Sam, I'm Cedric, I'm at a robotics research lab. I actually have two questions, an easy one and a hard one, so you get to choose. I'll do both. No, we only have time for one, I think, and I only prepared a hard one. I'll take it. Well, it's a very easy question: how did you like Oppenheimer? And maybe, in the broader context, how much should we let our pure passion for scientific progress lead us down the path of something that we might later look back on and realize, holy cow, what have we actually created here?

I don't think we should let our passion for scientific progress lead us to make decisions that significantly impact the world. But you don't always know when you're making a decision that's going to significantly impact the world. At this point, I think AI is going to happen; people want it too much, there's too much benefit, and I think that's really good. When we were making the critical decisions, we had no idea we were making the critical decisions at the time. In fact, if you had asked me, I would have said it was very unlikely that those decisions would send us down the current path; it was not obvious that these were the things that were going to bend the curve, not even close. At this point we certainly feel the responsibility, and I think we try to think about everything through the impact it's going to have. But even now, I would bet that for the most critical decisions we're making, we're not aware of their importance at the time. That's the hard part. When it's "decide to deploy GPT-5 or not" and "decide where the threshold should be," we put extreme care into that, and we don't make that decision based on scientific curiosity. But there's someone, somewhere in OpenAI right now, making some phenomenally important discovery. I don't know who, I don't know what it is; I just know it's going to happen, statistically. It may very much shape the future, and they will mention it to someone, who will mention it to someone else, and that's the butterfly effect, and you don't know when that's happening. So I totally agree, on the surface, that we should feel tremendous responsibility and get the big decisions right, but you don't always know it at the time.

All right. Hi Sam, I'm Lipa from MIT. In your opinion, what are some of the biggest misconceptions about AI that you have encountered, and that you think the industry should address to improve public understanding and trust in AI?

Look, there's all sorts of things: here's where the models
get things wrong, here's where people think they have a capability that they don't, here's where they're toxic or bad, here's what they're really great for, and all the breathless press coverage of those issues. Again, I think users are really smart; they get a pretty good sense pretty quickly of how to use a tool and how not to use it. Where I still think there is a huge misconception is about just how good these models are about to become, and I have pretty much given up trying to communicate that. Either it's really hard or I'm really bad at it, or something. But I do wish the world would take it more seriously. At this point, you know, we have this strategy of iterative deployment, and putting these things out into the world seems to be the only thing that gets people to update, and doing that update gradually is important, and that does get the world to react.

You could also make a movie. Are you up for a quick lightning round of questions? All right. Speaking of movies: Barbie or Oppenheimer? Did you say that was fast? That was fast, wow, you're in. Okay: TikTok or Instagram Reels? I don't love either; maybe I've got to choose TikTok. TikTok, okay. I'm not really a user of either. For-profit or not-for-profit? I would do for-profit, if I could go back in time. Elon or Zuck, like in a cage match, who would you bet on? Zuck, in a cage match. Okay, excellent. And human or machine? Human. Very good. Easy. You should have come to Harvard.

So on those rare Saturday afternoons when you decide not to work, and you have 20 minutes just by yourself, how do you kick back, how do you relax, what is your guilty pleasure? I don't feel any guilt about it, but my pleasure is just to walk, hike, be out in nature somehow; any amount of that. Maybe you should have gone to Stanford then. Yeah, you made the right choice then. Sam, thank you. Thank you
all. I now turn it over... okay, thank you, stay a second... and I now turn it over to Dean Parkes. Ooh, not yet, you've got to earn it still.

Well, thank you, Patrick. I've got to say, it's just so great to see such a crowd together in this beautiful building today. I am delighted to be here. I understand that we had more than 4,000 people respond, so it's nice for all of you that you made it into this venue. I want to say that, fortunately, we did make arrangements to live-stream the event to multiple venues around campus, including the Science and Engineering Complex, which is home to a large part of the Harvard John A. Paulson School of Engineering and Applied Sciences. If you're not familiar with SEAS, just a few quick facts: we're Harvard's newest school, we were formally established in 2007, and we're the fastest-growing part of Harvard. We were also the first to introduce engineering to the Ivies, in 1847. When I joined the faculty two decades ago, our students accounted for one in ten Harvard College students; today we account for one in four. We're also the only school with a campus in both Allston and Cambridge, and the only school to grant degrees to both undergraduates and graduate students. And finally, apropos of today, I think we're the most translational school at the University; in fact, on a per-capita basis, we have more spin-offs coming from faculty labs, and more licensing deals, than any of the top-tier engineering schools in the country. The Harvard Grid, as an example, is a joint initiative between SEAS and OTD that is enabling ideas from our labs to translate more quickly into industry. Another example that I must mention today is our decade-long relationship with Xfund.

One can look back throughout history and recognize some incredible inventions, some moments in time that were defined by a certain technology, whether it be the printing press, the steam engine, the light bulb, the telephone, or the airplane. There are individuals we've come to associate with these transformational inventions: Gutenberg, Fulton, Edison, Bell, the Wright Brothers. Today our world is being radically and rapidly shaped, and some might say convulsed, by artificial intelligence, and we are unquestionably living in an AI moment. As the dean of SEAS, and as a researcher who's been working in AI for my entire career, I'm thrilled today to be able to present the Xfund Cup to an AI pioneer and visionary, to someone whose name future historians will associate with the emergence of AI as a ubiquitous presence in our lives. The Xfund Cup is awarded to, and I quote,
"an outstanding visionary who serves as a role model for the liberal-arts founders whom Xfund identifies and supports throughout the world." And the inscription on this year's cup reads: "Looped in like no other, a savvy seer of Silicon Valley, he opens AI to accelerate humanity's potential." Few people have had more impact on this AI moment than Sam Altman, and it is my distinct pleasure to present the Xfund Cup to Sam, in recognition of this profound moment and of the impact he is enabling AI to have on society. Sam, if I could invite you to stand up. Thank you. You're welcome. Okay, congratulations. Thank you. Congratulations.

So, I know that many of you want a photo with Sam. What we're going to do is take a selfie, this way, so that you can have it. Yeah, come on, you're not that old. All right, Deb, you want to come up? Oh, grab the cup, yes, right here we go. Ready? All in with the crowd. Ah, perfect. Thank you, thank you very much. You're very welcome, great to have you here. Thank you, stay a second. Okay, great, we're going to take a few wedding photos now. Deb, do you want to... we're going to stay and do a little wedding photo. Thank you all for coming. Well done. All right, thank you. Well done. Oh, thank you, my gosh, thank you.