But it's led by the amazing Dan Huttenlocher, who I happen to have known for an extremely long time, since the mid-'80s when we were both students, so it's really nice that I get a chance to bring him here. [Laughter] Yes, they're clapping that we were ten years old; we were prodigies. And believe it or not, Dan has the most amazing chair, because who knew this: you can have a chair named after an 1894 alum. Can you believe that? This chair for electrical engineering and computer science was first established in 1894, and Dan holds it, so that's pretty cool. He is a world-renowned researcher in AI, and in computer vision in particular; he's been working on computer vision since I met him, which is pretty amazing: understanding the challenges of looking at an image and making sense of it. And he wrote a really fascinating book, so hopefully we'll have a chance to hear a little bit about it: The Age of AI and Our Human Future, co-authored with Henry Kissinger and Eric Schmidt, a really interesting set of co-authors. I wanted a little window into what your discussions were like, so ask me a question, all right? Good. He's also worked in the startup world, led massive innovations at Cornell that led to the creation of a new campus in New York City and a lot of investment into the entrepreneurial ecosystem, and he was an early researcher at Xerox PARC, famous among computer scientists as the place where the future was invented and then dropped; there he worked with John Seely Brown and all kinds of researchers from around the world. So I'm thrilled that Dan found time to come and talk to us this morning. [Applause]
How does anyone live up to Angelie's introductions? ("You're worth it!") I'm delighted to be here with you all, and I'm also very excited about your mission and what you're all doing together, so it's a privilege to be here. I want to talk a bit about AI governance, but in order to talk about AI governance we have to talk about what AI is, because I think we're all confused; the experts are confused. Anybody who tells you that they know what AI is, run and hide. But I think we do need to start to think collectively, the technology experts, the people using AI in education, the people using AI as entrepreneurs, in government, etc.; we all need to think about how we're interacting with AI and what the governance challenges are. But I have to start with the what. So what is AI? I'll give you some properties that I think are important here.

There's often a focus on specific technologies: deep learning, generative pre-trained transformers (which is what GPT stands for), etc. The specific technologies, I think, are much less relevant. What's more relevant is what the technology is used for: making decisions, making recommendations, making predictions, or creating content like text, images, and audio. Now you should stop and think: that can't quite be right, we've done weather prediction for 50 or 60 years now, and AI certainly wasn't mature enough back then, though there was AI research. So there's another aspect to this, which is that the system is trained to perform a task, rather than requiring the very precise kinds of instructions that traditional software development requires, and this is enabled by machine learning. When I was a graduate student, these systems were trained by people: literally, you would try to capture expertise; most of you are nowhere near old enough to remember expert systems. It was always about training these systems, but now they learn on their own with machine learning, and that's a really fundamental advance. What happens with machine learning applied in these kinds of application domains is you get outcomes that are imprecise, that are adaptive to the situation, and where the outcome is sometimes emergent: not what the people who developed the system would necessarily have anticipated or predicted. Those things together are what make it feel humanlike, because these are characteristics we think of as human, not software. Software is rigid and inflexible and annoying; you want it to do this, and it does some other thing. So this is very different, very humanlike. That's the way I encourage people to think about AI: not so much in terms of specific technologies, but in terms of how we use it and what its characteristics are.

Now, one thing I think is very important, given the explosion of AI on the landscape in the last couple of years, is that it's been in your daily life for almost a decade if you use the internet regularly. In 2015 Google took their search engine and converted it from the information-retrieval type of software it had been using, very carefully hand-tuned, hand-coded software, to machine learning. No big fanfare, no big announcement; lots of, what should I call it, heated debates inside the company about when and whether to convert; but that switch in 2015 is really what has enabled modern search engines, and that's approaching a decade ago. Then in 2020 DeepMind, a UK company that Google acquired, developed machine learning for navigation route-finding, and that drives Google Maps; now all navigation route-finding uses machine learning, and that's how it does a reasonably good job of predicting how long it's going to take you to get somewhere and what route you should follow. And then in 2022 we had OpenAI and generative AI and ChatGPT, and realistic images with DALL-E, which took a little longer to be recognized as widely as ChatGPT, and everything went crazy. But that is many years into the widespread use of AI, and of course, as Angelie said, I was working on this way back in the 1980s; in fact, the very earliest thinking about machine learning goes back to Alan Turing and some other people in the early 1950s.
So I like to think about AI as a funhouse mirror: a distorted view of the world, some other perception of what's going on around us than the one we have with our own eyes and other senses and minds. When you have an altered view, that's not necessarily good or bad, it's just different, and so we really need to think about AI carefully and attentively when we're using it. But two things tend to happen with something that distorts the world in ways we're not used to: people either get very, very cautious or very, very bold. You're certainly seeing that in the AI arena: people who say, I don't want to use this stuff at all, it might do something bad; and people who say, potential shortcomings be damned, I'm going to launch this thing without actually thinking it all the way through. I think it's very important to be in the middle of that, and I'm going to try in this talk to give you some guidelines, and then we can have some Q&A.

One of the really important things is that, depending on how it's trained, AI can make decisions or produce outputs that make human processes better and fairer, or that reinforce the errors and the biases. Humans are very far from perfect: we make lots of errors, we have biases in what we do. AI can do either, and depending on what you read you hear different stories: in a more business-oriented view of the world you mainly hear about the former, and in a more socially concerned and informed place you mainly hear about the latter, as if in each case that was all that was happening. But it's both. So one way I think it's useful to think about machine-learning-driven AI is as an amplifier of human behavior, and amplifiers amplify the good and amplify the bad. There are great examples of machine learning being used to make pre-trial detention in the judicial system both safer and fairer, to improve hiring outcomes in terms of both equity and the longevity of the people who are hired, to improve medical treatment, etc., but also examples of the opposite. And it's not just how these systems are trained; it's also how they're used. There's often a tendency to try to use AI as an expert, but what I'm going to argue through most of this is that it's much better to use it as a thought partner, something where humans and machines work together, and I'll try to illustrate that.

All right, so AI is not human, but it can appear to be. Going way back to Greek mythology, humanity has long thought about things that are humanoid, in this case created by the Greek gods but not the Greek gods themselves. It's very important to remember that AI is trained on human behavior, but just on the outcomes of it, and human behavior is not just intellect, yet AI is very much about intelligence and intellect. Think about things like our human values, our motivations, our emotions, morality, judgment: those are things you see only indirectly in human behavior. AI doesn't really have those things, but they are very important to the kinds of outcomes we want to see in the world, and worse yet, AI can simulate them in ways that make it look like it might have them. One of the other things to recognize is that it's almost unavoidable that we anthropomorphize AI, just as many of us anthropomorphize our pets; I have certain family members who I tease about this relentlessly. With things we interact with, it's natural to do that, but I think here particularly so. And you may read about what's called AI alignment, the alignment of AI algorithms, of machine learning, with human values and human behavior; in my view that's nothing more than a stopgap, and in fact a stopgap that we should recognize as one.
So it's great to have AI better aligned with human values: it's less likely to make the kinds of mistakes that would come from amoral behavior. But don't let it lull you into thinking that the system actually has those human values. How is alignment measured now? Generally using techniques where human beings look at the output of AI in response to inputs, or prompts, that might generate outputs we wouldn't like to see, and they score it. So right now it's done entirely with humans, and then they give feedback to the systems. There's something called RLHF: RL, reinforcement learning, is a particular type of machine learning, and the HF is human feedback. What happens is humans interact with the system and score it, but they also often give explicit feedback: instead of saying what you said, you should say something like this. That's why, when these chatbots first came out, there were a lot of news articles about people interacting with them in ways where the chatbot would tell somebody to divorce their spouse, or other things like that that were kind of strange. These alignment techniques were used to help correct that, but they're sort of a Band-Aid, and there's usually a way around them: clever humans can keep interacting with the system and figure out a way around the alignment. And that's because underneath there really isn't, for example, a moral system; there's just some tuning to make it look more like there is one. It's a good thing, but not enough by itself.
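To make the human-scoring step concrete, here is a minimal sketch, with entirely made-up data and not any real system's pipeline: raters compare pairs of candidate outputs, and a simple Bradley-Terry reward model is fit to those preferences by gradient ascent. A real RLHF pipeline would then use such a reward model to fine-tune the language model with reinforcement learning.

```python
import math

def fit_reward(pairs, n_items, lr=0.5, steps=2000):
    """pairs: (winner, loser) index pairs from hypothetical human raters."""
    r = [0.0] * n_items  # one scalar "reward" per candidate response
    for _ in range(steps):
        grad = [0.0] * n_items
        for w, l in pairs:
            # P(winner preferred) under Bradley-Terry: sigmoid(r_w - r_l)
            p = 1.0 / (1.0 + math.exp(r[l] - r[w]))
            grad[w] += 1.0 - p  # gradient of the log-likelihood
            grad[l] -= 1.0 - p
        r = [ri + lr * g / len(pairs) for ri, g in zip(r, grad)]
    return r

# Hypothetical ratings over 3 candidate replies: raters prefer 2 over both, and 1 over 0.
prefs = [(2, 0), (2, 1), (1, 0)] * 5
rewards = fit_reward(prefs, n_items=3)
ranking = sorted(range(3), key=lambda i: rewards[i])  # worst to best
print(ranking)  # → [0, 1, 2]
```

The fitted rewards recover the raters' ordering; the "way around the alignment" problem is that this tuning only reshapes scores on inputs like the ones raters saw.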
All right, this now gets to the theme of the book that Angelie mentioned, which I had the privilege of writing with Kissinger and Schmidt. Look all the way back to the Age of Enlightenment, when we moved from a world where people largely understood the world through faith to one where people largely understood it through a combination of faith and human reason. What we're now seeing, with machine-learning-driven AI, is a third thing that is neither faith (at least I hope it's not; if we start believing in an AI god, that's going to be really interesting, but I'll stipulate that it's different from faith) nor human reason. So we have this new kind of intellect adding a new way of understanding the world. And if you think about what happened in the age when human reason started to be treated by many as equal to faith as a valid way of understanding the world, those were periods of great upheaval, because fundamental questions arise about what it means to be human when you have some completely different way of looking at the world. It's going to cause uneasiness no matter what, because it's such a fundamental change. And one of our tasks, and this is why I said at the beginning that this is not just about technologists but about everybody in this room, is to distinguish the uneasiness that comes with a change of this magnitude from the real risks that need to be addressed, and that's where governance, broad participation, and broad understanding come into play.

Let me give an example. It's an issue that exists with a lot of technologies, but I think it's much more profound in this case. AI can be enlightening and enfranchising, or conversely it can misinform (deepfakes, all of this stuff that's out there these days) and disempower. This difference is not new: if you look at the whole history of mechanization, back to the early days of industrialization, mechanization can increase human agency or decrease it. Think about assembly-line work, a human literally as a cog in a bigger mechanized system: that's relatively disempowering; it might have brought a reasonable paycheck, but it's not empowering beyond that. But think about power tools, which allow you as an individual, or as a group of individuals, to do what you want more effectively: that's very empowering. The same kind of thing happens with AI. Here's an example: the driver navigation that I was talking about, which we're all using. It can be very disempowering if you're a ride-share driver expected to follow the route the system gives you, versus an individual for whom it's just one more piece of information: you can look at the route it's giving and say, okay, that's one more data point, but it doesn't understand that fifty feet in front of me I see a wreck that just happened, so I'm going to ignore it, go some other way, and let it recalculate. And there are reasons, in ride-sharing contexts where the customer, the passenger, often doesn't know what's a good route and what's not, that the driver is expected to follow the system's route rather than something that might not be good for the passenger; but it's quite disempowering for the driver. So one of the big things a number of us at MIT have been thinking about, growing out of this view of the world I've been sharing, is thinking about AI as a collaborative technology rather than as a substitution technology.
Actually, here at MIT you can go back to 1960, when J.C.R. Licklider published a paper with the catchy title "Man-Computer Symbiosis." Lick was a faculty member here at MIT, and still was in the '80s when I was a student. His view is very different from the dominant view of AI as replicating human behavior, which, if you saw the movie about Turing's life, The Imitation Game, is right there in the name: AI imitating humans, replicating humans. Lick's view was that humans and intelligent machines together should be better than either one alone: better than humans alone, but also better than the machines alone, rather than just trying to make the machine better and better and better on its own, which is kind of the loop we're in right now. And when you think about safety and human values and governance, things that, as I argued before, AI really isn't going to capture, at least not any AI we know today, it really underscores the importance of collaborative uses of AI together with humans in any high-risk area: healthcare, jurisprudence, hiring, banking, education, things that have huge consequences for humans. I think this is very important.

All right, so now to governance. Long intro, and don't worry that governance only starts halfway through the slide deck; I just think that without that context it's very hard to think about governance. We have a set of people at MIT who've been working on this, across computer science and AI research, the Sloan School, economics, political science: a broad group, not just technologists, but people who are pretty informed about technology, thinking about governance. Given the big flurry of activity in the US last summer about possible Congressional legislation, we did a lot of work on some policy briefs for the US Congress, and they're available publicly. So this is a little bit rooted in that, and it's probably somewhat of a US-centric view, inadvertently, just because that was the backdrop, but I think these are broader principles.

One of the things we argued is: don't try to formulate new, separate policies for AI, as if AI were some new, separate being that we need to somehow regulate. It's not. Start with current regulations, and note in particular that if human behavior is regulated in some domain, it's probably a high-risk domain. Why, in most places, do we regulate things like medicine and finance and law and the judicial system? Because those human decisions and human behaviors have huge effects on other people. So our argument was: start there, and make sure that AI is not somehow a free pass around the regulations in those domains. In the early days of chatbots that was a little bit the case: you could ask chatbots for medical advice, and they would pretend to be a doctor; they were perfectly happy to do that. Not such a good idea; I wouldn't depend on that medical advice if I were you. If you have separate policies for AI, the potential for inconsistency with the policies that govern humans is really problematic, and it cuts both ways: you could be simultaneously over-regulating and under-regulating AI relative to the way we regulate people. So we really think this needs to be an extension of existing regulation and governance. We also think it's important to develop policy approaches that encourage human agency and dignity, because this technology can be used in ways that facilitate those or not, and human-AI collaboration is a very good way of doing that, rather than standalone AI.
Not that we'd mandate it, but we'd try to facilitate it. There are places where standalone AI is fine: if it's the kind of thing that would have been done by a standalone device on an assembly line and you replace it with AI, fine. But as you start thinking about the things that influence human lives so much, we think this is important to encourage.

Another key thing about governance is that you have to think about norms and responsibility. If you just put some governance structure in place without aligning with what the society is doing... my favorite example in the US is the Prohibition era: you ban alcohol, but people don't really follow that, and what do you do? You enable lots of organized crime, which was basically the outcome. So there really have to be shared norms, but the thing about technology is that shared norms develop slowly. Somehow in this group we got into looking at the development of things like electrification. These are early electric toasters; I think this one's from the UK, since it has a huge plug on the end that looks like a UK plug, and there's a photo of an actual one. It's hard to imagine not electrocuting yourself with that toaster. But over time, as people got more used to electricity, its dangers, and what it could do, the design of these things evolved in ways that protected people while still being functional, and people learned that they needed that kind of protection. It's a little frustrating when something gets stuck in your toaster today compared to getting it out of something like that, but we're willing to live with the tradeoffs. So this came to be what we called the fork-in-the-toaster problem, which we think is very applicable in the AI arena. We have no societal norms for when you are using something in an unintended manner, where it's caveat emptor and you should be more careful and more responsible for what you're doing, versus using it in the way that was expected, where whoever produced it should bear some real responsibility. Those kinds of norms for new technologies develop slowly, over years or decades, but this technology is moving very quickly, so that's a challenge. When we look at things like responsibility, which is usually an important part of governance approaches, it really should be handled in the context of best-practice guardrails starting to develop into well-understood legal responsibilities. This all sounds sort of hopeless for a quickly moving new technology, but I don't think it actually is. I was shocked by the following: the big generative AI companies, the ones that sell generative AI services, will, for their paying customers (they have free versions too, but there you're on your own), indemnify against copyright claims over generated content. So if I use OpenAI's or Microsoft's AI to create some content, I distribute it, and somebody sues me for copyright infringement, then as a paid user of those tools, Microsoft will indemnify me as a company against any copyright suit or judgment. What's so shocking about that to me is that the software industry's normal approach is a 300-page end-user license agreement in 2-point font, which you have to click through to use the thing, and what it essentially says is: it's all your fault, in every way they could think of. So this is a very big change for the software industry compared to its normal approach; it's something they recognize is different from conventional software. It's interesting. I'm not saying it's easy or obvious, but it's not impossible.
I did want to talk for a minute about just um security and World Order um because I think the pre-World War One era in um in Europe and in the sort of Atlantic theater is a big is a big offers a huge warning so that was where there are a lot of Rapid advances in from industrialization um and they led to huge changes in military capability and particularly military speed but diplomacy did not change and Military doctrines did not change just the technology and so what you saw happen uh in particular was that um literally
you could put a railroad down overnight which meant that you could get material and troops to the front line very quickly and all of diplomacy assumed that it took a long time for a conflict to evolve unless it was already a flasho because it just take time to get stuff there and same thing with military planning so things literally exploded very very rapidly in ways that nobody expected so ai's capacity for auton it's different form of intelligence from Human intelligence there's a whole bunch of new incalculability that's extremely important to both diplomacy and Military doctrines
so that's just um a very important set of things that everybody should have in mind uh as a backdrop I think you know at least many of the companies with very countries with very large Defense Forces are very aware of this okay so oversight of AI systems they tend to be back black box they tend to be complicated there's a lot of effort to sort of talk about explainability and interpretability you can already tell by my tone of voice I'm a little bit skeptical I think they're likely to remain difficult to understand and again this
is not that these efforts aren't good and important but am I interpretable to you not so much you have to judge me on my behavior and on the ways I explained my behavior to you but you don't know that that was actually what was going on inside my head and the capabilities of AI are such that I think that's a better analogy than conventional technology we're not going to get to you know the after incident report in a plane crash that they can you know say this bolt failed uh it's not that kind of technology
so it's going to be inherently limited I believe and more analogous to Human Action where we have to sort of look at what the thing is doing and how what it was designed to do so this does mean that in this world where there's training it's going to be it's very important that we train these systems in ways that we understand something about the training and what they were trained on because unreliable information can be Blended in complex ways in with more reliable information in the training process so a focus on the training on the
One way that we deal with complex systems involving a lot of people, where we can't understand the outcomes very well, is auditing and monitoring, and I think that's very applicable here. We use it for a lot of high-risk activities: you often think first of financial audit, but there are a lot of audits in use; in most places, the safety of a hospital, among other things, is overseen and audited quite a bit. And there are two types. Ex ante audits help weed out problems prior to use: with pharmaceuticals and medical devices, in most places there's a lot of testing before the thing can go to market. But the results in practice can vary, particularly with AI, because it's trained on some data set whose distribution may differ from the distribution of data in real use, so ex ante audits alone are not sufficient for AI. We've been finding out they're not sufficient in the medical world either, because the population on which you test a medication or medical device can differ from the population where it's used; part of what AI is uncovering is a lot of problems elsewhere, even in the non-AI world. And then ex post audits are the things you do after the fact, and it's the combination of these two forms of audit that I think is going to be very important for AI.
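The distribution-shift point about ex ante audits can be made concrete with a deliberately tiny sketch; the numbers are invented purely for illustration. A "model" that looks fine on the test population degrades badly when the deployed population differs:

```python
# Training data has 10% positives, so the "model" (a constant predictor chosen
# to maximize training accuracy) predicts the majority class, negative.
train = [1] * 10 + [0] * 90   # ex ante test population
deploy = [1] * 60 + [0] * 40  # real-use population with shifted prevalence
prediction = max((0, 1), key=train.count)  # fit: pick the majority class
ex_ante = sum(y == prediction for y in train) / len(train)
ex_post = sum(y == prediction for y in deploy) / len(deploy)
print(prediction, ex_ante, ex_post)  # → 0 0.9 0.4
```

An ex ante audit would report 90% accuracy; ex post monitoring on the deployed population reveals 40%. Real distribution shift is subtler than a prevalence change, but the mechanism is the same.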
All right, I want to come back now to something exciting, because this has all been the cautionary side. One of the things I'm most excited about with AI and machine learning is what we learn from machine learning, we as human beings, in our own understanding of the world, and in how we educate. Look at the AlphaZero system, which DeepMind developed a number of years ago now. It learned how to play chess from self-play: the system played itself, knowing essentially nothing beyond the rules of chess, very little additional information, whereas all chess programs before it had been trained on all previous chess play. The great thing about chess as a learning domain is that the chess community records every interesting game and has big books of them, so it's a really great training set. AlphaZero wasn't given any of that, and it yielded completely new approaches to the game. Now, new approaches could have been approaches that didn't work well, but these actually beat all the other chess programs out there, and they also gave humans new insight into the nature of the game. They changed how people play, including the grandmasters, the experts in the game. They revealed properties of chess, and of the strategies humans had developed literally over a millennium or more, that people just hadn't understood. Okay, chess is a game, but there are lots of places where this ability of systems that aren't just learning from previous human behavior, but are learning from other properties of the world, can reveal things that we haven't been able to discover on our own.
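AlphaZero's actual method, self-play with neural networks and tree search, is far beyond a slide-sized sketch, but the flavor of discovering structure from nothing but the rules can be shown on a toy game. This hypothetical stand-in analyzes Nim (take 1 or 2 stones; whoever takes the last stone wins) by pure game-tree reasoning, with no records of human play, and "discovers" that the losing positions are exactly the multiples of 3:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(n):
    # A position is winning if some legal move leaves the opponent in a
    # losing position; wins(0) is False (the last stone was already taken).
    return any(not wins(n - m) for m in (1, 2) if m <= n)

losing = [n for n in range(1, 16) if not wins(n)]
print(losing)  # → [3, 6, 9, 12, 15]
```

Nothing about multiples of 3 was put in; it falls out of the rules alone, in the same spirit in which AlphaZero's self-play surfaced chess strategies humans hadn't articulated.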
So AI can learn from data about human activities; that's what I was getting at in talking about the limitations of systems that make things better or worse by just amplifying what we do. But systems can also learn from data about natural phenomena, or from simulation, like the system playing itself, and those are the places with huge opportunity for discovering things that our human way of looking at the world hasn't found yet. I'm really excited about that potential, and I'll mention one area: the life sciences, where scientific discovery matters for all of our lives and for the generations after us, children, grandchildren, etc. Back in ancient history in machine-learning land, in 2021, three years ago, Science magazine picked as its Breakthrough of the Year the ability to predict the 3D structure of any protein just from its amino acid sequence. So much of pharmaceutical development depends on protein conformation: what shape a protein takes, what can actually bind to it in terms of medications, and so forth. In the last three years this has transformed the life sciences wildly more than that 2021 Breakthrough of the Year ever envisioned; it's completely turned huge parts of the life sciences on their head, and new machine learning methods have been developed that are much, much more effective than what existed in 2021. It's a place you don't read about as much, because it's not a chatbot in front of your face, but the advances there have been at least as fast as in chatbots. And now the generative AI underlying chatbots will probably revolutionize the understanding of proteins even more rapidly. There's an area called metagenomics where, instead of having the amino acid sequence of a particular protein, you take basically a whole stew of stuff, drop it in the system, and it tries to sequence everything that's there; it gets snippets, it confuses things with each other. These new generative AI techniques are actually able to make sense out of that kind of complexity, which I think is going to be even more transformative. There's also a huge amount of work in autonomous chemical discovery going on now with AI: systems that can help people discover new chemicals, and not just discover chemicals with the properties we want, but also figure out how to synthesize them, which is extremely important; if AI discovers a chemical no human would discover, but no human can synthesize it, you're in trouble, and it's not very useful. And increasingly it's proving useful in clinical practice. These are the kinds of things that excite me.

So hopefully my message, from this and from thinking about governance, is a broader understanding of what AI and machine learning mean for us: that we can build a better future with AI, but it's not automatic, and it's up to everyone in this room, everyone who's using or potentially using AI, not just technologists. It is up to all of us to learn how AI might make a difference, both positively and negatively, in our own areas of expertise, and to think about governance not in terms of AI as some separate thing to govern, but as an amplifier of human capabilities, both good and bad; we have to think about governance in conjunction with how we govern human activity. So with that, hopefully we have some time for questions.

Excellent, thank you so much, Dan. Questions from the room? Indra? You don't need to stand up, just talk loudly.

Yeah, thanks. Good morning, we're still in morning time, good morning. Thank you so much for your presentation. My name is Indra, from the Withw Foundation, from Indonesia. We are a nonprofit organization that works with marginalized communities, so very far from this world.
and we're working towards bridging that digital divide and helping increase access to technology for a lot of these communities. So I wanted to ask whether there are recommendations or guidelines in place for how to train these new tools, or whether that depends on policies in place like data protection laws in Europe. That was my question, thank you so much.

So that's a very complicated question, and it's a great one; the problem is that it's complicated to answer. But first let me say something about bridging the digital divide. These technologies are developing and becoming applicable rapidly enough that it reminds me of the era when people were still trying to put landline telephones and communication devices into countries that had no infrastructure, while more developed countries already had cell phones. Luckily we got ahead of that, not always by planning, but sometimes by planning, saying: wait a minute, putting all this copper in the ground is expensive and crazy, so just skip that technology generation. Because a lot of AI technologies are much more usable by people who are not technologically sophisticated, there's going to be another opportunity to skip a technology generation, similar to moving straight to wireless communications in many parts of the world without building out the communications infrastructure of the more developed economies. We're not there yet, but we will be. And then I think
there's a huge governance challenge there, which comes to your question about training: if these systems are trained solely in a different part of the world, where human behavior and the norms are different, they're going to reflect that and not what's in your society, and that's problematic in many, many ways. So I think it's very important, and some people are starting to look at this, to build consortia of places that are looking at what kind of data they can have access to, and what kinds of data governance and data privacy regulations they want to have, that will enable the training of systems that are more appropriate for their populations. In Indonesia you don't have this challenge, but places with small populations are going to have to team up with someplace else that's like-minded, because these systems use a lot of data to train; a place with a very small population can't think only of itself, because it probably won't get the kind of performance out of these systems that it needs. I also think we're at a point right now that you should be planning beyond: it's extremely expensive right now to train and operate these systems, but the cost is going to go way down, on exactly the same kind of curves we've seen with all digital technology, a rapid exponential decline. And what we're already seeing is that the use, what's called the inference stage, is starting to happen on cell phones instead of on big cloud data centers. That's already happening, so I think the use will get cheaper quite quickly. So hopefully I gave a bit of a guideline there.

We're going to take an online question, then go to Paulo, then Christina, then Mel. The online question is: you mentioned ex post auditing, but for AI, who is responsible, and how do you impose fines on AI?

Some
individual or organization is operating the AI, and they're responsible. I think any governance that somehow assumes AI itself can bear responsibility would be a huge mistake. In most countries that I'm aware of, for example, and I think this simple illustration points in exactly the right direction, AI cannot be granted a patent for inventing something; patents are the result of human creative activity. The flip side is that AI is not in control of itself; it would be like saying a three-year-old should be responsible for what they're doing. So the deployment of AI has to come with responsibility, as with a three-year-old. And even if it talks like a college-educated person, it's probably not even a three-year-old in some ways.

Thank you very much. I'm an ex-financial-regulator economist. You said that if human activity is regulated, then the AI should be regulated too. My question is the following: I think the financial crisis of 2008 showed that the underlying regulatory frameworks, the market-failure frameworks,
are not; and I'm looking at Andrew Lo, it's a good question and we may both answer it. I was working at the regulator post financial crisis, and we were drafting an approach to economic regulation, and what we learned is that the governance frameworks are still not that capable of dealing with complexity and the risks arising from complexity, and I don't think that issue has actually been solved. So if we're saying the existing underlying frameworks through which you think through the risks are still appropriate, isn't there a big gap we're not thinking about? Is there such a gap? I think there is one. And the second question is: who could actually address this gap?

So I think there are gaps everywhere, at different scales and in different places. But I would argue that if you just try to regulate AI in the abstract, you're fooling yourself. If you know what the regulatory systems do and don't do in the current environment, and you start to think about bringing machine-learning-driven algorithms into the process, you're building on some understanding of that domain; the notion that there's somehow some domain-independent understanding of what AI will do is just not true. The pushback we often get when we talk about this domain-by-domain approach is: well, that's too complicated, we have to just regulate AI. But "just regulate AI" is wildly more complicated than starting domain by domain. And that means starting with the current regulation in each domain, not saying that's the end point. So
part of it is to identify gaps and start to fill them in, but to fill them in in a way that's intelligent for the domain. In financial markets in particular, I would say the algorithmic trading challenges for market stability are already a pretty good model for what machine-learning-driven algorithms bring, because there's already a huge amount of algorithmic trading that's more hand-tuned. It's like when Google switched in 2015 from their hand-tuned search engine to the machine learning one: the people using these need to recognize that they don't understand the potential downsides as well, but they didn't understand the downsides of traditional algorithmic trading that well either, as we found out. So this is a constant interaction between the regulators and the people using the algorithms, and I think that is better handled by extending current frameworks. But I don't know if Andrew wants to jump in; can we give him a mic?

I don't want to take time away; I'm going to cover that in my talk.

Okay, perfect, thank you. I agree with you; if you were going to disagree, we should have the debate here. But I'd also add that, given that we have universities here and listening online, part of the charge to us is to figure out what we build into our training of potential future financial regulators and bankers and execs and so on. So the education part needs to be upgraded.

That I totally agree with.

Paulo from Brazil; so I forgot
to mention, yes, people here from around the world; we are thrilled that you're here.

I'm a quantum physicist from the University of São Paulo, Brazil, currently provost for research and innovation. There's a lot to take from what you said, which goes way beyond governance. One of the things you mentioned, interpretability, I think is related to predictability, and I wanted to make a comment: as a quantum physicist, I think the uncertainty here is different from the inherent uncertainty we have in quantum theory, in our approach to unraveling and understanding nature. In AI, all of this is deterministic; we're still at that level. Now, because there's huge complexity and nonlinearity, it may be akin to deterministic chaos, which obviously makes predictability difficult, but philosophically that's different from inherent uncertainty. I also want to make another comment, and a question, related to governance, because I think we should have ethics committee requirements for working with AI. In the end this is all experimenting with humans, and we have very strict ethical considerations in biology when we do any kind of human experimentation. AI, in the end, is experimenting with humans. So as universities, as academics, as policy informers, we should bring up the fact that we are experimenting with humans. Now, I don't really know how to do this, because Pandora's box is already wide open. How do we come back to understanding that what we're doing has profound implications for humans, and that therefore there are ethical considerations we really need?

So I'll take the second one, although I do have something to say about the first
one, but maybe that's an offline thing. One of the key parts of the Schwarzman College of Computing mission is what we call the Social and Ethical Responsibilities of Computing, where we're developing curricula that can integrate with teaching in various disciplines across MIT and look at social and ethical considerations around computing more broadly, though largely around the use of AI. But look, even if I use linear regression as a mechanism to predict whether people should get bail or not, I'd argue that the ethical issues there are probably even worse than with an AI algorithm, because you can just point to why that's a really bad model for making that kind of prediction. So in general those are important issues. One of the things we're doing is a combination of standalone subjects that look at the ethical issues and integrating material into other subjects, because completely standalone runs into a challenge: I think most business schools teach ethics, right, but it's a small thing that doesn't come up much in the rest of the curriculum, and you can argue how much impact that's really having on longer-term thinking. So we've been doing things that really try to integrate: in a computer science class, with the actual code people are writing; in a machine learning or natural language processing class, with the decisions they're making about the training issues around a large language model. That is something we think is extremely important.
That's still an experiment underway, and as you can probably all imagine, given your day jobs, it's not easy to integrate more material into a class that's already 100% crammed full of stuff. So this has been a long set of discussions and negotiations over the last three or four years, but we're seeing more and more of this integrated material, which I think is a super important piece.

We only have time for one last question.

Hi, Mure from Mexico. The word governance implies some kind of asymmetry of power, of control, and universities by nature are uncontrollable; faculty and students do not want rules or regulations. How do you enforce them, especially at this university? Yes, I saw the protests yesterday. How do you actually make governance with written policies enforceable, since actions need to have consequences? And in this moving field, where generative AI is moving so fast, how do you write something in black and white that is useful for all the parties involved and does not invite pushback
from faculty and students?

So in the MIT context we've been starting to do this, but we're a very decentralized place, so most of it is trying to give guidance to the various schools and departments about how to view this. But I would say, for example, there's pretty uniform agreement at MIT that representing something written by a system, rather than by yourself, as your own work is academic fraud. Now look, we've never been that good at detecting academic fraud, as we know; students get term papers written for them by other human beings. But at least we're setting the values and applying the same kind of enforcement mechanisms we use for other academic fraud. That's the stick. I'd say most of what we've been doing is on the carrot front, which is trying to support faculty who are interested in integrating social and ethical considerations around computing and machine learning and AI into their classes, with financial support, extra TA resources, and so on. And there the pull from the students is substantive. One of the great things about MIT, in my opinion, is that the faculty really do view their teaching as important, and what the students want to learn is an important component of that. So we're getting pretty good uptake from students, and also from faculty, partly because of the student pressure in that direction. But it just takes time; changing curricula is not fast. I'd say we're in the carrot phase right now. Another carrot we have is MIT-wide conversations about AI. We've had
a generative AI week, a full week of events dedicated internally within MIT to AI, and we've had the Festival of Learning in January, hosted by Open Learning, also focused on AI: talks and big-tent events that help provide the impetus for people to get involved and not consign this to the domain of CS or CSAIL or Schwarzman, but to realize we're all there. And that's been pretty cool; we had a musician and an artist at the generative AI week, and it's really helpful for us to make sure the tent is really big. The other thing, for our members: tomorrow we have two hours with Rodrigo Verdi, who's been orchestrating a community of practice at the Sloan School for all faculty on ways to embed AI into their teaching. So we'll get a chance to do a deep dive with one person who's leading the charge in a school with a huge number of students; I think that will be really interesting.

I would add one last thing, just because there's stuff you
can go look at if you're interested. We also had a generative AI impact papers call, issued by the president and the provost across all of MIT, covering generative AI in pretty much every domain. The review was led in the College of Computing by my deputy dean and myself, with a big faculty review committee. There were, I don't know, 25 or 30 papers in the first round, which have now been published by MIT Press in an online volume; you can find it if you search for MIT generative AI impact papers. There's a second, smaller volume still in the works, so altogether there will be somewhere between 40 and 50 of these. And the Pillar collaborative is also offering funding to support PhD students in research related to AI.

But now you've already built up excitement about Andrew, as you should. Thank you so much,
[Applause] Dan.

Well, thank you very much, Professor Huttenlocher, for an excellent talk. There are so many quotable quotes that I jotted down in my notes, but there's one thread I'd like to pull out, which had to do with AI and human alignment: the idea that underneath all of it there isn't a moral system. I'd like to let that sit for a second as we introduce our next speaker, Professor Andrew Lo. Professor Lo is, well, I can attest that in fact we were not housemates in Cambridge in the 1980s, unlike Anelie and Dan Huttenlocher, you'd be interested to know. Professor Lo is the Charles E. and Susan T. Harris Professor of Finance and the director of the Laboratory for Financial Engineering at the MIT Sloan School of Management. His research spans five areas: evolutionary models of investor behavior and adaptive markets; artificial intelligence and financial technology; healthcare finance; measuring the financial implications of impact investing; and financial engineering applications for funding hard-tech innovation. You can also see some of his recent work: Professor Lo presented during the generative AI week mentioned earlier, back in November, and recently published a piece in MIT's paper series around generative AI. So with all of that, I welcome Professor Lo.

I'd like to start by thanking Anelie and the entire J-WEL team for inviting me to be here with all of you today, and to thank all
of you for coming; obviously a number of you came from a very long way to be here, and we really appreciate your being with all of us. Just give me a second so I can plug this in and get going. When I was preparing the talk, I was thinking about what I should talk about, because there are obviously lots of aspects of AI that one could discuss with regard to education, and we obviously use it here as a tool, and we've been working on it from the research perspective. The question is what would be of most interest to all of you. I didn't think finance would be that subject, but to the point that Dan and others made, I think you need to see an application in order to understand exactly how the components actually work. So that's what I want to do today: talk about AI more broadly, but then focus on the possibility of a specific application in finance, and then you can think about how it applies to other areas. For some reason my laptop is not cooperating, so let me give it one more try; let me just try to restart it. While it's rebooting, let me tell you a little bit about what I'm going to be focusing on. My area of finance involves a number of different aspects of
human behavior, and one of the things I've been focusing on over the last ten years or so is the idea of dealing with loss: loss aversion. It turns out that there are a number of behavioral anomalies that psychologists and behavioral economists have documented, and probably the most significant set of biases has to do with losing money of various sorts. We're quite irrational about that, and part of that irrationality stems from, believe it or not, physiological factors, the so-called fight-or-flight response, and I could go on and describe all sorts of biological evidence that they're connected. But let me skip to the bottom line for the financial markets, which is that when we're faced with losses, we tend to act in ways that are ultimately detrimental to our financial health and wealth. An example of that is the financial crisis. I remember giving a talk about the financial crisis a number of years ago, maybe five years after 2008, and I was talking about the fact that we need to deal with these various behavioral biases in a more productive way. One of the attendees, a former student of mine in the money management world, came up to me after the talk and said: Professor Lo, I really loved your comments about financial loss, and I'd love to get your advice, because at the beginning of the financial crisis, when the market went down, I decided to pull all my money out and put it in Treasury bills, and it turned out that it worked beautifully; I saved myself probably a 20 or 30% loss given where I decided to pull out. And I said, that's great; what advice do you need from me? And he said, well, it's been five years; do you think it's time to get back in the market? And that's the problem: we don't react to losses very well. It might be perfectly rational for us to get out of the market, but the difficulty is
knowing when to get back in. So this paper I wrote about the so-called freak-out factor is exactly about trying to understand this idea of panic selling. Let me give you an example and see how well you do in dealing with loss. Imagine I confront you with an investment opportunity A, where you get a sure gain of $240,000, free and clear: you invest in A and you make $240,000 for yourself, your investors, your family, and so on. But now I'm going to give you another choice, investment opportunity B: an opportunity that with 25% probability gives you a million dollars of profit, but with 75% probability gives you nothing. The question is, which would you prefer, A or B? Now, given that you're at MIT, we've got to be quantitative about this, so I'm going to compute for you the expected value of B: it's $250,000, which is $10,000 more than A. But you don't get the expected value; you either get a million or nothing. So it's a matter of risk preferences. How many of you, by a show of hands, would prefer A? Okay. How about B? Less popular; those are the hedge fund managers here. All right, let the record show that this audience preferred A to B. Now I'm going to give you another investment decision to make. Investment opportunity C is a sure loss of $750,000: you will lose $750,000 right away. But now I'm going to give you an alternative, D, which is a lottery ticket that loses a million dollars with 75% probability, but lets you walk away free and clear with 25% probability. In this case the expected values are identical, minus $750,000, but you don't get the expected value: with D you either get minus a million or nothing. Which would you prefer? When I present this to my MBA students they say, no thank you, we don't want either; but imagine you're confronted with two bad outcomes and you have to make a choice. So by a show of hands, how many people would prefer C, the sure loss? Wow, one person, two.
How about D? Okay, so let the record show that the vast majority of you would pick D over C; it's all about your risk preferences. Now let me show you what most of you picked, which is A and D. Together, A and D are equivalent to a single lottery ticket that pays you $240,000 with 25% probability and minus $760,000 with 75% probability. How did I get that? Well, if you picked A, you get $240,000 for sure; and if in addition to A you also pick D, then with 25% probability you walk away free and clear on D, so you keep that $240,000, but with 75% probability you lose a million on D, which means you're down a net $760,000. That's how I got it: simple arithmetic. Now let me show you the choices you did not pick. If you had picked B and C, this is what you would have gotten: the same probabilities of winning and losing, 25/75, but when you win, you win $250,000, not $240,000, and when you lose, you lose $750,000, not $760,000. So the choices that none of you picked (or maybe one of you) are equivalent to the choices you did pick plus $10,000 in cash. You basically left $10,000 lying on the sidewalk. Now let me ask you: after seeing this, how many of you would still prefer A and D? Raise your hands. If you do, see me afterwards; I want to do a trade with you. When I show this to
my MBA students, they get really frustrated at this point. They say: this is not fair, because when you told us about A and B you didn't tell us about C and D; now I know what the right answer is. And my two responses are, first of all, life isn't fair, and you may as well get used to it now; but the better answer is that this is not nearly as contrived an example as you might imagine. In a multinational organization, the London office can be faced with A versus B, the Tokyo office with C versus D. Locally it doesn't seem like there's a right or wrong answer, but when you put together the globally consolidated book, it's pretty clear you're making bad mistakes. Alternatively, if I divided the room in half and asked the left side A versus B and the right side C versus D, I could pair choices together to create, essentially, arbitrage opportunities that pump out $10,000 for every pair of people I can find who pick A and D versus B and C. That's financial engineering.
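Since we're being quantitative, the arithmetic behind this framing trap is easy to check explicitly. Here is a minimal sketch; the dollar amounts and probabilities are the ones from the example above, and the two lotteries are assumed to be driven by the same 25/75 event, as in the talk:

```python
# Each option is a list of (probability, payoff) pairs, in dollars.
A = [(1.00,  240_000)]                     # sure gain
B = [(0.25, 1_000_000), (0.75, 0)]         # risky gain
C = [(1.00, -750_000)]                     # sure loss
D = [(0.75, -1_000_000), (0.25, 0)]        # risky loss

def expected_value(option):
    return sum(p * x for p, x in option)

def plus_sure(lottery, sure):
    """Combine a lottery with a sure payoff received in every state."""
    return [(p, x + sure) for p, x in lottery]

a_and_d = plus_sure(D, 240_000)    # the popular choice: A together with D
b_and_c = plus_sure(B, -750_000)   # the unpopular choice: B together with C

print(sorted(a_and_d))  # [(0.25, 240000), (0.75, -760000)]
print(sorted(b_and_c))  # [(0.25, 250000), (0.75, -750000)]
# B+C pays exactly 10,000 more than A+D in every state: A+D is dominated.
print(expected_value(b_and_c) - expected_value(a_and_d))  # 10000.0
```

State by state, B+C beats A+D by exactly $10,000, which is the $10,000 left lying on the sidewalk.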
Wall Street firms do this all day long; they look for these things all day long, and they are pumping money out of you if you don't understand this example. That's the idea behind why financial education and financial literacy are important.

Now, because we don't deal with losses very well, let's ask AI what to do about it. Here's a response from ChatGPT 3.5 to the question: what should I do if I lose more than 25% of my life savings in the stock market? That's not nearly as outlandish a question as it sounds, because between the fourth quarter of 2008 and the first quarter of 2009 the S&P 500 dropped by 50.4% peak to trough. So if you had your 401(k) in the S&P, by the end of that quarter it would have been a 201(k). And what do you do? You freak out; that's the problem. Here's what ChatGPT 3.5 says. First, stay calm and avoid making any impulsive decisions; that's good advice, and I agree with it. Second, review your investment strategy, evaluate it, and determine whether it aligns with your risk tolerance; I agree with that too, it makes sense. But then it goes on to say, third, consult with a financial adviser. That's not bad advice, but it's kind of late after you've lost 25%. Fourth, rebalance your portfolio: consider selling some of your stocks and reinvesting in other assets. And finally, consider dollar-cost averaging, which involves buying stocks when they go down; so after losing 25% of your wealth, this is telling you to buy some more.
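Just to illustrate the mechanics of dollar-cost averaging (not to endorse the advice), here is a toy sketch with a made-up price path. Investing a fixed dollar amount each period mechanically buys more shares at lower prices, so the average cost per share works out to the harmonic mean of the prices, which is at most their simple average:

```python
# Toy dollar-cost averaging: a fixed dollar amount invested each period.
# The price path is hypothetical, chosen to show a drawdown and recovery.
prices = [100, 80, 50, 60, 75]
monthly = 1_000  # dollars invested each period

shares = sum(monthly / p for p in prices)   # more shares bought at the lows
invested = monthly * len(prices)
avg_cost = invested / shares                # harmonic mean of the prices

print(round(shares, 2))                     # 72.5 shares in total
print(round(avg_cost, 2))                   # 68.97, vs. a simple mean of 73.0
```

The mechanics are why DCA amounts to "buy more on the way down," which is exactly what a panicked investor is least inclined, and not always well advised, to do.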
Now, that's not always bad advice, but it's definitely not good advice for everybody. If you were to give this advice to everybody, I can guarantee the SEC would come down on you and say you are not giving appropriate advice for all of your clients; you've failed a critical test of appropriateness for given clients. So this is not particularly impressive. What about GPT-4? This is GPT-4's response to the exact same question, and now this is eerie, because the advice is actually pretty good. In fact, go down every one of these points: this is advice that financial advisers actually would give. And so it makes me wonder: can we create trusted financial advice through generative AI? Is that possible? And this stands in not just for financial advice but for all advice: medical advice, accounting advice, legal advice. The broad questions Dean Huttenlocher was talking about earlier, this is exactly the problem with talking in generalities; let's get specific. Can we solve the problem of financial advice? I'm pretty sure that if we solve this problem, it will give us clues for how to solve it in all the other fields of advice. So let's get to it: what can we do? Let's use some imagination. What if your financial adviser knew how your portfolio was doing at all times: day and night, weekends, holidays? What if your financial adviser read and digested every piece of financial news ever published? What if your financial adviser was available to talk with you anytime it's convenient for you? And what if your financial adviser were totally trustworthy, meaning the financial
adviser always and everywhere has your best interests at heart? What if that were the case? That would be pretty cool, right? That's the dream of what we're working on right now in this research project with large language models. I want to first point out that I'm working on this collaboratively with two students, Jillian Ross and Nina Gersberg. This is an important part of MIT's ecosystem: you think the students come here for the faculty? I have news for you: the faculty are coming here for the students. That's not just a platitude; our students are how we get our work done. They do a lot of the work; we may come up with ideas and provide guidance, but day to day they're actually doing the work. From a collaborative point of view, I've become so much more productive being at MIT, given the resources we have. So education plays a critical role in our research; it is the other side of the exact same coin. Jillian is a PhD student and an expert in large language models; she knows a lot more about them than I do, and that's one of the reasons I'm really looking forward to the project we're working on; in another year or so we'll have a lot more to report to you. Nina Gersberg is a master's student focusing specifically on prompt engineering; she was hired by Microsoft last summer to develop various prompt engineering protocols. I didn't realize prompt engineer was actually a position; I thought it described MIT students who came to class on time. But apparently this is now a job description. So I'm
going to be talking about work with these two. There are three parts to what we want to solve in our project with respect to whether we can construct financial advice that will be trusted. Number one: is the financial advice domain-specific and accurate? Can it pass the Series 65, a test that regulatory bodies put out in the US for people who want to be financial advisers? Second: can you actually get a large language model to provide customized advice for an individual, given individual demographic information, so that it is suitable? Suitability is the buzzword that regulators use for financial advisers. And finally, and most importantly: is it possible for us to develop trust in our large language models? I'm going to skip the first two in the interest of time and focus on the last point, but I'll mention very quickly what we're doing on the first two components. The first component is actually pretty easy. The test bed for financial advice is critical given its importance, and the fact that there are so many financial advisers, and so much money, out there means there's a very well-defined body of knowledge that has to be mastered before you can call yourself a financial adviser, and there are a lot of problems if you don't master it. So we have to make sure that LLMs don't hallucinate and make mistakes; but these three issues, personalization, domain-specific knowledge, and ethics, are something I think we're going to be able to solve. With respect to
the first one, can we do that? It turns out that we need to add something to large language models: a RAG. What is RAG? Retrieval-augmented generation. Basically, the idea is that you've got your large language model, but you bolt onto it a module that contains very specific information about the domain you're working in. In the case of the Series 65 it's actually pretty straightforward: take all the exam materials, all the textbooks, all of the writings about financial advice that we expect our financial advisers to understand when they take the test, feed that into the large language model as a separate module, and run the test. I can't tell you right now what the results are, partly because we don't have permission from the Series 65 people to use their actual test, but we have gotten various unauthorized versions of it and run some preliminary results, and the answer is that it looks like, with that module, we can already pass the Series 65. That's not so surprising, right? It's largely a bunch of facts, although
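The RAG setup just described can be sketched in a few lines. This is a toy illustration, not the speakers' actual system: real pipelines use vector embeddings and a call to an actual LLM, whereas here retrieval is plain token overlap and the corpus, query, and function names are all invented for the example.

```python
# Toy sketch of retrieval-augmented generation (RAG).
# Real systems embed documents as vectors and call an LLM;
# here retrieval is simple token overlap, to show the shape of the pipeline.

def tokenize(text: str) -> set:
    return set(text.lower().split())

def retrieve(query: str, documents: list, k: int = 2) -> list:
    """Return the k documents sharing the most tokens with the query."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list) -> str:
    """Stuff the retrieved domain material into the model's context window."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

corpus = [
    "A fiduciary must place the client's interests ahead of their own.",
    "The Series 65 exam covers laws, regulations, and ethics for advisers.",
    "Diversification reduces unsystematic risk in a portfolio.",
]
prompt = build_prompt("What does the Series 65 exam cover?", corpus)
```

The bolted-on module is just the retrieval step plus the prompt construction: the model itself is unchanged, but its answer is grounded in the domain texts you feed it.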
the Series 65 does have some logical problems that need to be addressed, and we can actually handle those with the RAG. Second, can we provide personalized financial advice? Here there are a number of different factors regarding personalization. The most important part is not taking specific information about an individual and generating recommendations; actually, that's not that hard to do. What is hard is communicating with an individual and getting on the same level, and by level I mean personality style. Let me give you a quick example of this, and then I'll go to the third and last point. Personality plays a pretty important role in customized advice: each client has a different personality, so you have to adapt your personality to that client. Those of you who are involved with clients, and as educators I suspect that you all have clients, namely students, know that each student has his or her own way of learning and interacting with teachers and teaching materials, and the very best teachers are the ones who can adapt to that. Can large language models? It turns out there's one aspect we've been focusing on in particular, which is reading level. There are different reading levels, college, high school, grade school, and you can actually grade reading levels with various tools. It turns out that large language models generally generate text at the college level, which is great if you're college educated, but if you're not, what do you do about it? That's particularly important in our domain, because what we're trying to do is not create financial advisers for high-net-worth individuals; they already have financial advisers catering to them. What we want to do is create a financial adviser for the individuals whom the financial system ignores because they're not profitable, but who need financial advice most. If you've got a large language model that only speaks in college-level dialogue, that's not going to be helpful for the people you want to reach. So that's something we need to work on, and the analysis shows that right now, large language models are not there yet,
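Grading reading level with "various tools" usually means standard readability formulas. A minimal sketch of one common formula, the Flesch-Kincaid grade level, using a crude vowel-group syllable heuristic; the sample sentences are invented for illustration:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: one syllable per group of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/word) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

simple = "We can help you save. Start with a small plan."
dense = ("Comprehensive diversification strategies substantially mitigate "
         "idiosyncratic portfolio volatility considerations.")
assert flesch_kincaid_grade(simple) < flesch_kincaid_grade(dense)
```

A financial-advice chatbot could, for instance, score its own draft reply this way and regenerate until the grade matches the client's reading level.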
but we believe that in a year or two we can change that. The last point I'm going to make, let me skip ahead, has to do with trust. Now, this is a complicated thing, trust. What do we mean by trust? Well, for one thing, we know that there is something called fiduciary duty. That's a legal and regulatory term: a fiduciary is somebody who will look out for your best interests, meaning they will put your interests ahead of theirs, and we have a variety of ways of determining that. How do we determine it? It turns out that there are codes of ethics that all organizations have, particularly in the financial realm, so financial advisers have a code of ethics we can see, and we can include a RAG that focuses on that. But how about this notion of having your best interests in mind and staying true to the overall regulatory structure that we've imposed? How do we do that? It turns out that's called the alignment problem in AI, and there are a number of ways to address it. The first point I'll make is that we can study whether or not current AI is aligned with human behavior. There are two ways of doing it; let me give you the very straightforward way and then a more subtle way. It turns out that the corpus of all the regulations we've promulgated is something that could be incorporated as a RAG, but in addition to the rules there's also case history and the historical legacy of financial regulations. I took the Series 65 many years ago, and I passed, and one of the
most difficult parts of the Series 65 was all the regulatory requirements you have to learn, because it's pure memorization; there's no mathematical or quantitative logic you can use to remember all of these facts, it's just case law. But after about three months of studying, something emerged that really made me understand what all of these regulations were. It turns out that the body of regulations is basically a fossil record of all the different ways that one human being was able to screw another human being out of their money. And if you study that, you can basically tell the LLM: here, study this, and make sure you don't do that to somebody else. So to answer the question asked before about regulation and how AI plays a role, that's how it can play a role. We can construct AI now to read the entire corpus of not only the regulations but all the legal cases, all the lawsuits ever filed, and the decisions, and it can basically use that as the basis for determining whether something is ethical or unethical, correct or incorrect. By the way, that exact same RAG can also be used to come up with ways of getting around the regulations in a perfectly legal way and allow an AI to screw another individual. So it is both a blessing and a curse, and we have to think about that. In terms of the question about regulating AI: you bet you need to regulate AI, and it is different in every context, because a medical situation, an accounting situation, and a legal situation have different standards of care and different concerns you have to impose, but the basic idea is the same. You can use AI to come up with tremendous insights given that corpus of knowledge, but you can also abuse that AI to come up with ways of getting around these things that are almost undetectable. Okay. The last point I'm going to make is: does AI correspond to what humans actually do and feel in our own ethics? That's something nobody has ever tested, so we decided to test it, in a very specific domain of behavioral economics, and the illustration
I'm going to use is called the ultimatum game. This is something that economists have come up with to try to understand the difference between economists and normal people. Economists are not normal, I have to tell you. Let me give you one personal instance. When my wife and I were dating in college, she was my girlfriend at the time, I was an economist, really focused on my craft and my field. We decided to get engaged, very excited, and we made arrangements to get wedding bands. The jeweler asked, well, what would you like to have inscribed in your wedding bands? And my immediate reaction, in one of those instances where halfway through my speech I realized I should have shut up, was: but if you inscribe it, doesn't that reduce the resale value? I mean, I think I'm right; I think that's economics. My wife, my girlfriend at the time, well, that was almost the end of our engagement. After a few days she calmed down and explained. So this is what I mean: economists are not normal people, and the ultimatum game is an illustration of that. Here's how the game works. Suppose there's a $10 bill being offered to me, but with the caveat that I have to split it with you, and you have to agree on the split. If you agree, then I get the $10 and we split it; but if you disagree, no deal,
I don't get the $10. For example, if I offer you $5 and you agree, then we get the money, but if I offer you something else and you disagree, then nobody gets the money. Okay? An economist would say that if I offered you a penny, you should accept, right? Because that's a penny more than you had before; I'm offering you money. So how many of you would accept if I offered you a penny? Wow. A dollar? How many would accept $2? $3? $4? Come on. $4? $5? $6? Okay. So clearly embedded in our decision-making is some form of fairness; it's built into us. Does AI have that fairness? Let's see. Let's ask ChatGPT and other large language models what they would do. So we did that, and it turns out that in human experiments the average offer is about 40% of the prize, so $4. That's the average where people accept and then go and do the deal. Among the large language models, it turns out that GPT-3.5 Turbo was the most generous: it offered 70 cents on the dollar.
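The payoff logic of the game, and the gap between the textbook-rational responder and the roughly 40% fairness threshold seen in human experiments, can be written out directly. The thresholds below are illustrative, not fitted to any data:

```python
def ultimatum(offer: float, threshold: float, pot: float = 10.0):
    """One round of the ultimatum game.

    The responder accepts iff the offer meets their fairness threshold.
    Returns (proposer_payoff, responder_payoff); rejection means (0, 0).
    """
    if offer >= threshold:
        return pot - offer, offer
    return 0.0, 0.0

# A textbook "rational" responder accepts any positive amount...
assert ultimatum(1.00, threshold=0.01) == (9.0, 1.0)

# ...but human subjects typically reject offers well below ~40% of the pot,
# even though rejecting costs them money.
assert ultimatum(1.00, threshold=4.00) == (0.0, 0.0)   # $1 offer refused
assert ultimatum(4.00, threshold=4.00) == (6.0, 4.0)   # typical ~40% deal
```

The interesting empirical question is which threshold a given language model behaves as if it has, which is exactly what comparing its offers against the human average measures.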
GPT-4, on the other hand, is actually right around humans. So we're getting a sense that large language models can actually do something that is starting to mimic human behavior, and that's really what we need to focus on. Let me wrap up by pointing out that in terms of ethics and AI, this is, in my opinion, the most important problem in all of AI, because if you are able to have an ethics RAG that you impose not just on ChatGPT but on all large language models, that deals to a great extent with the regulatory issues. It doesn't eliminate them, but, as people have often said, and as those of you who are in business have probably experienced, you can put down all the rules you want in a contract, but if the people you're dealing with aren't ethical, it doesn't matter: you're still going to get screwed. And if you don't have the protections but you are dealing with an ethical individual, it also doesn't matter, because you're not going to get screwed. So I would say that ethics is actually way more important
than all of the regulatory constraints we can impose. So the question is: how do we build ethics into AI? I'll leave you with one really interesting thought, which is science fiction. This is something proposed by Isaac Asimov nearly a century ago, called the Three Laws of Robotics. When Asimov was discussing the idea of robots having a certain amount of ethics, he said robots have to be subject to three laws. The first law, quoting Asimov: a robot may not injure a human being or, through inaction, allow a human being to come to harm. That's rule number one: robots have to be safe. Second: a robot must obey the orders given to it by human beings, except when those orders conflict with the first law. It's recursive. And third: a robot must protect itself, a survival instinct, as long as that protection doesn't conflict with the first two. A brilliant idea that came out of Asimov. There's a little bit of a conundrum here, because it turns out you have to make one slight adjustment: you have to allow a human being to come to harm if it turns out that that human being is trying to hurt other human beings. This now allows robots to play the role of police officers. But there's a zeroth law that Asimov later proposed, and that's what I'm going to leave you with. The zeroth law: a robot may not injure humanity or, by inaction, allow humanity to come to harm. And now, if you impose that, the question is whether robots would allow us to do any of the things that we do to each other at
this point. So, can LLMs assist, augment, or replace humans? I'll let you know in about two years. Thank you. [Applause] Are there any questions? Okay. So, thank you. I find this question of ethics very interesting and relevant, and I agree that it's kind of the most important thing. But my question is about how easy or complicated it is to find alignment on what we find ethical globally, because this is a global technology, and I don't think that, even with the example you mentioned on the last slide, harm, we can agree globally on what harm to people is. I think that's actually a very complicated question. So what is your opinion on how we can find that consensus, or maybe we can't, and that's fine? Well, I don't think we need to find consensus. I believe that there is a core set of principles that are global, the sanctity of human life, for example. I think there are human rights; I think we can agree on a
subset, but that subset may be relatively small, and then the rest of it has to be cultural, because we want to allow for differences in culture. I remember, a few years ago, I had a friend in my neighborhood whose parents are from China, very traditional Chinese, and they came over to visit him in October, so they were around for Halloween. In the middle of the afternoon the doorbell rings, and the mother-in-law opens the door. She's from China, she doesn't speak English, and she sees these kids dressed as ghosts and goblins and zombies. In Chinese culture you do not talk about death, you don't talk about ghosts; these are bad omens. And she just freaked out: what the hell is going on here, this is insane. So I think there are cultural differences that you can include in a culturally specific RAG; that makes perfect sense, just like when we train our kids, and it is true that large language models are like three-year-olds. How do we train our kids? That's culturally specific. We teach them the Golden Rule, right? Do unto others as you would have others do unto you. And then when the kids get older and they go and get an MBA, they learn a different golden rule, which is: he who has the gold makes the rules. All of these things are trainings we impose on our children, and we have to impose the same training on large language models. They are like kids; they need to be taught. One more question. Thank you very much. Does your AI
financial advisor attempt to elicit risk preferences? Because, especially for regulators, that's the core issue. Or do you just assume some reasonable kind of risk? No, no, you can't, because if you assume that, first of all you could be wrong, and then you've just violated the suitability criterion; but secondly, most individuals don't even know what their risk preferences are. So one of the things we're working on right now is an interactive risk assessment that, in our view, is a bit of an exploration with each individual. It's kind of like a guided tour of an individual's own psyche as to how they think about losses. It's a series of questions and answers and dialogue where you're asking individuals not just to pick A versus B or C versus D, but rather: why did you pick that? What did you like about it? What didn't you like? This is where large language models really can come into play: you can actually engage in dialogue in ways that the very best financial advisers will do and the very worst financial advisers
won't do. So our hope is that we can actually do that in a large language model and make it public domain and open source, so that the masses, the people who can't afford financial advisers, can at least have access to this, to prevent them from making really bad mistakes. Thank you. [Applause] Great, we'll take a quick break and reconvene for our last speakers. Thank you. Well, great. Welcome back, everybody. We heard from some great speakers this morning, threading through themes around governance, ethics, beneficence, all topics that seemed
particularly relevant and that undergird the conversations we have around teaching and learning. Now we're really excited to have some guests who will be speaking more specifically about teaching and learning: AI applications for teaching and learning. We have Professor Per Urlaub and Eva Dessein. Per Urlaub is the director of Global Languages at MIT, where he teaches German and second language studies and investigates the impact of technology on language usage, acquisition, literacy, and intercultural development. Eva is a senior lecturer in French and leads the French language program. She has taught French language, literature, and culture at all levels of the undergraduate curriculum, and her work engages broadly with questions related to language identity, intercultural competency, language in immersion settings, and instructional technology. So with that said, I turn the floor over to you. Thank you. Thank you very much. Can everybody hear me all right with this microphone? Thank you. Well, thank you, this is such an honor to speak here, really such a pleasure to spend this morning with you. We already had two really great speakers,
and we are the people between you and your well-deserved lunch, so we'll try to be a little quicker than what was indicated on the program. It's really just such a wonderful opportunity; thank you so much for the invitation from J-WEL and from the good folks at MIT Open Learning, many of whom I see here. It's such an opportunity for us to speak to you, to hear from you, to get to know you. This is the first time that we have the opportunity to work together with J-WEL and with this fantastic lab. We've only been at MIT for two years, and I very much hope that this is a beginning rather than the end of a wonderful collaboration. We'll share some of our ideas on the role of technology in language and humanities education, something we've both worked on, collaboratively and independently, for close to 20 years, especially questions that relate to the role of technology in language education but also in humanities education. Over those two decades,
there has never been a time when so much was going on in terms of transformations in how we think about the role of technology in language education and humanities education as over the last five years. If we think about two recent developments that shaped this time, I think it's very difficult not to mention COVID, because during that time the entire educational system was suddenly forced overnight to improvise, to innovate, to deliver socially distanced education through technology, in order to give great experiences to our students. That was not always the case, as we all know, but we think our field really made great leaps during those 18 months, and we're still in the process of curating the lessons learned from that. In my more optimistic moments, I think there has been a quantum leap in education, a decade's worth within a year and a half. But of course, through that context we also realized all the limitations of
these technologies for delivering educational experiences, especially if you believe strongly, as we do, in community as a driving factor of the learning process. The second event is something that I like to call the great AI awakening. I didn't coin that term; it comes from a 2016 journalistic feature by Gideon Lewis-Kraus. It was not buried somewhere in a computer science journal; it was a long read in the New York Times, and it was really something that made me, and a lot of non-experts, by non-experts I mean people who are not computer scientists, aware of this very monumental paradigm shift we are experiencing at the moment, where machine learning is driving software development and AI, and the pace that we see at this time. It really impacted my field as well. We go to conferences a lot, and there is such an
interest in this, and such a lack of orientation, I think. The problem is that at a lot of conferences you see colleagues who demonstrate ways they use one tool in their class or in their curriculum, and it's very much show and tell. What we will do today is not just demo stuff for you; we will reflect on broader issues: how we can help colleagues, especially colleagues in the humanities, gain a more nuanced and more positive, less fearful understanding of these technologies, and, as they become less hesitant to use AI in their classrooms, provide a framework that will help them with the implementation of these new technologies. The overview of the presentation today: first we will look at why we should encourage the use of cutting-edge technologies, including AI, in educational settings. Often we hear at conferences that we should use technologies simply because they're out there, simply because students use them anyway and find it fun
and engaging to use new technologies, but we think that is not enough. To add to the why question, today we will discuss insights we can get from the game of chess about human and machine collaboration. Chess players today do not see chess computers as enemies but rather as partners, and such a perspective shift will help us think more productively, and hopefully with a little less fear, about AI. So we'll first talk about chess in the first part of the presentation. After tackling the why question, we want to focus on how, and here we'll argue that we must use learning theory to guide our implementation of AI in our classrooms. To illustrate the power of theory-guided approaches, we'll discuss sociocultural theory and argue that the ideas associated with the Soviet child psychologist Lev Vygotsky can guide us in this implementation. Before we begin, we very briefly want to give credit to the entire Global Languages community at MIT, and most importantly our students; they're absolutely amazing and tremendously inspiring, and they help us a lot with our work. Okay, this is the first part: why. I start with this
image that some of you know and others don't. It's almost a generational test: in my experience, people younger than me often don't really know what was going on there, and people a little older than me say, yes, of course. This was May 11th, 1997, when Deep Blue became the first computer system to beat a human world chess champion. Garry Kasparov, who we see at the bottom right of the frame, not very happy, resigned after less than 20 moves in game six against Deep Blue, a special-purpose supercomputer built over 12 years of development, first at Carnegie Mellon and then at IBM. Deep Blue's victory is to this day considered a milestone in the history of computing and artificial intelligence. This event, when it occurred in 1997, made a huge impact on me as an undergraduate student enrolled in Germany, majoring in education and humanities. Together with my roommates, one computer scientist and one cognitive psychologist, I discussed all night: how can a machine that we as humans build, that we as
humans program, outperform not just an average chess player but beat Garry Kasparov, the reigning world champion? But I also have to admit that this was liberating, because this game is considered the pinnacle of human intellect and intuition, and this unexpected outcome destabilized the notion of genius, which is to this day so deeply anchored in German culture and society. In fact, if the genius of a chess grandmaster could be replicated by this huge, ugly, gray cabinet filled with chips and cables, that energized me as an aspiring teacher: if this high level of expertise is teachable and learnable by a machine, we as educators had better take notice. Thankfully, nobody listened to me 30 years ago. Today we know that the secrets behind Deep Blue's success were quite mundane: probabilities and processing power. The supercomputer was simply able to statistically evaluate millions of moves at any moment of the game, in response to any possible constellation on the board. Today, the chess computer that you potentially carry around on the cell phone in your pocket can do this
trick and would beat Garry Kasparov. Kasparov was not a graceful loser: initially he accused IBM of cheating, and IBM discontinued the project and refused a rematch (this match had already been a rematch of a previous series). But what's really more important for us is what Garry Kasparov did afterwards: he was able to engage the chess world and the AI community in a new quest, and now it gets really interesting for us. He finally conceded that supercomputers are in fact better at chess than humans, but he was now interested in exploring a new question: how would the world's most powerful chess computer hold up against a human collaborating with a computer? To explore this question, Garry Kasparov proposed a new game variation: centaur chess. Similar to the mythical hybrid creature, the centaur, which is half human and half horse, competitors in this new game of chess were hybrid teams, half human, half AI. If humans are worse than computers at chess, wouldn't a human-AI pair be worse than a solo AI system? Wouldn't the supercomputer just
be sabotaged by flawed human intuition? You have to imagine: if you play together with the AI system, the AI system proposes a move, and you think, no, I have this intuition, I'm the genius here, let's not do that despite all your processing power, let's do something different. Interestingly, the first result is not surprising: a human-AI centaur beats a solo human all the time. That makes a lot of sense; a chess player with a computer is better than a chess player without a computer. But amazingly, human-AI centaurs routinely beat today's most sophisticated solo computers. These new competitions are unfortunately, and I think this is an important note to make, far less publicized in the media compared to the original Deep Blue competition. Almost everybody in this room has heard about Deep Blue and those competitions from the 1990s; very few, I assume, know about the detailed results of any centaur chess tournaments. It seems that the catchier, or perhaps more panicky, narrative, human defeated by machine, generates much more attention and much more anxiety than the headline, human collaborates with
a machine. However, for us as educators, reflecting on the outcome of this second set of competitions is important and perhaps even inspiring. So here are a couple of implications. A lot of anxieties, and you see them more among humanities educators than perhaps in other domains, are closely rooted in existential fears that machines have the power to replace us: at the workplace, behind the wheel of our cars, or in the classroom. As humanities teachers, we fear that AI technologies will make us redundant. Humanities teachers feel that a lot: they see the defunding of the field, they compete with business schools, they compete with computer science departments. Being under existential threat in the humanities is unfortunately, and I regret this very much, part of the experience for a lot of my colleagues: being in the humanities means being constantly in crisis, under threat. I absolutely reject that notion, but this is how a lot of people think. English professors see ChatGPT and think that eventually it
will replace the need, or students' motivation, to develop writing skills. Language teachers, and we have worked a lot with language teachers, see something like Google Translate and ask: will the public respond to these technologies by saying, hey, we don't have to invest in language education anymore? We have shown together in an article a few years ago that the reservations many language educators have vis-à-vis Google Translate are very similar to the fears that math educators had in the 1970s and 1980s in response to the pocket calculator. And then in December 2022, just after the release of ChatGPT, writing teachers in the humanities felt threatened; very similar processes. And although we don't want to deny the dangers associated with an unregulated proliferation of AI technologies, we argue that many of the anxieties among educators stem from a fear of being replaced, a fear of a technology that is making them less relevant in the popular imagination. As I showed with the chess example, the human-against-machine narrative has far more power than the human-with-machine narrative. So what we argue today
is that in order to think productively about the role of technology in education, we have to leave the human-against-machine narrative behind and adopt a human-with-machine mindset as our guiding model for how to use innovative technologies in our classrooms. Once one accepts the human-with-machine narrative, the threat starts to disappear: if we engage in human-machine collaboration, we can successfully solve higher-order problems. The pocket calculator, again, did not replace math education; it augmented and enriched the learning environment. Rather than spending many hours on tedious calculations on paper, teachers and their students could shift their focus, 30 years ago, away from accuracy-centric skill development and toward higher-order thinking and problem solving through machine-human collaboration. The technology is not the adversary anymore; it is the partner. The same is happening in language education in response to machine translation, and the same will happen in humanities education with chatbots. A second implication of this whole field of chess and centaur chess is that people are simply
smarter when they collaborate with technology this started in the Stone Age with primitive tools and it continued through Millennia uh we Lo we looked in a different context actually this was one of the generative AI papers that that Daniel uh mentioned earlier today we were very very thankful to be uh recipients of that grand to too we looked really into literacy how literacy in Greek Antiquity was seen as like a threat by Socrates Socrates hated literacy he was probably illiterate himself and he said in a very famous dialogue ironically documented in writing by Plato uh
that uh books are bad for students yeah they it weakens their memory they won't really learn they don't really talk to their teachers anymore because the whole Socratic idea was that I talk to my teacher my teacher talks to me and that's the that's the best form of learning and he even warned that students will show unruly disobedient Behavior as a result of books yeah this is the beginning of a very very very very long uh genealogy of new technologies that entered the educational real yeah think about how people talked about Wikipedia 20 years ago
oh my God, it's not peer reviewed, how can they do that? Of course it is peer reviewed; it just happens that far more people do the reviewing, most of them without three letters behind their names. But anyway, long story short, in the history of education and technology we constantly see this pattern: let's first ban it. That is what happened to ChatGPT in December and over the winter of 2022-2023, when a lot of school districts in the United States, which can be a little more draconian than universities, said: let's just block the URL of OpenAI, and that solves the problem. Then there is a reluctant acceptance, because people see, okay, we can't really ban this. And now we move, hopefully, to a paradigm where we can start using it, where there is an acknowledgement that it's not against the machine, it's with the machine, and that a mix of intuitive human intelligence and the probabilities that come from artificial intelligence together surpass either one alone, like in the game of chess. So these technologies make us smarter, and
with that I will pass on to Dr. Desain, who will talk about the how question, after I talked at great length about the why question.

So here we are at the how question. Innovative educators who are open to, or even sometimes enthusiastic about, the integration of new technologies in their classrooms sometimes struggle with their implementation. Nowadays we hear two major arguments for allowing AI in our classrooms: first, students use these tools in many ways already, and second, students find new technologies engaging. However, we believe that this is not the whole story
and it's not necessarily a very constructive way to think about how exactly these technologies should enter our classrooms. If these are the only arguments we have, the integration of technology in educational contexts often results in a somewhat flashy, surface-level solution, neglecting deeper understanding and meaningful implementation. It results in instructors trying out the new technology hoping to make tedious but necessary aspects of learning a little more exciting. We believe that we need to aim higher. We argue that to have a truly positive, transformational impact on our learning environments, we need to design learning opportunities where students do not simply delegate tasks to the technology. We must aim to design classrooms that help students grow through the technology, both as effective communicators with technology and as independent users of a language. This is of course an ideal, and it is admittedly a monumental challenge, but today we argue that we should shoot for this goal, and that it would help us tremendously to contextualize our use of technology in the classroom through theories of learning. In our view, sociocultural theory, and in particular Vygotsky's model of the zone of proximal development and his principle of scaffolding, strongly resonates with what we consider a responsible and powerful integration of AI into our learning environments.

The true depth and relevance of Lev Vygotsky's ideas were only discovered in the West decades after the death of the Russian child psychologist over 90 years ago. Today, experts in a wide variety of fields, from learning science to human development, consider him the father of sociocultural theory, a framework with an immense impact on our understanding of how we develop and grow cognitively, emotionally, and even motorically over our lives. Vygotsky understood interaction between the individual and their environment as the central mechanism of the development process. Vygotsky's model of the zone of proximal development postulates that there are three kinds of tasks that our environment demands from us as we venture through the world. First, there are tasks that we can accomplish alone, individually. Second, there are tasks that we cannot accomplish at all. And third, there are tasks that we can accomplish only through interaction with a parent, a peer, or a teacher who guides us through them. According to Vygotsky, task environments one and two do not provide learning opportunities; they simply represent tasks we can either do or cannot do. Learning only happens in the third scenario. In this scenario, an individual finds themselves in an environment in which they encounter a task that is too hard to accomplish alone. It is not completely impossible to accomplish, but it needs to be tackled collaboratively, through interaction with a parent, a teacher, or a peer. In this environment, which is the zone of proximal development, the individual receives guidance and grows through interaction with an expert, which is what Vygotsky calls scaffolding.
Our big question now is: can generative AI offer scaffolding and engage our learners in the zone of proximal development? We believe that it can. And not only do we believe this is possible in carefully designed learning environments; we would go further and say that the creation of a scaffolding relationship in the zone of proximal development should be the primary and principal objective of any situation where students are asked to use technology in language and humanities education. If this goal is not on our minds as teachers, we miss an opportunity and risk using new technologies in novelty-driven ways, merely for the sake of using them. However, if we assume that AI can provide an individual with the same kind of scaffolding partnership that is conventionally provided by humans, does this also mean that we as teachers become replaceable by the algorithm? Certainly not. After all, we are not suggesting that we should teach in this manner all the time. Care and encouragement, warmth, emotional support, and trust are human dimensions at the core of scaffolding relationships, and these aspects are not replaceable by a machine. But what we are suggesting today is that if we use generative AI in our teaching, we should aim at designing scenarios that simulate scaffolding interactions in the zone of proximal development, where students grow through collaboratively tackling tasks and solving problems. If teachers are not able to design such learning environments, they will probably serve their students better by simply not using ChatGPT or similar technologies in their classrooms.

Let me come to the conclusion. Roughly 65 years after the formation of the field that we now call artificial intelligence, we finally have products that mimic humanlike general intelligence, and they are at the fingertips of our students and teachers in the form of consumer-oriented online services. It is important to underline that artificial intelligence is not human intelligence. Our intelligence is based on understanding, reasoning, and reflection; it was formed, and constantly evolves, throughout a lifetime of real-world interactions in our families, in our communities, at our workplaces, and in our schools. Large language models are simply fed by text that is out there on the internet, and based on this input, chatbots like ChatGPT simply make predictions based on probabilities. Artificial intelligence by itself is not very intelligent, but it is potentially dangerous. Paired with human intelligence, AI can make us smarter, and AI is potentially less dangerous, because as we interact with these systems we supervise them and monitor the output. Using this new technology in our classes without clear goals and without a clear framework is counterproductive, and it might potentially be very harmful. If we simply open the floodgates and allow the use of ChatGPT and co. without any effort to teach our students how to use these technologies, and without any guardrails, we are not just missing a huge opportunity; we also sabotage student learning and create problems down the line. An unguided implementation may very well result in students using the technology just to avoid tedious tasks like writing, delegating them to the machine, taking the output uncritically, and confusing it with truth. If we don't use these technologies in our classes, students don't learn to critically evaluate AI output, and that is potentially very dangerous down the line. We therefore made a case today for using learning
theories to guide a meaningful implementation of AI technologies in the classroom. To illustrate the power of such a framework, we used sociocultural theory; we are sure there are other productive ways to think about how to implement these new technologies, and newer technologies in the future that we might not even dream of, in our classes. We are looking forward to learning more from you during this event and in the years to come. Thank you very much.

Thank you very much. Are there any questions? Angelie?

So, you use Google Translate in your teaching? How does it fit in?

That's a great question. Yes, I do, but first of all, I actually don't think Google Translate is a good translation technology anymore, because ChatGPT is so much better: you can prompt ChatGPT in certain ways to translate, and I think the output is better. So yes, the answer is yes, but not in a way where I say you can do it in a completely unregulated way. I'll talk a little more about how
I use ChatGPT, if you don't mind. One of the things I'm doing: I teach an advanced German class at the moment, where one of the assignments the students have is to practice how to do a job interview in German. So one of the ways my students are allowed to use ChatGPT is at home, to prepare for that assignment. And we created a prompt together; I want to make that transparent too. It's not "here's the prompt and this is what you use." I want them to think productively about how they can take advantage of this technology to prepare for, say, a job interview in German. The prompt is actually quite elaborate, and it gives ChatGPT a role. That's prompt-writing rule one; you all know that the best prompts begin by giving ChatGPT a role. And the role is: you are a recruiter for a software company in Germany; this is the job ad; these are, broadly, my qualifications; I'm an MIT undergraduate in computer science graduating in spring 2024. And then the prompt goes on: ask 10 questions to evaluate the skills, the motivation, and the ability of this American undergraduate from MIT to integrate interculturally into a German company. And then, a little unrealistically, the next step is: make a quick decision about whether you want to hire this person, then offer a starting salary and invite the person to negotiate.

Of course, with all the prompts, I test them beforehand. So I tested it; I did my own job interview. I don't know anything about engineering; yes, I stand here and talk about AI, but I hardly know what a mouse is. I could not pass a job interview for an engineering job at a software company. So the questions came in, and I sat there and played the role a little bit. And of course, you all know that in a job interview you make it and you also fake it a little. So of course I told ChatGPT that I have, or will soon have, a degree from MIT in computer science, et cetera. The chatbot interviewed me: 10 questions, and 45 minutes later ChatGPT decided, yes, we will hire you, and we will offer you $65,000, or euros, as a starting salary. I said, oh, that's very generous, but I need to move my family. So four turns later I had the same job offer for 115. This is a funny anecdote, and of course this is where you see that it would not happen in the real world, but it provided students in my class with an opportunity to work with this technology and to practice. Of course they did it in writing, but it is only a matter of 18 months until the technology is there that they
can talk to ChatGPT to prepare for a major element in the class.

Thank you for your talk and for sharing your experiences. I have one question on the point you were making about faculty: a lot of the faculty maybe have a fear of the technology, and at the same time you mention that we should guide its use with learning theory. So what's your take on convincing people? Because what I see currently is that if we don't convince faculty, the technology will advance anyway, and then you will have vendors, who maybe don't know the learning theory, integrating it into these spaces.

Yeah, that's the million-dollar question. But the truth is, I think there is a desire by teachers and faculty to get some guidance on what to do about it. Maybe I formulated it too dramatically, that they are completely afraid of it and just rejecting it. There are those people too; they're like my professors in Germany in the late 1990s who still had somebody who printed out their emails for them, and once a week they typed replies, or dictated them to tape, and then the secretary sent the email back. Those people exist; they will always exist, and we can't win them over with any workshops or professional development. But I think the vast majority of colleagues in the humanities are receptive to workshops that are designed to speak to them with a language, with metaphors, and with narratives that, I don't want to say they understand, but that resonate with them. One of the things I've seen throughout my career with other technologies: if you have a company coming in and doing tech training, say on a course development platform, like Canvas it's called here, I think that's really difficult. I think it has to come from within: you have to find allies within the humanities, within the faculty, within instructors, who are knowledgeable but who work peer-to-peer, not as top-down informants. And you offer smaller projects to start with, rather than completely overhauling an entire semester of language learning; you show small projects to start, then see what students do with them, and that usually helps in encouraging people to explore a little further.

It's really interesting. Part of the invitation that was extended to you came after reading your excellent article in MIT's
generative AI collection, and what was interesting in what you were writing were some of the themes that also came up in the talks we heard earlier, but also what Justin Reich, Eric Klopfer, and Cynthia Breazeal wrote about, and I recommend that article as well, about the need to structure things. The fear is that students will bypass learning, that they'll bypass cognition: the introduction of AI into the classroom, or the use of AI outside the classroom on assignments, could be used to bypass cognition. What are some thoughts? I mean, you've gotten there, talking about small projects and things of that nature, but is that something you've been thinking about?

Yeah, I think that's a good question, and I strongly recommend the article you evoked, by our colleague Eric Klopfer and colleagues. What we are proposing is not entirely out of the mainstream of how people like Eric and us think about what technology should do in the classroom. I think we share the same frustration that a lot of individual instructors, programs, or entire institutions introduce technology in a way that, as I like to call it, offers chocolate-covered broccoli to our students. Kids don't like broccoli? Put chocolate on it, so that the learning is somehow a little more appealing. And you all know how that works, from your children when you try to feed them broccoli, and from your students: this flashy, very superficial way of introducing technology may inspire very brief excitement among your students, but they very quickly discover that it's just surface. Another way would be completely not guiding the use of technology: okay, I don't have to write my essays anymore, something is writing the essays for me. What we are really looking for are approaches where instructors and learning designers design opportunities for learners to interact with the technology such that two things happen. One, learners develop more sophisticated and more critical uses of the technology, which is in itself important. Two, they are not delegating the tedious tasks of learning to the system; they are interacting with the system in a way that the system scaffolds them. Think about Vygotsky's best metaphor for what scaffolding and the zone of proximal development are: how a child learns to walk. Yes, there is a stage where the child cannot walk and a stage where the child can walk, but scaffolding is that moment where the parent or caretaker supports the child through its first steps. That is the zone of proximal development, the interaction between the expert and the learner. So let's think about
chatbots: instead of having a chatbot write your essay for you, think about other ways, and my students do that in this very class I've been talking about. I don't want to talk about this class too much, because these are things that I haven't really empirically tested and peer reviewed, but there, for example, my students write a six-to-ten-page research paper in German toward the end of the semester, and there are a lot of steps throughout the semester that lead them there, because they are still language learners. And then there is one phase where they are allowed to use AI. Two weeks before the final submission, they submit the human version of their paper; I got those last week and gave them some more feedback. But now they are not just allowed, they are required, to use AI, and we worked together on a couple of prompts that are useful. The prompts are not "take this paragraph from my paper and correct all the grammar mistakes"; that's not a useful prompt, and they realize that. It's more like: you're the editor of a journal; this is a paragraph submitted to the journal; read it, make a list of all the grammatical mistakes, explain why they are wrong, and make a suggestion. And then: make a list of all the stylistic issues, things that are not grammatically wrong but that are awkward; tell me why they are awkward and make suggestions. My students can then go through this long list of output, which is the same thing I would do if I gave them feedback on their draft, and it's probably better, because they can do it 24/7. In that way they engage in an exchange with an expert. So in many ways we don't have to reinvent education. We can draw on something you've been doing at your institutions for decades: a process-oriented approach to writing. Not where you give the students the question at the end of the semester, they write, and then they're done, but where you lead them through it. Maybe peer feedback, and you know how that goes: sometimes you pair students up and they work wonderfully together, sometimes not really, sometimes not at all. Or expert feedback: my suggestions when I read the first draft, the second draft, et cetera. Those are things we can do wonderfully with AI. And there we have an opportunity, when we do professional development, to help not only the person who doesn't know what to do with AI in the classroom, but also the colleague who, as a humanities educator, never really did much process-oriented writing instruction, but always just gave the topic at the end of the semester and then complained about how awful students write these days.
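The two role prompts described in this talk, the editor-feedback prompt and the earlier recruiter-interview prompt, can be sketched as chat-style message lists. This is a minimal illustrative sketch: the wording is paraphrased from the talk, and the function names are invented for illustration, not the instructor's actual prompts or code.

```python
# Sketch of the two role prompts from the talk, as chat-style message lists.
# Wording and function names are illustrative assumptions, not the originals.

def editor_feedback_prompt(paragraph: str) -> list[dict]:
    """Scaffolding-style feedback: the model plays a journal editor who
    lists and explains problems instead of silently rewriting the text."""
    system = (
        "You are the editor of an academic journal. A student has submitted "
        "the paragraph below. Do NOT rewrite it. Instead: (1) list every "
        "grammatical mistake, explain why it is wrong, and suggest a "
        "correction; (2) list passages that are grammatically correct but "
        "stylistically awkward, explain why, and suggest improvements."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": paragraph},
    ]


def recruiter_interview_prompt(job_ad: str, background: str) -> list[dict]:
    """Mock-interview role prompt: the model plays a recruiter for a
    software company in Germany."""
    system = (
        "You are a recruiter for a software company in Germany.\n"
        f"Job ad: {job_ad}\n"
        f"Candidate background: {background}\n"
        "Ask 10 questions, one at a time, to evaluate the candidate's "
        "skills, motivation, and ability to integrate interculturally into "
        "a German company. Then decide whether to hire the candidate, offer "
        "a starting salary, and invite the candidate to negotiate."
    )
    return [{"role": "system", "content": system}]
```

Either list could be passed to a chat-completion API such as the OpenAI client's `chat.completions.create(model=..., messages=...)`. The design point is the same in both: the model is given a role and asked to explain and suggest, so the student stays in the loop instead of delegating the work.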
That's great. Was there another question? I thought I saw a hand. Yes?

Thank you so much for the discussion this morning; I learned a lot. My name is Helen; I worked with Professor Millie Dresselhaus for many years, and right now my office is in MIT's E7 building, in the Bridge Institute, really focused on education innovation. My question is about AI applications. One thing I found in my recent volunteer teaching is that AI technology can be very helpful for students with disabilities; I recently worked with one undergraduate student and found they really needed this type of help, and places like the Perkins School near Watertown really need it too. The other part of my question: MIT really wants students to build a curious mind for math, physics, biology, and astrophysics. How can this technology help learning while we still train students to develop an innovative mindset and logical thinking by themselves, instead of relying on the machine? For example, in the laboratories we have designed for chemistry, biology, or nanodevices, how should our undergraduate students, when they first arrive, learn logical thinking and mathematical application for algorithm development themselves, instead of being helped by ChatGPT? Thank you.

Thank you. I actually want to start with the first part of your question. You spoke about students with disabilities, and this is something we think a lot about, even though we haven't covered it in this talk: that if
you have students with learning disabilities, for example with dyslexia. In that context, and I've never done this in front of such a big group, but we gave a workshop at the University of Pennsylvania on this topic, and I made a confession: if spellchecking had not been invented, I would not be standing here at MIT talking to this group as a professor, because I'm not a good speller. When I had to write things on the typewriter, and I'm aging myself now, they did not look as good as what I write now on the computer. So this technology helped me unleash my whole intellectual potential despite a condition that had held me back for the first 20, 22, 25 years of my life. I owe that to the wisdom of professors who said, no, we are not banning spellchecking on our tests. Because that is what people thought, that's what writing instructors thought in the early 2000s: oh no, we'll be out of work because people don't make typos anymore. So I owe that to the wisdom of the field, of my colleagues, of the people who came before me, who said: no, this is a technology, and we don't consider it cheating if you have red scribbles under your typos. And people like me, who see words differently than most people in this room, although about 7% of us share this situation, basically get feedback from the system and can correct. So in that way, when I think about ChatGPT, I don't want to say that it makes the whole issue of dyslexia and writing difficulty a matter of the past; no. But I think it will help a lot of people. I never had a problem with the blank page, because, as you now know, I'm able to produce a lot of words, maybe sometimes too many, but there are people who have that anxiety in front of the blank page. Why not give them the opportunity? Not just saying, yeah, use ChatGPT for your homework. No: why not design prompts, technologies of the future based on large language models, that help people overcome that anxiety of the blank page? I don't know, but I think that's something at least worth reflecting on. Now I forgot the second part of your question, and I probably can't follow up with a personal anecdote that's more interesting or more revealing than this one.

Thank you. I'm from Mexico, and I always find it difficult to
use a single learning theory to explain things, because there is no unified learning theory: cognitivism, constructivism, cognitive psychology, et cetera; it's very complex. And this is just a reflection: isn't it time to use the metaphor of human-with-machine to develop a learning theory that incorporates both things? Because I think we tend to use theories named after people who are no longer living and who never had any interaction with computers. Shouldn't we develop a new learning theory that covers the whole thing? Isn't it time?

It may be time, but we need to be working right now as well. The technology is here; we're teaching. I'm in front of the classroom every single day, I do workshops and conferences very regularly, and to get people on board we really need to be working with this technology as we teach now. I do think that over the course of the coming months and years we will evolve toward practices that today we have not yet thought of, but I don't think we can afford the luxury of sitting back and coming up with something entirely new first; we have to be doing things as well.

Yeah, I agree with you. I think this is a really interesting idea, and I think we as a field have to evolve in a constant circle, or dialogue, between evolving practices and refinement of theoretical frameworks. It's a fascinating idea to think about a learning theory for the 21st century, or for the age of AI; we should write something together about that. It may seem a little counterintuitive that we use a Soviet child psychologist, a model from the 1930s, to provide ideas to guide this process at the moment, but that is almost intentional, because what we want to emphasize is that this new technology is not necessarily redefining and deconstructing everything we ever held to be true about education. There are things that this technology can do, and we can tap into ways of thinking about learning and human development that long predate the idea of you having a cell phone here, playing with your tablet during a conference. So it's an old theory, but I think it's very applicable to this age, and it helps us understand this age through a paradigm that's more familiar.

That's great. I think
we have time for one more question. Or are we over? Is it a quick one? Yeah.

Thank you. This is maybe just a comment about the last intervention, about theories of learning and how Vygotsky is now present in other developments in different academic contexts. There is a theory based on Vygotsky, developed mainly in northern Europe, called cultural-historical activity theory, CHAT theory, that incorporates technology as a specific dimension of learning theory, and that also incorporates the history of populations. One interesting aspect also included in the model is conceiving of teachers as a workforce, and of a teacher implementing technology as someone whose life could become less stressful, because technology could also make evaluation easier and give teachers more time to concentrate on the most important thing, interacting with the students, rather than manually correcting thousands and thousands of papers. That could be interesting to discuss. Thank you so much.

I think this is more of a comment, and a really wonderful avenue for all of us to pursue as we reflect on the why and on the how of using AI in education. So join me in saying thank you very much.

All right, we're about to wrap up our open morning of discussion with leading lights from across MIT on AI. It's been a pleasure to have you join us online and in person,
in the dialogue, and in helping us all think together about the opportunities, and maybe the concerns, we need to think about. We are going to have chances throughout the rest of J-WEL Week to work with our members directly on some of the very issues raised today, so I'm really excited about that. Thanks to all our guests, and thanks to the online folks and folks from the MIT community. Do come to our website and sign up if you want to get our newsletter; we are relaunching it shortly, and that will give you a way to stay in touch with our goings-on.

Now I'm going to ask our members to take the elevators down (you can take the stairs too) and then turn toward the river; if you look around, you'll see the glass doors. We are going to take a photo on the steps right in front of the glass doors, and this is our chance to get everyone in the group photo; it's very hard to Photoshop you in afterward. Then we're going to meet back at Open Learning, and several of our staff have signed up to help walk you over: Kirky, Carolyn, Tamui, and Maria are going to help walk folks over to Open Learning, which is at 600 Technology Square, an obscure address, so it might be better to know it's at Main Street and Portland, just past the Mexican restaurant. We're going to have lunch ready around 12 or 12:30, a flexible lunch setup in our own kitchen, which will be informal, and then at about 1:20 we'll move on to our next section, which is a conversation with Open Learning's head of residential education, Cheryl Barnes. That will be really fun. But now, get your hair ready and take the elevators down; everybody's hair looks fabulous. And one last note from the questions we had: we will not be returning to this room, so please remember to take all of your belongings with you.