Unknown

0 views10056 WordsCopy TextShare
Unknown
Video Transcript:
>> As with any, well, I should say technology, whether it's an app on an iPhone or a stick that a chimp is using to fish out ants, it's it's a tool that helps you to solve, in an ideal case that helps you to solve some type of prevalent problem. There is a lot that can be automated, and whether it's machine learning or some form of deep learning. There are many ways technology can aid the learning process. So I was one of the first investors in a company called Duolingo, which now has 100 million users. It's
the largest free language learning platform in the world, and they have a lot more coming. It's incredibly powerful. And it is it. It was the byproduct of a number of founders, but one of one of those founders, Luis von Ahn from Guatemala, originally created CAPTCHAs and reCAPTCHA. So if you ever have to type in a bunch of weird characters to prove you're not a robot on a website, you have him to thank for it. But he used that. You might have noticed back in the day, there were two fields and you'd fill one in the
the the program knew the answer to that. That's how it would confirm that you weren't a robot. And then the second was taken from books that they that machines couldn't transcribe accurately. So you're actually he was harnessing millions and millions and millions of people to transcribe books. Effectively so that people so that the blind could use them so that anyone could search them, etc.. And he's applied that to language learning in some really fascinating ways. We live in a digital world where the economics of many of these businesses are dependent on distracting you as much
as possible. They are very, very, very good at it. They're putting billions of dollars, probably collectively trillions of dollars, into discovering new and better ways to distract you off of your chosen task, if you can. If you can teach yourself and your students to single task, not multitask to single task more effectively. That ability, which used to be par for the course, is becoming a superpower. So if you can establish ways of blocking or blocking out distraction In rich technology for even short periods of time, you have a huge competitive advantage. >> Good morning and
welcome to the final day of South by Southwest. Edu 2025. Yay! I'm Renata Salazar, marketing coordinator for South by Southwest Edu, and I'm thrilled to introduce the session, led by two innovative women who are at the forefront of the rapidly evolving AI industry. Sinead Bovell is the founder of way, an organization that prepares youth for a future with advanced technologies. And Natalie Montbleau is the founder of Virtual Human Economy, which advocates for real people that can benefit from putting their virtual selves to work. Both of them, in their own ways, study the future. As a
reminder, during the Q&A session, please open your South by Southwest Edu Go app to ask and upvote questions by selecting the engage button on the session page. And now I am honored to introduce these two pioneering futurists who will share their insights on what we can and cannot predict about the future of AI in education. Please welcome to the stage Sinead Bovell and Natalie Monbiot. >> Wow. Welcome, everybody, to the final day of South by Southwest. Edu. It's been a fantastic week. Everybody been enjoying themselves, feeling inspired. I'm super delighted to be moderating this conversation
with the fabulous Sinead Bovell on AI and the Future of Education. Just by way of a bit of background, Sinead and I have known each other since the clubhouse days. Does anybody on clubhouse know that audio app that was highly popular? One of the bright spots of Covid and we had the chance to meet very much virtually, I guess, and get deep into conversations around AI, the future of AI and society. AI avatars, and really sort of a precursor to the conversation that we're having today. So it's my absolute honor to be in conversation with
Sinead about this topic today. So a little bit about Sinead actually, who follows Sinead on social media. Yeah, a lot of hands. So she's got a massive fan base. It was hard to get to this stage actually, with people kind of coming up to her and saying how much they admire her work. Well, just in terms of her professional, um, perspective, Sinead is a strategic foresight advisor and the founder of way, which is an organization dedicated to preparing businesses and the next generation of leaders for a future shaped by advanced Technology. She advises C-suite executives
and senior leadership across governments and global and global corporations on emerging and exponential technologies. And she's an 11 time United Nations speaker, and she has delivered formal addresses to presidents, royal families and fortune 500 leaders on topics from synthetic biology to artificial intelligence. And today, she has advised very relevant to this conversation 16,000 educators, government officials and policymakers on redesigning education for the age of AI and emerging technologies. So let's dive in. So we've heard a little bit about AI and the future of education and a number of different talks this week. Sinead, tell us
what it is to be a strategic foresight advisor and your lens on AI and the future of education. >> Yeah. And thanks for having me. And thanks for for coming to this session and for that warm welcome. So I think, you know, education is the bedrock for a healthy democracy and for a functioning society. And it's not just essential for things like economic mobility and economic security, but fairness as well and for wellbeing. I believe there's no such thing as a state that over invests in children's future, and investment in children is an investment in
national interest. Right? So you want to foster an informed and adaptive citizenry that can not just safeguard the future, but thrive in it, especially a future that's going to be as complex as the one children in school today are entering into, which will be shaped by quantum computing, genetic engineering, artificial intelligence commuting back and forth to space. This is an incredibly complex world they'll be entering into. So the more that they can understand it, the more we can support them in that journey, the better equipped they are. And that's an investment in a country's economic
security and our collective health and well-being and our overall security and national security interests. Goodness. So couldn't be a more pressing topic. And before we dive in to some of the kind of finer points, where would you say taking a step back, like where are we at this moment with AI? So if I were to say where we are, I mean, it's very, very early. Maybe it's 1992. The internet has dropped. Companies are experimenting. We know it's maybe going to be a big deal, but there's still a lot of doubt. Some people are also kind
of playing around on it, but we have yet to fully comprehend the way it is going to fundamentally transform our world. The Googles, the apples, the Amazons of the future. They have yet to be invented, but they're coming. And artificial intelligence is also a general purpose technology so similar to something like electricity. Think of how pervasive electricity is. We don't even think about it at all. It's so foundational that it's moved into the background. So we will soon be streaming artificial intelligence the way we stream electricity. That is going to be a fundamentally different society
to live in. And these general purpose technologies, they take time to get so entrenched in society. But, you know, when it's reached that point, because when people can't access general purpose technologies, whether that's at a country level, certain neighborhoods, we deem that wildly unethical, who doesn't get fair access to the internet? Who doesn't get access to electricity? That is the path. Artificial intelligence is on. So if artificial intelligence is going to be this general purpose technology and fade into the background, what does AI and education look like now? Like, how should educators be considering AI
in education, given that it will be in the background? But today we're at this very early phase. Yeah. So I think that there are kind of three pillars that are related but distinct in terms of how we should be thinking about AI in education. The first pillar is safe adoption for kids and for learners. So this means equipping kids with the tools to navigate artificial intelligence, because they're going to be on these tools at home regardless. They have supercomputers in their pockets, supercomputers on their iPads. So giving kids the skills to utilize these tools safely.
So that's conversations like AI isn't your friend, right? Write your chatbot isn't something that you tell secrets to. This is what we do or don't share with artificial intelligence. And this is also how you ask it good questions and validate its answers. So that's pillar one. Pillar two is how do we more urgently adjust what we are teaching in school, or just the formula for what happens in the classroom versus what happens at home, knowing that kids are going to be leaning into these technologies at home to do homework and to complete assignments. The third
pillar, and this is where I think we are rushing into, but this is actually the long term game, is how do we fundamentally redesign the entire system of education for the age of artificial intelligence? But what seems to be happening in this moment is we are kind of merging all of those pillars in a sense of urgency. And this leads us to deploy AI in schools for the sake of feeling like we need to meet the moment by bringing AI into the classroom. And there are a lot. Of technologies that aren't ready. So I think
we focus on pillar one. Giving kids the tools to use these tools safely. If they're going to be using them on their phones, we slightly adjust what we're teaching to account for cheating in homework, but it's much more at the departments of education, the ministries of education level, to take this longer term lens and fundamentally redesign our school for the age of artificial intelligence. Great. So I think we've heard this week a number of different ways that educators are experimenting with AI, different pilots, different ways of going about it. Like is now the time to
be. So it sounds like we should be teaching it and educating about it. Is now the time to be experimenting with it in deeper ways? Yeah, yeah. So, you know, teaching it, it's more so AI is a hard skill. So that should be happening. Experimenting with it. Yes. We need to be running these pilots. We need to be gathering the data as to what's working and what's not. But it has to be in very, very intentional ways, and not just assuming that we can just throw in an AI tutor somewhere arbitrarily, and that's going to
be sufficient. And making sure that we're not running social experiments that jeopardize learning outcomes for the sake of just feeling like we need to quickly meet the moment. So yes, I think that these experiments are vital. They should be happening, but they need to be very, very intentional and very, very controlled. I know that you're working with fortune 50 companies in this space and advising them on how to navigate AI and education. What are some of the data points and the advice that you've been giving them? Yeah, so it's been really, really interesting to look
at some of the data that's coming through. And of course, we're still very, very early in early in the age of using artificial intelligence in education. But there's one clear trend that stands out. So I'm going to walk us through a study that I find particularly helpful. And this was done by the University of Wharton, the University of, I believe, Pennsylvania and Budapest British International School, and it implemented artificial intelligence in math classes. So there were a few math classes in the high school, and they broke the class up into three groups the control group,
which is just your traditional doing homework problems with your textbook, the GPT based group. And this is the students that got uninhibited access to artificial intelligence. And then the GPT tutor group. So these are students that got access to an AI that has been designed to just guide them through problems, give them hints, but not the answers. All students got the bass lesson for math together, and then they broke out into their respective groups and the respective tiers of AI access or not. So the study showed that when it came to the practice problems, the
children that got uninhibited access to. I did 48% better on the practice problems than the control group. The students that got access to the GPT tutor did 127% better than the control group, but when it came time to actually test students without access to AI and do the final post unit test, the kids that got the uninhibited access performed 17% worse. So I harmed the learning outcome, and the children that got access to the AI tutor performed at the same level as the children that didn't get access to any artificial intelligence. And so the conclusion
of the study was that generative AI harms learning outcomes. But then there was a second study that happened at Harvard. And of course, we have to control for the fact that self-directed learning is a little bit different at a university level. And clearly, if you're getting into Harvard, there's also some kind of higher order thinking that you're able to do. But that aside, it was a physics class and they broke the physics class into two groups the control group, which is the students that went to the traditional lecture with the with the professor. Then they
broke out into peer groups and worked with one another and their peers, and had instructor led guidance on solving problems. The second group had no in-class lessons at all. The entire process with was done with I, but they specifically designed the I to be self-paced. Going with the students needs to provide immediate feedback whether the student was on the right direction or off the right direction, while they're doing the problems to provide motivation, and to really take all learning best practices and implement it into that system and continue to adapt how it tested the child
based on how they were evolving and doing the problems when they did the general test. After those two experiments, the kids that went the pathway of artificial intelligence performed twice, as well as the peers that didn't get access to AI, and they were more motivated and more engaged. And so what we can learn from just those two kind of isolated studies is that you have to adapt the entire ecosystem, right? It's akin to inventing electricity, but only swapping out where the steam engine was and putting a light switch there or a light switch there. Not
building out the entire assembly line and rethinking about how we design the system. That's step one. Step two immediate feedback is absolutely vital in AI learning outcomes. Gone are the days if we're going to incorporate artificial intelligence where we wait for the unit test or midterms, or the end of year exam to see where students are. Artificial intelligence needs to be. We need to be able to extract the data in real time. This is how somebody is adapting or this is how they're falling behind, and I needs to provide that feedback. Or else we learn
visibility. We lose visibility into how well things are happening. Self-paced learning is also vital. So if you go back to the high school, everybody had an hour and a half to learn the math problem. So whether you were working with your textbook or working with AI, that hurt people that were doing the AI method and it helped people doing the traditional method. So kids need to learn at their own pace, and the system needs to be able to adapt in real time. So those are just a few of the key takeaways when we think about
AI in education. But that's why this is a longer term redesign. And that redesign should not fall on teachers. And that's one thing that I think we need to really make clear that this moment shouldn't fall on teachers. They already have way too much on their plate. This is something for government departments, department heads, and those designing curricula more broadly. And I think we're getting that wrong. Right? Yes. I had the opportunity to sit on a few panels yesterday, and there was definitely, you know, a very understandably irate educator who was like, we had Covid.
We had, you know, we're in an underprivileged area. We've got so many pressures. And now we have AI to learn, and we have to do this two day course, and then the pilot ends, and then we've got to go and do another two day course. And AI is the last thing that we need. Um, so it sounds like there is a right way to do it. And there are definitely ways that can be harmful. And some of those right ways involve a lot of structural redesign of how of how students actually engage with AI. Yeah.
This is like the institution of education we need to approach differently. It's akin to asking the accountant to redesign the concrete and the bricks. That's not what we should be doing. And I know that you there was a we had talked about a school that you had been tracking. So I think it could be helpful to share some of the insights there. Yeah, absolutely. Um, people familiar with Alpha School in the room, a few people, actually. The CEO spoke yesterday, and I was very keen for this conversation because I'm actually also starting to work in
this space and possibly with them too. And they are an incredible, uh, school that is redesigned learning in the way that kind of Sinead has described as sort of from the ground up and kind of reimagined how learning happens and what it is. It's a two hour learning process where, like all the hard skills, all the knowledge that you need to learn at school happens in a two hour period with an AI tutor, and the experience is entirely personalized and adapted to where that student is at. So if you walk around the classroom, you'll see
different students working on completely different math problems, let's say. And as it's understood what that student is interested in, the math problems become kind of contextualized within topics that they love. And so and critically and again, sort of going back to one of the best practices or, you know, mandataries in AI being successful in schools, there's real time feedback and performance on performance of that child. They can see their own performance and actually start to own that journey for themselves. And they get everyone into the 99th percentile no matter where, where they've started in their
journey. So I think that's a fascinating way to look at it. And I think that besides those two hours, to the point of getting, you know, all of that work done in those two hours, and some people are able to some Some students are able to accomplish more double in those two hours and some in the higher performing end five times more. But the critical part is freeing those students to focus on life skills, on EQ skills, on kind of developing their own human ingenuity. And so I feel like that is, you know, from all
the research that you've discussed, just an amazing example of what's happening today. Yeah, totally. And that's kind of the moment we're in, right? We're in pilots and experimentation and innovation, really a redesigned, zoomed out wide lens and some in some ways taking risks. But it should never hurt the learning outcome and it should never be a burden to teachers. And both of those things need to be true. Absolutely. So I did want to kind of like, dig in a little bit more into some of the current challenges with AI that, you know, Educators, students and
parents are experiencing today, which is around AI and cheating and using, you know, ChatGPT to to get to the answer right away and what impact that might be having on the learner experience and kind of the point of being at school. Yes. Oh, absolutely. And I think that that we're in a bit of a crisis in this moment when it comes to artificial intelligence and cheating. And we can talk about what happens when you kind of short circuit that, that thinking. But I think the, the safest assumption we have to make in this moment is
that kids are going to be using artificial intelligence at home. So whatever happens past 3 p.m., expect that to be powered by a supercomputer in some way. So we have to start there. That means we have to change what we are doing in the classroom. And so in some instances, that means maybe we flip what happens at home happens in the classroom, but in other ways, maybe, for example, you teach history, you give children the research portion, and they can go home and do all the research with ChatGPT that they want. But the higher order,
critical thinking, deep learning and discussion, all of that happens in the classroom. So the classroom really has to be a place where the deep learning is happening, where the testing is happening and where we're raising the bar on knowledge. But we do have to assume everything past four is likely going to be done, co-created by or outsourced to an AI system. And then again, the broader goal and the longer term goal is that we've entirely redesigned the curriculum to account for the fact that kids can lean into supercomputers, because that is the actual goal. In
the end, kids in school today are going to step out into a world with advanced robots, with supercomputers that are polymaths. We want them to know how to engage with these, with these tools and these systems, how to utilize them, how to invent with them, and we'll have to redesign education to account for that. And we'll have to make school harder, because you do get access to these supercomputers. And so in a kind of superficial way, maybe that means people are learning about quantum computing at seven years old, because that learning is facilitated by a
teacher and by a supercomputer. But that part is going to take time. So the more urgent kind of redesign is flipping what happens in school towards versus what happens at home. What happens if we don't do that? And I think it's quite obvious, right? We end up just short circuiting the thinking. So there shouldn't be anything to cheat on, because what happens at home isn't what we are evaluating. And that is, I think, the kind of the baseline that we need to to move towards. And that's, I think, what we should be doing more urgently.
So many places to go from everything that you just said there. But I'd say, you know, so maybe a positive example of how outside of hard learning, and maybe an AI tutor helping you learn the things that you need to learn at your own pace in the most individualized and sort of data rich manner. Well, how can you use AI outside of that core learning in a way that helps helps children become more human? Right. So like, what does human flourishing kind of look like and does? I have a role in that. I heard an
example yesterday again from Alpha School one. One of the projects starting at kindergarten is public speaking. And these these children are kind of practicing public speaking. Getting real time feedback from these large language models on, you know, are they saying, um, too much, you know, how is their pacing? So they can do all of this practicing in a sort of shame free environment and really build up their confidence for the real deal, and that's standing on stage and having that confidence to present. And that kind of leads me to sort of like think about like
what you said, computers, AI is going to become so powerful. So to what extent? I think we need to learn how they work and learn how to use them. But also, what should we as humans be focused on because we want to collaborate with AI? So what are the sorts of subjects and skills that are deeply human that we can focus on? And something I've been thinking about quite a bit recently is like different types of knowing. There's a cognitive scientist called John Vervaeke, and he plots out different types of knowing and kind of the
most sort of like academic or kind of research based and fact and knowledge based learning is procedural knowing. And that's the kind of knowing that AI is really good at and is getting increasingly good at. But what I doesn't have is lived experience and deep insights that change you as you experience them in the world. Change your perception of the world, and then change how you connect with others. And so I've been really interested to hear some of the talks this week about experiential learning environments. I learned about Thinkery, which is here in Austin, and
it's kind of this interactive learning museum environment, and it's just really interesting to think about what are those deeply human skills that we can be focused on teaching students while they're learning what AI is and the best ways to collaborate with AI. Yeah, I think that that's vital. And I just wanted to quickly go back to the cheating. Another thing we'll probably need to do in the short term is introduce more pop quizzes and surprise tests, and they don't need to count towards grades, but to see where students are as we are in this kind
of new territory. So a lot of times we don't know how much they're using AI, how much they're cheating with it to insert more, more chances for assessment and be tracking that data. Because that's another thing. We don't have a lot of visibility into how this experiment is going in terms of what I can do and what I can't do, and therefore, what should we be teaching kids in particular and what skills should be fostering? My philosophy is we should never assume, I will never be able to do something. And the reality is we cannot
predict the future, what jobs will be there, how advanced AI is going to get, and how quickly. That means we have to prepare kids for absolutely anything which ever way the future evolves. However quickly we get to the moon, start genetic engineering. Kids can pivot, adapt, and think critically about the world around them. And most of those skills don't actually have anything to do with technology? Technology. They require deeper thinking. Critical thinking is absolutely vital in the age of advanced technologies. Kids need to read more, read for the sake of reading, and read in a
way that they can come back to school or with their parents and discuss the ideas and have those ideas challenged. Kids need to play more in the age of advanced technologies. The future Steve Jobs of the world, they're not going to come from a corporate cubicle. They're going to come from people that have imagination, that can play freely, experiment and work collaboratively. Long term thinking. So getting kids to think beyond the immediate horizon and beyond just this unit test in chemistry or in math. But how could this impact things in 510 years to come? And
even cross-disciplinary thinking. Kids in school today are likely to hold 17 jobs across five different industries. They won't be doing just one thing. So we have to get them to think, how does math connect to what I just learned in history, which may connect to what I do in philosophy or in English. So all of these new and they're not even new skills, but it's just about centering these types of skills. Most of the important, the most important skills for the future are ones we can foster for free. And that's what I think we can
sometimes miss in these moments where we feel like we have to lean into technology for the sake of it, but it's actually the other skills that we need to make sure we are doubling down on in the age of, of advanced technologies. And one kind of lived example that I, that I do with my nieces and nephews constantly since the age of about 6 or 7, I theoretically introduce them to technology, and this is what can happen in the classroom as well. I explain concepts in age appropriate ways, like genetic engineering, and I ask them
to interpret what that would mean for their world and their sense of ethics. So if we could theoretically make sure nobody gets sick in the world with these technologies, should we do that? But what if, to my nephew, it meant all that basketball practice you do, somebody didn't have to do, because that same technology allows them to suddenly be really good at basketball. How should we think about that? They engage in the higher order thinking they're exposed to the longer term concepts of technology without actually having to play around passively on an iPad. So these
are the types of deep conversations and higher order thinking that can happen into the classroom, that teachers are uniquely positioned to deliver and to facilitate. I mean, when you think about a teacher and they don't get enough credit for all of the things that they do, I mean, the curriculum is one small part of it. They are social workers, they are therapists. They know their children inside and out. So being able to go deep into these types of conversations, that's what we also need to be focusing on. And I know it sometimes feels counterintuitive because
I'm a futurist and I spend most of my days in patents and technologies talking about robots uploading brain interfaces. Yet the most important skills for the future have nothing to do with technology. And I want to go back to something that you said. So technology for the sake of technology is absolutely not the right way to go about things. But learning for the sake of learning is. And there's some really interesting insights this week about how schools and test based learning does not set students up to enjoy or take pride in just the sort of
act of learning and, you know, sort of encouraged and optimized to find the answer, get the answer right. And then even in a critical thinking class that cognitive scientist Christine Lagarde talked about yesterday, even in that class where there is no right answer. What the students wanted is they wanted the rubric to get there. And what she said to them is like, well, you know, do you think you're going to when you have a job in the real world, do you think that there is a way, like, you know, you're not going to be asked
to to, you know, discover the answer or where we should go or the right path, I should say, by being given the rubric. So it sort of seemed very were at this kind of acute moment where the way students are taught and what they're taught to optimize for is very at odds with where we're at right now with AI and the fact that it is designed to give you the answer. And I actually talked to a teacher who said teaching 16 to 18 year olds, and she was like, okay, so what I do to try
and kind of circumvent the use of AI and writing is I have my students write in class, and I do give them a bit of a rubric like this is, you know, a good structure for an essay. And then when it comes time to actually submit the essay, they go home and type it up in some cases more. In a few cases, that student had kind of ripped up their essay and basically just completely generated a new one in ChatGPT. And what that said to me was, I mean, there was no like time saved or
cognitive load saved in doing that. What that says to me is that there's like, we're in a confidence crisis. Yeah. And this is this is potentially really detrimental to society more broadly, not just kids, but all of us, that we become so reliant on these technologies. We stop believing in our own ability to make decisions. And no matter how good technology gets at something, there will be times when we have to deviate from the technology's advice, and we have to make sure we are ready for all of those moments. And you might even hear people
talk about optimizing every aspect of your life with artificial intelligence. And I somewhat take issue with that, because if writing that email is the one time in the day where you think deeply, you move through your ideas, you have to structure what you want to say, and you pass that to an AI, unless you are replacing that time and that thinking with something else, that's a dicey bridge to be walking down. And there was a recent study, I believe it was Microsoft and Carnegie that that joined forces for this study. And it did show that
overreliance on artificial intelligence can reduce our ability to think critically. So we need to make sure we are strengthening these skills as we start to move and work alongside artificial intelligence. And there was another study that was really helpful that showed this in real life, in the workforce, where entrepreneurs were given access to AI systems to help with their small businesses. The high performing entrepreneurs that had deep critical thinking skills. I supercharged their performance because they knew the right questions to ask of the AI, and they knew how to apply the AI's answer to their
business. When the low performing entrepreneurs asked AI questions, they ended up doing worse and it hurt the company because they asked the wrong questions. They just gave up and asked the hard questions, and they didn't know how to apply the material to their actual startup. So we don't want to build societies where we are 100% reliant on these systems, and that's something that we have to really think carefully about from at an adult age and at a child age. And I think we're already seeing it in terms of our attention spans spelling. I'm sure there's
a lot of people in this room, myself included, who feel like I spelled that word last week, and now I have no idea how to spell it. This week, we want to make sure we're not short circuiting the thinking in this age. So again, really centering deep problem solving, critical thinking and deep learning. Yeah, there's been a number of studies like the Carnegie Mellon, Microsoft one that shows when you outsource your cognitive work to an AI, you actually become cognitively weaker. And that seems like extremely critical at an age, you know, in a period in
time where, you know, students are supposed to be honing their cognitive abilities, but then it's like, well, how do you know? How can you engage with that? I, you know, in a way to actually, um, benefit from it. And knowing that you if you outsource the cognitive load and you're not doing the cognitive work yourself, not only are you missing that moment, but you're missing that the insights kind of living within you and kind of settling within you and kind of becoming who you are and increasing your body of knowledge and your resilience and your
strength and your expertise. And it seems like in this day and age where it's so uncertain what jobs will look like, what the future will look like, kind of radical self-dependence is something that we should be teaching. And it would be great to kind of hear a little bit about where we think that kind of responsibility lies in that respect. Yeah. And I always hesitate to, when I think about responsibility to bring in parents because everybody is coming from a different place. And we can't really control what happens in the home. That's an that's an
entire other week of South. By making sure that all homes are equal and have access to the same things. But in school, I think we really need to think about building confidence as a skill for kids so they can continue to trust the questions that they're asking and their own ability to generate answers. And again, it doesn't mean, of course, in a world where AI is a master of quantum computing, we want kids to be able to ask questions, but we help them think more deeply about the questions that they're answering, and they have a
broad understanding of the questions that they're asking. They have a broad understanding of the answers that AI can give them. And again, that is a fundamentally different society, right? Where we go from what is the answer to what is the question? And that's why that is part of that bigger system wide redesign. But I think centering confidence, encouraging kids to speak in front of class, classmates, engage in conversation, because that is also the interface of the future. It's conversing with these AI systems is absolutely critical. And then in terms of you asked, what are the
jobs of the future? Nobody can really predict them. We can predict the jobs that are going to be automated. That's much easier to see. But the same way, nobody 20 years ago could have predicted a social media manager was going to be vital to a company's existence. Most of the jobs we can't really see, we know that there's going to be some convergence of synthetic biology and artificial intelligence in space. But again, it's about preparing kids for anything, not just trying to. I think we need to move away from preparing kids for jobs, because jobs
are going to change and that much we can guarantee. And that also means moving away from coupling identity to jobs. We have to move away from that entire philosophy. Right? That that idea that we learn, we work, we retire. That's all changing. So instead, we encourage kids to lean into the problems that they want to solve, the skills that they want to to adopt, and the amazing ways that they want to change the world. I mean, tell kids about the robots and the AI systems that they'll be living with and ask them what they want
to do with it versus coupling identity to jobs, because that is just going to end up in a crisis. And we're moving away into an entirely different type of world. And so some of the skills that we can teach children to kind of prepare for this new future. Um, people sort of use the term like metacognition, right? So how to think? And it was interesting in a, in a talk yesterday. So, you know, one educator was saying, you know, well, you can't necessarily stop people from using stop students from using ChatGPT. But something that he
does is like, okay, you used it. Show me your prompts. Show me the questions that you asked it. Like, show me how you pushed ChatGPT because if you can ask good questions and if you can become a good communicator and you don't necessarily, and you kind of know where you want the answer to go and you can prompt in that direction, then, um, then then that's, that's a skill. That's a skill for today and, and for the future. Um, another skill that sort of came up as kind of an experimental sort of skill. The New
York Times recently covered a story about using this term like a vibe engineer, which is this idea that kind of almost anyone with the will and the passion to do it. And that's something that I think we need to double down on, on encouraging in every individual. But anybody with the will and the desire to create an app can basically do that now. And it's this thing called like vibe engineering. So a lot of people kind of creating apps for themselves or apps for just a few people. And so one of the kind of emerging
skills that was discussed was like around human centered design. So if anybody can design products for others, like how do we get into, well, what would be good for others? And so that felt like another territory that felt rich. Yeah, yeah, I think centering the human experience in an age of advanced technologies is an investment that we should definitely be doubling down on. And yeah. And again that does mean introducing kids to these ideas to these technologies. But then bringing it back to to the human to, to just kind of the core fundamentals. I mean,
I think history, ethics, philosophy, these are subjects that become more important the more advanced and technical our societies get. And like you mentioned earlier, you know, the computer scientists learning today are going to be the future tech tycoons of tomorrow. And so what can we be teaching them to create more ethical AI? And, you know, exponential technologies that are good for people that are designed in a way that is good for society. And so I think that's a really hopeful message that we are in that moment now where that next generation of builders are kind
of we have that opportunity to kind of coach them and help them ask the right questions and design for the good of society. Yeah, I don't think I could have said it better myself. Yeah. Um, so actually a bit of a segue into the question of just ethics in this space more broadly. Um, and, and actually, maybe before we kind of like dive into some, some of those areas, uh, what do we think of kind of the role of the educator in, in all of this? And how does that shift? So let's say in, in,
you know, um, in a great situation where you've got I tutor, I tutor that is, you know, the entire, entire reimagined, um, approach that you mentioned, where you have an AI tutor that's giving you adaptive learning, personalized and all of that. Um, what is the role of the educator in all of that? I think that that's going to evolve. The more these pilots and the more the studies come through the different positioning that the educator takes. So whether that's deep expertise in some areas which will be vital, whether that's facilitating the right questions to ask,
the right way to think about material and the right way to think about learning. I think the role of the educator stays deeply coupled with kids understanding and knowing how to learn. And that is what education was supposed to be for learning. And so I think it goes back to that we've become. We've redesigned education to prepare people for work. And I think we need to move towards preparing people for life. But the educators still stay central to that process. I mean, I don't think many people would want to send their kids to a school
with 95 robots and no people. I don't think that's the future that we're all aiming for. Right? So I guess in some of these kind of very innovative models like Alpha School, where it's two hours of intensive, personalized learning with an AI tutor. The rest of the day is all about human connection with teachers and instructors and guides that help kind of uncover the passion of that student and help to nurture it and help them to have the confidence to kind of, uh, deliver on that. And so in a couple of minutes, we will be
taking some questions. So if you want to use the Slido and ask some questions, they will appear on the screen in a few minutes. Um, but with that I wanted to touch on, you know, the ethics of this space a bit more. Yeah. And this is something we have to really think carefully about artificial intelligence, data and children. That's already, um, a deeply questionable intersection. And ethics appears in a few ways. So the first is, what data are these AI systems going to be collecting when it comes to children. Are parents aware and did they
give consent? Or are we just kind of rushing AI tools into class? And what can be interpreted from the data that gets collected on children? So we want to know where their stamina is on math. We don't want to interpret other emotional cues unless we have figured out how to do that safely with parent consent. So that's one area that I think we need to to really understand. The second is the strange way bias shows up in AI systems. We often think about facial recognition and the cases where we know it more intimately. But there
are unique ways that I can make predictions about you when you interact with it, and then change the level of advice that it gives you, or how well it performs for you based on what it knows about you. So there is a study done, um, using most of the most famous AI systems. And it showed that when you ask the AI systems about African Americans, it gave all great positive reviews. When you gave the AI system an example of text that had more traditional African American English in it, and as the AI systems questions about
that user, the AI system would say, oh, this person is never going to go anywhere. I can't even imagine a job for them. They'll be in low wage jobs. Picture this in education, the AI system detects somebody has this kind of ethnic background or is this gender, and then gives the teacher worse feedback on that student in terms of assessments, or gives the student worse advice in problem solving, because it has already made a prediction that that student is not going to go anywhere in life. So these are the more subtle ways we have to
apply foresight to ethics or ethics and foresight in in academia. And I'd say the final thing that we're going to have to watch out for. And we saw this with social media after the fact is the relationships kids are going to build with these systems. We are now giving kids access to an infinite, never ending opportunity to engage with an imaginary friend, something that is always on. Can answer all of their questions. That is a recipe for a new type of addiction, and we have to really be looking out for this. We kind of missed
the boat on smartphones, and now we're all trying to get them back out of the classrooms. We can see this line of sight directly with AI systems and chatbots. And this is and of course all on educators. This has to come to, you know, tech companies how we design these systems. Age gating them. But something to look out for is this kind of new addiction that might form between kids and chatbots. And that is not going to end up well and do our best to bring parents on board with that. So even if that's at
parent teacher interviews, just casually saying, look out for the amount of time your kid spends chatting with a chatbot, I noticed they were a little bit more disengaged in class. That could be where. So this is another area that we have to apply foresight to, but we can see that line of sight happening quite clearly if we don't intervene. Yeah. In a similar way that we've been talking about, you know, parents and learners having that visibility into their own data and kind of their performance and how engaged they are with their work. Should there be
a case where everybody has that visibility into the relationships with these chatbots? Where do you think that line can be drawn? But I feel like if there is that visibility, then people can be a little bit more relaxed. But then is that. I would say that question needs to be answered by a psychiatrist and a psychologist. That is why these are multidisciplinary conversations. We need to bring everybody to the board. An addiction or a relationship with with a chatbot shouldn't be something that kids download in the App Store. So psychologists, psychiatrists, doctors, I welcome you
to this conversation because we need your voice in it. It can't just be happening out of Silicon Valley. It can't just be left to parents to deal with on their own. Everybody needs to come to the table. We saw what happened with social media. We don't have to do that social experiment again. So I said, okay, we're going to take some questions here. This one's from Rob. How do you see AI increasing the digital divide, especially in underserved communities and developing nations? And how do we as leaders stop this cycle? And we can see that
general purpose technologies build on each other. Right. So the communities that didn't get equal access to electricity, they are the communities that don't get that are struggling with the digital divide. And then there'll be an AI divide. That is why that first pillar that I discussed, AI as a hard skill, teaching kids how to use artificial intelligence, how to prompt it, how to use it safely is vital because that may be the only opportunity kids get access to these AI systems. So that's why it's not pushing AI out of schools. It's being very careful about
adjusting how kids learn with AI, but making sure we build AI as a hard skill is absolutely vital in schools and in education when it comes to the broader world. This is a question that nation states are facing urgently, making sure there are things like sovereign AI that every country gets access to, computing power, and the opportunity to build the Stem skills within their population to adopt these technologies. That is a global conversation that's also happening against a very geopolitically uncertain time. But it's a really important question, and unfortunately, I wouldn't be able to answer
it in 30s. Yeah. And I don't know, just to add something just small to that in a way, could I introduce to everyone because everyone's got a smartphone regardless of their socioeconomic situation, if, if, if students aren't taught how to use it and just over rely on it, and that could be some at a disadvantage. Let's take another question. Um, we're aware that I cannot replace in-person Instructors, but will it? And should it replace the online asynchronous instructors in higher ed? I'm not exactly sure what you. What is meant by this question? Yeah, I guess
I interpret this question is so we know the value of in-person instruction and the need for that human connection. There are other modalities of learning. Some is kind of like on demand learning. And then you've got some which is like live synchronous, sort of synchronous but digital. Um, my thought on that is I think when content is pre-recorded, maybe that's not the best use of a teacher's time to have sat in front of a camera and kind of read through all of that content themselves. Maybe that is a scenario where you can outsource that to
an avatar or an AI in a different format that is proven to be more personalized and adaptive. And I would imagine that any human to human interaction that's focused on human connection is good, whether that's in person or has to be online. And I think that there's also something interesting here. And we actually don't know that the answer to this question. But if you're taking, say, a physics class online, what now happens when the physics teacher is also now powered by these supercomputers, and how their perspective on physics changes and how they see the world,
and then getting access to that person in addition to the AI. So this the answer, I think the jury is still out on how that would unfold specifically as it relates to online learning. Yeah, absolutely. And, you know, there are, um, you know, I work with a company that creates AI twins for experts. And what's going to happen next is that experts expertise that they trade on, they own their expertise, but they're going to be able to enrich that expertise with real time data that they choose to bring in. And so, you know, would you
speak to that real expert or would you speak to that experts I twin? Well, in some cases it might be advantageous to speak to the AI twin, even though the expert you know, the real in person experience, you can have, you know, much more creative conversations. There might be scenarios where the AI twin is actually more valuable in that for certain contexts. I think tackling the last one is interesting. What are the pros and cons of developing skills for prompts? When using an AI, it is becoming critical for a career. How will it impact social
skills? So the pros are. The more you understand how to direct an artificial intelligence system, the better response and access to how the AI kind of processes that data you'll get. So that I think is very helpful. Another pro is teaching people how to process what is in their mind and formulate that into a question that can lead to some response. The con I see is that we end up refining all of our ideas and knowledge and optimizing it for algorithms. So how algorithms we have, we will become optimization engines for algorithms. And I don't
think that's the world that we want to get into. I think it's there are unique advantages that artificial intelligence provides and how it interprets data. And there are unique advantages to how humans approach data. And we don't want to make our approach to thinking optimized for artificial intelligence. We want AI to be optimized for us. And so I think that that would be the con. I think that that this is going to be only a temporary challenge, as we're seeing the kind of nature and the science of prompting is continuously evolving, and eventually it will
turn to be much more conversational. So the way you talk to your colleague or you talk to your teacher or your friend, you'll be able to engage with AI in that way, but that still means communication is absolutely vital. Understanding how to share your ideas. And that isn't something that we always center in education, but being able to vocalize your ideas, refine your refine what? The knowledge that you have in a way that's easy to understand and to interpret for the general public and not just for AI, but will be vital in the future. Yeah,
and I think as AI's become better at promoting themselves and all of that, well, where does the human go? The human needs to go deeper. They need to get more creative. Like what is what are these prompts even about? What is it that I'm trying to achieve? What could I achieve? And so I think that trajectory is a positive one for humans. Like how do you dig it deeper into your human ingenuity? Because all of these things can be handled for you. And so I think that's a net positive for, you know, using AI kind
of in the right way. I know. So there's a question that's received the most likes and I wonder why. What occurs when the US Department of Education is demolished, and how do we move forward to make sure all states receive equal AI education? And I think this goes back to the first question. Investing in children's future is an investment in national interest. They are fundamentally coupled. So if you want to talk economic strength, economic security and national security, you are inherently talking about the success of the next generation. So I am not involved in how
this is being dismantled, but I really hope we are prioritizing and centering children and their ability to self-actualize and reach the maximum capabilities that they can in the decisions that are made, because that is going to be deeply coupled with the longevity and state continuity. So they can't be they can't be decoupled. And that's why I say education is a national security issue. They need to be in the same room. So these are fantastic questions. I did want to leave just a couple of minutes for for Sinead to share. Just some final rounding thoughts on
this last day of South by Southwest. Edu on AI and the future of education. Um, well, first of all, just a major shout out to teachers because this is an incredibly complex time and they are dealing with the most prized, prized asset on the planet, which is children. So this is I mean, I think that they don't get enough credit for the moment that they're navigating. And I think something to remember. We're going to continue to hear about advanced artificial intelligence systems, quantum computing space, all of these deeply technical advancements. But some of the most
important skills have nothing to do with the technology. And even for for parents, it's not being able to navigate an iPad passively at five that will dictate whether your child will do well in the future. If you said, you know, my child doesn't really like working on the iPad, but she's reading four books a day, she loves her sports teams. She wants to spend too much time at the park. I would say that child is going to thrive in the future. So even though there's a lot of pressure to adapt to this moment, remember it
is the non-technical skills that we need to be centering because we are preparing kids for a future we cannot see, which means we have to prepare them for anything, regardless of the way technology evolves. And on that note, I think we will close. Thank you for being an absolutely fantastic audience. >> I think.
Related Videos
How AI is Shaping the Future of Education | Askwith Education Forum
1:26:45
How AI is Shaping the Future of Education ...
Harvard Graduate School of Education
7,012 views
Keynote: Turning Conflict into Progress in Our Divided Times | SXSW EDU 2025
57:14
Keynote: Turning Conflict into Progress in...
SXSW EDU
1,189 views
#senior1 #senior_1  #integrated_science #علوم_متكاملة  #dr_amany_mamdouh
33:32
#senior1 #senior_1 #integrated_science #ع...
science is fun for everyone
3 views
AI and the Future of Education with Google DeepMind | ASU + GSV 2024
37:16
AI and the Future of Education with Google...
Global Silicon Valley
10,306 views
Sal Khan & Angela Duckworth, Ph.D. -- Brave New Words: How AI Will Revolutionize Education
58:44
Sal Khan & Angela Duckworth, Ph.D. -- Brav...
Family Action Network
10,203 views
AI In Education: Shaping The Future Of Classrooms | Prof. Bharat N. Anand At #IndiaTodayConclave2025
44:20
AI In Education: Shaping The Future Of Cla...
Business Today
81,835 views
NVIDIA CEO Jensen Huang's Vision for the Future
1:03:03
NVIDIA CEO Jensen Huang's Vision for the F...
Cleo Abram
2,058,265 views
From Daunting to Doable: How to Empower the Next Generation | SXSW EDU
59:22
From Daunting to Doable: How to Empower th...
SXSW EDU
175 views
The Race to Harness Quantum Computing's Mind-Bending Power | The Future With Hannah Fry
24:02
The Race to Harness Quantum Computing's Mi...
Bloomberg Originals
5,298,282 views
Co-Intelligence: AI in the Classroom with Ethan Mollick | ASU+GSV 2024
24:48
Co-Intelligence: AI in the Classroom with ...
Global Silicon Valley
33,108 views
Geoff Hinton - Will Digital Intelligence Replace Biological Intelligence? | Vector's Remarkable 2024
41:45
Geoff Hinton - Will Digital Intelligence R...
Vector Institute
94,157 views
Education in the age of AI (Artificial Intelligence) | Dale Lane | TEDxWinchester
16:07
Education in the age of AI (Artificial Int...
TEDx Talks
62,903 views
Do Students & Society Still Value Higher Ed? | SXSW EDU 2025
1:02:17
Do Students & Society Still Value Higher E...
SXSW EDU
176 views
How To Build The Future: Sam Altman
46:52
How To Build The Future: Sam Altman
Y Combinator
593,911 views
Think Faster, Talk Smarter with Matt Abrahams
44:11
Think Faster, Talk Smarter with Matt Abrahams
Stanford Alumni
2,964,268 views
The Future of Education | Yılmaz Köylü | TEDxEdUHK
17:37
The Future of Education | Yılmaz Köylü | T...
TEDx Talks
48,968 views
2025 AI+Education Summit: Welcome Remarks
21:10
2025 AI+Education Summit: Welcome Remarks
Stanford HAI
624 views
Mo Gawdat on AI: The Future of AI and How It Will Shape Our World
47:41
Mo Gawdat on AI: The Future of AI and How ...
Mo Gawdat
403,140 views
Meet Khanmigo: The student tutor AI being tested in school districts | 60 Minutes
13:18
Meet Khanmigo: The student tutor AI being ...
60 Minutes
439,523 views
View From The Top with Aravind Srinivas, Cofounder and CEO of Perplexity
52:10
View From The Top with Aravind Srinivas, C...
Stanford Graduate School of Business
481,604 views
Copyright © 2025. Made with ♥ in London by YTScribe.com