>> Please welcome Carnegie Mellon University's 10th president, Dr. Farnam Jahanian. >> Well, good afternoon, everyone. So good to be with you, and I want to welcome all of you to our first President's Lecture Series event in the new academic year. I'm thrilled that you've joined us today, including those of you who are watching this on simulcast. As I'm sure all of you know, today's distinguished speaker, Sundar Pichai, the CEO of Google and Alphabet, is here with us today. I'm going to have the pleasure of introducing him in just a few moments. But for now, let
me just say Sundar, welcome back to Carnegie Mellon. Welcome back to Pittsburgh. We are so excited to have you here. It's also so wonderful to see such a great crowd. I want to confess something to you. Within the first hour or two after we announced that Sundar was going to join us for this event, we had more than 2,000 people registered. The interest from our community, especially from our students, I see you all back there, was so high that we decided to host you, Sundar, in this brand new facility, the Highmark Center for Health,
Wellness and Athletics. And I've got to tell you also that we haven't even done the ribbon cutting for this brand new facility. So this is an experiment. We'll see how it goes. With us today are members of our board of trustees, members of our academic and administrative leadership team, including our provost, Jim Garrett, as well as our faculty, of course, our fabulous students and staff from across the campus community. I also want to recognize that we have a number of individuals from our region's private and public sector leadership in attendance today. This includes our friends
from Highmark, and I want to acknowledge David Holmberg, CEO of Highmark Health, who is with us today. David, thank you for joining us. David and the Highmark team are going to join us for the official ribbon cutting for this building in a few weeks. I also want to welcome our friends from Google, not just from Pittsburgh, but also from other locations. Thank you all for joining us. We're looking forward to this conversation. Today's distinguished speaker recognizes the powerful role that technology can play in nurturing new opportunities, forging new connections and driving transformative change to
benefit society. Sundar Pichai serves as the CEO of Google and Alphabet, and is also on Alphabet's Board of Directors. His career is one that exemplifies the power of ingenuity, vision and the transformative role of technology. He grew up in Chennai, India, where he studied engineering at the Indian Institute of Technology, and later earned a master's degree from Stanford University. I know, that other university on the West Coast. I know. And an MBA from our neighbors at the Wharton School at the University of Pennsylvania. When Sundar joined Google in 2004, he oversaw product management and development for Google Toolbar and
then Google Chrome, which, of course, grew to become the world's most popular internet browser. In 2014, he was appointed to lead product and engineering for all of Google's products and platforms, including Search, Maps, Play, Gmail and Android, each of which, of course, has transformed how billions of people interact with each other and with information. A year later, in 2015, Sundar was named CEO of Google, and four years later, he also became CEO of Google's parent company, Alphabet. Today, he leads a team of 180,000 Googlers across 85 locations in more than 50 countries, including the folks
here just down the road at Bakery Square. Under his watch, Alphabet has expanded its influence beyond search and advertising into areas like cloud computing, artificial intelligence, autonomous vehicles and life sciences. Ladies and gentlemen, please join me in extending a warm welcome to Google and Alphabet CEO Sundar Pichai. >> Thank you to President Jahanian for the kind introduction. It's great to be back on campus. I do have some fond memories here. When I came from India to the U.S. for graduate school, I first stopped in Pittsburgh to visit my aunt and uncle, who are here today.
I think my uncle has worked at Carnegie Mellon now for over 30 years and has been a big inspiration to me. On my second day here, I think he took me to campus. We had lunch in the cafeteria. It was the first time I had lasagna. Never tasted anything quite like it before. I recall him talking to me about Herb Simon. It left a big impression and was an inspiration to me. So it's very special to be back and to be talking to all of you. And thank you for your welcome. At Google, we have
an incredible admiration for what you do at Carnegie Mellon. You can't work in technology without feeling the impact of the important research that's been done here or feeling the influence of Herb Simon, Allen Newell and so many others who paved the way for this moment. And we feel really fortunate to have deep ties to the CMU community. In fact, our first Google Pittsburgh office was actually right here on campus. And we have collaborations across many areas of computer science, including with the CMU Robotics Institute. These collaborations are what fuel platform shifts, and nowhere is that
more true than with AI, which is going to be the focus of the rest of my talk. In particular, I'm going to talk about what makes AI a platform shift, why it matters, and how it's accelerating scientific discovery in exciting ways. First, I want to give some brief context for Google's AI journey so far. Google was founded 26 years ago in a garage by Larry and Sergey. We started as a search company and have made a lot of progress to make search better over the years. We are always at our best as a company when we are
solving hard computer science problems. And we do it in the service of our mission to organize the world's information and make it universally accessible and useful. We invested in AI early because we saw it as the best way to advance that mission. If you look back at the defining AI breakthroughs over the last decade, Google and Google DeepMind researchers have been behind a significant number of them. And in 2017, researchers at Google pioneered the transformer architecture that has become the backbone of today's most advanced AI models. All of this is coming together for us in
our Gemini era. It's driven by our Gemini family of models. We are trying to push the frontiers on multimodality, long context, reasoning and more. And we are focused on getting all of this scaling and working well. Because we believe AI is the most important shift happening today, and Google has been through these technology shifts before. We were founded in the transition from personal computing to internet computing. A decade later, we helped to drive the shift from desktop to mobile with Android and smartphones. And now today, we're in the middle of another dramatic shift, this time
to AI. It'll be the most profound platform shift of our lifetimes. It's a fundamental rewiring of technology and an incredible [inaudible] of human ingenuity. Let me step back and explain what I mean by a platform shift. When you look back, major platform shifts have two things in common: an underlying step change in technological advancement, some combination of computing power, connectivity and user interface that fundamentally resets how everything is done, and the creation of a new ecosystem that helps people and organizations succeed on a larger scale. Let's take them one at a time. First,
a step change in tech advancement. The combination of deep neural nets, the transformer architecture and advanced compute, we're scaling up compute like never before, along with internet-scale data, has come together to deliver the foundation models that you all use every day. For AI to be a platform shift, we and others have to make these models accessible so others can drive innovation on top of them. And it needs the creation of a vibrant new ecosystem. Today, there's a flourishing of tools, infrastructure and innovations driven by many companies, startups and developers. At Google, we are opening access to
world class tooling and infrastructure, and today more than two million developers are building with our AI models. We're already seeing costs coming down. If you look at the cost per token of our flagship models over the last 18 months, what used to cost four dollars per million tokens now costs just 13 cents, and this trend is going to continue. You're going to have intelligence just like air, too cheap to meter. This might just look like a chart, but to me, it's an exciting glimpse of what's to come. So by these measures, AI has
all the ingredients of a platform shift. But what makes it so profound? Three things. First, as AI interactions come to feel more intuitive and human, they'll put us at the center of the experience. Technology begins to feel like a natural extension, augmenting human capability, bridging gaps in expertise and experience, and breaking down barriers like language and accessibility. Second, it's a truly general purpose intelligence. It cuts across all different human endeavors, all parts of the economy. Every company, every sector, will use the technology in their own ways. It'll continue to improve; it's self-improving. It'll help
us create new things, and I think it'll touch all areas, including areas like medicine, education and climate. And all of this is poised to drive an explosion in knowledge, learning, creativity and productivity that will shape the future in exciting ways, what I call a new golden age of innovation. And we are at the very beginning of this platform shift. So how are we approaching this at Google? We're going to be bold and responsible. We are focused on improving our next generation of models. I mentioned Gemini, which is demonstrating state of the art multimodal capabilities. We launched
it last December. We lead the industry in progress with long context. It has two million tokens of long context, more than any other model. And it's the first model built natively to be multimodal, and it can really unlock some interesting use cases. To give you a glimpse of the future we'll all have, where maybe we'll wear glasses with it, we are working on a prototype, Project Astra. It uses Gemini to process the real world, understand the context you're in, and respond naturally in conversation. Here's a quick video that shows what it can do. >>
Let's do some tests. Tell me when you see something that makes sound. >> I see a speaker, which makes sound. >> What is that part of the speaker called? >> That is the tweeter. It produces high frequency sounds. >> Give me a creative alliteration about these. >> Creative crayons color cheerfully. They certainly craft colorful creations. >> What does that part of the code do? >> This code defines encryption and decryption functions. It seems to use AES-CBC encryption to encode and decode data based on a key and an initialization vector (IV). >> That's right. What neighborhood
do you think I'm in? >> This appears to be the King's Cross area of London. It is known for its railway station and transportation connections. >> Do you remember where you saw my glasses? >> Yes, I do. Your glasses were on the desk near a red apple. >> What can I add here to make this system faster? >> Adding a cache between the server and database could improve speed. >> What does this remind you of? >> Schrodinger's cat. >> Give me a band name for this duo. >> Golden Stripes. >> Nice. Thanks, Gemini. >> Hopefully,
it gives you a glimpse of what will be coming soon into all your hands, and it's amazing. I've been using it. We're also making advancements in planning, reasoning and mathematics. This past summer, our AlphaProof system was the first AI to earn the equivalent of a silver medal for solving International Mathematical Olympiad problems. It's a huge step forward in capabilities. We've been talking a lot about what companies are doing, but platform shifts are driven by universities and research institutions too. For example, the internet, as you all know, was originally an academic research network. This role is especially important
in AI, and CMU's work on speech recognition, autonomous vehicles, robotics and more is driving greater progress. It's going to take everyone innovating to unlock AI's potential, and as I've said before, AI will transform every industry and domain, from climate to health to education. But today, I want to show how it's already accelerating scientific discovery. A good example of this is AlphaFold. AlphaFold has predicted the structure of nearly all cataloged proteins known to science. It's helping scientists around the world speed up research to develop new malaria vaccines, cancer treatments and even plastic-eating enzymes. There are
more than two million biology researchers from every corner of the world using the AlphaFold database, and its protein structure data has saved up to one billion research years by our estimation. A couple of other examples. Our connectomics research in neuroscience: here, we partnered with Harvard and others to use AI-based processing and analysis to map a piece of the human brain at an unprecedented level of detail. In the world of AI and genetics, the human pangenome effort brought together scientists to publish the world's first draft of a human pangenome. Again, we used deep learning approaches to improve
analysis techniques and eliminate sequencing errors. That's a glimpse of what's exciting in the research domain, but we are also driving practical applications for people. Two examples. One is in climate prediction. We used AI to build our NeuralGCM models, a new approach to building atmospheric models. They are faster, more accurate and require less compute. Models like these have dramatically helped improve our flood forecasting, which is now in more than 80 countries, covering more than 460 million people. And for millions who live under threat of wildfires, we recently announced FireSat, a new constellation of satellites to detect
wildfires in an area as small as five square meters. There are so many meaningful opportunities to apply science to improve lives. AI is not the only technology we are investing in that's accelerating science. Quantum is another big focus for us. We've been investing to unlock the full potential of quantum computing, and we are partnering again with several universities. AI will help us continue to make advancements, particularly in the area of quantum error correction, one of the big bottlenecks on the way to the holy grail of a fully error-corrected quantum computer. We are super excited by the progress
here. AI, as you can see, has massive potential to benefit the world. We are acting as boldly as we can to apply it very quickly, but we have to approach it responsibly. As with any emerging technology, it has limitations. It has issues with accuracy, factuality and bias. It can be misused, for example in the creation of deepfakes. For all these reasons, we believe that AI must be deployed responsibly from the start. Our product development and all our AI work are guided by our AI principles, which we published back in 2018, and our approach falls into three main categories.
First, we are developing technology solutions to address these risks. Two examples: we are working on watermarking, embedding watermarks that persist even after editing to help identify synthetic content. There's more research to be done here, and we are working hard to apply it across image, text, video and audio. We are also working on technologies that can address sustainability. AI development demands a lot of energy, so we are focused on model optimization, efficient infrastructure and innovating on new solutions like advanced geothermal projects. We are deeply committed to this work. Second, we approach it through partnerships
and collaboration. Many of these issues involve ethics, philosophy and art, issues that will engage all of us, not just computer scientists. We have partnerships with universities, and we also engage and collaborate with others in industry and with governments. For example, we have engaged with other companies to set up the Frontier Model Forum to establish rules of the road for AI safety. Third, building AI responsibly means making sure everyone can benefit, whether it's expanding access to information in more languages or training more than 100 million people in digital skills. We want to make
AI helpful for everyone. This is a huge opportunity, and it's what we are focused on above everything else as a company. Looking ahead, we'll see the emergence of highly capable intelligence that will redefine what's possible in ways we can't yet imagine. There are so many interesting problems yet to solve. And I'm looking to this next generation of young scientists, all of you in this lecture hall, or more like a gym, I should say, to help drive progress. I can't wait to see what you all do. Thank you. [APPLAUSE] >> Well, first of all, Sundar, thank you for being
with us and thank you for that inspiring talk. Ladies and gentlemen, let's once again thank Sundar for his talk. [APPLAUSE] But we're not done yet. We're going to engage in a conversation, and also, I want to give you a quick reminder that to submit questions via Slido, look at the screen; you know what to do next. Sundar, I'm sure that our audience wants to know a lot more about AI, and I'll pose a few questions to you about AI. But before we go there, you've had an amazing career. You lead one of the most impactful companies
in the world. I want to talk to you a little bit about your journey, how you got here. Obviously, we have a lot of students in the audience, and they're interested in knowing about your journey. Tell us about yourself. Tell us about how you got here, how you ended up becoming CEO of Google and Alphabet. We have only 45 minutes, so go for it. >> I'll be brief. Look, first of all, it's a privilege to be here, and thank you for having me. I grew up in the south of India in a city called Chennai
in a middle-class family. Looks like there are a few people from there. [APPLAUSE] >> More than a few. [APPLAUSE] >> Look, it was a positive life; in some ways, I miss and cherish the simplicity of it. I definitely had an environment of learning and knowledge around me, which I think set me on the path I'm on today. I distinctly remember every aspect of technology that came into our lives. We had to wait for it. >> Sure. >> We waited five years for a rotary telephone, and then you make one call to find out whether your blood test
results are ready and save yourself four hours of a trip to the hospital. So early on, it left a strong impression on me how access to technology can improve people's lives, and it's been a journey to do that. I was fortunate to come to the U.S. to pursue my graduate studies, and I had a chance later to be at Google. The mission of Google, it's literally written in our mission, is to make information universally accessible, and that's what brought me there. And everything we've done, if you think about Android, the chance to provide for billions of people,
four billion people today use Android smartphones. So things like that, and the same will be true with AI. How do you bring access to technology to as many people as possible? That's always what's motivated me. >> Speaking of access to technology, I know you've been passionate about this topic. You have been eloquent about the importance of that, and in fact, earlier today, if I'm not mistaken, you just made an announcement related to access to technology, in particular in the context of K-12. First of all, congratulations on that announcement. You may want to say a few
words about what that announcement is, and I'll come back to the access to technology issue. >> Very briefly, this was an announcement focused on K-12. Pretty much all teachers and students across the country are grappling with how to think about AI: how do they incorporate it into their curriculum, and how do they approach it in the classroom? So we just announced a $25 million grant to many organizations, particularly ISTE, 4-H, and a few others. This is to educate about half a million students and educators on applying AI in the classroom, what the new tools and techniques are,
and then to share that knowledge widely. I think it is such an important part of our future. We have to learn how to embrace it in a positive way, and we are doing work to help make that happen. >> So tying that back to your upbringing and your journey, why is this topic so important to you? And also, can you elaborate a little bit on what Google has done in this area to make technology more accessible, and pivot to that issue? >> Look, why is it so important? Part of
what really drew me to Google was that it didn't matter whether you were a professor at Carnegie Mellon or a student in rural Indonesia. As long as you had access to a computer and connectivity, you had the same information at your fingertips. >> Of course. >> And the power of that is really hard to measure, and it plays out; it's a multiplicative thing over time. Technology is such an equalizer, an enabler of new opportunity, and AI will be the most profound technology shift. So how do you bring that to as many people as possible?
This is a core part of what motivates us. It's obviously good business for us, but we've always taken the view that you drive access, you help people build on that, and the rest works out. And with AI particularly, we're trying to figure this out. I briefly spoke about the cost of these models coming down, so the question for us is, how can we make the next generation of models accessible, so that you can use them at scale as cheaply as possible, they're present on all smartphones, and they're available to developers so
that they have APIs to come and use them? How do we open source the technology? That's the frontier we're pushing on. >> So let's pivot to AI for a moment. You alluded to this; obviously, the future looks amazing. And in fact, as you mentioned, we have a number of colleagues at Carnegie Mellon who've been working on this for a long time. In fact, some have argued this is probably one of the most transformative intellectual developments of our time. And the potential impact of that on society, on every sector of our economy, is undeniable.
Now, I'm a little older than you, and I've got to tell you, those of us who have been in the computer science space have seen AI go through these cycles, where we have seen euphoria about the opportunities with AI, and then what my fellow computer scientists refer to as AI winters, where we've had downturns. But the current generation of AI work, including technologies like generative AI, seems different. It seems different from what we have seen in the past. And of course, research does not move in a straight
line; there are often periods of positive development and then failures or stagnation, potentially. So in your view, and you painted an amazing vision, which is obviously very consistent with how we at Carnegie Mellon view the future of AI, what is different about generative AI and the breakthroughs that we're seeing in the current generation of AI technologies? >> It's a great question. Look, I think this trend has been going for a while. Maybe around 2010, that's when we brought Professor Geoff Hinton's team from the University of Toronto to Google, who had started working on deep neural networks.
>> Who was at CMU, by the way? >> Who was at CMU before that, as you mentioned. And I remember getting a demonstration where for the first time we saw these deep neural networks recognize pictures of cats on their own. Since then, so for me now, it's been almost 14 years, we've just seen progress across the board, and I think it's now captured the public imagination. There's what we call the scaling laws: we are all scaling up these models, we are throwing more compute and more data at them. And so far, every generation has
shown tremendous progress. We are trying to achieve further breakthroughs in planning and reasoning, and so on, and there is a good debate around how much more potential there is. But regardless, when I look at the fact that these models are already multimodal, that's why I showed the video I did. They're multimodal. In research, we've demonstrated a long context of up to 10 million tokens, and in production, we have it at two million tokens. So it's going to have incredible context, and we are all working on making these models more agentic so they can use tools. So at
that level, no different from the internet and mobile, we are going to practically drive a lot of progress. And if you just look at an area like coding or an area like math, we are pushing the frontiers pretty hard. So I think it feels palpable. You of course don't know the future; we may hit a ceiling. But it feels to me that, not just in pre-training, now we are all going to scale up computing at the inference level too. So constantly, every few months, I've never seen a field move forward at the pace this one is. And
we're beginning to see practical applications in our products, in what we are working on and what we see others doing with it. So I think we are now in a golden age where people are going to build applications on top. And the fact that AI can itself iteratively help accelerate creation, it's going to change how we program. >> Absolutely. >> So that's a big thing. And with robotics, and by the way, the highlight of my afternoon was touring the teams here, and the research here is incredible and at the cutting edge. With
robotics, you're going to change how you make things in the real world. So that multiplicative aspect of it, I think, is unlike any other technology we've seen. So at least count me in the optimistic bucket, and I think it's going to push the frontier a lot from now on. >> I think you'll find hundreds of my colleagues here at CMU share your optimism. Speaking of multimodal large language models, what is Google thinking about multimodality? We see the impact of generative AI just on text, and it's unbelievable. And of course, as you said, not
only does it have an impact on every sector of our economy, but in fact, we share your optimism about transforming the nature of science and scientific discovery itself. But tell us a little bit about multimodal large language models and how Google is thinking about them, or how you are thinking about them. >> Look, to be clear, there are a few labs now which are training fully end-to-end multimodal models, but none of us have actually turned on output on the multimodal side. You'll see that happen through next year. And I think the unprecedented
level of human interaction it's going to enable, and you saw glimpses of it. I think that's how we humans approach the world; we don't work with one modality. Obviously, we use multiple modalities. So bringing that into intelligence, I think, is going to be one of the biggest breakthroughs. We are building it, and we already see it at Google today: visual queries coming into Google are in the billions. People just point their phones, take a picture and ask us questions, already. And it's really hard to do. >> Of course. >> You have to pull out your camera, click on
an icon and do it. You can imagine, as phone form factors evolve and AI can see what's around you, we already have early data showing that's how people want to interact. But just imagine all the real-world applications. Imagine a doctor interacting with a patient: why can't you have an assistant AI with the doctor, observing the whole interaction, using that, learning across all interactions and iteratively getting better? So there's no application I can think of that it won't touch. We're working on self-driving; I've seen the progress with Waymo. I see the progress in robotics.
And again, your researchers are working hard to bring end-to-end models to transform how robotics works. Again, I think it cuts across everything you do, which is why I'm excited by it. >> No doubt, it's a very exciting future. Before I go to my next question, I should tell you that Sundar did visit several of our labs. I had told him, and he knew this already, that he could spend the next two days at Carnegie Mellon and still not see everything that's happening. But I do want to thank you, because
you are so busy and your time is so precious, for taking the time to spend the afternoon with our colleagues and their students. It's quite extraordinary for our students, undergraduate and graduate, to be able to spend time with the CEO of Google and talk to you about technology, science and policy. Again, I want to thank you for doing that. [APPLAUSE] >> Thank you. >> Let's pivot, still on AI, but to the topic of energy and environmental sustainability. And I know it's an important topic to you personally. The New York Times
recently essentially labeled AI a notorious energy hog, and I don't think anybody disputes that. There are different shades of it; the numbers vary. But regardless of where you land, for example, the International Energy Agency has predicted that global energy demand from data centers, cryptocurrency, and AI and large language model development and training will double within just the next couple of years. And there are other data that show 160% growth and so on in energy use. The general consensus is that energy is an issue for AI and emerging technologies. On one
hand, we see the excitement around it, the potential impact of it on humanity and society, as you alluded to. How seriously should we be taking this problem? What is Google thinking about it? And what are some of the innovations taking place at Google to try to tackle this issue? >> Such an important question. This is something that does concern me along a few dimensions. At Google, we've been carbon neutral since 2007. We were one of the first companies to do so. And we set ourselves an ambitious goal to be completely 24/7 carbon-free
in our operations by 2030, but we set that goal before the current AI moment. And you are right. We are scaling up. We are now working on data centers of over one gigawatt, which I didn't think we would be contemplating even two years ago. And all of this needs energy. I think in the short term, it is challenging. In the medium to long term, I'm optimistic, because I think it's also bringing a lot of capital investment to developing new sources of energy. We invested very early in wind and solar because we saw the
opportunity there. And today, many of our biggest data centers operate on a roughly 90% carbon-free basis. Recently, our Nevada data center began running on geothermal, so we brought a new source online, and we want others to do that too. I see the amount of money going into SMRs, small modular reactors for nuclear energy. And so when I look at the capital and the innovation going in, I'm optimistic in the medium to long term. Also, on the consumption side, we're going through the early phases. We're all inefficiently pretraining these models,
but on the inference side, I think we can get dramatically more efficient over time and maybe accomplish a lot of these tasks in a much more efficient way. So a decade out, I think we can be optimistic, and I do think AI itself will help us solve this problem. Just like we are forecasting climate better now, maybe AI will help us make progress in nuclear fusion and so on. So it may end up playing a role, but in the short term, we are aggressively looking globally at
where we can find renewable sources of energy, and we're offsetting where we can. But I view this as an uncomfortable interim phase to get through before we reach a better place. >> Thank you for that response. Sundar, a few days ago, I was thinking about what questions to pose to you. And as I was thinking about these questions, I thought, why don't I just ask generative AI to help me formulate some of them? I don't know if your staff told you this, but it's okay. We figured it would be a surprise. >>
For the AI to tell me. >> Yeah, AI told you this. So we went online and we decided to ask Google Gemini, Perplexity AI and also Microsoft Copilot. What questions would you ask? Here are the two things that we pose to these generative AI tools. What was what questions should I ask Sundar? And the other one was, if I was trapped in an elevator with Sundar Pichai, what would I talk to him about? Are you curious to know what I found out? Yeah, you are, I'm sure. So let's start actually with Gemini. By the way,
we got a bunch of questions back. It wasn't just one or two, but I think our audience can see this on the screen and you can take a look. These are pretty sensible questions. What strategies does Google employ to attract and retain top talent? Ask about Google's vision for AI and how it will change the world. I think you actually addressed some of these issues already. So this was not rigged, folks, I promise you. Then let's look at what Perplexity AI said. Again, they seem pretty sensible. In what ways does
Google plan to contribute to global challenges such as climate change and digital inequity? And actually, you partially addressed both of these. The last one was an interesting one. Discuss his views on failure in the tech industry, not failure of the tech industry; it's really about risk-taking. How do you encourage your team to take risks? What lessons have you learned from setbacks in your career, if any? Actually, you should take that question. >> In tech, I think failures are an important part of your journey. One of the things we've always felt at Google is that when you want to work on really ambitious things, first of all, no one else is working on them, so you tend to have less competition. Second, it attracts the best people in the world, including people from CMU. And third, even if you fail in accomplishing what you set out to do, whatever you created in the process will still be incredibly valuable. We've always had that mindset. And through that process, I think you have to encourage teams to take those bets. Rewarding outcomes is important, but if you only look at outcomes, people learn to game it over time. People set easier and easier goals. So you also reward effort, people being entrepreneurial, taking risks, and you have to build rewards for that into your culture. I think that's incredibly important. And you have to stay the course. Take self-driving: CMU has been on this journey as long as Google has. I think we were inspired by a lot of the work that happened at CMU when we started working on Waymo.
Even five years ago, people would look at Waymo and say, will this work? I think if you now go and sit in a car in San Francisco, you'll see it. I recently took a trip in one. I was in the backseat, I was on my phone. There was no driver at all, and it's making its way through a crowded street. Technology does progress. And so you have to stay the course, stay focused on the long term, and get there. >> And to your point earlier today, when we were with Red Whittaker, we
were essentially reminiscing about the DARPA Grand Challenge, and if you look back, it's been decades since CMU won it with Boss, the vehicle we won it in. You can clap if you want. Perfectly fine. This is their work. But more seriously, you are so right about this, because what we have seen over the last several decades was not always progress. There have been discontinuities, there were setbacks and so on, and we are at a point where we're taking a lot of these things for granted, when in fact it's been through sustained investment. And honestly, when we look at our research, that's absolutely where this country also needs to pay attention, which is sustained investment in research that has to be carried out over decades. Would you like to see what Copilot said, Microsoft Copilot? >> Sure. >> I think it's pretty instructive. I thought it was a little bit edgier for some reason. I'm going to read it to you if you can't read it back there. One question was, could each of you please (I don't know who "each of you" is, but it just went off the rails)
explain why consumers shouldn't benefit if you were to break up Facebook, Google, Apple, Amazon. Microsoft is notably missing here, by the way, but this is all in good humor. Another one was, how is it fair for Google to aggressively promote its own products alongside third-party products? Is this a level playing field for sellers? By the way, I did not game this. This is really what we got. And the last one is something that I want to pivot to and follow up with a question. >> On the second one, it's confused us with some other company, but I'll leave it up there. >> Great rebuttal. But let me go to the last question more seriously. The next question really is about, and you partially alluded to this: there are a lot of great things about where AI is going and its potential to benefit society, but as you mentioned, we need to worry about privacy issues. We need to worry about illicit use, misinformation, deepfakes, fraud. We need to worry about intellectual property issues. We need to worry about issues surrounding knowing the provenance of data and also
understanding why some of these algorithms work the way they do. So there are a number of issues that have to be dealt with, and you alluded to this. Responsible AI is something that I think many tech companies are embracing, and you've been very eloquent about this issue. There's a growing global discourse, of course, around establishing frameworks for AI governance and AI regulation, or potentially AI policies for responsible AI, to ensure AI is developed and used ethically. A number of different governance models exist. For example, the U.S. approach, the European approach, and the approach taken by countries like China are different models. In the borderless world we live in, where you're a multinational company operating globally, how should we be thinking about a global framework for AI innovation and governance? What's your current thinking? You've partially addressed this, but I want to talk about it more broadly. >> It's an important question, and it's a hard one; driving global progress is difficult. First of all, it's important to understand we are in an early stage of the technology, and you
generally want to take an innovation-oriented approach to this space. And I've said this before: it's too important a technology not to regulate well. There are good things happening. The first is that many countries around the world are developing core capabilities as governments to deal with AI. That's happened in the U.S., the U.K. and a few other places. A few companies have proposed voluntary AI commitments, and I think that starts giving a blueprint. I do think there are conversations happening in important forums like the G7, which has started talking about a code of conduct around AI. So at that high level, establishing common goals and setting higher-level principles are a good place to start. The question is how you expand it, and how you start bringing in people with very divergent views is going to be the challenge. In the long run, to have a global framework, you inevitably have to think about how China approaches AI and how we arrive at a constructive framework there. Those are the harder questions ahead, I think, but
for now, I think a set of like-minded countries are beginning to engage and have conversations, which is good. Through it all, I think we need to be careful that we don't end up making it difficult for smaller companies, startups, etc., to make progress; regulation can be too burdensome if you do it too early. So there's a balance to be struck, but the right conversations are beginning to happen, and we need to do a lot more of it. There's a lot more work to
be done. >> When you think about the history of information technology, the history of technologies that have transformed society, mostly for the better, in fact, if you go back to the Industrial Revolution, humanity often had time to catch up with advances in technology, but policy has always trailed advances in science and technology. We've known this. But it seems like when it comes to AI, advances are happening at a pace such that it's not clear we're ready for them. And that's not a pessimistic statement. It really is something that I think we need to recognize: the pace is much faster than what we're used to with other technologies, and humanity has to be ready for that. Your thoughts on this? >> Look, you feel it when you talk to one of these models and it surprises you with glimpses of what look like really advanced capabilities. I felt it a bit when we invited one of the robots to join our group picture earlier today and it made sure it was looking at the camera. The change is coming at us fast, and I think you are right. This is why with
AI, I don't think you can wait and tackle it later. We have to anticipate, and so all the work we are doing as companies, universities, and governments will end up being super important, and we have to be aggressive about thinking about it. But I think you have to balance it, because we all have to understand that many industries already have regulations in place. Take healthcare: healthcare is heavily regulated, so it's not like AI can flow into healthcare and bypass all those regulations. So how you leverage the existing regulations while allowing innovation to flow through is worth thinking about. The challenge with regulating AI is that AI is a technology we use for everything from recommending a coffee shop you may like to potentially recommending the most optimal medical treatment for you. So making sure whatever you're doing is proportional to the risk, all of that needs a lot of thought. I think we have to be careful here, but it's important to develop the framework and engage in conversations as early as possible. >> Sundar, you're actually going in a direction that I've heard others also talk about, where
we already have industries with significant guidelines and regulatory frameworks, whether in finance, healthcare, or elsewhere, that guide those industries. The question really becomes not an overarching governance model for everything in every industry, but how we bring these frameworks to various sectors of the economy. That seems to be the direction you're pushing for. I have a couple more questions, but I know we want to go to some of the audience questions, because I am sure we have tons of them, and we're going to try to
get to a few of them. Bethany and Jason are going to help us with this. Let's take a question from the audience. >> Can everyone hear me? >> Yeah. You're good. >> My Slido crashed, so I have the first 100 questions that were asked for you. We're going to start with: do you intend the research done with DeepMind on AlphaProof and AlphaFold to be mainstreamed into Google's product offerings? >> Definitely. Just recently there was the IMO, the International Math Olympiad competition. There were, I think, six problems for a total of 42 points, I forget exactly. There were about 58 gold medalists, and we got a silver medal in it, but the techniques used to do that involve breakthroughs in reasoning which we are working hard to bring to our mainline models in Gemini, absolutely. All of us are working hard on cracking how you get these models to better plan and reason, which is the next breakthrough in intelligence I think we need. And so the techniques we've used in AlphaProof and AlphaGeometry, we are working hard to bring to our mainline models. >>
This one was asked in several different forms. >> So will AI replace entry-level programmers? >> There's a version of this question which can be asked across many disciplines. I think the most likely scenario is that it'll help existing programmers do their jobs, so that most of their energy and time goes into higher-level aspects of the task rather than fixing a bug over and over again or something like that. It'll make programmers much more productive. We can see that at Google as we adopt it. Across the board, we are seeing productivity increases, but we are still in the early days. And second, I was playing around with new tools like Cursor. I'm sure people here know about it. It is just lowering the barriers for who can program. You can interact more in a natural-language medium, so programming becomes more like a creative tool. I think that's going to enable more people and make it accessible to them. So I think that's the right framework, and it's much the same for a radiologist: can they do their job, will they have more time to interact with patients, and do they have a collaborator with them to make the job more interesting? As long as we keep that lens, I think we're fine. The phrase artificial intelligence is unfortunate; it creates a comparative element. It doesn't need to be that. It's enabling intelligence. My bet is many more people will be programming in the future. It's been true with chess: the AI models are so much better at playing chess than anyone else, yet more humans today play chess than at any point in human history, and there's more interest in it than ever before. I think that gives you a sense of
that. >> If I can follow up, I actually want to broaden that question a little bit. Recently, Harvard Business Review reported that the average half-life of skills, technical skills especially, is now less than five years, and in some tech fields it's dipped to as low as 2.5 years, which, by the way, explains why the tenure of university presidents is shrinking. But we won't go there. As we think about workforce development and the future of education, give us some advice. How should we be thinking about not only educating college students, but also about upskilling and reskilling a workforce? That is obviously going to be a challenge: some jobs are going to be lost, some jobs are going to be created, some jobs are going to evolve. How should we be thinking about it as a society, and any advice for us as university teachers? >> I think you all play a super important role here. I do think this notion of reskilling people matters; gone are the days when you educate yourself once and that's it for the rest of your life. I think
that's fast-changing. The more we can have touch points to reskill people, the better; that transition is going to be important. We've had success; we've experimented ourselves. We today have Google Career Certificates, a big program as part of Grow with Google. We train people in these intense nine-month programs and reskill them, be it into IT roles or data analytics roles, etc. And we've had tremendous success in helping them find employment on the other side. So I think community colleges have a role to play. At a governmental level, thinking about workforce reskilling and tapping universities and community colleges will have to be an important part of doing that. And to my earlier point, I don't want to underestimate AI's role in helping you upskill yourself as part of your day-to-day work too; I think both will end up being important. >> I remember when you visited Pittsburgh a few years back, it may have been 2017 or '18, and you announced Grow with Google. >> That's what I'm talking about. >> And this is really the
program that you're referring to. >> Yeah. >> And obviously it's been a success story looking back over the past seven or eight years. >> Yeah. I actually announced it in Pittsburgh in 2017, and we committed a billion dollars as a company. A big portion of that is going towards reskilling, and we've scaled up our career certificate program. In the U.S., we have trained 10 million people so far; this is not just the career certificates, but more broadly training people and businesses on how to use internet and digital tools to shift how you do business, how to have an online presence, how to reach people, etc., and it's been very, very successful. >> Before we go back to a couple more audience questions, I wouldn't be doing my job if I didn't acknowledge that we're so happy that Google is in Pittsburgh. And, of course, as you mentioned, it actually started on our campus. Andrew Moore is sitting right there, my former dean and colleague, who was responsible for working with Google to launch Google Pittsburgh, which of course has grown into
a very vibrant part of our regional economy, not to mention contributing, of course, to Google. For our students who are not familiar with it, tell us a little bit about what's going on at Google Pittsburgh, and what are your aspirations for it? >> I'm almost tempted to call on Andrew to answer the question. But look, we are now over 700 people here. I was just in our offices in Bakery Square this morning. Not surprisingly, it's heavily technical and computer science-oriented, being in Pittsburgh, so close to CMU. But we have projects in Search and in Ads, and in Cloud particularly, there are teams scaling up. You talked earlier about scaling up the technical infrastructure; a good portion of that happens out of Pittsburgh. We now have Google DeepMind teams and Waymo teams, and there's a cross-cutting theme of AI and ML pretty heavily across the office, and I'm confident we'll continue to grow here. Google thrives when we are doing things in close conjunction with universities, which makes Pittsburgh an incredible place. But it's been great to see not just us; there are many companies here, and coming here periodically, I've seen them progress: Duolingo, [inaudible], etc. It's good to see the underlying innovation here in Pittsburgh. >> Absolutely. And there's no question that Google has played a disproportionate role in catalyzing the local ecosystem, which has now led to more than 100 robotics companies. Not only startups, but major companies have a presence here in Pittsburgh, partially because of the ecosystem that has been created around emerging technologies, some of it due to the work that we're doing at Carnegie Mellon; of course, our neighbors at the University of Pittsburgh do quite a bit of that in the life sciences. With that,
Bethany, why don't we go to a couple more questions? >> Farnam, this is one that I know you like to ask as well, so you may follow on. Many people say that what differentiates humans from AI is that we are capable of creativity. What is your view on this? >> I think over time, what AI will enable is to help humans unlock more of their creative potential. We underestimate how much of our time and effort goes into dealing with mundane things every day. And so I see it that way. Again, I don't see it as a conflict. Many years ago, CGI came along, and today we have filmmakers who employ it to make better movies than ever before. So I think about it that way: many more people in the world will have jobs which tap into their creative potential, and they can make a living from it thanks to AI. Try going back to someone from 40, 50 years ago and explaining what a YouTube creator is. Today, any number of people have jobs, high-paying jobs, as YouTube creators. It's a whole new economy, and you can't describe it to someone from
30, 40 years ago. AI will enable many things like that in the future. It'll enable more people to create music. And I think society will place a premium on human-created and curated experiences. I think there's a strong role to play there. >> Bethany, one more question. >> Yeah, final question. This goes with your earlier conversation. Should there be a standard or a marker that differentiates AI output from human output? >> For sure. Particularly at the stage we are in, being able to reliably identify synthetic content is clearly going to be very important. That's what I spoke about. We're working on watermarking, and it works well in certain areas; in other areas, it's more in what I call an active-research stage. But doing it, and being able to detect AI content regardless of whose tools were used to create it, I think is important. For example, at Google, we're working hard to make sure that if you see an image, you can ask Google: when was this image first created? When did it come into the world? Understanding the provenance of images is going to be important so that we can distinguish reality from what isn't. And so I think these are extraordinarily important principles. >> Let's take one more question, and then I want to bring this to a close soon. >> So let's do one that's a little more broad-based. If you could change one thing about your career, what would it be? >> I would have liked to play soccer somewhere in the world. >> We might be able to help with that. We have a great... >> That is something for which I would need AI itself in a pretty profound way. That's a good question.
If I could change one thing about my career: I genuinely wanted to be in academia. I came here to the U.S. to do my Ph.D. I did drop out. But in an alternate world, I love academic environments, so I love being back here. So maybe that's what I would change about my career, for what it's worth. >> I know someone who can help you with that if you want. >> I think your uncle would agree. You did pretty well. Yeah, you did that. Before we wrap up, I want to ask a couple of questions. It really has to do more with decision making and leadership, especially recognizing that we have a large audience. As a CEO, you run a multinational company, we talked about that, and of course it's extremely complex, with 180,000 employees. How do you start your day? How do you make decisions? How do you decide to prioritize things? Do you have any advice, especially for our students? >> I start my day with Indian masala chai; if you haven't tried it, you should, every day. The main thing I think about in the mornings, particularly, is this:
we all live in a world where there's so much coming at you. It's information overload. A lot of running things at scale is separating the signal from the noise. Most of what's coming at you is not consequential, even though it feels that way. Even when you go through your day or your week, most of the decisions you make aren't actually that consequential; you just want to err on the side of making them. Even if you get one wrong, it's easy to fix. But there are very few things that are consequential. A lot of it is stepping back, and I use my tea time in the mornings to try and step back. And I definitely read. I always start my morning by reading the news, absorbing what's happening in the world, and maybe at Google, but then stepping back for a few minutes at least and thinking about what it is I want to get done. Otherwise, you let that information overload dictate what you do. So almost resisting that temptation, stepping back and thinking about the few important things you need to do in the course of a day or a week, is the important thing, I think, and that's a tool I've used often. >> Thank you for that response. The final question for you. As I mentioned, we have a lot of students in the audience. By the way, the students we have here are from our business school, from our College of Fine Arts, engineering, computer science, and so on. If you could travel back to your days as an undergraduate and then a grad student here in the United States, what one or two pieces of advice would
you give to these aspiring students? >> Maybe the main thing I would tell them is not to be too stressed. >> These are Carnegie Mellon students, by the way. >> Which is why I'm telling them that. I think you're trying very, very hard to figure out what it is that you want to do, and your mind is telling you a lot of things. But I think you want to take the time to figure out what it is that your heart would enjoy doing. And if you
do what you enjoy, you're far more likely to do well. For some people that answer is very obvious, very early; for others, it can come 5 to 10 years later. So the main advice I would have is to take that time to figure it out. Sometimes patience is a virtue, and people are impatient to quickly figure out what it is that they want to do, but slowing down and trying to find what it is that you love doing, I think, is the most important thing you can do.
>> Thank you for that response.