Welcome to the stage, NVIDIA founder and CEO, Jensen Huang. Taiwan. It's great to be here. My parents are also in the audience. They're up there. NVIDIA has been coming to Taiwan for over 30 years. This is the home of many of our treasured partners and dear friends. Over the years, you have seen NVIDIA grow up, seen us accomplish many exciting things, and you have been partners with me all along the way.
Today we're going to talk about where we are in the industry, where we're going to go, and announce some exciting new products, surprising products that open new markets for us, create new markets, new growth. We're going to talk about great partners and how we're going to develop this ecosystem together. As you know, we are at the epicenter of the computer ecosystem, one of the most important industries in the world. And so it stands to reason that when new markets have to be created, we have to create them starting here, at the center of the
computer ecosystem. And I have some surprises for you, things that you probably wouldn't have guessed. And then of course, I promise I'll talk about AI, and we'll talk about robotics. The NVIDIA story is the reinvention of the computer industry. In fact, the NVIDIA story is also the reinvention of our company. As I said, I have been coming here for 30 years. Many of you have been through many of my keynotes; some of you, all of them. And as you reflect on the conversations, the things we talked about in the last 30 years, you can see how dramatically we've changed. We started out as a chip company with the goal of creating a new computing platform, and in 2006 we introduced CUDA, which revolutionized how computing is done. In 2016, 10 years later, we realized that a new computing approach had arrived, and this new computing approach required a reinvention of every single layer of the technology stack. The processor is new, the software stack is new; it stands to reason the system is new. And so we invented a new system, a system that on the day I announced it at GTC 2016, no one understood what I was talking about, and nobody gave me a PO. That system was called DGX-1. I donated the first one to a nonprofit called OpenAI, and it started the AI revolution. Years later, we realized that this new way of doing software, which is now called artificial intelligence, is unlike traditional ways of running software. Whereas many applications ran on a few processors in a large data center, what we call hyperscale, this new type of application requires many processors working together, serving queries for millions of people, and that data center would
be architected in a fundamentally different way. We realized there were two types of networks: one for north-south, because you still have to control the storage, you still have to have a control plane, you still have to connect to the outside. But the most important network was going to be east-west: the computers talking to each other to solve a problem. We recognized the best networking company for east-west traffic, for high-performance computing and large-scale distributed processing, a company that was very dear to us and very close to our heart, a company called Mellanox. And we bought them five years ago, in 2019. We converted an entire data center into one computing unit. And you've heard me say before, the modern computer is an entire data center. The data center is the unit of computing. No longer just a PC, no longer just a server: the entire data center is running one job, and the operating system would change. NVIDIA's data center journey is now very well known. Over the last three years, you've seen some of the ideas that we're shaping and how we are starting to see our company differently. No company
in history, surely no technology company in history, has ever revealed a road map five years at a time. No one would tell you what is coming next; they keep it secret, extremely confidential. However, we realized that NVIDIA is not only a technology company anymore. In fact, we are an essential infrastructure company. And how can you plan your infrastructure, your land, your shell, your power, your electricity, all of the necessary financing all over the world? How could you possibly do that if you didn't understand what I was going to make? And so we described our company's road map in fair detail, enough detail that everybody in the world can go off and start building data centers. We realize now we are an AI infrastructure company, an infrastructure company that's essential all around the world. Every region, every industry, every company will build these infrastructures. And what are these infrastructures? They are in fact not unlike the first industrial revolution, when GE, Westinghouse, and Siemens realized that there was a new type of technology called electricity, and new infrastructure had to be built all around the world. That infrastructure became an essential part of social infrastructure; that infrastructure is now called electricity. Years later, within our own generation, we realized there was a new type of infrastructure, and this new infrastructure was very conceptual, very hard to understand: the infrastructure of information. This information infrastructure, the first time it was described, made no sense to anybody. But we now realize it is the internet; the internet is everywhere, and everything is connected to it. Well, there's a new infrastructure. This new infrastructure is built on top of the first two. And this new infrastructure is an
infrastructure of intelligence. I know that right now, when we say there's an intelligence infrastructure, it makes no sense. But I promise you, in 10 years' time you will look back and realize that AI has integrated into everything. In fact, we need AI everywhere; every region, every industry, every country, every company all need AI. AI is now part of infrastructure, and this infrastructure, just like the internet, just like electricity, needs factories, and these factories are essentially what we build today. They're not the data centers of the past: a $1 trillion industry providing information and storage, supporting all of our ERP systems and our employees. That's a data center, a data center of the past. This is similar in the sense that it came from the same industry, it came from all of us, but it's going to emerge as something completely different, completely separate from the world's data centers. And these AI data centers, if you will, are improperly described. They are in fact AI factories. You apply energy to them and they produce something incredibly valuable, and these things are called tokens. To the point where companies are starting to talk about how many tokens they produced last quarter and how many tokens they produced last month. Very soon we will be talking about how many tokens we produce every hour, just as every factory does. And so the world has fundamentally changed. On the day we started our company, I was trying to figure out how big our opportunity was, in 1993, and I came to the conclusion that NVIDIA's business opportunity was enormous: $300 million. We're going to be rich. We went from a $300 million chip industry, to a data center opportunity that represents about
a trillion dollars, to now an AI factory and AI infrastructure industry that will be measured in trillions of dollars. And this is the exciting future that we're undertaking. Now, at its core, everything we do is founded on several important technologies. Of course, I talk about accelerated computing a great deal. I talk about AI a great deal. What makes NVIDIA really special is the fusion of these capabilities, and especially the algorithms, the libraries, what we call the CUDA-X libraries. We talk about libraries all the time; in fact, we're the only technology company in the world that talks about libraries non-stop, and the reason for that is because libraries are at the core of everything that we do. Libraries are what started it all. And I'm going to show you a few new ones today. But before I do that, let me show you a preview of what I'm going to tell you today. Everything you're about to see is simulation, science, and artificial intelligence. Nothing you see here is art. It's all simulation. It just happens to be beautiful. Let's take a look.
This is real-time computer graphics I'm standing in front of. This is not a video. This is computer graphics generated by GeForce. This is a brand new GeForce RTX 5060. And this is from ASUS; my good friend Johnny is in the front row. And this is from MSI. We took this incredible GPU and we shrunk it in here. Does that make any sense? See, this is incredible. And so this is MSI's new laptop with the 5060 in it. GeForce brought CUDA to the world. Right now, what you're seeing is every single pixel ray traced. How is it possible that we're able to simulate photons and deliver this kind of frame rate at this resolution? Well, the reason for that is artificial intelligence. We are only rendering one out of ten pixels. So of every pixel that you see, only one out of ten is actually computed; the other nine are guessed by AI. Does that make any sense? And it's perfect. It's completely perfect. It guessed perfectly. Of course, the technology is called DLSS neural rendering. It took us many, many years to develop. We started developing it the moment we started working on AI, so it's been a 10-year journey, and the advance in computer graphics has been completely revolutionized by AI. GeForce brought AI to the world; now AI has come back and revolutionized GeForce. Really, really amazing. Ladies and gentlemen, GeForce. You know, when you're CEO, you have many children. And GeForce brought us here. And now our keynotes are 90% not GeForce, but it's not because we don't love GeForce. The GeForce RTX 50 series
just had its most successful launch ever, the fastest launch in our history, and PC gaming is now 30 years old. That tells you something about how incredible GeForce is. Let's talk about libraries. At the core, of course, everything starts with CUDA. By making CUDA as performant as possible and as pervasive as possible, the install base is all over the world, and applications can find a CUDA GPU quite easily. The larger the install base, the more developers want to create libraries. The more libraries, the more amazing things get done: better applications, more benefits to users, and they buy more computers. The more computers, the more CUDA. That feedback loop is vitally important. However, accelerated computing is not general-purpose computing. In general-purpose computing, everybody writes software in, you know, Python or C or C++, and compiles it. The methodology is consistent throughout: write the application, compile the application, run it on a CPU. However, that fundamentally doesn't work in accelerated computing, because if you could do that, it would just be called a CPU. The fact that you have to do something different is actually quite sensible. And the reason is that so many people worked on general-purpose computing, trillions of dollars of innovation. How is it possible that all of a sudden a few widgets inside a chip make computers 50 times faster, 100 times faster? That makes no sense. And so the logic we applied is that we could accelerate an application if we understood more about it. We could accelerate applications by creating architectures better suited to run, at the speed of light, the 99% of the runtime that is only about 5% of the code, which is quite surprising: in most applications, a small part of the code consumes most of the runtime. We made that observation, and so we went after one domain after another. I just showed you computer graphics. We also have numerics: this is cuNumeric, and CuPy is the most pervasive numerical library. Aerial and Sionna: Aerial
is the world's first GPU-accelerated radio signal processing stack for 5G and 6G. Once we make it software-defined, then we can put AI on top of it; so now we can bring AI to 5G and 6G. Parabricks for genomics analysis. MONAI for medical imaging. Earth-2 for weather prediction. cuQuantum for quantum-classical computer architectures and computer systems. cuEquivariance and cuTensor for tensor contraction and tensor mathematics. Megatron: this whole column here consists of all of our deep learning libraries, everything necessary for training as well as inference. Deep learning revolutionized computing, and it all started with these libraries, not just CUDA but cuDNN; on top of cuDNN there was Megatron, then TensorRT-LLM, and now, lately, this brand new operating system for large AI factories, Dynamo. cuDF for data frames: structured data, like Spark and SQL, can be accelerated as well. cuML for classical machine learning. Warp, a Pythonic framework for describing CUDA kernels, incredibly successful. cuOpt for mathematical optimization: things like the traveling salesperson problem, the ability to optimize highly constrained problems with large numbers of variables, such as supply chain optimization. This is an incredible success; I'm very excited about it. cuDSS and cuSPARSE for sparse structure simulators: those are used for CAE and CAD, fluid dynamics, finite element analysis, incredibly important for the EDA and CAE industries. And then of course cuLitho, one of the most important libraries, for computational lithography. Mask making can easily take a month, and that mask-making process is extremely computationally intensive; with cuLitho we can accelerate that computation by 50 to 70 times. As a result, this is going to set the stage and open the world for applying AI to lithography in the future. We have great partners here: TSMC is using cuLitho quite extensively, and ASML and Synopsys are excellent partners working with us on cuLitho. So the libraries themselves are what make it possible for us, one domain of application after another, one domain of science, one domain of physics after another, to accelerate those applications. But it also opens up markets for us. We look at particular regions and particular markets and we say: that area could really be important to transform to the new way of doing computing.
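The acceleration logic above, a small slice of the code consuming almost all of the runtime, is just Amdahl's law. A minimal sketch, where the fractions are illustrative values matching the rough numbers in the talk, not measurements:

```python
def overall_speedup(accel_fraction: float, accel_factor: float) -> float:
    """Amdahl's law: overall speedup when `accel_fraction` of the runtime
    is accelerated by `accel_factor` and the rest runs unchanged."""
    return 1.0 / ((1.0 - accel_fraction) + accel_fraction / accel_factor)

# If ~99% of the runtime lives in ~5% of the code, and an accelerator
# runs that hot portion 100x faster, the whole application gets ~50x:
print(round(overall_speedup(0.99, 100.0), 2))
```

The flip side is why profiling matters: accelerating a portion that is only half the runtime can never yield more than a 2x overall gain, no matter how fast the accelerator.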
If general-purpose computing has run its course after all these years, why hasn't it run its course in every single industry? One of the most important industries, of course, is telecommunications. Just as the world's cloud data centers have become software-defined, it stands to reason that telecommunications should also be software-defined. And so that's why we've taken some six years to refine and optimize a fully accelerated radio access network (RAN) stack with incredible performance in data rate per watt. We are now on par with the state-of-the-art ASICs. And once we could achieve that level of performance and functionality, then we can layer AI on top. We have great partners here: you can see SoftBank and T-Mobile, Indosat and Vodafone are doing trials; Nokia, Samsung, and Kyocera are working with us on the full stack; Fujitsu and Cisco are working on the systems. And so now we have the ability to introduce the idea of AI on 5G or AI on 6G, along with AI on computing. We're doing the same with quantum computing. Quantum computing is still in the noisy intermediate-scale quantum (NISQ) stage. However, there are many, many good applications we can already start to do, and so we're excited about that. We're working on a quantum-classical, or quantum-GPU, computing platform. We call it CUDA-Q, and we're working with amazing companies around the world. GPUs can be used for pre-processing and post-processing, for error correction, for control. And so I predict that in the future all supercomputers will have quantum accelerators, will have QPUs connected to them. A supercomputer would have QPUs and GPUs and some CPUs, and that would be the representation of a modern computer. We're working with a lot of great companies in this area. AI: 12 years ago, we started with perception AI, models that can understand patterns, recognize speech, recognize images. That was the beginning. For the last five years, we've been talking about generative AI, the ability for AI to not just understand but to generate. It can generate from text to text; we use that all the time in ChatGPT. Text to images, text to video, video to text, images to
text, almost anything to anything. The really amazing thing about AI is that we've discovered a universal function approximator, a universal translator: it can translate from anything to anything else if we can simply tokenize it, represent the bits of information. Well, now we have reached a level of AI that's really important. Generative AI gave us one-shot AI. You give it text and it gives you text back. That was the big, amazing breakthrough two years ago when we first engaged ChatGPT. You give it text and it gives you text back.
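The "tokenize anything" idea can be illustrated with a toy byte-level tokenizer. Production models use learned subword vocabularies (for example, BPE), so treat this as a minimal sketch of the concept, not how any real model tokenizes:

```python
def tokenize(data: bytes) -> list[int]:
    """Toy byte-level tokenizer: any data, whether text, image bytes,
    or audio samples, becomes a sequence of integer token IDs."""
    return list(data)

def detokenize(tokens: list[int]) -> bytes:
    """Inverse mapping: token IDs back to raw bytes."""
    return bytes(tokens)

# Text, pixels, and audio all reduce to the same representation,
# which is what lets one model translate "anything to anything".
text_tokens = tokenize("Hi".encode("utf-8"))
pixel_tokens = tokenize(bytes([255, 0, 128]))  # three raw pixel values
```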
It predicts the next word, predicts the next paragraph. However, intelligence is much more than what you've learned from a lot of data you've studied. Intelligence includes the ability to reason: to solve problems you've not seen before, to break them down step by step, to apply rules and theorems to solve a problem you've never seen, to simulate multiple options and weigh their benefits. Some of the technologies you might have heard about: chain of thought, breaking a problem down step by step; tree of thought, generating a whole bunch of candidate paths. All of these technologies are leading to the ability for AI to reason. Now, the amazing thing is, once you have the ability to reason, and the ability to perceive, that is, to be multimodal, to read PDFs, to do search, to use tools, you now have agentic AI. Agentic AI does something I've just described that all of us do: we're given a goal, we break it down step by step, we reason about what to do and what's the best way to do it, we consider the consequences, and then we start executing the plan. The plan might include doing some research; it might include doing some work using tools; it might include reaching out to another AI agent to collaborate. Agentic AI is basically understand, think, and act. Well, understand, think, and act is the robotics loop. Agentic AI is basically a robot in digital form. These are going to be really important in the coming years, and we're seeing enormous progress in this area. The next wave beyond that is
physical AI: AI that understands the world. It understands things like inertia, friction, cause and effect. If I roll a ball and it goes under a car, then depending on the speed of the ball, it probably went to the other side of the car, but the ball did not disappear: object permanence. You might reason that if there's a table in front of you and you have to get to the other side, the best way to do it is not to go right through it; the best way may be to go around it or underneath it. To be able to reason about these physical things is really essential to the next era of AI. We call that physical AI. And so in this particular case, you're seeing us simply prompt the AI, and it generates videos to train a self-driving car in different scenarios; I'll show you more of that later. There's a dog: we said, generate one with a dog, generate one with a bird, with people, and it started out with the image on the left. And then, in the phase after that, we take reasoning systems, generative systems, and physical AI, and this level of capability goes into a physical embodiment. We call it a robot. If you can imagine prompting an AI to generate a video of reaching out and picking up a bottle, of course you can imagine telling a robot to reach out and pick up the bottle. The AI capability today has the ability to do those things. That's where we're going in the near future. The computer that we're building to make this possible has properties that are very different
than the previous ones. The revolutionary computer called Hopper came into the world about three years ago, and it revolutionized AI as we know it. It became probably the most popular, most well-known computer in the world. In the last several years, we've been working on a new computer to make it possible for us to do inference-time scaling, basically thinking, incredibly fast. Because when you think, you're generating a lot of tokens in your head, if you will: you're generating a lot of thoughts, and you iterate in your brain before you produce the answer. So what used to be one-shot AI is now going to be thinking AI, reasoning AI, inference-time-scaling AI, and that's going to take a lot more computation. And so we created a new system called Grace Blackwell. Grace Blackwell does several things. It has the ability to scale up. Scale up means to turn a computer into a giant computer. Scale out means to take computers and connect many of them together, letting the work be done across many different computers. Scaling out is easy; scaling up is incredibly hard. Building larger computers beyond the limits of semiconductor physics is insanely hard. And that's what Grace Blackwell does. Grace Blackwell broke just about everything, and all of you in the audience, many of you, are partnering with us to build Grace Blackwell systems. I'm so happy to say that we're in full production, but I can also say it was incredibly challenging. Although the Blackwell systems based on HGX have been in full production since the end of last year and available since February, we are only now bringing online all the Grace Blackwell systems. They're coming online all over the place, every single day. They've been available on CoreWeave for several weeks now; they're already being used by many CSPs, and you're starting to see them come up everywhere. Everybody has started to tweet that Grace Blackwell is in full production. Just as I promised, we will increase the performance of our platform every single year, like a rhythm. This year, in Q3, we'll upgrade to Grace Blackwell GB300. The GB300 is the same architecture, same physical footprint, same electricals and mechanicals, but the chips inside have been upgraded: a new Blackwell chip with one and a half times more inference performance, one and a half times more HBM memory, and two times more networking. And so the overall system performance is higher. Well, let's take a look at what's inside Grace Blackwell. Grace Blackwell starts with this compute node, this compute node right here. This is one of the compute nodes. This is what the last generation looks like, the B200; this is what the B300 looks like. Notice, right here in the center, it's 100% liquid-cooled now, but otherwise, externally, it's the same. You can plug it into the same systems and the same chassis. And so this is the Grace Blackwell GB300 system. The training performance is about the same, but the inference performance is one and a half times more. Now, this particular system here is 40 petaflops, which is approximately the performance of the Sierra supercomputer in 2018. The Sierra supercomputer had 18,000 Volta GPUs. This one node replaces that entire supercomputer.
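As a rough sanity check on that comparison: NVIDIA's often-quoted pace of roughly a million-fold more compute per decade compounds, over the six years since Sierra, to about a 4,000-fold gain. The arithmetic (illustrative, taking the quoted decade-scale rate at face value):

```python
# A 1,000,000x gain per decade implies an annual growth factor of
# 1e6 ** (1/10), roughly 3.98x per year.
per_year = 1e6 ** (1 / 10)

# Compounded over the six years between Sierra (2018) and this node:
six_year_gain = per_year ** 6  # about 3,981x, i.e. roughly 4,000x
```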
A 4,000-times increase in performance in six years: that is extreme Moore's law. Remember, I've said before that NVIDIA has been scaling computing by about a million times every 10 years, and we're still on that track. But the way to do that is not just to make the chips faster; there's a limit to how fast you can make chips and how big you can make chips. In the case of Blackwell, we even connected two chips together to make it possible. TSMC worked with us to invent a new CoWoS process, called CoWoS-L, that made it possible for us to create these giant chips. But still, we want chips way bigger than that. And so we had to create what is called NVLink. This is the world's fastest switch. This NVLink switch right here is 7.2 terabytes per second. Nine of these go into that rack, and those nine switches are connected by this miracle. This is quite heavy; that's because I'm quite strong. I made it look light, but this is almost 70 pounds. This is the NVLink spine: two miles of cables, 5,000 structured cables, all coax, impedance-matched, and it connects all 72 GPUs to all of the other 72 GPUs across this network called the NVLink switch. That's 130 terabytes per second of bandwidth across the NVLink spine. Just to put that in perspective: the peak traffic of the entire internet is 900 terabits per second. Divide that by eight, and this moves more traffic than the entire internet. One NVLink spine, across nine of these NVLink switches, so that every single GPU can talk to every other GPU at exactly the same time.
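The internet comparison is a simple unit conversion: terabits divided by eight gives terabytes, and the spine's quoted 130 TB/s exceeds the result. A quick check using the figures from the talk:

```python
# Peak internet traffic, as quoted in the talk: 900 terabits per second.
internet_tbps = 900                  # terabits/s
internet_TBps = internet_tbps / 8    # bits -> bytes: 112.5 terabytes/s

nvlink_spine_TBps = 130              # NVLink spine all-to-all bandwidth

# One rack's NVLink spine carries more than the internet's peak traffic.
assert nvlink_spine_TBps > internet_TBps
```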
This is the miracle of GB200. And because there's a limit to how far you can drive SerDes, this is as far as any SerDes has ever driven from a chip: from the chip to the switch, out to the spine, to any other switch, any other chip, all electrical. And that limit caused us to put everything into one rack. That one rack is 120 kilowatts, which is the reason everything has to be liquid-cooled. We now have the ability to disaggregate the GPUs out of one motherboard, essentially, across an entire rack. And so that entire rack is one motherboard; that's the miracle, completely disaggregated. And now the GPU performance is incredible, the amount of memory is incredible, the networking bandwidth is incredible, and we can really scale these out. Once we scale up, then we can scale out into large systems. And notice, almost everything NVIDIA builds is gigantic. The reason for that is because we're not building data centers and servers; we're building AI factories. This is CoreWeave. This is Oracle Cloud. The power density of each rack is so great, they have to put them further apart so that the power density can be distributed. But really, in the end, we're not building data centers; we're building AI factories. This is the xAI Colossus factory. This is Stargate: 4 million square feet, one gigawatt. And so just think about this factory. This one-gigawatt factory is probably going to be about, you know, $60 to $80 billion, and out of that $60 to $80 billion, the electronics, the computing part of it, these systems, are $40 to $50 billion of it. And so these are gigantic factory investments. The reason why people build factories is because you know the answer. The more you buy, say it with me: the more you buy, the more you make. That's what factories do. Okay. The technology is so complicated. And in fact, just looking at it here, you still cannot get a deep appreciation of the amazing work being done by all of our partners and all of the companies here in the audience in Taiwan. And so we made you a movie. I made you
a movie. Take a look. Blackwell is an engineering marvel. It begins as a blank silicon wafer at TSMC. Hundreds of chip processing and ultraviolet lithography steps build up each of the 200 billion transistors, layer by layer, on a 12-inch wafer. The wafer is scribed into individual Blackwell dies, tested and sorted, separating the good dies to move forward. The chip-on-wafer-on-substrate process, done at TSMC, SPIL, and Amkor, attaches 32 Blackwell dies and 128 HBM stacks on a custom silicon interposer wafer. Metal interconnect traces are etched directly into it, connecting Blackwell GPUs and HBM stacks into each system-in-package unit, locking everything into place. Then the assembly is baked, molded, and cured, creating the Blackwell B200 super chip. At KYEC, each Blackwell is stress-tested in ovens at 125 °C and pushed to its limits for several hours. At Foxconn, robots work around the clock to pick and place over 10,000 components onto the Grace Blackwell PCB. Meanwhile, additional components are being prepared at factories across the globe. Custom liquid-cooling copper blocks from Cooler Master, AVC, AURAS, and Delta keep the chips at optimal temperatures. At another Foxconn facility, ConnectX-7 SuperNICs are built to enable scale-out communications, and BlueField-3 DPUs to offload and accelerate networking, storage, and security tasks. All these parts converge to be carefully integrated into GB200 compute trays. NVLink is the breakthrough high-speed link that NVIDIA invented to connect multiple GPUs and scale up into a massive virtual GPU. The NVLink switch tray is constructed with NVLink switch chips providing 14.4 terabytes per second of all-to-all bandwidth. NVLink spines form a custom blind-mated backplane, integrating 5,000 copper cables to deliver 130 terabytes per second of all-to-all bandwidth. This connects all 72 Blackwells, or 144 GPU dies, into one giant GPU. From around the world, parts arrive from Foxconn, Wistron, Quanta, Dell, ASUS, Gigabyte, HPE, Supermicro, and other partners to be assembled by skilled technicians into a rack-scale AI supercomputer. In total: 1.2 million components, 2 miles of copper cable, 130 trillion transistors, weighing 1,800 kilograms. From the first transistor etched into a wafer to the last bolt fastening the Blackwell rack, every step carries the weight of our partners' dedication, precision, and craft. Blackwell is more than a technological wonder. It's a testament to the marvel of the Taiwan technology ecosystem. We couldn't be prouder of what we've achieved together. Thank you, Taiwan. That was pretty incredible, right? But that was you. That was you. Thank you. Well, Taiwan doesn't just build supercomputers for the world. Today, I'm very happy to announce that we're also building AI for Taiwan. Today we're announcing that Foxconn, the Taiwanese government, NVIDIA, and TSMC are going to build the first giant AI supercomputer here, for the AI infrastructure and the AI ecosystem of Taiwan. Thank you. Is there anybody who needs an AI computer? Any
AI researchers in the audience? [Applause] every single student, every researcher, every scientist, every startup, every large established company. TSMC themselves does enormous amounts of AI and scientific research already. And so, Foxcon does enormous amount of work in robotics. I know that there are many other companies in the audience. I'm going to mention you in just a second that are doing robotics research and AI research and so having a worldclass AI infrastructure here in Tai in Taiwan is really important. All of that is so that we could build a very large chip and MVLink and Blackwell
this generation made it possible for us to create these incredible systems. Here's one from Pegatron, and QCT, and Wistron, and Wiwynn. This is from Foxconn, and Gigabyte, and Asus. You can see the front and the back of it. Its entire goal is to take these Blackwell chips, and you can see how big they are, and turn them into one massive chip. Now, the ability to do that was of course made possible by NVLink, but that understates the complexity of the system architecture, the rich software ecosystem that connects it all together, the entire ecosystem of 150 companies that came together to build this architecture. All of it, in technology, in software, in industry, has been the work of three years. This is a massive industrial investment, and now we would like to make it possible for anybody who wants to build data centers. It could be a whole bunch of NVIDIA GB200s or GB300s, accelerated computing systems from NVIDIA. It could be somebody else's. And so today we're announcing something very special: NVIDIA NVLink Fusion. NVLink Fusion is so that you can build semi-custom AI infrastructure, not just semi-custom chips, because those are the good old days. You want to build AI infrastructure, and everybody's AI infrastructure could be a little different. Some of you could have a lot more CPUs, some a lot more NVIDIA GPUs, and some could have somebody's semi-custom ASICs. Those systems are insanely hard to build, and they're all missing one incredible ingredient: NVLink. NVLink, so that you could scale up these semi-custom systems and build really powerful computers.
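The scale-up numbers quoted for the rack can be sanity-checked with a little arithmetic. This sketch assumes, per publicly stated figures, that each Blackwell GPU gets about 1.8 TB/s of NVLink bandwidth; the numbers are illustrative, not official specifications.

```python
# Sanity-check of the NVL72 scale-up figures mentioned earlier, assuming
# ~1.8 TB/s of NVLink bandwidth per Blackwell GPU (an assumption taken
# from public spec sheets, not from this talk).
gpus = 72
per_gpu_nvlink_tbps = 1.8           # TB/s per GPU
aggregate = gpus * per_gpu_nvlink_tbps
print(aggregate)                    # ~129.6 TB/s, consistent with the ~130 TB/s spine figure
```

Multiplying per-GPU link bandwidth by the GPU count lands right on the ~130 TB/s all-to-all figure quoted for the NVLink spine.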
And so today we're announcing NVLink Fusion. NVLink Fusion works like this. This is the NVIDIA platform, 100% NVIDIA: NVIDIA CPU, NVIDIA GPU, the NVLink switches, the networking from NVIDIA, Spectrum-X or InfiniBand, the NICs, the network interconnects, the switches, the entire infrastructure built end to end. Now, of course, you can mix and match if you like, and today we make it possible for you to mix and match even at the compute level. This is what you would do using your custom ASIC. And we have great partners, whom I'll announce in a second, who are working with us to integrate your special TPU, your special ASIC, your special accelerator. And it doesn't have to be just a transformer accelerator; it could be an accelerator of any kind that you would like to integrate into a large scale-up system. We create an NVLink chiplet. It's basically a switch that abuts right up to your chip. There's IP that will be available to integrate into your semi-custom ASIC. And once you do that, it fits right into the compute boards that I mentioned, and it fits into this ecosystem of an AI supercomputer that I've shown you. Now, maybe what you would like is to use your own CPU. You've been building your own CPU for some time, maybe your CPU has built a very large ecosystem, and you would like to integrate NVIDIA into that ecosystem. Now we make it possible for you to do that. You build your custom CPU, we provide you with our NVLink chip-to-chip interface for your ASIC, we connect it with NVLink chiplets, and now it connects and directly abuts to the Blackwell chips and our next-generation Rubin chips, and again it fits right into this ecosystem. This incredible body of work now becomes flexible and open for everybody to integrate into. And so your AI infrastructure could have some NVIDIA and a lot of yours: a lot of CPUs, a lot of ASICs, maybe a lot of NVIDIA GPUs as well. In any case, you have the benefit of using the NVLink infrastructure and the NVLink ecosystem, and it's connected perfectly to Spectrum
X, and all of that is industrial-strength, with the benefit of an enormous ecosystem of industrial partners who have already made it possible. So this is NVLink Fusion. Whether you buy completely from us, that's fantastic. Nothing gives me more joy than when you buy everything from NVIDIA. I just want you guys to know that. But it gives me tremendous joy if you just buy something from NVIDIA. And so we have some great partners. Alchip, Astera Labs, Marvell, and one of our great partners, MediaTek, are going to be partnering with us to work with ASIC or semi-custom customers, hyperscalers who would like to build these things, or CPU vendors who would like to build these things; they would be their semi-custom ASIC providers. We also have Fujitsu and Qualcomm, who are building their CPUs with NVLink to integrate into our ecosystem. And Cadence and Synopsys: we've worked with them to transfer our IP to them so that they can work with all of you and make that IP available for all of your chips. So this ecosystem is incredible. But this just highlights the NVLink Fusion ecosystem. Once you work with them, you instantly get integrated into the entire larger NVIDIA ecosystem that makes it possible for you to scale up into these AI supercomputers. Now, let me talk to you about some new product categories. As you know, I've shown you a couple of different computers. However, in order to serve the vast majority of the world, there are still some computers that are missing, and I'm going to talk about them. But before I do that, I want to give you an update that, in fact, this new
computer we call DGX Spark is in full production. DGX Spark will be available shortly, probably in a few weeks. We have tremendous partners working with us: Dell, HP, Asus, MSI, Gigabyte, Lenovo. Incredible partners working with us. And this is the DGX Spark. This is actually a production unit. This is our version; our partners are building a whole bunch of different versions. This is designed for AI-native developers. If you're a developer, a student, or a researcher, and you don't want to keep opening up the cloud, getting it prepared, and then scrubbing it when you're done, you would just like to have basically your own AI cloud sitting right next to you, always on, always waiting for you. It allows you to do your prototyping and early development. And this is what's amazing. This is DGX Spark: one petaFLOPS and 128 gigabytes. In 2016, when I delivered DGX-1, and this is just the bezel, I can't lift the whole computer, it's 300 pounds. This is DGX-1: one petaFLOPS and 128 gigabytes. Of course, that was 128 gigabytes of HBM memory, and this is 128 gigabytes of LPDDR5X. The performance is in fact quite similar. But what's most important is that the work you can do on this is the same work you could do on that. It's an incredible achievement over the course of about 10 years. So this is DGX Spark, for anybody who would like to have their own AI supercomputer. I'll let our partners price it for themselves, but one thing is for sure: everybody can have one for Christmas. Okay, I've got another computer
I want to show you. If that's not enough and you would still like to have your own personal... thank you, Janine. This is Janine Paul, ladies and gentlemen. If that one isn't big enough for you, here's one. This is another desk-side machine. This is also going to be available from Dell, HP, Asus, Gigabyte, MSI, and Lenovo. It'll be available from BOXX, from Lambda, amazing workstation companies. And this is going to be your own personal DGX supercomputer. This computer is the most performance you can possibly get out of a wall socket. You could put this in your kitchen, but just barely. If you put this in your kitchen and then somebody runs the microwave, I think that's the limit. This is the limit of what you can get out of a wall outlet. And this is the DGX Station. The programming model of this and the giant systems that I showed you are the same. That's the amazing thing: one architecture. And this has enough capacity and performance to run a one-trillion-parameter AI model. Remember, Llama is 70 billion parameters. A one-trillion-parameter model is going to run wonderfully on this machine. Okay, so that's the DGX Station. So now let's talk about... remember, these systems... thank you, Jenny. These systems are AI-native. They're AI-native computers, built for this new generation of software. They don't have to be x86-compatible. They don't have to run traditional IT software. They don't have to run hypervisors. They don't have to run Windows. These computers are designed for modern AI-native applications. Of course, these AI applications
could be APIs that can be called upon by traditional and classical applications. But in order for us to bring AI into a new world, and this new world is enterprise IT, we have to go back to our roots. We have to reinvent computing and bring AI into traditional enterprise computing. Now, enterprise computing as we know it is really three layers. It's not just the computing layer. It's compute, storage, and networking. It's always compute, storage, and networking. And just as AI has changed everything, it stands to reason that AI must change compute, storage, and networking for enterprise IT as well. That lower layer has to be completely reinvented, and we're in the process of doing that. I'm going to show you some new products that open up, that unlock, enterprise IT for us. It has to work with the traditional IT industry, and it has to add a new capability, and that new capability for enterprise is agentic AI: basically a digital marketing campaign manager, a digital researcher, a digital software engineer, digital customer service, a digital chip designer, a digital supply chain manager; digital versions, AI versions, of all of the work that we used to do. And as I mentioned earlier, agentic AI has the ability to reason, use tools, and work with other AIs. So in a lot of ways these are digital workers. They're digital employees. The world has a shortage of labor, a shortage of some 30 to 50 million workers by 2030. It's actually limiting the world's ability to grow. And so now we have these digital agents that can work with us. One hundred percent of NVIDIA's software engineers now have digital agents working with them, helping and assisting them in developing better code, more productively. And so in the future, and this is our vision, you're going to have a layer of agentic AIs, of AI agents. So what's going to happen to the world? What's going to happen to enterprise? Whereas we have HR for human workers, IT is going to become the HR of digital workers. And so we have to create the necessary tools for today's IT industry, today's IT workers, to be able to manage, improve, and evaluate a whole family of AI agents working inside their company. And
so that's the vision of what we want to build. But first, we have to reinvent computing. Remember what I said: enterprise IT runs on x86. It runs traditional software such as hypervisors from VMware or IBM Red Hat or Nutanix. It runs a whole bunch of classical applications. We need computers that do the same thing while adding this new capability called agentic AI. So let's take a look at that. Okay, this is the brand-new RTX Pro Enterprise and Omniverse server. This server can run everything. It has x86, of course. It can run all of the classical hypervisors. It runs Kubernetes on those hypervisors. So the way your IT department wants to manage your network and your clusters and orchestrate workloads works exactly the same way. It even has the ability to stream Citrix and other virtual desktops to your PC. Everything that runs in the world today should run here. Omniverse runs on here perfectly. But in addition to that, this is the computer for enterprise AI agents. Those AI agents could be text only. Those AI agents could also be computer graphics, little TJs, little Toy Jensens coming to see you, helping you do work. So those AI agents could be in text form, in graphics form, in video form. All of those workloads work on this system. No matter the modality, every single model that we know of in the world, every application that we know of, should run on this. In fact, even Crysis works on here. Okay, so anybody who's a GeForce gamer... there are no GeForce gamers in the room? Okay. What connects these eight GPUs, the new Blackwell RTX Pro 6000s, is this new motherboard. This new motherboard is actually a switched network. CX8 is a new category of chips. It's a switch first, networking chip second. It's also the most advanced networking chip in the world, and it is now in volume production. You plug in the GPUs; the CX8s are in the back, connected here by PCI Express. CX8 communicates between them, and the
networking bandwidth is incredibly high at 800 gigabits per second. And this is the transceiver that plugs in here. So each one of these GPUs has its own networking interface, and all of the GPUs are now communicating with all of the other GPUs on east-west traffic. Incredible performance. Now the surprising part is how incredible it is. So this is RTX Pro. This is the performance, and I showed you at GTC how to think about performance in the world of AI factories. The way to think about it is throughput. Tokens per second is the y-axis: the more output from your factory, the more tokens you produce. So throughput is measured in tokens per second. However, every AI model is not the same, and some AI models require much more reasoning. For those AI models, you need the performance per user to be very high; the tokens per second per user have to be high. And this is the problem with factories. A factory can have high throughput or low latency, but it doesn't like to have both. So the challenge is how to create an operating system that allows us to have high throughput, the y-axis, while having very high interactivity, tokens per second per user, the x-axis. This chart tells you something about the overall performance of the factory. Look at all those different colors. Each represents a different way you have to configure all of our GPUs to achieve that performance. Sometimes you need pipeline parallelism. Sometimes you want expert parallelism. Sometimes you want to batch. Sometimes you want to do speculative decoding, sometimes you don't. All of those different types of algorithms have to be applied separately and differently depending on the workload. And the Pareto frontier, the overall area under that curve, represents the capability of your factory. And so notice something. Hopper, the H100, is the most famous computer in the world. The HGX H100, $225,000: Hopper is down there. And the Blackwell server you just saw, the enterprise server, is 1.7 times its performance. But this is amazing. That was Llama 70B. This is DeepSeek R1: DeepSeek R1 is four times.
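The throughput-versus-interactivity trade-off above can be sketched as a toy Pareto-frontier computation: each GPU configuration yields a (tokens/s/user, total tokens/s) point, and the frontier is the set of configurations not dominated on both axes. The configuration names and numbers here are invented for illustration, not measured data.

```python
# Toy illustration of the factory trade-off: each configuration gives
# (interactivity, throughput); the Pareto frontier keeps only configs
# that no other config beats on both axes. All numbers are invented.
configs = {
    "pipeline-parallel": (20, 9000),   # (tokens/s/user, total tokens/s)
    "expert-parallel":   (60, 7000),
    "big-batch":         (10, 9500),
    "spec-decode":       (120, 4000),
    "unbalanced":        (15, 6000),   # dominated by pipeline-parallel
}

def pareto(points):
    """Keep configs where no other config is >= on both axes and > on one."""
    frontier = {}
    for name, (ix, tp) in points.items():
        dominated = any(
            (ox >= ix and ot >= tp) and (ox > ix or ot > tp)
            for other, (ox, ot) in points.items() if other != name
        )
        if not dominated:
            frontier[name] = (ix, tp)
    return frontier

print(sorted(pareto(configs)))
```

The frontier keeps four of the five configurations; "unbalanced" is strictly worse than pipeline parallelism on both axes, so it contributes nothing to the factory's capability, which is exactly what the area under the curve measures.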
Now, the reason for that, of course, is that DeepSeek R1 has been optimized, and DeepSeek R1 is genuinely a gift to the world's AI industry. The amount of computer-science breakthrough in it is really quite significant, and it has opened up a lot of great research for researchers in the United States and around the world. Everywhere I go, DeepSeek R1 has made a real impact in how people think about AI, how they think about inference, how they think about reasoning AIs. They've made a great contribution to the industry and to the world. And so this is DeepSeek R1, and the performance is four times the state-of-the-art H100. That kind of puts it in perspective. So if you're building enterprise AI, we now have a great server for you, a great system for you. It's a computer you could run anything on, a computer with incredible performance; whether it's x86 or AI, all of it runs. Our RTX Pro server is in volume production across all of our partners in the industry. This is likely the largest go-to-market of any system we have ever taken to market. So, thank you very much. The compute platform is different. The storage platform is different, too. And the reason is that people query structured databases, like SQL. But AI wants to query unstructured data. It wants semantics. It wants meaning. So we have to create a new type of storage platform, and this is the NVIDIA AI data platform. Just as there is SQL server software and file storage software from your storage vendors
that you work with, there's a layer of very complicated software that goes with storage. Most storage companies, as you know, are mostly software companies, and that software layer is incredibly complicated. So on top of this new type of storage system is going to be a new query system we call NVIDIA AI-Q. It's really state-of-the-art. It's fantastic. And we're working with basically everybody in the storage industry. Your future storage is no longer CPUs sitting on top of a rack of storage; it's going to be GPUs sitting on top of a rack of storage. And the reason is that you need the system to embed, to find the meaning in the unstructured, raw data. You have to index it, you have to do the search, and you have to do the ranking. That process is very compute-intensive. So most storage servers in the future will have a GPU computing node in front of them. It's based on the models that we create. Almost everything that I'm about to show you starts with great AI models. We create AI models. We put a lot of energy and technology into post-training open AI models. We post-train these AI models with data that is completely transparent to you. It is safe and secure data, completely okay to use for training, and we make that list available for you to see; it's completely transparent. We make the data available to you. We post-train the models, and our post-trained model performance is really incredible. It's downloadable right now as an open-source reasoning model. The Llama Nemotron reasoning model is the world's best, and it's been downloaded tremendously. We also surround it with a whole bunch of other AI models so that you can do what is called AI-Q, the retrieval part of it. It's 15 times faster than what's available out there, with 50% better query results. So these models are all available to you. The AI-Q blueprints are open source, and we work with the storage industry to integrate these models into their storage stacks, their AI platforms. This is Vast. This is what it looks like. I'm not going to go into it. I just want to
give you a texture of the AI models that are integrated into their platform. Let's take a look at what Vast has done. Agentic AI changes how businesses use data to make decisions. In just three days, Vast built a sales research AI agent using the NVIDIA AI-Q blueprint and its accelerated AI data platform. Using NeMo Retriever, the platform continuously extracts, embeds, and indexes data for fast semantic search. First, the agent drafts an outline, then taps into CRM systems, multimodal knowledge bases, and internal tools. Finally, it uses Llama Nemotron to turn that outline into a step-by-step sales plan. Sales planning that took days now starts with an AI prompt and ends with a plan in minutes. With Vast's accelerated AI data platform, organizations can create specialized agents for every employee. Okay, so that's Vast. Dell, one of the world's leading storage vendors, has a great AI data platform. Hitachi has a great AI data platform. IBM is building an AI data platform with NVIDIA NeMo. NetApp is building an AI data platform. As you can see, all of these are open to you. And if you're building an AI data platform with a semantic query AI in front of it, NVIDIA NeMo is the world's best. So that gives you compute for enterprise and storage for enterprise. The next part is a new layer of software called AI ops. Just as supply chain has its ops and HR has its ops, in the future IT will have AI ops. They will curate data, fine-tune the models, evaluate the models, guardrail the models, and secure the models, and we have a whole bunch of libraries and models necessary to integrate into the AI ops ecosystem. We've got great partners to help us take it to market. CrowdStrike is working with us. Dataiku is working with us. DataRobot is working with us. You can see these are all AI operations companies creating, fine-tuning, and deploying models for agentic AI in enterprise, and you can see NVIDIA libraries and models integrated all over it. So there's DataRobot; here's DataStax; this is Elastic, which I think I heard somewhere has been downloaded 400 billion times; this is Nutanix; this is Red Hat; this is Trend Micro, here in Taiwan. I think I saw Eva earlier. Hi, Eva.
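The embed, index, search, and rank loop described for the AI data platform can be sketched in miniature. This sketch uses a toy bag-of-words embedding and cosine similarity rather than the learned embeddings a real platform such as one built on NeMo Retriever would use, and all document names and text are invented.

```python
import math
from collections import Counter

# Minimal sketch of semantic-style retrieval: embed documents, index them,
# then search and rank by cosine similarity. A real AI data platform would
# use learned embeddings; the toy word-count vectors here only illustrate
# the pipeline shape. All documents and queries are invented.
docs = {
    "q3_report": "revenue grew on strong gpu demand in data centers",
    "hr_policy": "vacation policy and leave accrual for employees",
    "sales_plan": "sales plan targeting enterprise gpu customers",
}

def embed(text):
    return Counter(text.lower().split())       # "embed" step (toy)

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

index = {name: embed(text) for name, text in docs.items()}   # "index" step

def search(query, k=2):
    q = embed(query)                           # "search" + "rank" steps
    ranked = sorted(index, key=lambda n: cosine(q, index[n]), reverse=True)
    return ranked[:k]

print(search("gpu demand from enterprise customers"))   # → ['sales_plan', 'q3_report']
```

Even this toy version shows why a GPU node sits in front of the storage rack: every query touches every document's vector, so the embed-and-rank work, not the disk I/O, dominates at scale.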
Okay. Weights &amp; Biases. Okay. And so that's it. This is how we're going to bring to the world's enterprise IT the ability to add AI to everything that you do. You're not going to rip everything out of your enterprise IT organization, because companies have to keep running. But we can add AI into it. And now we have systems that are enterprise-ready, with incredible ecosystem partners. I think I saw Jeff earlier. There's Jeff Clarke, the great Jeff Clarke, who has been coming to Taiwan for as long as I have been coming to Taiwan and has been a partner of all of yours for a long time. And so our ecosystem partners, Dell and others, are going to take these platforms to the world's enterprise IT. Okay, let's talk about robots. So, agentic AIs, AI agents, a lot of different ways to say it. Agents are essentially digital robots. The reason is that a robot perceives, understands, and plans, and that's essentially what agents do. But we would also like to build physical robots. For these physical robots, it starts with the ability to learn to be a robot. Learning to be a robot can't be done productively in the physical world. You have to create a virtual world where the robot can learn how to be a good robot, and that virtual world has to obey the laws of physics. Most physics engines don't have the ability to deal with rigid- and soft-body simulation with fidelity. And so we partnered with Google DeepMind and Disney Research to build Newton, the world's most advanced physics engine. It's going to be open-sourced in July. It's incredible what it can do. It's completely GPU-accelerated. It's differentiable, so you can learn from experience. It's incredibly high-fidelity and super real-time. And we could use that Newton engine whether you use MuJoCo or NVIDIA's Isaac Sim; it's integrated into both, irrespective of the simulation environment and framework you use. And so with that, we can bring these robots to life. Who doesn't want that? I want that. Can you imagine one of those little ones, or a few of them,
running around the house, chasing your dogs, driving them crazy? Did you see what was happening? It wasn't an animation; it was a simulation. He was slipping and sliding in the sand, in the dirt, and all of it was simulated. The software of the robot is running in the simulation. So it wasn't animated, it was simulated. In the future, we'll take the AI models that we train, put them into that robot in simulation, and let it learn how to be a great robot. Well, we're working on several things to help the robotics industry. Now, you know that we've been working on autonomous systems for some time. Our self-driving car program basically has three systems. There's the system for creating the AI model, and GB200 and GB300 are going to be used for that, training the AI model. Then you have Omniverse for simulating the AI model. And when you're done with that AI model, you put the AI into the self-driving car. This year, Mercedes is deploying our self-driving car stack, an end-to-end stack, around the world. And the way we go to market is exactly the same as everywhere else: we create the entire stack, we open the entire stack, and our partners use whatever they want to use. They could use our computer and not our library. They could use our computer, our library, and also our runtime. However much you would like to use is up to you, because there are a lot of different engineering teams, different engineering styles, and different engineering capabilities. We want to make sure we provide our technology in a way that makes it as easy as possible for everybody to integrate NVIDIA's technology. Like I said, I love it if you buy everything from me, but please, just buy something from me. Very practical. And so we're doing exactly the same thing in robotic systems, just like cars. This is our Isaac GR00T platform. The simulation is exactly the same: Omniverse. The compute, the training system, is the same. When you're done with the model, you put it inside the Isaac GR00T platform. And the Isaac GR00T platform starts with a brand-new
computer called Jetson Thor, which has just started production. It is an incredible processor, basically a robotic processor: it goes into self-driving cars and it goes into humanoid robotic systems. On top of it is an operating system we call NVIDIA Isaac. The NVIDIA Isaac operating system is the runtime. It does all of the neural network processing, the sensor processing, the pipelines, all of it, and delivers actuated results. And on top of that are pre-trained models, created by an amazing robotics team, and we make all the tools necessary to create them available, including the models themselves. So today we're announcing that Isaac GR00T N1.5 is now open-sourced and open to the world to use. It's been downloaded 6,000 times already, and the popularity, the likes, and the appreciation from the community have been incredible. So that's creating the model. We also opened up the way we created the model. The biggest challenge in robotics, and really the biggest challenge in AI overall, is your data strategy, and that's where a great deal of research and technology goes. In the case of robotics, it's human demonstration: just as we demonstrate to our children, or a coach demonstrates to an athlete, you demonstrate to the robot, using teleoperation, how to perform the task, and the robot can generalize from that demonstration, because AI can generalize and we have technology for generalization. You can generalize from that one demonstration, among other techniques. Okay. So what if you want to teach this robot a whole bunch of skills? How many different teleoperators do you need? Well, it turns out to be a lot. So what we decided to do was use AI to amplify the human demonstration systems. This is essentially going from real to real, using an AI to help us expand and amplify the amount of data collected during human demonstration to train an AI model. Let's take a look. The age of generalist robotics has arrived, with breakthroughs in mechatronics, physical AI, and embedded computing, just in time, as labor shortages limit worldwide industrial growth. A major challenge for robot makers
is the lack of large-scale real and synthetic data to train models. Human demonstrations aren't scalable, limited by the number of hours in a day. Developers can use NVIDIA Cosmos physical-AI world foundation models to amplify data. GR00T-Dreams is a blueprint built on Cosmos for large-scale synthetic trajectory data generation, a real-to-real data workflow. First, developers fine-tune Cosmos with human demonstrations recorded by teleoperation of a single task in a single environment. Then they prompt the model with an image and new instructions to generate dreams, or future world states. Cosmos is a generative model, so developers can prompt using new action words without having to capture new teleop data. Once a large number are generated, Cosmos reasons about and evaluates the quality of each dream, selecting the best for training. But these dreams are still just pixels, and robots learn from actions. The GR00T-Dreams blueprint generates 3D action trajectories from the 2D dream videos, and these are then used to train the robot model. GR00T-Dreams lets robots learn a huge variety of new actions with minimal manual captures, so a small team of human demonstrators can now do the work of thousands. GR00T-Dreams brings developers another step closer to solving the robot data challenge. Isn't that great? So in order for robotics to happen, you need AI. But in order to teach the AI, you need AI. And this is really the great thing about the era of agents: robotics needs a large amount of synthetic data generation, and skill learning, called fine-tuning, which is a lot of reinforcement learning and an enormous amount of compute. This is a whole era where the training of these AIs, the development
of these, as well as the running of the AI, needs an enormous amount of compute. Well, as I mentioned earlier, the world has a severe shortage of labor. The reason why humanoid robotics is so important is that it is the only form of robot that can be deployed almost anywhere, brownfield. It doesn't have to be greenfield. It can fit into the world we created. It can do the tasks that we made for ourselves. We engineered the world for ourselves, and now we can create a robot that fits into that world to help us.
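The demonstration-amplification loop described in the video, fine-tune on a few teleop demos, generate many "dreams," score them, and keep the best for training, can be sketched schematically. Every function and number here is a toy stand-in invented for illustration; none of this is the actual GR00T-Dreams API.

```python
import random

# Schematic sketch of the amplification loop: a few teleop demos are
# expanded into many candidate trajectories, scored, and filtered.
# All functions are invented stand-ins, not the GR00T-Dreams blueprint.

def generate_dreams(demos, prompts, n_per_prompt=10, seed=0):
    """Amplify a handful of demos into many candidate trajectories."""
    rng = random.Random(seed)
    dreams = []
    for prompt in prompts:
        for _ in range(n_per_prompt):
            base = rng.choice(demos)
            # perturb a real demo to stand in for a generated "dream"
            traj = [x + rng.gauss(0, 0.05) for x in base]
            dreams.append({"prompt": prompt, "traj": traj})
    return dreams

def score(dream):
    """Stand-in quality check: prefer smoother trajectories."""
    t = dream["traj"]
    return -sum(abs(b - a) for a, b in zip(t, t[1:]))

def select_best(dreams, keep=0.5):
    """Keep the top fraction of dreams for the training set."""
    ranked = sorted(dreams, key=score, reverse=True)
    return ranked[: int(len(ranked) * keep)]

demos = [[0.0, 0.2, 0.4, 0.6], [0.0, 0.3, 0.6, 0.9]]   # two teleop demos
prompts = ["pick up the cup", "open the drawer"]
training_set = select_best(generate_dreams(demos, prompts))
print(len(training_set))   # 2 prompts x 10 dreams x 50% kept = 10
```

The point of the sketch is the ratio: two human demonstrations become ten filtered training trajectories across two new tasks, which is the small-team-does-the-work-of-thousands effect the narration describes.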
Now, the amazing thing about humanoid robots is not just that, if they work, they could be quite versatile. They are likely the only robots that are likely to work at all. The reason is that technology needs scale. Most of the robotic systems we've had so far are too low-volume, and those low-volume systems will never achieve the technology scale to get the flywheel going far enough, fast enough, that we're willing to dedicate enough technology to making them better. But humanoid robotics is likely to be the next multitrillion-dollar industry; the technology innovation is incredibly fast, and its consumption of computing and data centers is enormous. And this is one of those applications that needs three computers: one computer, an AI, for learning; one computer, a simulation engine, where the AI can learn how to be a robot in a virtual environment; and then the deployment computer. Everything that moves will be robotic. As we put these robots into factories, remember that the factories are also robotic. Today's factories are incredibly complex. This is Delta's manufacturing line, and they're getting it ready for a robotic future. It is already robotic and software-defined, and in the future there will be robots working in it. In order for us to create and design robots that operate as a fleet, as a team, working together in a factory that is also robotic, we have to give them Omniverse to learn how to work together. And in that digital twin, you have a digital twin of the robot, a digital twin of all of the equipment, and a digital twin of the factory.
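The nesting idea, a factory twin containing equipment twins containing robot twins, with one simulation step advancing everything inside, can be sketched with a toy object model. The class and field names below are invented for illustration; this is not the Omniverse API.

```python
from dataclasses import dataclass, field

# Toy object model for nested digital twins: ticking the factory twin
# advances every equipment twin, which advances every robot twin inside.
# All names are invented for illustration; this is not the Omniverse API.

@dataclass
class RobotTwin:
    name: str
    steps: int = 0
    def tick(self):
        self.steps += 1          # stand-in for simulating one robot step

@dataclass
class EquipmentTwin:
    name: str
    robots: list = field(default_factory=list)
    def tick(self):
        for r in self.robots:
            r.tick()

@dataclass
class FactoryTwin:
    name: str
    equipment: list = field(default_factory=list)
    def tick(self):
        for e in self.equipment:
            e.tick()

line = EquipmentTwin("assembly-line", [RobotTwin("arm-1"), RobotTwin("amr-2")])
factory = FactoryTwin("fab-1", [line])
for _ in range(3):
    factory.tick()               # one factory step advances all nested twins
print([r.steps for r in line.robots])   # → [3, 3]
```

The design choice the nesting captures is that the fleet, the equipment, and the building share one clock and one world state, so robots can be trained to cooperate against the same twin the factory planners use.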
Those nested digital twins are going to be part of what Omniverse is able to do. This is Delta's digital twin. This is Wiwynn's digital twin. Now, while you're looking at this, if you look too closely, you might think these are in fact photographs. These are all digital twins. They're all simulations. They just look beautiful. The images just look beautiful, but they're all digital twins. This is Pegatron's digital twin. This is Foxconn's digital twin. This is Gigabyte's digital twin. This is Quanta's. This is Wistron's. TSMC is building a digital twin
of their next fab. As we speak, there are five trillion dollars of plants being planned around the world. Over the next three years, $5 trillion of new plants, because the world is reshaping as re-industrialization moves around the world. New plants are being built everywhere. This is an enormous opportunity for us to make sure that they build them well, cost-effectively, and on time. And so putting everything into a digital twin is a great first step in preparing for a robotic future. In fact, that $5 trillion doesn't include a new type
of factory that we're building. And even our own factories we put in a digital twin. This is the Nvidia AI factory in a digital twin. Kaohsiung is a digital twin. They made Kaohsiung a digital twin. There are already hundreds of thousands of buildings, millions of miles of roads. And so, yes, Kaohsiung is a digital twin. Let's take a look at all of this. Taiwan is pioneering software-defined manufacturing. TSMC, Foxconn, Wistron, Pegatron, Delta Electronics, Quanta, Wiwynn, and Gigabyte are developing digital twins on Nvidia Omniverse for every step of the manufacturing process. TSMC, with MetAI, generates 3D layouts
of an entire fab from 2D CAD, and develops AI tools on cuOpt that can simulate and optimize intricate piping systems across multiple floors, saving months of time. Quanta, Wistron, and Pegatron plan new facilities and production lines virtually prior to physical construction, saving millions in costs by reducing downtime. Pegatron simulates solder-paste dispensing, reducing production defects. Quanta uses Siemens Teamcenter X with Omniverse to analyze and plan multi-step processes. Foxconn, Wistron, and Quanta simulate power and cooling efficiency of data centers with Cadence Reality Digital Twin. And to develop physical-AI-enabled robots, each company uses
its digital twin as a robot gym to develop, train, test, and simulate robots, whether manipulators, AMRs, humanoids, or vision AI agents, as they perform their tasks or work together as a diverse fleet. And when connected to the physical twin with IoT, each digital twin becomes a real-time interactive dashboard. Pegatron uses Nvidia Metropolis to build AI agents that help employees learn complex techniques. Taiwan is even bringing digital twins to its cities. Linker Vision and the city of Kaohsiung use a digital twin to simulate the effects of unpredictable scenarios and build agents that monitor city camera streams,
delivering instant alerts to first responders. The age of industrial AI is here, pioneered by the technology leaders of Taiwan, powered by Omniverse. My entire keynote is your work. It's so excellent. Well, it stands to reason that Taiwan, at the center of the most advanced industry, is the epicenter where AI and robotics are going to come from. It stands to reason that this is an extraordinary opportunity for Taiwan. This is also the largest electronics-manufacturing region in the world. And so it stands to reason that AI and robotics will transform everything that
we do. And so it's really quite extraordinary: for one of the first times in history, the work you do has revolutionized every industry, and now it will come back to revolutionize yours. At the beginning, I said that GeForce brought AI to the world, and then AI came back and transformed GeForce. You brought AI to the world. AI will now come back and transform everything that you do. It's been a great pleasure working with all of you. Thank you. I have a new product. I announced several products already today, but I have a
new product to announce. We've been building it out in spacedock for some time, and I think it's time for us to reveal one of the largest products we've ever built. It's parked outside, waiting for us. Let's see how it goes. [Music] [Applause] Nvidia Constellation. Well, as you know, we have been growing, and all of our partnerships with you have been growing. The number of engineers we have here in Taiwan has
been growing. And so, we are growing beyond the limits of our current office, and I'm going to build them a brand-new Nvidia Taiwan office. It's called Nvidia Constellation. We've been selecting the sites, and all of the mayors of all the different cities have been very kind to us, and I think we got some nice deals. I'm not sure. Seems quite expensive. But prime real estate is prime real estate. And so today, I'm very pleased to announce that Nvidia Constellation will be at Beitou Shilin. [Applause] We
have negotiated the transfer of the lease from the current owners of that lease. However, I understand that in order for the mayor to approve that lease, he wanted to know whether the people of Taipei approve of us building a large, beautiful Nvidia Constellation here. Do you? He also asked for you to call him, and I'm sure you know his number. Everybody call him right away. Tell him that you think it's a great idea. So this is going to be Nvidia Constellation. We're going to build it. We're going
to start building as soon as we can. We need the office space. Nvidia Constellation, Beitou Shilin. Very exciting. Okay. Well, I want to thank all of you for your partnership over the years. We are at a once-in-a-lifetime opportunity. It is not an understatement to say that the opportunity ahead of us is extraordinary. For the very first time in all of our time together, not only are we creating the next generation of IT. We've done that several times, from PC to internet to cloud to mobile cloud.
We've done that several times. But this time, not only are we creating the next generation of IT, we are in fact creating a whole new industry. This whole new industry is going to open up giant opportunities ahead of us. I look forward to partnering with all of you on building AI factories, agents for enterprises, and robots, with all of you amazing partners building the ecosystem with us around one architecture. And so I want to thank all of you for coming today. Have a great Computex, everybody. Xiexie. Thank you. Thank you for coming. Thank you. [Music]