[AUDIO LOGO] [MUSIC PLAYING] [PAPER CRINKLING] [KEYBOARD CLACKING] [CLAPS] [MUSIC PLAYING] ANNOUNCER: Please welcome to the stage, Clay Magouyrk. [CHEERING, APPLAUSE] CLAY MAGOUYRK: First, I want to say, thank you all for being here. I know that it's a busy time in everybody's lives. And it takes a lot to come out to an in-person conference and listen to a lot of people talk about technology. Just in case you were wondering, we will talk about AI today. I know you probably haven't had enough conversations about AI. So we'll get there, but it's not the only thing we're going
to talk about. I have a lot of conversations with our customers, our partners, even my employees. And a lot of what I talk about are the new things that we're doing at Oracle Cloud. And that's what we're really going to focus on here today. But what I find helps first is to really establish a baseline about what are the special properties of the Oracle Cloud. And given the recent advancements that we have in AI, one of the ways that we can try to understand the Oracle Cloud better is by asking a generative AI service. I
feel like that's something that we all do these days. You go, you spin up your favorite version. Whether it be from vendor A or vendor B, you type in your question. And what's amazing is that this technology that now seems almost commonplace, a year ago was really unheard of. And so as you can see, generative AI answers the question incredibly well. But if I can't add any color to that, I'm probably in the wrong job. So I'm going to take some time to explain the special properties about the Oracle Cloud. I've been in the cloud
industry for a long time. And the thing that I focus on is that the cloud means different things to different people. We all use the cloud every day, sometimes in our personal lives, oftentimes in our jobs. And based on who we are, the cloud means different things to us. But in terms of common properties, the cloud is really a collection of services that are delivered to you over a network. And to me, the most important part is that it's somebody else's job to deal with the unfun parts. And across the industry, there are different cloud
providers. There are really different cloud verticals across infrastructure, things like enterprise applications, industry applications. And each of those clouds is designed to solve different business needs. A typical cloud strategy consists of separate application and infrastructure layers. And there's a cohesiveness that is missing between those two layers. If you think about the evolution of this business, cloud applications came first, really starting in the early 2000s. And they evolved in parallel with cloud infrastructure, which came a few years later. At Oracle, we have a fundamentally different approach to the cloud. Our strategy is not to deliver separate clouds
across separate layers, but instead to combine the functionality of infrastructure, enterprise applications, and industry applications into a single unified cloud. To do that, we've re-engineered how our applications work across all layers of the stack. So I'll tell you a bit more about some of the benefits of a unified cloud strategy. The first is that you get cohesiveness and deployment choice. Across our infrastructure and all of our applications, you can receive them in the public cloud, in government-specific clouds, across sovereign deployments, some dedicated to national security needs. And then customers can receive that same unified cloud
environment in their data center via our Dedicated Region offering. We also have an integrated extensibility platform. Now there are too many things to talk about. But I'll give a couple of examples of what that means. Take, for example, our Fusion and NetSuite Analytics Warehouse. This is something that takes your amazing business data that's authored in Fusion or NetSuite, combines it on Oracle's data platform using the Autonomous Data Warehouse, and then allows you to supplement that with your own custom application data and other business data. That kind of thing is only possible because we offer a unified cloud
experience. We also offer, for example, our Oracle Integration Cloud, which is used by almost all of our Fusion customers so they can create custom extensions to their application platform. All of that sits in this same unified cloud environment. There's also something that I hear the most from customers: the huge importance of a unified governance and control model. Whether it be our Cloud Guard services, things like maximum security zones, the work that we do in our Identity Services, our entire compliance portfolio, when customers come to our cloud platform, all of that is unified
across the entire stack. I'll give you a single example of where this is really impacting the conversations that we're having with customers. If you listen to the things that Mike Cecilia talks about in some of our industry applications, what Larry talks about with some of our healthcare needs, we go and we have conversations with entire countries about actually revolutionizing their digital infrastructure. And that's from the basic infrastructure layer, up through telecommunications, into patient monitoring systems. The fact that we can go in and deliver that and to start small and grow as the needs grow in
a unified whole, it's just something that most other cloud providers can't even think about. So it's great that you have a unified cloud. But I think we all recognize that that's only really valuable if the individual pieces are great. And we've been hard at work across Oracle Cloud Infrastructure to ensure that we have an amazing offering that underpins everything that you need. So over the past year, we've added many services, like our queuing service, our Data lakehouse, container instances, serverless Kubernetes, and many more. We've also added more than 13 regions, and we're up to 64
customer-facing regions around the world. And we continue to grow our customer base every day. Here are some examples of customers that are using OCI. Subaru is using simulations to ensure their vehicle safety. True Digital Surgery is controlling robots for microsurgery. And Experian has their data lake on OCI. Moving to our enterprise and industry applications, I find that oftentimes when I'm talking specifically to an infrastructure audience, there's a lack of understanding about the true breadth and depth of Oracle's applications. Whether it be in pillars like financials, human resources, or supply chain, or industries like banking, energy and water, retail, or telecommunications, Oracle has a comprehensive portfolio to meet those business needs. And all of that runs on OCI. And the great thing is that those customers get all the benefits of the applications as well as the benefits of our cloud infrastructure. Here are some examples of our great application customers. FedEx runs their entire logistics business on Fusion. SoFi uses NetSuite to serve hundreds of thousands of their customers. And Wagamama uses industry applications for restaurant ordering service and payment processing. So the first special property I want you to understand is that Oracle Cloud is
about delivering a unified cloud combining applications and infrastructure. Across the industry, the demand for compute has never been higher, and it's never been growing faster. As of today, there are more than 8,000 data centers around the world. Data centers' current electricity consumption, 3% of the world's total, is projected to grow to 8% by 2030. And CO2 emissions from data centers are projected to grow from today's 2% to 14% of the world's total emissions by 2040. At the same time we're having this growth, energy is becoming more expensive and also less available. So as a technology industry, what should we do
about this? The good thing is that every new generation of technology, every incremental step, has been improving our compute power per watt of electricity. We've been making incremental improvements over the past four decades. You should go and do the calculation to see, if we had the energy efficiency of computers in 1970, how much impact it would have on the world. You'll need to turn your iPhone calculator sideways. Let's put it that way. But at the same time that we're seeing these incremental improvements, technology comes along that has a step-function shift that can make
a big difference more so than just the year over year incremental improvement. Arm is one of those technology shifts. If you all look at what's in your pockets, whether it be a phone or a tablet, your laptops, there's a reason that the Arm architecture has come to dominate our personal computing devices. And that's because energy consumption is so important in those devices. Well, the same thing that's happened on the personal device side is now becoming even more important on the server side. So at Oracle, we decided to push forward and really try to overinvest, move
faster than the industry is going towards such a technology. And one of the things that we've been working on for the past year, and I'm very proud to share with everyone, is that as of today, 95% of all OCI services and all new Fusion customers run on Ampere technology. The reason this is good for Oracle is, well, we receive substantial energy savings. We obviously reduce our power bill. But at the same time, for every single rack that we deploy on Ampere compared to the alternative, we save the carbon footprint that could fly this entire room
to Singapore and back twice. So to talk a bit more about what's going on with Ampere and why it's so impactful, please help me welcome to the stage Renee James, the CEO of Ampere. [MUSIC PLAYING, APPLAUSE] Renee. RENEE JAMES: Hi. CLAY MAGOUYRK: Thank you. RENEE JAMES: Thanks for having me. My favorite topic. CLAY MAGOUYRK: Well, as you and I have talked about many times, I know you're passionate about Ampere, and you're also very passionate about sustainability. You've been in the technology industry for a while. You've been working on semiconductors for most of your career. RENEE
JAMES: Yes. CLAY MAGOUYRK: But then, five years ago, you decided to go and start a new company. What motivated you to go out and do that? RENEE JAMES: Yes, I have been in semiconductors for a long time, my entire career actually. There were twin challenges really facing us, and I felt like there was an opportunity for a new company to emerge to go after the twin challenges of the cloud, which are increased performance, linear scaling performance, and efficiency and sustainability. And that quadrant of computing, which is higher performance at lower power versus lower performance at lower power, hadn't really been pioneered. And the cloud opened up an entirely new opportunity from a software perspective. Because so many customers like you all were moving to the cloud, because so many of the workloads were in the cloud, we could build a cloud-native processor that was very tuned to the needs of people like OCI. And could deliver, in addition, the sustainability that's absolutely a requirement. And this was five-plus years ago. And now with generative AI, that efficiency requirement is even higher. CLAY MAGOUYRK: Absolutely, at Oracle ourselves, we're experiencing a lot of growth. As
you can imagine, there's a lot of success happening across the company. And as that growth is happening, sometimes you run into limitations. You don't always get the power that you want. And so one of the reasons that we shifted so quickly to adopt Ampere technology is because of that power efficiency. It allows us to continue growing our business without bottlenecks. RENEE JAMES: First of all, I'm very appreciative of your guys' adoption, what you call A1, which is the Ampere Altra line. And certainly, the power efficiency has been excellent. But now we're seeing in our new
generations and as we go forward, incredible performance gains as well. And so I'm very pleased with that evolution. The cloud is unfolding the challenges that, five years ago, frankly, I talked about with efficiency and sustainability in microprocessors. For those of you that don't know, for decades we used to use power as a proxy for performance. So we would just add more power, because if it was 400W or 500W, who cared? You could not air cool a data center that was full of microprocessors and/or GPUs that are 500W. And there's no possibility that that's a long-term solution, given that most people are being constrained on the grid about how much more power they can draw. In fact, some jurisdictions are actually saying, you can't put more data centers in. So you're going to have to get more density. So the approach that we went after was a dense single-core scale-out approach in a very, very power-efficient way. CLAY MAGOUYRK: Well, as a software person at heart, I'm always-- RENEE JAMES: Me too. CLAY MAGOUYRK: I'm always impressed by how you can have fundamentally different approaches that result in different options. And I think the work
that you've been doing at Ampere is impressive. RENEE JAMES: Thank you. CLAY MAGOUYRK: To hear a bit more about what we're seeing, not just from Oracle as a customer, but also from some external customers, I think we should bring Mehdi from 8x8 up. What do you think? RENEE JAMES: That'd be great. [MUSIC PLAYING] [APPLAUSE] CLAY MAGOUYRK: I'm Clay. MEHDI SALOUR: Hi, thank you. RENEE JAMES: Hi. CLAY MAGOUYRK: So, Mehdi, we've been working together for a while. 8x8 is a company that cares about performance, efficiency, and sustainability. Can you share with people what is
it that 8x8 is working on, and why performance and efficiency are so important to you? MEHDI SALOUR: Well, absolutely. Our journey with OCI, in fact, started on the foundation of performance and efficiency. At 8x8, we're in the business of cloud communication services, providing businesses globally with contact center solutions, unified communications, and video and voice on a single integrated platform. And for our type of services, we're dealing with real-time audio and video over the public internet. It's one of the most difficult applications to be delivered over the public internet. So it's absolutely critical for us
to have excellent performance on the compute and on the network side. So very, very important to us. If you guys want to know the whole story, just search 8x8 Oracle. You're going to see a lot of articles on Oracle's website, on Forbes, other places, how the journey started. But to summarize it, at the beginning of the pandemic, OCI was one of the main reasons that we were able to continue providing free meeting services to those who desperately needed them at the time. It was a very important time for us to be able to extend our reach and
provide these services to hospitals, schools. We had so many great messages coming in. We moved our services to OCI, a massive amount of data transfer as you can imagine. Millions of users were utilizing our services for free at the time. And we saved 80% on network egress costs, which was humongous for us. And we gained 25% performance per node compared to what we were using. But our story just did not stop there. It just continued. And the cycle of performance and efficiency is something that I've seen with OCI continuously. We're talking a lot about these things. For example,
when the flex shapes were introduced, I think it opened up a new dimension in performance and cost efficiency. It was really great. And I believe right now, with the Ampere processors, you have taken it to a whole new level. And on the other side of things, the network is also super important to us. Your global expansion has been amazing, just in the past few years, how many regions you have opened up. We have our own global reach technology. So we can deliver the best quality of service to users around the world on a single platform.
So contact center users might be all around the world. We want to make sure they get the best voice quality. And to do that, we need infrastructure in the region. So we keep the media and signaling within the region and reduce the latency and the distance we traverse over the internet to get to the end user. So utilizing your infrastructure, we have been able to reach our customers really well and provide that excellent quality of service. Lastly, as far as the experience with your team, I can tell you that it's an amazing team. You have a customer-obsessed team.
Let me tell you this. Literally, the way that we're dealing with your product teams, they're listening to us. It's just like dealing internally with our own product teams at 8x8. And our account team and several guys are so integrated with our team, sometimes I forget they are part of Oracle. So that's great. And keep up the great work. CLAY MAGOUYRK: So Mehdi, I know you've recently adopted our Ampere A1 offering. Can you tell people a little bit about what that experience was like? Was it easy, was it hard, how did it go? MEHDI SALOUR:
Oh, great question. So let me tell you guys a story. Around, what, five weeks ago, Clay calls me. We're in a meeting, and he tells me, Mehdi, let me tell you something. We have these awesome Ampere processors that are cost-effective. They're performing really well. And they consume much less power, so they're great for the environment. And I know you care about all three of these. We have moved a lot of our own managed services to them. Do you want to give it a try? And I'm like, absolutely, let's give it a try. And this is the first time; we have been x86 all along. So for us, let's see how it goes. Three weeks later, I'm sending him an email saying, hey, we're live on A1s in production right now. And we have production video traffic on them. And as far as the experience, it was actually much easier than what I thought. I threw this to one of our most agile, innovative teams, our video team. And I said, hey, let's try this out. Literally, they just had to recompile one of the modules, which was native. We were able to find all the
libraries that we needed, including a security module that was a must from our security team. That was available, so we put it on and tested it. The rest of the time was really working on the orchestration and switching the compute shapes that we're using and all of that. We went live. I actually waited a week to make sure it's working really well. And then, actually, I notified you guys. And the performance: these processors, they don't do hyper-threading. It's very consistent, linear performance that we have experienced. Initially, we were conservative in the way that we utilized the number of processors that we assigned. Then we started to make the machines run hotter. By that, I mean putting more load on the machines, not raising the temperature; the temperatures are actually lower on these machines. [LAUGHS] RENEE JAMES: Yeah, thank you. MEHDI SALOUR: Yes. RENEE JAMES: They don't get hotter. MEHDI SALOUR: They don't get hot. Yeah, exactly. And we're like, oh, they're performing really consistently, really well. So that linear scalability and performance has been really good. And I'm sure you're happy, because your density in the data centers is getting better with these. CLAY MAGOUYRK: Look, this is
a great-- you're happy, I'm happy, everybody's happy. RENEE JAMES: I'm happy. MEHDI SALOUR: Excellent. [CHUCKLES] CLAY MAGOUYRK: So Renee, we're here, and we hear a lot about all the great things that are currently available and the advantages of using the current technology. How do you see this continuing to evolve? RENEE JAMES: Yeah, that's a great question. Well, first of all, thank you. We've been an 8x8 customer since I started the company. So now I'm happy-- MEHDI SALOUR: I really appreciate that. RENEE JAMES: --an Ampere customer. We're excited; for Fusion and the database and applications of that ilk to move to this architecture is profound. The next product is called AmpereOne. We named it that because it's the first one built on our own cores, which we've designed from the ground up. It has, of course, more cores and more performance, but also some very unique features that are important for cloud providers like you all. CLAY MAGOUYRK: Well, and I'm very excited to say today that along with the new AmpereOne processor, at OCI we'll be launching our new A2 instance early next year, which will make this great new technology available to all
of our customers worldwide. RENEE JAMES: Yeah. CLAY MAGOUYRK: Renee, Mehdi, this has been awesome. I really appreciate you coming here and telling everybody about your experience. Thank you very much. RENEE JAMES: Thank you. MEHDI SALOUR: Thank you very much. RENEE JAMES: Thanks. [MUSIC PLAYING, APPLAUSE] CLAY MAGOUYRK: So it's great to hear from Renee and Mehdi on the advantages of these processors. But it's also important during this transition to understand that there's a need for a win-win. We each have a role to play. You as a customer must adopt this technology, reduce your energy consumption, and
therefore help the environment. It's our responsibility as a cloud provider to incentivize that by making sure that the more efficient option is also more cost-effective for you. And at Oracle we are committed to delivering that sustainable cloud. And we're committed to passing those savings on to you so that you are incentivized to be more sustainable. That's another very special property of the Oracle Cloud. I talk to customers a lot. And one of the most common things that they bring up over the past few years is that they have multiple clouds. And what they ask from
us as a cloud provider is to make sure that our clouds work well together. It's an important part of our strategy at Oracle that we ensure our cloud works well with others. And in the pursuit of that vision, we have multiple offerings that I'm going to talk about. The first of which is Oracle Alloy. Oracle Alloy is a set of functionality that enables others to become cloud providers themselves. We launched this at Cloud World last year. And since then, we have two very important partners and customers, Telecom Italia and NRI. NRI has been a long-time
Dedicated Region customer of ours. They helped us evolve our products. They're an amazing partner; they give us great feedback and help us deliver the solution together. And once they understood the power of Alloy, they immediately decided to take that offering. And together, we can go and serve more highly regulated markets, like the Japanese government market. Another thing that we've been working on for many years is the relationship between OCI and Azure. And the reason for that is because, as you can imagine, Microsoft and Oracle, we have a lot of joint customers. If you go to any
on-premise data center, there's a lot of Microsoft technology, and there's a lot of Oracle technology. More than four years ago, we started with our first Interconnect between OCI and Azure. And since then, we've launched more than 12 regions around the world. The focus of that product was really on making it interoperable at the network layer, at the identity layer, and having a shared support model. Along that path, we also got a lot of feedback from customers. They liked the fact that we were working together. They liked that we were trying to really solve their joint
problems, but they wanted more from us. And so a year ago, we launched our Oracle Database Service for Azure, which was really an integrated experience focused on how you can use the best of Oracle database technology, Exadata, Autonomous Database, HeatWave, in a really easy-to-use interface with your existing Azure applications. Since then, across the Interconnect and across our Oracle Database Service for Azure, we have almost 500 customers around the world. And not only are we seeing continued growth, we're seeing an acceleration in the adoption of this technology. But we're not standing still.
Last week, Larry and Satya announced an expanded partnership between Oracle and Microsoft. And that expanded partnership is called Oracle Database at Azure. And to hear a little bit more about that and to hear about some of the progress so far, even though it's only been a few days, I'm going to invite Judson Althoff, executive vice president and chief commercial officer of Microsoft to the stage. [MUSIC PLAYING, APPLAUSE] JUDSON ALTHOFF: How are you doing? CLAY MAGOUYRK: Oh, great, man. Thanks for being here. JUDSON ALTHOFF: Thanks for having me. CLAY MAGOUYRK: So, last week, Larry and Satya
were hanging out in Redmond. It went well from my perspective. JUDSON ALTHOFF: Ours as well. CLAY MAGOUYRK: Good. JUDSON ALTHOFF: It was nice to host Larry in Redmond for his first time ever. CLAY MAGOUYRK: It's been a few days since then, not too long. But you talk to customers all the time. What's the reaction that you're seeing from customers? JUDSON ALTHOFF: Yeah, first of all, look, we couldn't be more excited about this partnership, Clay. And for me, both personally and professionally, up until about four years ago, I thought the only time I would ever see
the Oracle logo and the Microsoft logo together would be on my LinkedIn profile. CLAY MAGOUYRK: [CHUCKLES] JUDSON ALTHOFF: And I was talking to my daughter the other night and telling her about all the work we're doing. And she reminded me about how she and her brother grew up with Oracle onesies on and had more Oracle logo wear than Garanimals. So aside from the personal nostalgia, customers are fired up about what we're doing. Because as you talked about, look, there's no such thing as a mono-cloud strategy in this day and age. Customers want us to
work together. They want to see the ability to leverage applications running on Azure connected to the Oracle Database Service. And frankly, without limits, without boundaries. And AI is only going to further build upon that. Every single customer that comes to Microsoft talking about AI is a customer that we counsel in saying, look, you can invest all you want in your AI tech. But if you don't first ground it in your data estate, all you're going to do with your AI is make mistakes with greater confidence than ever before. So look, the importance of being able
to bring the Oracle data estate, Exadata, the Autonomous Database, connected to Azure and help our customers accelerate progress, it's an awesome, awesome opportunity. CLAY MAGOUYRK: That aligns with exactly what I hear from customers all the time. This has clearly been an area that customers have wanted us to work on, and we have been working on it for several years. What do you think about this offering? That is, what makes it different than, say, the stuff that we did with the Interconnect or even some of the other stuff we launched last year? JUDSON ALTHOFF: Well,
I like the timeline that you showed because, look, as you've stated, we've been working on this for a while. Our engineering teams have been side by side, hands on keyboard, trying to create better experiences for customers now. And the Interconnect was revolutionary. It was really the first time two major cloud providers came together to deliver a better experience for customers, in earnest. And we learned a lot, as you stated. And the bottom line is, the kind of service that a customer expects to come from Microsoft and from Oracle is a high performance, real time experience.
And so the pressure on the Interconnect was always the latency and gosh, could we get you all to coordinate a little bit more effectively on data center locations. So we really thought the best thing to do was to actually put OCI inside of Azure so that customers can have that seamless experience without boundaries. So you can literally go into the Azure portal. If you have a commitment to Microsoft for what we call an Azure Cloud Consumption Commitment, you can literally retire that commitment through leveraging Oracle Database Services that are connected right in the same data
center rack, side by side, zero latency, the best of both worlds. CLAY MAGOUYRK: Right. JUDSON ALTHOFF: So we think it's a phenomenal leap forward. As you stated, we're starting off in 12 regions, but we hope to grow it from there. There are hero regions, by the way, where we have the bulk of our compute and the bulk of our AI capabilities today. And so as customers roll out more and more generative AI solutions leveraging Microsoft copilots and some of the demos that you all have shown with Power BI connecting into the Oracle Database, they can
expect a far better experience than ever before. CLAY MAGOUYRK: Yeah. So you've got a lot of excited customers. We've got a lot of excited customers. What advice would you give to those customers? How do they take that next step? Where do they go from here? JUDSON ALTHOFF: The time is now; we're open for business. We have a lot of customers already in the pipeline, as you and I have talked about. This was one of those announcements where we didn't have to do a lot of arm twisting to get customers to step forward. I think the first reaction
was, finally. The most common reaction anyway. So look, reach out to your Oracle sales team, your Microsoft sales team. They're both equally incented to work with you on all of this. Failing that, you can reach out to me at judson@microsoft.com. I am thrilled about this partnership. And I'm thrilled about what we're going to do together with customers in the market. And look, this is just the beginning in terms of the unlock I think we can really have. I'll come back to this AI point. Look, every customer I talk to has 100 big ideas of what they
think they can do with AI. And I guarantee you, it won't be unlocked any faster than working with Oracle and Microsoft together. So we're thrilled, Clay. CLAY MAGOUYRK: Well, Judson, this has been a long journey for both of us. I know you and I have worked on this together for quite some time. It's great to have a moment where it's real and live. And I appreciate you coming here today and talking about it. Thank you very much. JUDSON ALTHOFF: Thanks so much for the partnership. CLAY MAGOUYRK: Yes. [MUSIC PLAYING, APPLAUSE] So as you heard from
Judson, this new partnership is really about a few things. It's about reducing the latency for these workloads so that you can take your most demanding workloads to both of our clouds. It's about a unified user experience. And it's also about a unified commercial experience, where you can procure this through the marketplace and receive private offers from Oracle. We couldn't be more excited about it. And I think that not only is this going to be very impactful for customers, I think this is something that will continue to evolve the cloud industry going forward. Another part of
our multi-cloud strategy is the work that we're doing with HeatWave. MySQL is an extremely important technology at Oracle. And we've invested very heavily into our MySQL HeatWave service. And we believe it's the best way to run an open source database in any cloud. And so another part of that MySQL HeatWave strategy is that it's critical that we meet our customers where they are. To learn more about the great things that we're doing with MySQL HeatWave, I'm going to invite Edward Screven, executive vice president and chief corporate architect to the stage to talk more about our
HeatWave strategy, availability, and some adoption by our customers. [MUSIC PLAYING, APPLAUSE] [INDISTINCT CHATTER] EDWARD SCREVEN: All right. OK. So we heard Clay talk about improving efficiency. So using less power, saving money. We all know how important that is. And doing it without actually sacrificing performance, running at least as fast as you did using older hardware. So we can use new infrastructure. We can be more efficient. We use less power, save money. But what if you could improve query performance by 10 times? What if you could train your ML models 25 times faster? What if you
could replace five AWS services with one service? And still run in AWS, by the way, because you can choose your own cloud. Well, you can do that with MySQL HeatWave. So if you don't know, MySQL HeatWave is a database service that combines online transaction processing with lightning-fast analytics. The only other database service I know that does that is actually the Oracle Database Service. Using Autonomous Database and Autonomous Data Warehouse, you can do both transaction processing and high-speed analytics. The advantages of that are clear. You don't need to move the data. If
you use other cloud database services and you have a transaction processing system you want to do analytics on, you have to create ETL to move the data from the transaction processing system to the analytics system. So not only do you pay more, because you have to pay for both a transaction processing service and an analytics service, you have to move the data, which means the analytics you're running are not real time. Using MySQL HeatWave, your analytics are real time. You get the answers you need in real time. Now, one very distinguishing characteristic of MySQL HeatWave is that it has built-in automatic machine learning.
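Edward's ETL point can be sketched as a toy Python illustration (the numbers and names are invented; this is not HeatWave code, just the staleness argument):

```python
# A toy illustration (not HeatWave code) of why analytics served from a
# separate, ETL-fed copy of the data lag behind analytics run in place.
# All numbers here are made up.

live_orders = [100, 250, 175]      # rows in the transaction processing system

def etl_copy(rows):
    """Simulate a periodic ETL job: snapshot the data for a separate analytics system."""
    return list(rows)

warehouse = etl_copy(live_orders)  # analytics copy made by last night's ETL run

live_orders.append(300)            # a new transaction lands after the ETL run

def total_revenue(rows):
    return sum(rows)

# The separate analytics system answers from stale data...
print(total_revenue(warehouse))    # 525 (missing the newest order)
# ...while a query against the transactional data in place is real time.
print(total_revenue(live_orders))  # 825
```

The gap between the two answers only closes at the next ETL run, which is the delay a combined OLTP-plus-analytics system avoids.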
You don't need to use a separate service to do machine learning. If you're using AWS, you're probably using SageMaker if you want to do machine learning. You don't need a separate service for machine learning if you use MySQL HeatWave. And that machine learning is automatic. What I mean by that is, you just tell MySQL HeatWave what your goal is. I want to do regression, or classification, or recommendation. You show it the data. You say, I want these tables, these columns, and say go. It chooses the specific machine learning algorithm. And it chooses the hyperparameters for you. It then trains the model for you. It manages that entire process. You don't need to be a data scientist. You don't need to manage the data. And not only is it lower labor, it's also more secure. You're not moving the data around. The data stays in the MySQL HeatWave database. Now this summer, we launched something called MySQL HeatWave Lakehouse. MySQL HeatWave can now query data stored in files in object store. Files that may be generated by IoT devices, files that are generated as logs from various services
you're running, files that are generated as backups from other database services. So for example, if I'm using Aurora in AWS, I can take an Aurora backup and query it directly using MySQL HeatWave Lakehouse. That means MySQL HeatWave is an analytic solution for more than just MySQL applications. If you can generate files with data, you can query it with MySQL HeatWave. All of this is managed through something we call Autopilot. Autopilot is machine-learning-driven automation of MySQL. You don't need to tune MySQL HeatWave. It's automatically tuned. You don't need to worry about how to scale MySQL HeatWave.
It's automatically scaled. You don't need to be a database expert at all to use MySQL HeatWave. And the result of Autopilot is that we have achieved amazing, dramatic performance. So here I've got a slide comparing MySQL HeatWave versus several other databases you might be using or considering for analytics. The price performance advantages are overwhelming. This is a 500-terabyte TPC-H benchmark. The price performance advantage against Redshift, 8 times. Against Databricks, 18 times. Snowflake, 22 times. Google BigQuery, 30 times. Now these are price performance
numbers. If I showed you just the straight performance numbers, they would be equally dramatic. MySQL HeatWave is much faster than these other dedicated analytics databases. And remember, you get it together with OLTP capability. You get it together with Lakehouse capability. Now what that means is you get cost savings, you get energy savings, you get faster insights. And you can get all of those things on your choice of cloud. Of course, MySQL HeatWave runs on OCI. It runs on AWS. If you have files you're generating into S3 buckets, you can query those directly using MySQL HeatWave
Lakehouse without moving the data out of AWS because MySQL HeatWave runs directly in AWS. And of course, you can also access MySQL HeatWave from Azure. Now a question, why don't we talk to an actual MySQL HeatWave customer, NVIDIA? And I'd like to please invite Chris May onto the stage. [MUSIC PLAYING, APPLAUSE] Chris is the head of IT platform and cloud engineering at NVIDIA. CHRIS MAY: Thank you, Edward. EDWARD SCREVEN: Chris, why don't you start by talking a little bit about what you're doing with MySQL HeatWave and the problems you're trying to solve? CHRIS MAY: Yeah,
so my team provides critical services that support all of the engineering and operations work that keeps NVIDIA working. And one of the specific areas, the challenges that we addressed had to do with our code coverage application. We needed to make sure that we had accurate reports on what percentage of our code was being executed and exercised by our test suites. EDWARD SCREVEN: So just to make sure I understand, so basically, the engineers at NVIDIA are writing both software and hardware code, probably SystemVerilog for your chips. And you need to verify that the tests that you're
running are covering all of that code. CHRIS MAY: Exactly. EDWARD SCREVEN: Now I think many people here are probably software-oriented, and so we're used to being able to just fix the bug and deliver a patch. But if you actually have a bug in the Verilog, can you patch those chips? CHRIS MAY: That is a great question. And yeah, there are workarounds that our hardware engineers can do. EDWARD SCREVEN: But it's very expensive. CHRIS MAY: It's very difficult. EDWARD SCREVEN: It is very expensive. CHRIS MAY: And usually, you need to know early on before you actually
tape out. EDWARD SCREVEN: So this is a critical part of the way NVIDIA does business. And so do you have any metrics that have shown improvements with using MySQL HeatWave? CHRIS MAY: Yeah. So when we moved our coverage application to the cloud, we identified a specific area of incompatibility. It relied on a database engine that we could only support on prem. And that was challenging. We were introduced to HeatWave as a possible solution. And as we delved into that, we realized that the in-memory analytics that's built into HeatWave perfectly matches our need for that piece
of it. So that reduced a lot of the code complexity around this. And we had some phenomenal results from that. EDWARD SCREVEN: And so reducing that code complexity, so it not only ran faster, but you now have an environment, which is probably a lot more maintainable. CHRIS MAY: Yeah, exactly, which freed up smart engineers to work on other problems than maintaining this code. Some of the other benefits that we ran into, we noticed a 50% increase in performance of our queries. And this was both for the analytics queries, as well as our normal transactional work.
EDWARD SCREVEN: That's great. So I understand you're also using HeatWave for machine learning now. Is that right? CHRIS MAY: Yes. So a lot of my colleagues at NVIDIA are doing really exciting things with AI, self-driving cars or medical advances. I wanted to look at how I can have AI help me and my team with our operational excellence. And so we started a proof of concept using HeatWave's AutoML log anomaly detection. And we coordinated with Oracle's team. And we're drawing in OCI's native logs, correlating them with the database error logs. And the idea
is to find early detection of errors before they actually happen. So if we can, for example, identify that a file system is running low, that's a yellow flag situation. But we can generate a ticket to an operator, who can then either grow the system or remove files and prevent the outages that would happen if we hadn't addressed it. So obviously, this means this is more reliable for those that rely on our services. But the added benefit is the operational support effort for preventing that outage is much less than it would be recovering from an outage.
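The yellow-flag idea Chris describes can be sketched in Python (a hypothetical illustration of the concept only, not HeatWave AutoML; the thresholds and numbers are invented):

```python
# A hypothetical sketch of the "yellow flag" idea: watch a metric such as
# file system usage, extrapolate the trend, and open a ticket before it
# becomes an outage. This is just the concept, not HeatWave AutoML; the
# thresholds and numbers are invented.

def hours_until_full(samples, capacity):
    """Estimate hours until full from evenly spaced (hourly) usage samples."""
    growth_per_hour = (samples[-1] - samples[0]) / (len(samples) - 1)
    if growth_per_hour <= 0:
        return None                # usage flat or shrinking: nothing to flag
    return (capacity - samples[-1]) / growth_per_hour

def check(samples, capacity, warn_hours=48):
    """Return a ticket string if the projected fill time is inside the warning window."""
    eta = hours_until_full(samples, capacity)
    if eta is not None and eta <= warn_hours:
        return f"TICKET: file system projected full in {eta:.0f}h"
    return "OK"

# Growing 2 GB/hour, 60 of 100 GB used -> full in ~20 hours: raise a ticket.
print(check([52, 54, 56, 58, 60], capacity=100))
# Flat usage never triggers.
print(check([60, 60, 60, 60, 60], capacity=100))  # OK
```

An ML-driven version would learn the warning window and the anomaly patterns from historical logs rather than hard-coding a threshold, but the ticket-before-outage flow is the same.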
So now we're actually starting to coordinate what we're getting out of this analysis with our monitoring systems so that we can measure a reduction in outage time. EDWARD SCREVEN: So just one followup question on that. Did you have to hire any data scientists to do this machine learning project? CHRIS MAY: Yeah. No, this is the greatest part about it because all of this was built into the HeatWave service that we're already using. So we didn't have any more cost. There's no additional overhead. We didn't have to build an application for this. It was so easy to
use. And we're storing our data in a native format. So it greatly reduced the complexity of the solution and helped us to get to production much more quickly. EDWARD SCREVEN: That's great. I'm really glad to hear that. So if you'd like to learn more about MySQL HeatWave and Lakehouse and AutoML, you can join me today at 4 o'clock in ballroom E, where I'll be giving a keynote about it. And with that, I'd like to really thank you, Chris, for coming and talking to us today. CHRIS MAY: Thank you. EDWARD SCREVEN: Thank you so much. Thank
you. [MUSIC PLAYING, APPLAUSE] CLAY MAGOUYRK: All right. Thank you, Edward. EDWARD SCREVEN: Thank you. CLAY MAGOUYRK: It's great to hear from Edward and Chris about the success they're having with MySQL HeatWave. Now I know that we haven't talked enough about AI, but we're now getting to that section. I've shared a lot about Alloy, our partnerships with Microsoft. You've heard Edward talk about HeatWave. The thing to understand is that Oracle is really committed to delivering a distributed cloud that delivers our differentiated services wherever you need them. So we started this conversation by asking our generative AI
service powered by Cohere, what is the Oracle Cloud? And given the recent advancements, it would obviously be wrong for me not to spend some time talking about generative AI. So to do that, let's start by talking about what generative AI is. Fundamentally, it predicts what comes next based on what's come before. And this is a correct interpretation, but it's also overly simplistic. Amazing things are built out of very simple primitives. Take for example, a general purpose computer. You have a simple architecture. You have a very simple instruction set. And through a massive combination of all
of those different instructions, you get the technology that we've built as a species over the past 40, 50 years. So saying that generative AI is not reasoning or it's not doing what a human would do, those things are true, but it also misses the point. So let's take a look at some examples of what generative AI can do rather than just talking about it theoretically. Here's an example where a healthcare provider is using a simple prompt to generate a letter it needs to send to an insurance company that authorizes a brain imaging procedure. And this
is done entirely based on doctor's notes. So this is a task that would normally be done by a person. It's completely automated based on existing data. If you're like me, you're in many meetings every day. Sometimes you miss one. And all you get is a transcript or a recording of those meetings. And it's a lot of content to parse through. These models can extract and summarize the relevant information. In this example, a simple prompt extracts and summarizes the action items, along with the people that they're assigned to. These models can also transform. Here's an example
of a chat between two colleagues who were reviewing a proposal for a customer. In this case, we've integrated the model so that it detects harsh language and suggests a more professional wording before this is sent to your colleague. These models can also do things that humans traditionally understand as reasoning. How many times have you looked at an insurance policy and you have a question about some of the details covered in this very long document? In this case, you can ask the model a question. And it will return the answer along with references to the document
explaining exactly where that information lives. Machine learning has been with us for many years. It's become ubiquitous in many ways. We don't even realize when we're using it. Whether we're uploading a picture to social media that uses facial recognition to help us tag our friends, or our spam filter on our email system is reducing the amount of clutter that gets sent our way, those are all great examples of machine learning. So if machine learning has been around for years, what is it about now that makes generative AI happen? Well, it's a combination
of multiple factors. First, it starts with better software. The transformer model, which came out of a paper from Google in 2017, is about neural networks that learn context and thus meaning. This was focused on taking large sets of data and understanding the relationships between that data. Stanford researchers called these transformers foundation models. And this has been a paradigm shift in the AI space. At the same time this is happening and over the next few years, we have massive advancements in the hardware to support the training of these models. Whether it be through general purpose GPUs
or dedicated ASICs, we have more than a 100-fold increase in the compute power we can use to train these AI models. And at the same time that you're seeing the individual compute power of a single unit increase, we also have higher bandwidth networks, lower latency networks. And we have technology that enables you to cluster together more of those GPUs than has ever been possible before. So at OCI, we call that combination of hardware and networking our superclusters. Today we support more than 16,000 GPUs that can each talk to each other, from any GPU to any other
GPU in less than 20 microseconds, all with 400 gigabits per second of throughput, from each GPU to every other GPU. Customers like Reka and MosaicML and NWorld use these superclusters today to train their massive models. And Mosaic, as an example, reports 50% better performance and 80% cost savings by running this on OCI. So as an industry, we've made substantial investments into the hardware and the networking to support these workloads. But there are other barriers to adopting this technology. First, when I talk to customers, their biggest barrier is getting access to it. And when they do
have access, how do they control the technology? How do they make sure they know where their data is going and how it's being used? So to solve that problem, we started with the best models from our partner Cohere. We then worked together to deliver an enterprise-class, customized, fully controlled service, which is our new generative AI service. As of today, we're announcing that's in early access, and it will be in production very soon. The key that we're focusing on with this service is ensuring that customers retain complete control of their data and understand exactly how it's
being used. That service will be available everywhere in our Distributed Cloud model. Once you get past the access and control barrier, the next barrier I see when I talk to customers is a lack of experience. This is not unique to AI. The reality is, anytime we have a new technology trend, people just don't have experience with it yet. So what do you do? You've got to play around with it. We've all been children at one time or another, where we're more excited about what we can
learn rather than worrying about what we're going to lose. So the way to solve this problem is to make sure that you take access to the technology, give it to your teams in a controlled and safe manner. And then you'll end up being extremely surprised by the things that they create with it that you couldn't even imagine before. So once you have that experience and you've got controlled access, what I see as the next barrier is how do I integrate that technology into my existing applications and workloads. To solve this problem, what we've done is
we've worked in conjunction with Cohere to create a great demonstration of how these foundational models, combined with retrieval augmented generation, can actually make it very easy for you to integrate this into your workloads. So let's take a look at that. [MUSIC PLAYING] MAN (ON VIDEO): So how does that all work? How does an application powered by generative AI get you to the answers you need faster? Natalie, our customer, is interacting with an application powered by a large language model. And this application does more than just chat. It's connected to a number of backend systems, like
the user, knowledge base, and inventory systems. Natalie is trying to figure out why her generator isn't starting up. She can ask her question however she likes. And the system responds naturally and also includes links to more information. The application is using a large language model to understand Natalie's request and search a collection of information for the answer. In this case, the system is looking in a knowledge base and will include links to the relevant items with a response. The system has provided some possible solutions. And Natalie has decided it must be an issue with the
spark plug. She can ask a followup question, like which spark plugs she needs. The application is going back to the knowledge base. But it does it while maintaining the context of the conversation. Natalie doesn't have to restate anything, like her generator model. Now that Natalie knows the right part, she wants to know where to get it. The system finds the part in nearby locations and also finds online options. The application can turn to other systems, like a customer database, to first discover where Natalie is located, and then an inventory system to look for matching parts.
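The retrieval pattern the demo narrates, keyword search over a knowledge base with earlier turns kept as context, can be sketched as a toy in Python (the articles, model names, and part numbers are all invented for the example):

```python
# A toy sketch of the retrieval pattern in the demo: keyword search over a
# small knowledge base, with earlier turns kept as context so a follow-up
# like "which spark plug do I need?" still knows which generator is meant.
# The articles, model names, and part numbers are all invented.

knowledge_base = {
    "gen-200 will not start troubleshooting": "Check fuel, battery, spark plug.",
    "gen-200 spark plug part number": "Use spark plug part SP-77.",
    "gen-500 oil change procedure": "Drain plug is under the left panel.",
}

context = set()  # words carried over from earlier turns in the conversation

def ask(question):
    terms = set(question.lower().replace("?", "").split())

    def score(title):
        words = set(title.split())
        # Words from the current question count double; context breaks ties.
        return 2 * len(words & terms) + len(words & context)

    best = max(knowledge_base, key=score)
    answer = knowledge_base[best] if score(best) > 0 else "No match."
    context.update(terms)  # remember this turn for follow-up questions
    return answer

print(ask("My gen-200 will not start"))    # Check fuel, battery, spark plug.
print(ask("Which spark plug do I need?"))  # Use spark plug part SP-77.
```

A production system would use a language model and embeddings rather than word overlap, but the shape is the same: retrieve against the question plus the accumulated conversation, then answer from what was retrieved.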
The large language model can translate the natural language prompt into an SQL query to get the information from these systems. Being able to look across a full data landscape allows the application to return richer responses. Natalie opts for an online option, and the system orders the part. With permission to get more information from the customer database and the ability to place an order, the application can perform an action, like creating an online order for Natalie. Applications powered by generative AI can get you to your answers faster by allowing you to interact naturally, finding the information
you need across systems and supporting the actions needed to get the job done. Work smarter with OCI Generative AI. CLAY MAGOUYRK: Well, I don't know about you. I look at demonstrations like that. And I think it brings together all of these individual pieces that we're thinking about. In addition, one of the favorite integrations I have are the ones that I don't have to do. And so as Oracle, because we're also an applications company, I want to show you briefly how we can actually preintegrate this great technology into the existing applications. So our HCM team has
integrated AI-assisted goal setting. Our NetSuite team has integrated AI-driven job description generation. And across all of our industry verticals, we are using generative AI in industry specific ways. Here, the construction and engineering team has built a scheduling AI assistant to generate complete project schedules for you. So when you take all of this and you combine it together, the end result is a complete cloud AI solution that gives you everything you need from an infrastructure layer to train and inference these models across the best models and services available in your entire cloud portfolio. And these things
are pre-embedded in your applications. So another important special property of the Oracle Cloud is that we're continuously learning. Today we've covered many of the special properties: that Oracle is about delivering a unified cloud that combines the best of applications and infrastructure, that those workloads are delivered sustainably wherever customers need them, and that the entire portfolio is continuously learning and improving. What I want you to walk away with today is that the pace of innovation in technology is accelerating across the board. You have a choice. You can either be afraid of this high rate of change
or you can instead choose to embrace it. I encourage you to experiment. Go out and play with new technology. Obviously that's true for generative AI, but it's true across everything that we've talked about today. And as that pace of change increases, you're going to find if you embrace it, that there are many opportunities you didn't imagine before. And that instead of being disrupted, you can really surprise yourself and your customers with the great things that you can build. And throughout that journey, the Oracle Cloud is here for you. Thank you very much. [APPLAUSE]