A trillion-dollar sell-off. Front and center this hour: the DeepSeek sell-off.
A major tech sell-off today. The DeepSeek sell-off is still really pressuring tech. Triggered, on its face, by one Chinese startup.
Let's talk about DeepSeek, because it is mind-blowing and it is shaking this entire industry to its core. But underneath, it stems from growing fears around a technique that could upend the AI leaderboard. A technique called distillation.
Distillation. Their work is being distilled. Distillation is the idea that a small team with virtually no resources can make an advanced model by essentially extracting knowledge from a larger one.
DeepSeek didn't invent distillation, but it woke the AI world up to its disruptive potential. I'm Deirdre Bosa with the TechCheck Take: AI's distillation problem. AI models are more accessible than ever.
That is the upshot of a technique in AI development that has broken into the main narrative, called distillation. Geoffrey Hinton, dubbed the godfather of AI, coined the term in a 2015 paper he co-authored while working at Google,
writing that distillation was a way to transfer the knowledge from a large, cumbersome model to a small model that's more suitable for deployment. Fast forward to today, and upstarts are using this method to challenge industry giants with years of experience and billions in funding.
Put simply, here's how it works: a leading tech company invests years and millions of dollars developing a top-tier AI model from scratch. They feed it massive amounts of data, harness huge amounts of compute, and fine-tune it into one of the most advanced models on the market. Then a smaller team swoops in.
Instead of starting from scratch, they use a technique called knowledge distillation: pummeling the larger model with questions and using its answers to train their own smaller, more specialized model. By capturing the advanced model's reasoning and responses, they create a model that's nearly as capable but much faster, more efficient, and far less resource-intensive.
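To make the concept concrete, here is a minimal sketch of classic knowledge distillation in the spirit of Hinton's 2015 paper, in PyTorch. The network sizes, temperature, and loss weighting are illustrative assumptions, not details of any model discussed in this piece.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Teacher: a large, already-trained network (weights random here, for brevity).
teacher = nn.Sequential(nn.Linear(784, 1024), nn.ReLU(), nn.Linear(1024, 10))
# Student: a much smaller network trained to imitate the teacher.
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: match the teacher's temperature-softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients, as in the 2015 paper
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(32, 784)               # stand-in for one training batch
labels = torch.randint(0, 10, (32,))
with torch.no_grad():
    teacher_logits = teacher(x)        # "pummel" the big model for its answers
optimizer.zero_grad()
loss = distillation_loss(student(x), teacher_logits, labels)
loss.backward()
optimizer.step()
```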
This distillation technique is just so extremely powerful and so extremely cheap, and it's just available to anyone, you know. Anyone can basically do it, so we're going to see so much innovation in this space on the LLM layer, and we're going to see so much competition for the LLMs. That's what's going to happen in this new era that we're entering. Tech giants use the technique too.
In fact, Google was a pioneer in distillation thanks to Hinton's research. Just weeks before DeepSeek broke onto the scene, Google was already using distillation to optimize lightweight versions of its Gemini models. Once again, Google had the tech, but someone else, DeepSeek this time, turned it into the story.
Just like Google pioneered the transformers that made generative AI possible, only for OpenAI to swoop in with ChatGPT and own the narrative. DeepSeek showed Wall Street just how effective distillation could be. People talk a lot about DeepSeek and these new models that seem to be doing, in days, work that was done in years or months before. People need to remember what's happening: they're distilling, which basically means building on the frontier models that have been created.
DeepSeek was able to mimic and even surpass OpenAI's advancements in just two months, spending what it says was less than $6 million on the final training phase. But it's too simplistic to attribute its success to just copying. It became increasingly clear that they had made some fundamental improvements in the way to approach this.
DeepSeek also applied clever innovations. If distillation was the only reason China got these models, like DeepSeek got these models, then Microsoft should have also gotten them, right? There's something more to it.
And it's not just compute, it's not just distillation, it's not just access to a lot of tokens. It's about cleverness.
Yet distillation was a major factor in DeepSeek's rapid ascent, helping it scale more efficiently and paving the way for other, less capitalized startups and research labs to compete at the cutting edge faster than ever before. These researchers at Berkeley just a week ago showed that for 450 bucks, they could create models that were almost as smart as this reasoning model from OpenAI called o1. That's not $450,000; for just 450 bucks they could do that.
So this distillation really works. That new reasoning model, called Sky-T1, was made by distilling leading models in just 19 hours with $450 and just eight Nvidia H100 chips. There's a small detail in the technical report, the research paper by the DeepSeek folks.
They actually took an existing model called Qwen. This is not a reasoning model.
It's an older model by Alibaba. And they said, hey, can we make that smarter, so that it can reason? One approach they had was this kind of reinforcement learning, a fancy technique.
They did that, and it was much better; Qwen got much better. But they also just tried distilling it.
They took 800,000 outputs from R1 and then did very basic tuning, very cheap, simple tuning. And that actually beat all of the other approaches.
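That "very basic tuning" is ordinary supervised fine-tuning on the teacher's outputs. Here is a hedged sketch of the pattern using Hugging Face's trl library; the model ID, file name, and hyperparameters are illustrative assumptions, not the recipe from DeepSeek's report.

```python
# Sketch: supervised fine-tuning of a small base model on reasoning traces
# previously sampled from a stronger "teacher" model. Assumes trl is
# installed and that the JSONL file has a "text" field holding
# prompt + teacher answer for each example (hypothetical file name).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

traces = load_dataset("json", data_files="r1_traces.jsonl", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B",  # student: an ordinary, non-reasoning base model
    args=SFTConfig(output_dir="qwen-distill", num_train_epochs=2),
    train_dataset=traces,
)
trainer.train()  # plain next-token prediction on the teacher's outputs
```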
Just weeks later, researchers at Stanford and the University of Washington created their s1 reasoning model in just 26 minutes, using less than $50 in compute credits. And Hugging Face, a startup that hosts a platform for open-source AI models, recreated OpenAI's newest and flashiest feature, Deep Research, just for fun,
hosting an in-house 24-hour challenge that resulted in an open-source AI research agent called Open Deep Research. Which raises the question: why are big tech firms still investing billions of dollars to push the frontier and develop the most advanced AI models, when someone can just turn around and distill them with significantly less work and less money?
Microsoft and OpenAI are investigating whether their new Chinese competitor used OpenAI's models to train its rival chatbot, including through unauthorized access to OpenAI developer tools. And David Sacks, Trump's AI and crypto czar, said on Fox yesterday that there is, quote, substantial evidence DeepSeek distilled knowledge from OpenAI models to create its own products.
If model distillation is accelerating, which these recent developments show is happening, and any small team can leap ahead of the biggest AI companies, what happens to their competitive edge? Another key paradigm shift post-DeepSeek: the rise of a new open-source order, a belief that transparency and accessibility drive innovation faster than closed-door research. My joke is everybody's model is open source.
They just don't know it yet, you know, because it's so easy to distill them. So you might think you haven't open-sourced your model, but you actually have. DeepSeek open-sourced its V3 and R1 models, publishing essentially a blueprint that allows anyone to download, use, and modify them for free.
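That openness is literal: the published weights can be pulled and run with standard tooling. A minimal sketch, assuming the transformers library and enough local memory; the model ID here is one of DeepSeek's publicly released distilled checkpoints on Hugging Face.

```python
# Sketch: download and run one of DeepSeek's openly published models.
# DeepSeek-R1-Distill-Qwen-1.5B is a small R1 distillation released under
# an open license; swap in a larger checkpoint if you have the hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What is 17 * 24? Think step by step."
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```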
Likewise, those models published by university researchers and Hugging Face were all open source too, raising questions about OpenAI's closed-source strategy, which once seemed like a moat but suddenly looked more like a target. Open source always wins. In the tech industry, that has been one truth of the last three decades: you cannot beat the momentum.
Even OpenAI CEO Sam Altman himself walked back his closed-source strategy. In response to a question on Reddit about whether OpenAI would release model weights and publish research, Altman replied: personally, I think we have been on the wrong side of history here and need to figure out a different open source strategy. Just a remarkable statement from a leader who has long championed the closed-source approach, citing safety, competitive dynamics, and monetization as key justifications.
For Altman to now admit that OpenAI may have been on the wrong side of history suggests mounting pressure, and it could have ripple effects across the AI landscape. Distilled models can be created on the cheap, and without a valuation to protect, the universities and startups creating them seem inclined to give them away for free.
The biggest winners: developers. The open source keeps the proprietary players in check from a pricing and performance standpoint. Because if I know as a developer that I can always run an open-source model on my own infrastructure, then that really reduces the pricing power of those proprietary players, which again is a huge win.
The cost of running AI applications is plunging. DeepSeek R1, for example, costs $2.19 per million tokens.
OpenAI's comparable o1 costs $60, so someone building an AI app would have to pay multiples more to use OpenAI instead, which means the script has now flipped. Teams building AI applications used to be disparagingly called "ChatGPT wrappers," accused of having no competitive edge since their entire AI app or website is just an interface on top of an OpenAI or Google or Anthropic model. But with the cost to serve steadily declining, those AI application makers now have an advantage.
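The gap is easy to quantify. A back-of-the-envelope sketch using the per-million-token rates quoted above; the monthly traffic figure is a made-up example, and real bills vary with input/output token mix and provider.

```python
# Rough cost comparison at the prices cited in this piece ($ per 1M tokens).
R1_PER_M = 2.19    # DeepSeek R1, as quoted
O1_PER_M = 60.00   # OpenAI o1, as quoted

monthly_tokens = 500_000_000  # hypothetical app serving 500M tokens/month
r1_bill = R1_PER_M * monthly_tokens / 1_000_000
o1_bill = O1_PER_M * monthly_tokens / 1_000_000
print(f"R1: ${r1_bill:,.0f}/month")                 # ~$1,095
print(f"o1: ${o1_bill:,.0f}/month")                 # ~$30,000
print(f"o1 is {O1_PER_M / R1_PER_M:.0f}x the cost") # ~27x
```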
Every time you make AI more efficient, you actually open up a dramatic increase in more use cases. I think if you zoom out and you say, if I became 5x more efficient or 10x more efficient, I'd argue that we'll have 100x more use cases in the next 5 to 10 years.
So I think this is a win for developers. I think it's a win for anybody building at the application layer of AI, and I think it's a win for the long-term AI ecosystem to have a continued deflationary effect on the cost of these models. And AI model builders, who have been on top for the past few years, are looking more and more commoditized.
What distillation has not done is change the calculus for the biggest players in AI. OpenAI is on track to raise $40 billion from SoftBank. Meta, Microsoft, Google, and Amazon all said on their first earnings reports post-DeepSeek that not only were their AI spending plans intact, but they would ramp up capital expenditures, or capex, even more. Nvidia recovered from the DeepSeek sell-off, and Jevons paradox became the leading narrative.
There are plenty of people quoting Jevons paradox this morning, talking about the fact that an increase in efficiency for a particular resource leads to more use of that resource. Are you in that camp as well? Absolutely.
I believe fully that these new, innovative techniques will lead to the development of more models, more testing, and will lead us toward AGI. Faster, smaller, more targeted models are optimized for speed and efficiency, making them cheaper to use not just for developers, but also for enterprises integrating AI into their businesses. Here's Tuhin Srivastava, founder of AI infrastructure company Baseten.
What we are seeing from our customers is that when it comes to using these models in production, oftentimes those smaller models are good enough, and now you're taking that really big power and using it in that smaller model. He says he heard from nearly two dozen Fortune 100 companies in the week following DeepSeek's reasoning model breakthrough. Not only do they want to run more efficient models, but it's changed their view on licensing AI.
They're questioning the premium they pay for OpenAI's ChatGPT APIs in Azure and Anthropic's Claude in AWS. Which brings us to OpenAI. The pressure is on, and for Altman and his team, the holy grail is AGI, or artificial general intelligence.
For them and the best capitalized players, you could call it an AGI-at-all-costs strategy: the idea that they will continue to push the technological frontier relentlessly, and that reaching it first is worth any investment, rather than relying on incremental improvements. Half a trillion dollars for Project Stargate, all part of Sam Altman and SoftBank CEO Masayoshi Son's grand ambition to reach artificial superintelligence, AI that surpasses human intelligence in all aspects, including creativity and problem solving.
Distillation may yield more cost-effective performance gains, but it does not drive the revolutionary breakthroughs needed to reach AGI or ASI. And that is why the race has never been more urgent: in a world where AI capabilities can be distilled, refined, and replicated faster than ever,
the window to build a true frontier-level advantage is narrowing by the day. The topics we cover in the TechCheck Take are complicated, and we heard your feedback on the long-form interview
within our DeepSeek piece. So this time we're bringing you an in-depth conversation with another pioneer in the space, Glean CEO Arvind Jain. He's a Google veteran, and he's not just observing distillation's impact, he's applying it in real-world enterprise AI solutions.
Thanks for watching, and we hope you enjoy. Arvind, thank you so much for being here. Let's talk about some of the changes that have just happened in this broader AI landscape.
I saw you maybe a month or two ago. We got DeepSeek, we got questions around scaling laws.
What do you think the biggest takeaway from DeepSeek's breakthrough was? Well, I think the big thing is that between 2024 and 2025, there's a big shift in how the industry thinks about what's going to happen next with AI models.
Last year, we were all thinking that model building is reserved for the largest companies out there, and that there would be only three or four of them. Now you're seeing that's not actually the case: companies can come in and build amazing models, comparable to the state of the art, with much lower training costs.
And there are going to be so many of them now. Also, there's no longer "the best model out there." That concept has gone away.
We had that two years back, of course. But now different models are all getting better at different things. And for your use case, if you're an enterprise trying to actually build an AI agent,
what makes sense for you? It's going to be one of the many that are available in the market. So there's a big transition in the market.
And I think it creates complexity as well for enterprises, in terms of how do I keep up with all of this? Which models should I be using, which ones should I not, which are secure, which are not?
So there are a lot of questions, but at the same time a lot of amazing innovation to take advantage of. And what to pay for, right? I mean, like you said, the shift that's happened where it's become so much more competitive and maybe totally commoditized.
Do you need to pay for OpenAI APIs versus, say, a DeepSeek, which is efficient? At the core of that, it feels to me, is distillation.
Is that right? Yeah.
And part of it is, it doesn't matter how they got where they got, right? What matters to me as a customer is what I can use.
What capabilities am I getting right now, and am I getting them at a lower price? Because remember that for most enterprises, training is not the thing that they do.
We just need to use models to actually transform our business processes. And so if we have a model that's cheaper, that's as performant, and I can actually reduce my cost by 100x, well, I'm going to use that. Has that been the major shift on the application side as well, that running AI has become cheaper?
So interesting. I'll tell you a fact about our own company. Our business is growing at a very, very rapid pace.
We're going to have so much more usage of AI models in our system, but we are modeling our costs to actually not rise at the same pace. I would say we are anticipating that we'll be able to reduce costs for our end customers by 10x, with all the model advances that are going to be coming this year. But that may mean that they're just using more of it, right?
Exactly. The big debate in the market. Going back a little bit, though: I know you're primarily concerned with the output, which models work and how competitive they are in the end.
But can you talk a little bit about distillation, for someone who's not familiar with it, and how that concept has changed the landscape? So the way to think about distillation as a technique is: you take a large model, and a large model is actually good at doing a lot of different things, but for your need, you only have one.
You need a model that can do one thing really well. So what you do is take the large model and use it as the thing that actually trains this new model you're building, which is going to be as capable as the large one on only that one task. But because you've taught the model it only needs to do one thing, it is able to simplify itself a lot.
And so you are able to build this new model, which is much smaller in terms of parameters and the cost of inferencing it, and it actually solves the problem that you have, that one task you wanted to go and solve, in a very nice way. So this is a technique that is going to be more and more common. In fact, we've been doing this for the last two years as well.
But expect that there will be a lot of these purpose-built models, which are all distillations of a larger model down to a specific task.
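In practice, the pattern Jain describes often starts with a data-labeling loop: query a large, general-purpose model on one narrow task and keep its answers as training targets for a much smaller student. A minimal sketch, assuming the openai Python client; the task, prompts, teacher model name, and output file are all illustrative, not Glean's actual pipeline.

```python
# Sketch: a large, general "teacher" model labels task-specific data.
# Its answers become the training set for a small, single-task student
# model, which is fine-tuned separately. All names here are hypothetical.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
SYSTEM = "Classify the support ticket as one of: billing, bug, feature_request."

tickets = ["I was charged twice this month.",
           "The export button crashes the app."]

with open("distilled_task_data.jsonl", "w") as f:
    for ticket in tickets:
        reply = client.chat.completions.create(
            model="gpt-4o",  # large general-purpose teacher
            messages=[{"role": "system", "content": SYSTEM},
                      {"role": "user", "content": ticket}],
        )
        label = reply.choices[0].message.content.strip()
        f.write(json.dumps({"text": ticket, "label": label}) + "\n")
```

Then what was DeepSeek's breakthrough? Was it the fact that it's open source and it used distillation?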
The breakthrough, from an end-user perspective, is: well, I got a model that performs amazingly, and it's very, very cheap, and it's actually very, very fast as well
for inferencing. And DeepSeek is not, I would not call it a typical model distillation process, because it actually still has broad capabilities. It can do a lot of different things.
There's innovation on top of the distillation. Yeah. Will others come, though?
Did it open up a new playbook that other companies, Chinese or otherwise, are able to use to get right to the frontier and make competitive models? That trend is actually broader than DeepSeek. There is a lot of research and innovation happening in the industry, in terms of different techniques for how to train models. So you can expect many, many more such advances to come.
This one captured everybody's attention in a big way, but I think this is going to be a normal thing for us. Throughout this year you will see lots and lots of different models that are great at certain things, and they can do those things at a fraction of the cost of the frontier models today.
Help me understand another piece of this, which is the benchmarks, things like Humanity's Last Exam, right? These are math and coding tests. And does it open up the question, especially as Glean uses this for other enterprises,
of generalization? How do we know that these models aren't just good at passing these specific benchmarks? How do we know a model is going to be able to solve or analyze a problem, to handle these unknown tasks that are so unique to an enterprise?
That's what Glean does. We are actually taking all of this innovation that's happening in the industry, whether it's coming from OpenAI or Anthropic or Google or Meta or all these open-source models that we have out there. All of them have some capabilities, and they can actually solve real-world needs. And so what we do at Glean is take them all and make them available to our enterprise customers in a safe and secure way, and make it easy for them to make these models work for real-world business use cases: connect these models with their knowledge, their data, their information, and their business processes.
Right. So how do you test whether these models are good at doing certain things or not? In the real world.
In the real world, we are seeing it with our application, which is primarily about knowledge access. People are using it very actively in their day-to-day work. Whenever you have questions that you need answers for, or information that you need to complete your tasks, AI is actually amazing at that particular category of tasks: making knowledge more accessible, helping you process, analyze, and research large amounts of information. So that's already proven.
These models do a great job at analyzing information and making it more accessible to you. But then, as you heard last year, agents are the talk of the town.
Today, how businesses are thinking about AI is:
well, I've got these business processes, and they're time-consuming; there's a lot of money that I have to spend running these processes; I want to see if I can build an agent.
So then you start a process of building an agent to solve a specific business problem. You do some iterations, you work with a platform like ours or some other, you try different models and see which one is actually doing better for you on that particular task. You have to tweak; it's engineering; you do some work. But ultimately you get there.
You actually bring that automation, 90% automation or 100% automation, into these processes. So you're seeing that real-world impact already happening across a large range of problems. So the models are certainly not, I won't classify them as just being good at taking the SAT.
The ones that you're tweaking and adapting for your customers, those are the ones that are better at the real world?
No, no. All these models are actually good. Don't think of any one of these models as only knowing how to take the SAT.
Okay. Got it.
Generally these models have broad capabilities, and you can actually bring them into a business and make them work for a real business. Is that because of the reasoning breakthrough? And then maybe put the deep research features into context.
That's right. The real power of LLMs has always been reasoning, and some generation too, of course; they can use those reasoning capabilities to also write and generate artifacts for you.
And those reasoning capabilities are on the rise. You see it with o1: the level of reasoning capability is miles ahead of the previous generation. With o3 we're expecting even more, and o3-mini is giving the same level of performance as o1 on certain tasks. Because if you think about any business process in an enterprise today, it involves a human that works with some information, uses some of their reasoning powers, and does some work.
Right. And so the more reasoning capability you bring from the models, the more complex business processes you'll be able to automate with AI. Does that mean the data wall is irrelevant now?
Data walls are irrelevant. Well, we talked so much about it last year, right? And we thought, okay, maybe the advancements had plateaued.
But it turns out there's just a different kind of advancement, right? Yeah.
There is no plateauing on any front, by the way. When it comes to AI, and the ability for us to leverage AI in our day-to-day work, we're just scratching the surface. We talked about this last year: even if somehow there was a standstill and there was no progress in model capabilities, you still haven't tapped into even 1% of all the capabilities the existing models have.
So there's a lot of work; you're going to see this massive transformation this year. Lots and lots of business processes are going to see AI getting infused into them. Talk about the OpenAI of it all, because their moat, their advantage, used to be these pre-training advancements, and the reasoning advancements are really significant, and they're building features on top of it.
But distillation, DeepSeek, means that it's sort of open season now, that those advancements are almost commoditized, right? Where does this leave OpenAI? And I also wonder how much you're using OpenAI over the last few weeks and what you're doing for your customers.
Well, we use OpenAI as well as these other models from Google and Anthropic and other companies. Has your usage of OpenAI gone down, though, with the rise of DeepSeek?
No, it hasn't. So far, for Glean right now, we have evaluated DeepSeek and we're making it available in our platform. But it takes time. Right now there is no impact in terms of actual enterprise usage for our customers; they're all still using the same frontier models.
Why? Because if DeepSeek offers the same performance, or similar performance, at a much cheaper cost, is it only a matter of time before they switch over? Yeah, it's a matter of time. In an enterprise,
there are a lot of other considerations. One of them, of course, is security, and making sure that we're using a robust model. With any model that comes in, there are possibilities of prompt injection attacks. And so there is a change management process.
Part of it is, when we are delivering a model to our customers, we need to make sure that it's robust, absolutely on all fronts. But second, I would also say that cost is not the only factor driving AI adoption and usage today. We're very early in the journey; a lot of our customers are interested in actually seeing some magic happen.
The cost of that is sort of irrelevant, because it's not at scale. In fact, that's what I would recommend to any enterprise leader: cost is the second consideration.
First, you want to build an agent that's going to actually transform a business process; make it work first, and then figure out whether you can actually optimize it. So if there is a clear leader in terms of what's the best model, it may be slow, but it's the best.
I would start with that first. But who is that right now? Who is that right now?
Well, that depends on the task. Okay. Yeah.
But I mean, it just feels like OpenAI had such a hold on that leadership for so long, and maybe with Gemini and Anthropic that has changed. That's changed, yeah.
Do you see that mix being a lot different? I mean, 100%, yeah. I think there's a fundamental shift: with LLMs, there are going to be many of them, and they're all going to get better at different things.
And as an industry, it's going to be a commoditized market in that sense. Yes. Do you think that OpenAI is going to become cheaper for your customers to use?
I think we're already seeing that on the consumer side, right? Right after DeepSeek R1, they made o1 free.
That's right. Is that happening on the enterprise side too? It's going to happen.
We haven't seen immediate pricing changes, but I expect everybody has to be competitive in this market. These are real cost differences.
This is not like 20% or 30%; these are order-of-magnitude reductions in prices. So we will see that pricing pressure, and pricing coming down. It's really interesting, last week Sam Altman did a Reddit AMA. Yeah.
And he said that OpenAI has been on the wrong side of history when it came to open source, which was just remarkable to me, because they've spent the last two years defending it with everything that they have. What did that mean to you?
Well, you remember: open source always wins. In the tech industry, that has been one truth of the last three decades: you cannot beat the momentum that a successful
open-source project is able to generate. When you build systems in open source, the number of developers who are behind that project and the amount of innovation that happens there, combined with the fact that it's a technology you can understand more deeply and that's more efficient from a cost perspective, make it a very, very hard momentum to beat. Now, it's not that everybody gets to always build open-source systems.
Companies like us, we build a ready-to-use product for our customers. But that's our mindset too: under the hood, we're going to maximize the use of open-source systems where we can, provided they have the right security credentials. Does that mean everything's going to be open source in the future?
From ChatGPT to Gemini to Anthropic? Well, that's actually a hard question to answer.
I don't know. One thing I can tell you is that if you look at usage of models, the majority of AI usage is going to be on open-source models. And it used to be sort of a debate, open versus closed source; it doesn't feel like as much of a debate anymore.
It was sort of, Llama's going to be the, I don't know if gatekeeper is the right word, because it's open. I mean, who's creating the open-source ecosystem? Is it one player or is it many players?
It felt like open source would also be one or two players. But the fundamental thing that has changed is: do you need billions of dollars to build models? The industry is now confident that you don't, which means that the number of players is going to be large.
There are going to be lots of players innovating and bringing models into open source; it's not going to be one company publishing open-source models. Okay, we've only got a few minutes left. I want to make sure I understand this too,
at the enterprise level and the application level, which a lot of people are talking about. You're in the perfect position to see that. Glean's own financials, right?
I think you guys hit $100 million ARR. Has that had anything to do with this shakeup, with the cost coming down, with distillation, any of these themes? I would say no.
That's been progressive; we hit that in our last fiscal year. The momentum for us is actually coming from the fact that we have a product that is a very obvious thing our customers want. Think of Glean as a better version of ChatGPT for your enterprise; it can actually be useful to you in your day-to-day work life. And there is no doubt that every leader in the industry today is focused on AI education. They know that AI is a big thing.
Everything is going to get transformed; their companies are going to get transformed. And to transform the company, you need to make sure that you have the workforce that can actually bring that transformation. And so there's a lot of focus on AI education.
And when you think about education, what are the tools? How do you actually get people more comfortable with AI? We are a tool which is sort of the most basic one that you would imagine you want to give to every employee of yours.
So that is what has created momentum for us. I mean, as a user, and I guess that's what people are doing in the enterprise, but just as a consumer of AI using the different tools, I really loved R1 because you got the thought process.
Is that something that people in the enterprise value too? 100%, yeah. Why do you think that is?
Because I think the magic that you feel with AI is basically coming from those human-like reasoning capabilities; the models can think. Anything that didn't require thinking, where I had to do the thinking, I could always write a program to do the rest of the work. What machines couldn't do before was the thinking, and that truly transforms what kind of systems you can build.
But I think this is also democratizing the process: now every business user, every employee in the company, feels that they have the power to create. Right. And think about the trust side of it too.
You can see the reasoning. And that's so important in a business process. Yeah.
Arvind, I could talk to you for another hour, and I wish we had another hour, but we'll leave it here. Thank you so much for coming in and talking about all these subjects. Thank you for having me.