I'm convinced that Wall Street still doesn't understand Nvidia. Nvidia stock is down by over 10% after one of their best earnings calls ever, and I think I've figured out why. So in this video, I'll break down everything you need to know about Nvidia's latest earnings, the three big things that investors seem to be missing right now, and what that means for Nvidia stock in 2025 and beyond. Your time is valuable, so let's get right into it.

First things first, I'm not here to hold you hostage, so here's everything up front: I'll quickly highlight Nvidia's awesome earnings results, I'll walk you through three key points that most investors are missing right now, I'll cover three big catalysts that could be huge for Nvidia stock in the short term, and of course what this all means for Nvidia's shareholders. As a result, there's a ton to talk about, so let's dive right into Nvidia's earnings call.

Nvidia posted record revenues of $130 billion for 2024, which is their fiscal year 2025. Top-line revenues are up a whopping 114% year-over-year, and Nvidia posted earnings per share of $2.94, which is up 147% from a year ago
after accounting for their 10-for-1 stock split last summer.

Nvidia's data center segment is their biggest business unit by far, accounting for 88% of their total revenues today. Those revenues came in at $115 billion, which is up by an insane 142% year-over-year. Just for reference, Amazon Web Services revenues came in at $108 billion in 2024, and AWS powers around a third of the entire internet, so Nvidia's data center revenues coming in higher than AWS's is a pretty big deal.

One thing that's scaring Wall Street is that Nvidia's gross margins fell to 73% in Q4, which is a 3-point year-over-year decline. Nvidia's Chief Financial Officer Colette Kress said that the lower margins are due to speeding up Blackwell manufacturing, and that Nvidia expects gross margins to be back in the mid-70s later this year. But what nobody pointed out is that Nvidia's gross margins for the last 12 months came in at 75%, which is actually up 2.3 points from last year, even after accounting for Blackwell's expedited production. That's the first thing that Wall Street is missing, which I'll get back to in a little bit. 73% gross margins means Nvidia is as
profitable as most software companies. On top of that, their gaming revenues were up 9%, professional visualization revenues were up 21%, and automotive revenues were up 55%, all year-over-year. I'm not spending too much time on these segments because they account for just 11% of Nvidia's revenues combined, but at this rate, Nvidia's total revenues could hit around $190 billion for calendar year 2025, which would mean more revenues than Tesla, AMD, and Palantir put together. That's pretty crazy, right?

If you've been watching this channel for a while, you know that the key to finding great stocks
is understanding a company's products, not just their profits. The best investments are usually in companies with platforms that other businesses pay to build on top of, and Nvidia is worth around $3 trillion because every major tech company relies on their accelerated computing platforms. Nvidia currently holds around a 90% share of the data center GPU market, and that's before Blackwell is fully ramped, which we know Nvidia is speeding up even if it means lower margins in the short term. That's because Nvidia's hardware and software platforms are the picks and shovels of this entire AI gold rush, and how big it gets ultimately depends on how fast Nvidia can deploy their solutions around the world. And they want to go as fast as possible, because according to Market.us, the global artificial intelligence market is expected to more than 8X in size over the next 8 years, which is a compound annual growth rate of around 30% through 2033.

But many of the companies building next-generation AI applications are not publicly traded. Think about the '90s and early 2000s: companies like Amazon and Google went public very early in their growth cycle, but today they're waiting
an average of 10 years or longer to go public. That means investors like us can miss out on most of the returns from the next Amazon, the next Google, the next Nvidia. That's where the Fundrise Innovation Fund, who's making this video possible, can help you. They give you access to invest in some of the best tech companies before they go public. Venture capital is usually only for the ultra-wealthy, but Fundrise's Innovation Fund gives everyday investors access to some of the top private pre-IPO companies on Earth, with an access point starting at $10. They have an impressive track record already, investing over $110 million into some of the largest, most in-demand AI and data infrastructure companies. So if you want access to some of the best late-stage companies before the IPO, check out the Fundrise Innovation Fund with my link below.

All right, so Nvidia's annual revenues and earnings per share both grew by triple digits year-over-year. On top of that, they're investing in ramping Blackwell even faster to maintain their massive market share. That's why margins are down quarter-over-quarter even though they're still up for the
entire fiscal year. This is the first thing that Wall Street seems to be missing. As a result, Nvidia is trading at a price-to-earnings ratio of 42, which is close to its 5-year low. In fact, the last time it traded at P/E ratios this low, Nvidia stock was just $11 per share.

The second thing that most investors missed was actually a pretty big bombshell that Nvidia's CEO Jensen Huang dropped during the Q&A portion of the earnings call. Let's walk through the question and then Jensen's answer together. Timothy Arcuri from UBS asked Jensen to speak about how some of Nvidia's biggest customers are balancing ASICs and GPUs. For context, Amazon, Microsoft, and Google all have their own application-specific integrated circuits, or ASICs, which some investors are worried compete directly with Nvidia's GPUs, and now we're seeing some superclusters use both kinds of chips together instead of relying only on Nvidia's GPUs. Jensen gave a pretty thorough answer, so I'm going to paraphrase a bit, but I'll show you his exact words on screen as I go.

Nvidia built very different things than ASICs, even though they sometimes overlap. Nvidia's architecture is
general, which means it's great for diffusion-based models, vision-based models, multimodal models, or text, instead of needing to specialize in any one of those. And Nvidia's ecosystem is so big and feature-rich that most innovations and algorithms come out on CUDA first. Nvidia is also really good end-to-end, from data processing and curating training data to the training, post-training, and inference steps themselves. And unlike ASICs that live in one cloud, whether that's AWS, Azure, or Google, Nvidia is in every cloud, and can also be on-premises or in a robot. And Nvidia's upgrade cycle is every year, which tends to be faster than the ASICs get upgraded, and this includes multiple chips: GPUs, CPUs, and the networking chips to connect them at scale. As a result, Nvidia's total performance per watt is anywhere from 2X to 4X or even 8X these ASICs, and that translates directly to revenues, since AI applications charge based on the amount of tokens generated, and most data centers either have a fixed amount of space or a fixed amount of power. Being able to generate 8X more tokens in the same space and power budget translates directly to
8X returns on investment for these data centers. And I don't think that Wall Street realizes that Amazon, Microsoft, and Google's custom chips don't compete with Nvidia at all. They're slower to upgrade, much more limited in the workloads they can support, and have two to eight times lower performance per watt than Nvidia's chips, depending on what innovations their massive install base has come up with and where Nvidia's chips are in their annual upgrade cycle.

Let me give you a clear example. Nvidia released an open-source software package called TensorRT-LLM, which literally doubled the inference performance for large language models running on Nvidia's GPUs. That means the hundreds of thousands of H100s bought by companies like Amazon, Google, Microsoft, Meta, OpenAI, and xAI got twice as good at running large language models overnight. But TensorRT-LLM also works on Nvidia's Lovelace GPUs, their Ampere A100s, which predate the H100s, and Blackwell GPUs as well.

And this leads nicely into the third thing that Wall Street seems to be missing, which is pretty ironic considering Jensen said it on CNBC for the entire world to hear: so now, basically, it's AIs teaching AIs
how to be better AIs. That post-training process is where an enormous amount of innovation is happening right now. A lot of it happened with these reasoning models, and that computation load could be a hundred times more than pre-training. And then here comes inference, the reasoning process. Instead of just spitting out an answer when prompted, it reasons about it. It thinks about how best to answer that question, breaks it down step by step, might even reflect upon it, come up with several versions, pick the best one, and then presents it to you. So the amount of computation that we have to do, even at inference time, is now a hundred times more than what we used to do when ChatGPT first came out.

So AI inference takes 100 times more computation now than it did when ChatGPT was released a little over two years ago, thanks to all the new reasoning approaches in OpenAI's GPT-4, xAI's Grok 3, and DeepSeek's R1 models. That works out to be roughly a 10X increase in inference compute costs every single year, which is huge for Nvidia, and I expect this trend to continue.
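That 10X-per-year figure is just the annualized version of Jensen's 100X-in-two-years claim, and the back-of-the-envelope math is easy to check yourself. Here's a quick sketch (the helper function name is mine, purely for illustration):

```python
def annualized_factor(total_factor: float, years: float) -> float:
    """Convert total growth over a period into an equivalent per-year multiple."""
    return total_factor ** (1 / years)

# Jensen's claim: inference compute is ~100x what it was when ChatGPT
# launched, a little over 2 years before this earnings call.
print(annualized_factor(100, 2))  # 10.0 -> roughly 10x per year

# Same math on the Market.us forecast: the AI market roughly 8X-ing
# over 8 years works out to about a 30% compound annual growth rate.
print(annualized_factor(8, 8))    # ~1.30 -> ~30% CAGR
```

The reason to think in per-year multiples is compounding: a steady 10X per year and a one-time 100X jump look the same after two years, but they imply wildly different compute demand after four.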
Let me explain why. Until recently, inference didn't need much compute because answers were generated in a single shot: the user prompts the model, and the model responds with whatever tokens it predicts come next. So inference costs were basically a function of how many prompts an AI model received and the average difficulty of those prompts. But today we have techniques like step-by-step verification, chain-of-thought reasoning, and mixture-of-experts models, all of which generate a lot more tokens on their way to solving a problem. As a result, AI models have a whole new way to scale their performance: spending more tokens to think about a problem in exchange for a better final output. For a human, that would be like getting more time, more money, and more access to the right tools and experts to do a job. But for AI models, that means more GPU hours and better algorithms.

So now there are three ways to scale AI models. The first scaling law is called pre-training scaling, where the foundational AI model improves with more training data and more parameters, both of which increase the amount of compute power needed. The second
AI scaling law is called post-training scaling. That's reinforcement learning on more specialized data and prompts, refining the outputs with different kinds of feedback, having the AI model practice millions of times using synthetic data, and so on. Think of this as the fine-tuning step where the model can learn over time, which also requires a significant amount of compute power. And now there's a third AI scaling law called test-time scaling. That's where models produce better outputs by spending more time, energy, and tokens on the inference step itself. Should GPT-4 provide the answer in one shot, or do some retrieval-augmented generation first? Should Grok 3 break its chain of thought down into three steps, or five, or ten? Which expert models should DeepSeek R1 consult with, or should it take a majority vote from all of them? This is a huge area of research that's going to result in many different algorithms to solve all kinds of different problems in a wide variety of markets. But what they all have in common is benefiting from more compute, since letting models generate more and more tokens on the way to solving a problem will lead
to better and better outputs. As long as that's true, AI research and development teams will keep finding new ways to generate better answers at the cost of more tokens. And now we've come full circle, because this is exactly why hyperscalers, supercomputers, and other companies with big data centers will keep buying Nvidia's chips, which are anywhere from 2 to 8 times more power-efficient than the specialized chips meant for specific workloads on specific clouds. And that's why Nvidia is pushing the pace of Blackwell's production and deployments, even at the cost of their margins over the short term. More chips leads to more innovations, which leads to more AI adoption, which leads to more chips, and Nvidia understands this feedback loop better than anyone else.

Now that I've covered Nvidia's earnings and the three key points that Wall Street has been overlooking, let me highlight the three catalysts that I think will be huge for Nvidia stock. And if you feel I've earned it, consider hitting the like button and subscribing to the channel. That lets me know to make more deep dives like this. Thanks, and with that out of the way, let's talk about
Nvidia stock. There are three big catalysts in the very near future. First, Nvidia GTC is right around the corner, which is Nvidia's massive developer conference. I'll actually be there live to cover Jensen's keynote, check out Nvidia's newest products and prototypes, and attend as many sessions as I can on things like robotics, self-driving cars, and even quantum computing. I'll leave a link to the sessions I plan on attending in the description below, because they're online, absolutely free, and a great way to learn more about the science behind this stock.

So the first catalyst is Jensen's GTC keynote, where I expect him to talk about the B300 Blackwell Ultra GPUs, which are set to ship in the second half of 2025. Remember, Blackwell is an accelerated computing platform, which means he should be talking about all the upgrades to the GPUs, but also the Grace CPUs, the BlueField DPUs, NVLink switch chips, and InfiniBand and Spectrum-X networking solutions. And it's not just about data centers. I want to understand how Blackwell performs at the edge by touring the entire exhibition floor at GTC and seeing Blackwell chips in everything from robots and self-driving cars to desktop PCs
and even Project DIGITS. And of course, I'll be interviewing as many Nvidia executives as I can and sharing everything I learn with you.

The second catalyst is Computex 2025, which happens in May. At last year's Computex, Jensen revealed the architecture after Blackwell, called Rubin, which is slated for 2026. And just like Blackwell, Rubin is a platform, so we should expect to hear about all six next-generation chips involved, not just the GPUs themselves.

The third and final catalyst is the architecture after Rubin. In an article published just yesterday, Tom's Hardware confirmed that post-Rubin GPUs are already in the works, and Jensen said that he'll announce them at GTC. The thing is, it's not just about these next three chips. It's about the overall performance gains from chip to chip to chip, and the trend in performance that we can expect over the long term. The bigger the boost each one of these chips provides to data center throughput, the further Nvidia extends its lead over every other AI chipmaker, and the faster AI costs come down, which means more AI adoption across the board, both of which would obviously be great for Nvidia stock. So
between Nvidia's awesome earnings call, the three big things most Wall Street analysts are missing, the upcoming catalysts at GTC and Computex, and Nvidia's stock's P/E ratio being near a 5-year low, my plan is to keep dollar-cost averaging in for the long term, which means buying more shares as the price goes lower while keeping a healthy amount of cash on the side. And if you want to see what other stocks I'm buying before Nvidia GTC, make sure to check out this video next. Thanks for watching to the end, even though I gave you everything up front, and until next time: this is Ticker Symbol: You, my name is Alex, reminding you that the best investment you can make is in you.