All right, everybody. Welcome back to the All-In Podcast, the number one podcast in the world. You got what you wanted, folks. The original quartet is here, live from DC, with a great shirt. Is your haberdasher making that shirt, or is that a Tom Ford? That white shirt is so crisp, so perfect. David Sacks, you're talking about me? You're the czar. You're czar-y. I'll tell you exactly what it is. I'll tell you what it is. You can tell me if it's right. Brioni. Yes, of course it's Brioni. Brioni spread collar. Look at that. How many years have I spent being rich? When a man turns 50, the only thing he should wear is Brioni. The stitching... It looks very luxurious. That's how Chamath knew, right, Chamath? How'd you figure it out? The stitching? It's just how it lays with the collar. To be honest with you, it's the button catch. The Brioni has a very specific style of button catch. If you don't know what that means, it's because you're a [ __ ] ignorant malcontent yourself. I'm looking it up right now. Right. Yeah. I just asked you to continue. We'll let your winners ride.
[Music] We open sourced it to the fans and they've just gone crazy with it. All right, everybody. The All-In Summit is going into its fourth year, September 7th through 9th, and the goal is, of course, to have the world's most important conversations. Go to allin.com, yada yada yada, to join us at the summit. All right. There's a lot on the docket, but there's kind of a very unique thing going on in the world. David, everybody knows about AI doomerism, basically people who are concerned, rightfully so, that AI could have some, you know, significant impacts on the world. Dario Amodei said he could see unemployment spike to 10 to 20% in the next couple of years. It's 4% now, as we've always talked about here. He told Axios that AI companies and government need to stop sugarcoating what's coming. He expects a mass elimination of jobs across tech, finance, legal, and consulting. Okay, that's a debate we've had here. And entry-level workers will be hit the hardest. He wants lawmakers to take action and more CEOs to speak out. Polymarket thinks regulatory capture via this AI safety bill is very unlikely. US enacts
AI safety bill in 2025 currently stands at a 13% chance. But Sacks, you wanted to discuss this, because it seems like there is more at work than just a couple of technologists with, I think we'd all agree, legitimate concerns about job destruction or employment displacement that could occur with AI. We all agree on that. We're seeing robotaxis start to hit the streets, and I don't think anybody believes that being a cab driver is going to exist as a job 10 years from now. So there seems to be something here about AI doomerism, but it's being taken to a different level by a group of people, maybe with a different agenda. Yeah. Well, first of all, let's just acknowledge that there are concerns and risks associated with AI. It is a profound and transformative technology, and there are legitimate concerns about where it might lead. I mean, the future is unknown, and that can be kind of scary. Now, that being said, I think that when somebody makes a pronouncement that says something like 50% of white-collar jobs are going to be lost within two years, that's
a level of specificity that I think is just unknowable and is more associated with an attempt to grab headlines. And to be frank, if you go back and look at Anthropic's announcements or Dario's announcements, there is a pattern of trying to grab headlines by making the most sensationalist version of what could be a legitimate concern. If you go back three years, they created this concern that AI models could be used to create bioweapons. And they showed what was supposedly a sample, I think, of Claude generating an output that could be used by a bioterrorist or something like that. And on the basis of that, it actually got a lot of play, and in the UK, Rishi Sunak got very interested in this cause, and that led to the first AI safety summit at Bletchley Park. So that sort of concern really drove some of the initial AI safety concerns. But it turns out that that particular output was discredited. It wasn't true. I'm not saying that AI couldn't be used or misused to maybe create a bioweapon one day, but it was not an imminent threat in the way that it was portrayed. There have
been other examples of this. You know, obviously people are concerned: could the AI develop into a super intelligence that grows beyond our control? Could it lead to widespread job loss? I mean, these are legitimate things to worry about, but I think these concerns are being hyped up to a level that there's simply no evidence for. And the question is why? And I think that there is an agenda here that people should be concerned about. So, let's start, Freeberg, with maybe things that we all agree on here. There are millions of people who drive trucks and Ubers and Lyfts and DoorDashes. You would, I think, agree the majority of that work, in, say, 5 to 10 years, just to put a number on it, will be done by self-driving robots, cars, trucks, etc. Yeah, I think that might be the wrong way to look at it, or I wouldn't look at it that way. Maybe I'll frame it a different way. Please. If I'm deploying capital, let's say I'm the CEO of a company, and I can now have software that's written by AI. Does that mean that I'm
going to fire 80% of my software engineers? Basically, it means one software engineer can output, call it, 20 to 50 times as much software as they previously could by using that software generation tool. So the return on the invested capital, the money I'm spending to pay the salary of that software engineer, is now much, much higher. I'm getting much more out of that person because of the unlocking of productivity from the AI tool than I previously could. So when you have a higher ROI on deployed capital, do you deploy more capital or less capital?
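Freeberg's return-on-invested-capital argument reduces to quick arithmetic. Here's an illustrative sketch; the salary, the value per unit of software, and the productivity multiplier are all invented for illustration, not figures from the conversation:

```python
# Illustrative sketch of the ROI argument, not real data.
salary = 200_000          # annual cost of one engineer (assumed)
value_per_unit = 400_000  # value of one "unit" of shipped software (assumed)
ai_multiplier = 10        # the podcast cites 20-50x; 10x is used here so the
                          # output matches the "2x vs 20x" framing

roi_before = value_per_unit / salary                   # return per salary dollar
roi_after = (value_per_unit * ai_multiplier) / salary  # same engineer, AI-assisted

print(f"ROI before AI tools: {roi_before:.0f}x")
print(f"ROI with AI tools:   {roi_after:.0f}x")
```

On this logic, the rational response to that jump is to deploy more capital, not less; the sketch just makes the direction of the incentive explicit.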
Suddenly you have this opportunity to make 20 times on your money versus two times on your money. If you have a chance to make 20 times on your money, you're going to deploy a lot more capital. And this is the story of technology going back to the caveman's first invention. When we have this ability to create leverage, humans have a tendency to do more and invest more, not less. And I think that's what's about to happen. I think we see this across the spectrum. People assumed, "Oh my gosh, software can now be written by one person. You can create a whole startup. You don't need to have venture capital anymore." In fact, what I think we're going to see is much more venture capital flowing into new tech startups, much more capital being deployed, because the return on the invested capital is so much higher because of AI. So generally speaking, I think that the premise that AI destroys jobs is wrong, because it doesn't take into account the significantly higher return on invested capital, which means more capital is going to be deployed, which means
actually far more jobs are going to be created, far more work is going to get done. And so I think that the counterbalancing effect is really hard to see without taking that zoomed-out perspective. To respond to Sacks's point, I do think anytime you see a major change in society, there's a vacuum: how's the system going to operate in the future? And anytime there's a vacuum in the system, a bunch of people will rush in and say, I know how to fill that vacuum. I know what to do because I am smarter, more educated, more experienced, more knowledgeable, more moral. I have some superiority over everyone else, and therefore I should be in a position to define how the new system should operate. And so there's a natural kind of power vacuum that emerges anytime there's a major transition like this, and there will be a scrambling and a fighting and a whole bunch of different representations. Typically, fear is a great way of getting into power, and people are going to try and create new control systems because of the transition that's underway. Okay. You're going to see this around the world.
Yeah. I mean, so Chamath, it's pretty clear, you know, Freeberg didn't answer this question specifically, so I'm going to give it to you again. You would agree jobs like driving things are going to go away. If we had to pick a number, somewhere between 5 and 10 years, the majority of those would go away. He's positioning, hey, a lot more jobs will be created because there'll be all this extra venture capital and opportunities, etc. But job displacement will be very real, and we're seeing, I think, job displacement now. You had a tweet recently, you know, where you were talking about entry-level jobs and how they seem to be going away in the white-collar space. So, where do you land on job displacement? Freeberg's already kind of given the big picture here, but let's step back for people who are listening who have relatives who drive an Uber or a truck, or are graduating from college and want to go work at, you know, I don't know, the Magnificent 7 or in tech, and they're not hiring, and we know the reason they're not hiring is because they're leaning into AI. So, let's
talk about the job displacement in the medium term. I'm going to ignore your question. Why should you be any different than the others? So now we have two people on this podcast not wanting to answer the question about job displacement. Interesting trend. No, no, we'll go back to that. Let me start by just saying that it seems that these safety warnings tend to be pretty coincidental with key fundraising moments in Anthropic's journey. So let's just start with that. And if you put that into an LLM and try to figure out if what I just said was true, it's interesting, but you'll find it's relatively accurate. I think that there is a very smart business strategy here. And I've said a version of this about the other companies at the foundational model layer that aren't Meta and Google, because Meta and Google frankly sit on these money gushers where they generate so much capital that they can fund these things to infinity. But if you're not them, so if you're OpenAI or if you're Anthropic, you have to find an angle. And I think the angles are slightly different for
both. But I think what this suggests is that there's a pattern that exists, and I think that explains some of the framing of what we see in the press, Jason, and why we get these exaggerated claims. Perfect. So there are people who are doing this for nefarious reasons is, I guess, where you're sort of getting at here. It's a way to market. It's smart. It's smart. If you fall for it, that's on you. Yeah. Okay. Well, there's also an industrial complex, according to some folks, that is backing this. If you've heard of effective altruism, that was this movement of a bunch of, I don't know, I guess they consider themselves intellectuals, and they were kind of backing a large swath of organizations doing what I guess we would call in the industry astroturfing, or what do they call it when you make so many of these organizations that aren't real in politics? Flooding the zone, perhaps. So if you were to look at this article here, Nick, I think you have the AI existential risk industrial complex graphic there. It seems like a group of people, according to
this article, have backed, to the tune of $1.6 billion, a large number of organizations to scare the bejesus out of everybody and make YouTube videos and TikToks, and they've made a map of it. There are some key takeaways from that article, where it says that it's an inflated ecosystem. There's a great deal of redundancy: same names, acronyms, logos with only minor changes. Same extreme talking points. Same group of people, just with different titles. Same funding source. There's a funding source called Open Philanthropy, which was funded by Dustin Moskovitz, who is one of
the Facebook billionaires. Chamath, you worked with him, right? I mean, wasn't he like Zuck's roommate at Harvard or something, and one of the first engineers? Made a lot of money. So he funded this. He's an EA, and he funded this group called Open Philanthropy, which then has become the feeder for essentially all these other organizations, which are almost different fronts for basically the same underlying EA ideology. And what's interesting is that the guy who set this up for Dustin, Holden Karnofsky, who is a major effective altruist and was doling out all the money, is married to Dario's sister, and she's, I guess, associated with EA, and she was one of the co-founders of Anthropic. So these are not coincidences. I mean, the reality is there's a very specific ideological and political agenda here. Now, what is that agenda? It's basically global AI governance, if you will. They want AI to be highly regulated, but not just at the level of the nation-state; internationally, supranationally as well. If you just do a quick search on global compute governance, it'll tell you what the key aspects are. So,
number one, they want regulation of computational resources; this includes access to GPUs. They want AI safety and security regulation. They want international, you could call them globalist, agreements. And they want ethical and societal considerations, or policy, built into this. Now, what does that sound like? That sounds a lot to me like what the Biden administration was pursuing. Specifically, we had that Biden executive order on AI, which was 100 pages of burdensome regulation that was designed to promote AI safety but had all these DEI requirements. So, you know, it led to woke AI. You remember when Google launched Black George Washington and so forth. They had the Biden diffusion rule, which created this global licensing framework to sell GPUs all over the world, so extreme restrictions on the proliferation of servers, of computing power. They created what's called the AI Safety Institute, and they again fostered these international AI summits. So if you actually look at what the Biden administration was tangibly doing in terms of policy, and you look at what EA's agenda is with respect to global compute governance, they were pushing hard on these fronts. And now, if you look at the level of personnel,
there were very, very powerful Biden staffers who now all work at Anthropic. Probably the most powerful Biden staffer on AI over the past four years was a lawyer named Tarun Chhabra, and he now works at Anthropic for Dario. Elizabeth Kelly, who was the founding director of the AI Safety Institute in the government, now works at Anthropic. Like I mentioned, Dario's sister is married to Holden Karnofsky, who doles out all the money to these EA organizations. So if you were to do something like create a network map, you would see very quickly that there are three key nodes here. There's the effective altruist movement, of which Sam Bankman-Fried is the most notable member, but of which I think Dustin Moskovitz is now the main funder. There's the Biden administration and the key staffers. And then you've got Anthropic. And it's a very tightly wound network. Now, why does this matter? Let's get... Yeah. Also the goals, I think. Yes. Well, the goal, like I said, is global compute governance. It's basically establishing national and then international regulations of AI. Now, but they would claim... Let's just pause here for a minute. They would
claim the reason they're doing it. And so, we'll save whether we believe this or not, but they are concerned about job destruction in the short term. They're also concerned, as science fiction as it is, that the AI, when we get to a sort of generalized super intelligence, is going to kill humanity, that this is a nonzero chance. Elon has said this before. They've sort of taken it to almost a certainty: yes, we're going to have so many of these general intelligences. But they only believe that when they're raising money. Well, that's what I'm sort of getting at. I think they believe it all the time, but maybe the press releases are timed for the fundraising. But let me answer that. Right. Yeah. Look, I mean, it is a great product. Claude kicks ass. I'm more interested in the political dimension of this. I'm not bashing a specific product or company. But look, I think that there is some nonzero risk of AI growing into a super intelligence that's beyond our control. They have a name for that. They call it X-risk, or existential risk. I think it's very hard to put a percentage on that. I'm willing to acknowledge that it is a risk. You know, I think about it all the time, and I do think we should be concerned about it. But there are two problems, I think, with this approach. Number one is that X-risk is not the only kind of risk. I would say that China winning the AI race is a huge risk. I don't really want to see a CCP AI running the world. And if you hobble our own innovation, our own AI efforts, in the name
of stomping out every possibility of X-risk, then you probably end up losing the AI race to China, because they're not going to abide by those same regulations. So again, you can't optimize for solving only one risk while ignoring all the others. And I would say the risk of China winning the AI race might be like 30%, whereas I think X-risk is probably a much lower percentage. So there are other risks to worry about. And I do think that they are single-mindedly focused on scaring people with some of these headlines. First it was the bioweapons, then it was the super intelligence, now it's the job loss. And I think it's a tried-and-true tactic of people who want to give more power to the government to scare the population, right? Because if you can scare the population and make them fearful, then they will cry out for the government to solve the problem. And that's what I see here: you've got this elaborate network of front organizations which are all motivated by this EA ideology. They're funded by a hardcore leftist. And by the
way, I became aware of Dustin's politics because of the Chesa Boudin recall. I found out that he was a big funder of Chesa Boudin. Remember this? Yeah. Dustin Moskovitz and Cari Tuna, his wife. Also, Reed Hastings just joined the board of Anthropic. Remember when he, back in 2016, tried to drive Peter Thiel off the board of Facebook for supporting Trump? So, you know, these are committed leftists. They're Trump haters. But the point is that these are people who fundamentally believe in more government and empowering government to the maximum extent. Now, my problem with that is I actually think that probably the single greatest dystopian risk associated with AI is the risk that government uses it to control all of us. To me, you end up in some sort of Orwellian future where AI is controlled by the government, and out of all the risks we've talked about, that's the only one for which I've seen tangible evidence. So in other words, if you go back to last year, when we had the whole woke AI thing, there was plenty of evidence that the people who were
creating these products were infusing their left-wing or woke values into the product, to the point where it was lying to all of us and it was rewriting history. And there was plenty of evidence that the Biden EO was trying to enshrine that idea. It was basically trying to require that DEI be infused into AI models, and it wanted to anoint two or three winners in this AI race. So, I'm quite convinced that prior to Donald Trump winning the election, we were on a path of global compute governance where two or three big AI companies were going to be anointed as the winners. And the quid pro quo is that they were going to infuse those AI models with woke values. And there was plenty of evidence for that. You look at the policies, you look at the models. This was not a theoretical concern. This was real. And I think the only reason why we've moved off of that trajectory is because of Trump's election. But we could very easily be moved back onto that trajectory. If you were to look at all three opinions here and put them together, they could all be true at the
same time. You've got a number of people, some might call them useful idiots, some might call them people with god complexes, who believe they know how the world should operate. Effective altruism kind of falls into that: oh, we can make a formula, that's their kind of idea, where we can tell you where to put your money, rich people, in order to create the most good, and, you know, we're these enlightened individuals with the best view of the world. They might be, who knows, maybe they're the smartest kids in the room, but they're kind of delusional. The second piece I'll do here is I think you're absolutely correct, Chamath, that there are people who have economic interests who are then using those useful idiots and/or delusional people with god complexes to serve their need, which is to be one of the three winners. And then, Sacks, inherent to all of that is they have a political ideology. So why not use these people with delusions of grandeur to secure the bag for their companies and their investments, and secure their candidates into office so that they can block other people from
getting H100s, because they literally want to. By the way, that's the part that's very smart about what they're doing, because, you know, it's not like they're illiquid. They're full of liquidity, in the sense that you're bringing in people that are very technically capable and you're setting up these funding rounds where a large portion goes right back out the door via secondaries. And so there are all these people that are making money holding this worldview, and so, to your point, Jason, it's going to cement that worldview, and then they are going to propagate it even more aggressively into the world. So I think the threshold question is: should you fear government overregulation, or should you fear autocomplete? And I would say you should not be so afraid of the autocomplete right now. It may get so good that it's an AGI, but right now it's an exceptionally good autocomplete. Yeah. And I just think that, again, it's a tried-and-true tactic of people who want to give immeasurably more power to the government to try and make people afraid, and they stampede people into these policies. Right. And it gives them power. Exactly. Now, why do
I think this is important to talk about? On last week's show, I talked about the trip to the Middle East and how we started doing these AI acceleration partnerships with the Gulf States, who have a lot of resources, a lot of money, and they're intensely interested in AI. And the Biden administration was pushing them away. It basically said, "You can't have the chips. You can't build data centers." And it was pushing them into the arms of China. The thing that I thought was so bizarre is that the various groups and organizations and former Biden staffers who wrote this policy have been agitating in Washington, and they've been trying to portray themselves as China hawks. And I'm like, wait, this doesn't make any sense, because there are basically two camps in this new cold war. It's US versus China. You can pull the Gulf States into our orbit, or you can drive them into China's orbit. So, this to me just didn't make any sense. And what's happened is that, frankly, you've got this EA ideology that's really motivating things, which is a desire to lock down compute, right? They're afraid of proliferation.
They're afraid of diffusion. That's really their motivation. And they're trying to rebrand themselves as China hawks because they know that in the Trump administration that idea is just not going to get much purchase. Right. And your position as czar is a level playing field. People compete, and the good guys, you know, the West, should be supported to hit artificial general intelligence as fast as possible, so the bad guys, China, don't get it first. That's... Well, open competition. I don't know if I would frame it around AGI specifically, but what I would say is that, look, I think our policy should be to win the AI race, because the alternative is that China wins it. And that would be very bad for our economy and our military. How do you win the AI race? You've got to out-innovate. You've got to have innovation. That means we can't have overregulation and red tape. We've got to build out the most AI infrastructure, data centers, energy, which includes our partners. And then third, I think it means AI diplomacy, because we want to build out the biggest ecosystem. We know that the biggest app store wins,
biggest ecosystem wins, right? And the policies under the Biden administration were doing the opposite of all those things. But again, you have to go back to what was driving that. And it was not driven by this China hawk mentality. That is now a convenient rebranding. It was driven by this EA ideology, this doomerism. And so this is why I'm talking about it: I want to expose it, because I think a lot of people on the Republican side don't realize where the ideology is really coming from and who's funding it. They're obviously Trump haters, and
they need to be outed, quite frankly. They do, they need to be outed. I mean, you know, Freeberg, I want to come back around again, because I respect your opinion on, you know, how close we are to turning certain corners, especially in science. So, I understand, big picture, you believe that the opportunity will be there. Hey, we got people out of the fields, you know, in the agricultural revolution, we put them into factories in the industrial revolution, then we went to this information revolution. So, your position is we will have a similar
transition and it'll be okay. But do you not believe that the speed, because we've talked about this privately and publicly on the pod, that the velocity at which these changes are occurring, you would agree, is faster than the industrial revolution, much faster than the information revolution? So let's one more time talk about job displacement. And I think the real concern here, for a group of people who are buying into this ideology, is specifically unions, job displacement. This is something the EU cares about. This is something the Biden administration cares about. If truck drivers lose their jobs, just like we went to bat previously for coal miners, and there were only 75,000 or 150,000 in the country at the time, but it became the national dialogue. Oh my god, the coal miners. How fast is this going to happen? One more time on drivers specifically. Okay, coders, you think there'll be more code to write, but driving, there's not going to be more driving to be done. So is this time different in terms of the velocity of the change and the job displacement, in your mind? Freeberg? The velocity is
greater, but the benefit will be faster. So the benefit of the industrial revolution, which ultimately drove lower-priced products and broader availability of products through manufacturing, was one of the key outputs of that revolution, meaning that we created a consumer market that largely didn't exist prior. Remember, prior to the industrial revolution, if you wanted to buy a table or some clothes, they were handmade. They were kind of artisanal. Suddenly, the industrial revolution unlocked the ability to mass-produce things in factories. And that dropped the cost and increased the availability and the abundance of things that everyone wanted to have access to but otherwise wouldn't have been able to afford. So suddenly everyone could go and buy blankets and clothes and canned food and all of these incredible things that started to come out of the industrial revolution that happened at the time. And I think that folks are underestimating, at this stage, the benefits of what's going to come out of the AI revolution and how it's ultimately going to benefit people's availability of products, cost of goods, access to things. So the counterbalancing force, J-Cal, is deflationary. Let's assume
that the cost of everything comes down by half. That's a huge relief on people's need to work 60 hours a week. Suddenly you only need to work 30 hours a week, and you can have the same lifestyle, or perhaps even a better lifestyle than you have today. So the counterargument to your point, and I'll talk about the pace of change in specific jobs in a moment, but the counterargument to your point is that there's going to be this cost reduction and abundance that doesn't exist today. Give an example. Let's give some examples. We could see automation in food prep. We're seeing a lot of restaurants install robotic systems to make food, and people are like, "Oh, job loss, job loss." But let me just give you the counter side. The counter side is that the cost of your food drops in half. So suddenly, you know, all the labor cost that's built into making the stuff you want to pick up... Everyone's freaking out right now about inflation. Oh my god, it's $8 for a cup of coffee. It's $8 for a latte. This is crazy, crazy, crazy. What if that dropped
down to two bucks? You're going to be like, man, this is pretty awesome, with good service and a good experience. And not to make it all dystopian, but suddenly there's going to be this incredible reduction, or deflationary effect, in the cost of food. And we're already starting to see automation work its way into the food system to bring inflation down. And that's going to be very powerful for people. Shout out to Eatsa, CloudKitchens, and Cafe X. We all took swings at the bat at that exact concept: that it could be done better, cheaper,
faster. One of the amazing things about these vision-action models that are now being deployed is you can rapidly learn using vision systems and then deploy automation systems in those sorts of environments where you have a lot of repetitive tasks, where the system can be trained and installed in a matter of weeks. Historically, that would have been a whole startup, and it would have taken years to figure out how to get all these things together and custom-program it, custom-code it. So the flip side is, like when Uber hit, those people were not drivers. Think about the jobs that all those people had prior to Uber coming to market. And the reason they drove for Uber is they could make more money driving for Uber, or now driving for DoorDash, and the flexibility. So their lifestyle got better. They had all this more control in their life. Their incomes went up. And so there's a series of things that, you are correct, won't make sense in the future from a kind of standard-of-work perspective. But the right way to think about it is: opportunity gets created.
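The deflation argument a few turns back, the $8 coffee dropping to $2, the 60-hour week becoming 30, is also just arithmetic. A minimal sketch, where the hourly wage and the weekly cost of a household's consumption basket are invented numbers chosen to reproduce the 60-to-30 framing:

```python
# Illustrative only: if automation halves the cost of what a household buys,
# the same lifestyle takes half as many paid hours. All numbers assumed.
hourly_wage = 30.0    # dollars per hour (assumed)
weekly_basket = 1800.0  # weekly cost of goods and services today (assumed)
deflation = 0.5       # "the cost of everything comes down by half"

hours_now = weekly_basket / hourly_wage                  # hours needed today
hours_after = (weekly_basket * deflation) / hourly_wage  # hours after deflation

print(hours_now, hours_after)
```

The point of the sketch is only the ratio: halve the basket cost and the hours needed for the same lifestyle halve with it, whatever the wage.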
New jobs emerge, new industry, new income, costs go down. And so I keep harping on this that it's really hard today to be very prescriptive to Sax's point about what exactly is around the corner. But it is an almost certainty that what is around the corner is more capital will be deployed. That means the economy grows. That means there's a faster deployment of growth of new jobs, new opportunities for people to make more money, to be happier in the work that they do. And the flip side being things are going to get cheaper. So, I
mean, I know we're waxing philosophical here, but I think it's really key because you can focus on the one side of the coin and miss the whole other. And that's what a lot of journalist commentators and fearongerers do is they miss that other side. Got it. Well said, Freeberg. Well said. I think I've heard Satcha turn this question around about job loss saying well do you believe that GDP is going to grow by 10% a year because what are we talking about here I in order to have the kind of disruption that you're talking about
where, I don't know, 10 to 20% of knowledge workers end up losing their jobs, AI is going to have to be such a profound force that it's going to have to create GDP growth like we've never seen before. That's right. So, it's easier for people to say, "Oh, well, 20% of people are going to lose their jobs." But wait, we're talking about a world where the economy is growing 10% every year. Do you actually believe that? That's more income. That's more income for everyone. That's new jobs being created. It's an inevitability.
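The Satya-style argument above is really just arithmetic, and a minimal sketch makes it concrete. The 20% displacement and 10% growth figures come from the conversation; the toy model itself (GDP as workers times output per worker, nothing else changing) and the function name are illustrative assumptions, not anything the hosts computed:

```python
# Toy model behind the "you can't have mass displacement without huge
# GDP growth" argument. Simplifying assumption: GDP is just
# (number of workers) x (output per worker), with nothing else changing.

def required_productivity_multiplier(displaced_share: float,
                                     target_gdp_growth: float) -> float:
    """How much more each remaining worker must produce so that
    (1 - displaced_share) * multiplier = 1 + target_gdp_growth."""
    return (1 + target_gdp_growth) / (1 - displaced_share)

# 20% of workers displaced, economy still growing 10% a year:
m = required_productivity_multiplier(0.20, 0.10)
print(f"each remaining worker must produce {m:.3f}x as much")  # 1.375x
```

Under these toy assumptions, 20% displacement alongside 10% growth implies per-worker output up roughly 37.5%, which is exactly the "profound force" the argument says AI would have to be.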
We've seen this in every revolution. You know, prior to the industrial revolution, 60% of Americans worked in agriculture. And when the tractor came around and factories came around, those folks got to get out of doing manual labor in the fields, where they were literally, you know, tilling the fields by hand. And they got to go work in a factory where they didn't have to do manual labor to move things. Yeah, they did things in the factory with their hands, but it wasn't grunt work in the field all day in the sun. And it became a better standard of living. It became new jobs. And today, it became a five-day work week. It went from a seven-day work week to five, from 100 hours a week to 45 or 50 hours a week. And now I think the next phase is we're going to end up at less than 30 hours a week, with people making more money and having more abundance for every dollar that they earn, with respect to what they can purchase and the lives they can live. That means more time with your family, more time with your friends, more time to
explore interesting opportunities. So, you know, we've been through this conversation a number of times. I know. No, it's important to bring it up, I think, and really unpack it, because the fear is peaking now, Sacks. People are using this moment in time to scare people: hey, the jobs are going to go away and they won't come back. But what we're seeing on the ground, Sacks, is many more startups getting created, able to accomplish more tasks and hit a higher revenue per employee than in the last two cycles. So it used to be, you know, you'd try to get to a quarter million in revenue per employee, then $500,000. Now we're regularly seeing startups hit a million dollars in revenue per employee, something that was rarefied air previously. Which then speaks to your point, Freeberg, that there'll be more abundance. There'll be more capital generated, more capital deployed for more opportunities, but you're going to need to be more resilient. I think, yeah, I think it's actually very hard to completely eliminate a human job. The ones that you cited,
and, J-Cal, you keep citing the same ones, because I actually don't think there are that many that fit in this category: the drivers, and maybe level-one customer support, because those jobs are so monolithic. But when you think about even what a salesperson does, right? Yes, they spend a lot of time with prospects, but they also spend time negotiating contracts, and they spend time doing post-sale implementation and follow-up, and they spend time learning the product and giving feedback. I mean, it's a multifaceted job, and you can use AI to automate pieces of it, but to eliminate the whole job is actually very hard. And so I just think this idea that, boom, 20% of the workforce is going to be unemployed in two years, I just don't think that it's going to work that way. But look, if there is widespread job disruption, then obviously the government's going to have to react, and we're going to be in a very different societal order. But their point is, they want the government to start reacting now, before this actually happens. We don't need to be precogs and predict it. Yeah. It's a total
power grab. It's a total power grab to give the government and these organizations more power before the risk has even manifested. And let me say this as well with respect to all these regulations that were created, the 100-page Biden EO and the 200-page diffusion rule: none of these regulations solve the x-risk problem. None of these things would actually prevent the most existential risks that we're talking about. They don't solve for alignment. They don't solve for the kill switch. None of that. Yeah. If someone actually figures out how to solve that problem, I'm all ears. You
know, look, I'm not cavalier about these risks. I understand that they exist, but I'm not in favor of the fear-mongering. I'm not in favor of giving all this power to the government before you even know how to solve these problems. Chamath, you did a tweet about entry-level jobs being toast. So, I think there is a nuance here, and both parties could be correct. I think the job destruction is happening as we speak. I'll just give one example and then throw to you, Chamath. One job in startups that's not driving a car or, you know, super entry level: people would hire consultants to do recruitment and to write job descriptions. Now, I was at a dinner last night talking to a bunch of founders here in Singapore, and I said, how many people have used AI to write a job description? Everybody's hand went up. I said, how many of you felt that job description was better than you or any consultant could have written? And they all said yes: 100%, AI is better at that job. That was a high-level HR recruitment job, or an aspect of it, Sacks. So that was half the job, a third of the job. To your point, the chores are being automated. So I do think we're going to see entry-level jobs, Chamath, the ones that get people into an organization, maybe going away. And was that the point of your tweet, which we'll pull up right here? If a GPT is a glorified autocomplete, how did we do glorified autocomplete in the past? It was with new grads. New grads were our autocomplete. And to your point, the models are good enough that it effectively
allows a person to rise in their career without the need of new-grad grist for the mill, so to speak. So, I think the reason why companies aren't hiring nearly as many new grads is that the folks who are already in a company can do more work with these tools. And I think that that's a very good thing. So you're generally going to see opex as a percentage of revenue shrink naturally, and you're going to generally see revenue per employee go up naturally, but it's going to create a tough job market for new grads in the established organizations. And so what should new grads do? They should probably steep themselves in the tools and go to younger companies or start a company. I think that's the only solution for them. Bingo. The most important thing for whether there are jobs available for new grads or not is whether the economy is booming. So obviously, in the wake of a financial crisis, the jobs dry up, because everyone's cost-cutting and those jobs are the first ones to get cut. But if the economy is booming, then there's going to be a lot more
job creation. And so again, if AI is this driver and enabler of tremendous productivity, that's going to be good for economic growth. And I think that that will lead to more company formation, more company expansion at the same time that you're getting more productivity. Now, to give an example, one of the things I see a lot discussed online about these coding assistants is that they make junior programmers much better because, you know, if you're already like a 10x programmer, very experienced, you already knew how to do everything. And you could argue that the people who
benefit the most are the entry-level coders who are willing to embrace the new technology, and it makes them much more productive. So in other words, it's a huge leveler: it takes an entry-level coder and makes them 5x or 10x better. So look, this is an argument I see online. The point is just, I don't think we know how this cuts yet. I agree. And I just think this doomerism is premature, and it's not a coincidence that it's being funded and motivated by this hardcore ideological element. I'll tell you my hiring
experience. We have about 30 people at 8090, and the way that I have found it to work best is you have senior people act as mentors, and then you have an overwhelming corpus of young, very talented people who are AI native. And if you don't find that mix, what you have instead are L7s from Google and Amazon and Meta who come to you with extremely high salary demands and stock demands, and they just don't thrive. And part of why they don't thrive is that they push back on the tools and how you use them. They push back on all these things where the tools help you get there faster. This is why I think it's so important for the young folks to just jump in with two feet and be AI native from the jump, because you're much more hirable, frankly, to the emergent company; and at the bigger companies, you'll have a lot of these folks who see the writing on the wall but may not want to adapt as fast as they otherwise could. Another way, for example, that you can measure this is to look inside your company at the productivity lift from some of these coding assistants as a distribution of age. What you'll see is the younger people leverage them way more and get way more productivity than older folks. And I'm not saying that as an ageist comment. I'm saying it's an actual reflection of how people are reacting to these tools. What you're describing is a paradigm shift. It is a big leap. You know, it's like when I went to college, when I took computer science, it was object-oriented programming. It was C++. It was compiled languages. It was gnarly.
It was nasty work. And then you had these high-level abstracted languages. And I remember at Facebook, I would just get so annoyed, because I was like, why is everybody using PHP and Python? This is not even real. But I was one of these Luddites who didn't understand that I just had to take the leap. And what it did was grow the top of the funnel of the number of developers by 10x. And as a result, what you had were all of these advancements for the internet. And I think what's happening
right now is akin to the same thing, where you're going to grow the number of developers upstream by 10x. But in order to embrace that, you just have to jump in with two feet. And if you're very rigid in how you think the job should be done technically, I think you're just going to get left behind. Just a little interesting statistic there: Microsoft announced 6,000 job layoffs, about 3% of their workforce, while putting up record profits and sitting on an incredible cash position. Total confirmation bias. It's like now, every time there's a layoff announcement, people try to tie it to AI to feed this doomer story. I don't think that's an AI story. Well, I actually think it is, because the people they're eliminating are management, and I think the management layer becomes less necessary. You're saying it was entry-level employees; now you're saying it's management. This is total confirmation bias. No, no, I think those are two areas that specifically get eliminated. Entry level: it's too hard to give them the grunt work. And then the managers who are older and have been there for 20 years. Hold on, let me finish. For those people, I think they are unnecessary in this new AI era of management. What are you talking about? What is the AI agent that's doing management right now in companies? Your theory doesn't even make sense. Oh no, it totally does. There are tools now that are telling you: these are the most productive people in the organization. Chamath just outlined who's shipping the most, etc., who's using the tools. And then people are saying, well,
why do we have all these highly priced people who are not actually shipping code, who are L7s? You're totally falling for some sort of narrative here. This makes no sense. I don't think I am. Yeah, let me be very clear about what I'm saying. What I am saying is AI natives are extremely productive. They use these tools. They're very facile with them. I don't want to be reductive, but what you see is, the older or more established in your career you are in technical roles, the harder and harder it is for folks like that to embrace these tools in the same way. Now, how does it play out in terms of jobs? I think these tools are good enough that the net new incremental task-oriented role that would typically go to a new grad, a lot of that can be absorbed by these models. That's what I'm saying, very specifically, and I don't think that speaks to management. I agree with Sacks. It doesn't. Sergey said, Freeberg, when he came to our F1 event, that management would be the first thing to go. I was talking to some entrepreneurs last
night, again here in Singapore, and they are taking all the GitHub and Jira cards and things that have been submitted, plus all the Slack messages in their organization, and they're putting them into an LLM and having it write management reports on who is the most productive in the organization. And in the new version of Windows, it's monitoring your entire desktop. Freeberg, management is going to know who in the organization is actually doing work, what work they're doing, and what the result of that work is, through AI. That is the future of management.
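The pipeline those founders described (work artifacts in, a productivity-report prompt out) can be sketched roughly as follows. Everything here is hypothetical: the record fields, the aggregation, and the prompt wording are assumptions for illustration, and a real version would pull from the GitHub, Jira, and Slack APIs and send the assembled prompt to an actual model endpoint:

```python
# Rough sketch of the "dump work artifacts into an LLM and ask for a
# productivity report" idea described above. The data schema is
# hypothetical; a real version would fetch records via the
# GitHub/Jira/Slack APIs and pass the prompt to a model endpoint.

from collections import defaultdict

def build_report_prompt(tickets, slack_messages):
    """Aggregate per-person activity and assemble a single prompt."""
    activity = defaultdict(lambda: {"tickets": 0, "messages": 0})
    for t in tickets:
        activity[t["assignee"]]["tickets"] += 1
    for m in slack_messages:
        activity[m["author"]]["messages"] += 1

    lines = [f"- {person}: {a['tickets']} tickets closed, "
             f"{a['messages']} Slack messages"
             for person, a in sorted(activity.items())]
    return ("Given this activity summary, write a management report on "
            "who is most productive:\n" + "\n".join(lines))

tickets = [{"assignee": "ana"}, {"assignee": "ana"}, {"assignee": "bo"}]
msgs = [{"author": "bo"}, {"author": "ana"}]
prompt = build_report_prompt(tickets, msgs)
print(prompt)
```

The interesting design question, which the debate that follows gets at, is whether counting tickets and messages measures productivity or just activity; the LLM only summarizes whatever signal you feed it.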
And you take out all bias, all, you know, loyalty, and the AI is going to do that. Couldn't disagree with you more, Sacks, on that. But Freeberg, you wanted to wrap somewhere on this point. My point is that managers are not losing their jobs because AI is replacing them. I didn't say that AI wouldn't be a valuable tool for managers to use. Sure, AI would be a great tool for managers, but we're not anywhere near the point where managerial jobs are being eliminated because they're getting replaced by AI agents. We're still at the chatbot stage of this. Literally, Sergey said he took their internal Slack, went into a dev conversation, and said, "Who are the underrated people in this organization who deserve a raise?" And it gave him the right answer. So, wait, that doesn't allow you to cut 6,000 people. I think it's happening as we speak. It's just not over. You fell for this narrative. You grasped onto this Microsoft restructuring where they eliminated 6,000 roles, and you're trying to attribute that to AI now. I think it has to do with AI. I think management is looking at it
saying, "We are going to replace these positions with AI. We might as well get rid of them now." It is in flux. We'll see who's right in the coming months. Can I make another comment? Freeberg, wrap this up here so we can get on to the next topic. This is a great topic. I want to make one last point, which, and Sacks, you may not appreciate this, so we can have a healthy argument about it. I think in the same way that all of this jobs-are-going-to-get-lost-to-AI fear-mongering is a false narrative, there's a similar narrative that I think is false: that there's a race in AI underway between nation-states. And the reason I think it's false is this: if I asked you guys the question, who won the industrial revolution? The industrial revolution benefited everyone around the world. There are factories, and there's a continuous effort and continuous improvement in manufacturing processes worldwide; that is a continuation of that revolution. Similarly, if I asked who won the internet race: there were businesses built out of the US, businesses built out of China, businesses built out of India and Europe, that have all created value for shareholders, created value for consumers, changed the world, etc. And I think the same is going to happen in AI. I don't think there's a finish line in AI. I think AI is a new paradigm of work, a new paradigm of productivity, a new paradigm of business, of the economy, of livelihoods, of pretty much everything. Every interaction humans have with each other and the world around us will have AI in its substrate, and as a result, I think it's going to be this continuous process of
improvement. So, I'm not sure. Look, there are different models, and you can look at the performance metrics of the models, but you can get yourself spun up into a tizzy over which model is ahead of the others, which one's going to, quote, get to the finish line first. But I think at the end of the day, the abundance and the economic prosperity that will arise from the continuous performance improvements that come out of AI development will benefit all nation-states, and actually could lead to a little bit less of a resource-constrained world, where we're all fighting over limited resources and there are nation-state definitions around who has access to what, and perhaps more abundance, which means more peace. Sacks, your thoughts on the kumbaya theory espoused by Freeberg? Yeah, exactly. Um, I'll partially agree, in the sense that I don't think the AI race is a finite game. It's an infinite game. I agree that there's no finish line, but that doesn't mean there's not a race going on. So, for example, an arms race would be a classic example of
a competition between countries to see who is stronger, to basically amass power, and they might be neutralizing each other. The balance of power may stay in equilibrium, even though both sides feel the need to constantly up-level their arms, their power. Yeah. And so, to use the term that John Mearsheimer used at the All-In Summit, we are in an iron cage. The US and China are the two leading countries in the world economically, militarily, technologically. They both care about their survival. The best way to ensure your survival in a self-help world is
by being the most powerful. And so these are great powers who care a lot about the balance of power, and they will compete vigorously with each other to maintain the most favorable balance of power between them. And high-tech is a major dimension of that competition, and within high-tech, AI is the most important field. So look, there is going to be an intense competition around AI. Now, the question is, how does that end up? I mean, it could end up in a tie, or it could end up in a situation where both countries benefit. Maybe open source wins. Maybe neither side gains a decisive advantage. But they're absolutely going to compete, because neither one can afford to take the risk that the other one will develop a decisive advantage. Prisoner's dilemma. Nuclear proliferation is a good analogy. I would argue nuclear deterrence led to a more peaceful world in the 20th century. I mean, is that fair to say, Sacks? Ultimately, what happened with nuclear is that the underlying technology hit, you know, an asymptote, right? It plateaued. And so we ended up in a situation, in the case of the United
States versus the Soviet Union, where both sides had enough nukes to blow up the world many times over, and there wasn't really that much more to innovate. So, you know, the underlying technological competition had ended, the dynamic was more stable, and they were able to reach an arms-control framework to sort of control the arms race, right? I think AI is a little different. We're in a situation right now where the technology is changing very, very rapidly, and it's potentially on some sort of exponential curve, and so therefore being a year ahead, even six months ahead, could result in a major advantage. I think under those conditions, both sides are going to feel the need to compete very vigorously. I don't think they can sign up for an agreement to slow each other down. But this is a system of productivity, right? Nuclear was not a system of productivity. It was not a system of economic growth. It was a system of, literally, destruction. And this is quite different. This is a system of making more with less, which unleashes benefits to everyone in a way that perhaps should be calming down the potential for conflict. You've got to admit that there is a potential dual use here. There's no question that the armies of the future are going to be drones and robots, and they're going to be AI-powered. Yeah. And as long as that's the case, these countries are going to compete vigorously to have the best AI, and they're going to want their leaders or national champions or startups and so forth to win the race. What's the worst case, Sacks, if China wins the AI race? What is the worst-case scenario?
Ask what it means first. Ask Sacks. That's literally what I'm asking. Like, what would that scenario be? Would they invade America and dominate us forever? What does it mean to a citizen? Yeah. What does it mean to win? Yeah. To me, it would mean that they achieve a decisive advantage in AI such that we can't leapfrog them back. And an example of this might be something like 5G, where Huawei somehow leapfrogged us, got to 5G first, and disseminated it through the world. They weren't concerned about diffusion. They were interested in promulgating their technology throughout the world. If the Chinese win AI, they will sell more products and services around the globe than the US. This is where we have to change our mindset towards diffusion. I would define winning as the whole world consolidating around the American tech stack. They use American hardware in data centers that, again, are fundamentally powered by American technology. And, you know, just look at market share. Okay? If we have 80 to 90% market share, that's winning. If they have 80% market share, then we're in big trouble. So it's very simple. Yeah, but
if the market grows by 10x, it doesn't matter, because every individual in every country will now have more. They will have a more prosperous life. And as a result, it's not necessarily the framing that if we don't get there first, we necessarily lose. I get that there's an edge case of conflict or what have you, but I do think that there's a net benefit where the whole world is suddenly in this more prosperous state. And this is a classic example of a dual-use technology, where there are both economic benefits and military benefits. Yes, GPS would come to mind in this example, right? My summary point is just that it's not all a losing game with respect to this, quote, race with other nation-states. But at the end of the day, yes, there is risk. But I do think that if the pace of improvement stays on track like it is right now, holy [ __ ], I think we're in a pretty good place. That's just my point. Okay. Some positivity. Okay. Look, I hope that the AI race
stays entirely positive and is a healthy competition between nations, where the competition spurs them on to develop more prosperity for their citizens. But as we talked about at the All-In Summit, there are two ways of looking at the world. There's the economist way that Jeffrey Sachs was talking about, and then there's the balance-of-power, or realist, way that John Mearsheimer was talking about. And when economic prosperity and survival, or the balance of power, come into conflict, it's the realist view of the world that wins out: the balance of power gets privileged. And I just think that's the way governments operate. Prosperity is incredibly important, we want economic success, but power is ultimately privileged over it. And this is why we're going to compete vigorously in high-tech. That's why there is going to be an AI race. Okay, perfect segue. We should talk a little bit about what was the topic of discussion yesterday. I had a lunch with a bunch of family offices, capital allocators, and government folks here in Singapore, and they were talking about our discussion last week about the big beautiful bill and the debt here
in the United States. It's permeating everywhere. The two conversations at every stop I've made here have been the big beautiful bill and the balance sheet of the United States, as well as tariffs. So, we need to maybe revisit our discussion from last week. Chamath, you and Freeberg did an impromptu call with Ron Johnson over the weekend, which then spurred him going on 20 other podcasts to talk about this. Stephen Miller from the administration has been tweeting some corrections, or his perceived corrections, about the bill. And Sacks, I think you've also started tweeting about this. Where do we want to start, maybe? Well, I think there are just a couple of facts that should be cleaned up, because Okay, so facts from the administration, their view of our discussion. Well, even though I was defending the bill last week on the whole, I wasn't saying it was perfect. I was just saying it was better than the status quo. Yeah, you were clear about that. Yeah. But even I, in doing that, was conceding some points that I think were just factually wrong. And the big one was that I said I was
disappointed that the DOGE cuts weren't included in the big beautiful bill. What Stephen Miller has pointed out is that reconciliation bills can only deal with what's called mandatory spending. They can't deal with what's called discretionary spending. And since the DOGE cuts apply to discretionary spending, they just can't be dealt with in a reconciliation bill. They have to be dealt with separately. There can be a separate rescission bill that comes up, but it can't be dealt with in this bill. And just to be very clear, look, if the DOGE cuts don't happen through rescission, I'm going to be very disappointed in that. I really want the DOGE cuts to happen, but it's just a fact that the DOGE cuts cannot happen in the big beautiful bill. It's not that kind of bill. And I think it's therefore wrong to blame the big beautiful bill for not containing DOGE cuts when the Senate rules don't allow that. You know, it all goes back to the Byrd rule. There are only specific things that can be dealt with through reconciliation, which is this 50-vote threshold, and it has to be, quote-unquote, mandatory spending. Discretionary cuts are dealt with in annual appropriations bills that require 60 votes. Now look, this is kind of a crazy system. I don't know exactly how it evolved. I guess Robert Byrd is the one who came up with all this stuff, and maybe they need to change the system, but it's just wrong to blame the big beautiful bill for not containing the DOGE cuts. That's just a fact. Okay, so the other thing is that the BBB does actually cut spending. It's just not scored that way, because when the bill removes the sunset provision from the 2017 tax
cuts, the CBO ends up scoring that as effectively a spending increase. But tax rates are simply continuing at their current level. In other words, at this year's level. So if you used the current year as your baseline, okay, and then compared it to spending next year, it would score as a cut in spending. So it's not correct to say this bill increases spending. It does actually result in a mandatory spending cut, but it's not getting credit for that, because we're continuing the tax rates at the current year's rates. Do you believe, Sacks, that this administration, which you are a part of, will in four years have balanced the budget? Will it have reduced the deficit, or will the deficit continue to grow at $2 trillion a year? What is your belief? Because there are a lot of strategies going on here. Yeah. My belief is that President Trump came into office inheriting a terrible fiscal situation. I mean, basically one that he created and that Biden created. They both put $8 trillion on the debt. It's a big difference. It's a big difference to add to
the deficit when you're in the emergency phase of COVID. For sure. It's emergency spending. It was never supposed to be permanent, and then somehow Biden made it permanent, and he wanted a lot more. Remember Build Back Better? He wanted a lot more. So, you know, it's tough when you come into office with what is a $2 trillion annual deficit. So, to my original question Now look, hold on. Would I like to see the deficit eliminated in one year? Yeah, absolutely. But there's just not the votes for that. Well, I asked you There's a one-vote margin here in the House, and the Democrats aren't cooperating in any way. So, I think the administration is getting the most done that it can. This is a mandatory spending cut, and I think the DOGE cuts will be dealt with, hopefully through rescission, in a subsequent bill. I'm asking you about four years from now. Will we be sitting here in four years Will Trump have cut spending by the end of this term? In another three and a half years, will we be looking at a balanced budget? Is that potentially the goal of the administration, or will we be at 42, 44, 45 trillion at the end of Trump's second term? Listen, if you want that level of specificity, you're going to have to get Scott Bessent on. Okay, this is just not my area. I'm not going to pretend to have that level of detailed answers. But what I believe is that the Trump administration's policy is to spur growth. I think these tax policies will spur growth. I think AI will also be a huge tailwind. It'll be a productivity boost. Let's stop being
doomers about it. We need that productivity boost, and I think the net result of those things will be to improve the fiscal situation. Do I want more spending cuts? Yeah, but look, we're getting more than was represented last week. Let's put it that way. Okay, fair enough. Sacks, thank you for the cleanup there. Chamath, our bestie Elon was on the Sunday shows, and he said, "Hey, the bill can be big or it can be beautiful. It can't be both." He seems to be, I'll say, displeased, or maybe not as optimistic about balancing the budget and getting spending under control, but he still believes in DOGE, obviously, and hopefully DOGE continues. You seemed a little bit concerned last week. A week's passed. You've heard some of Stephen Miller's opinions. Where do you net out, seven days after our big beautiful bill debate last week? Well, I mean, I think Stephen's critique of how the media summarized the reaction to the bill is accurate. And I think it's probably useful to double-click into one thing that Sacks didn't mention, but that Stephen did. A lot
of this pivots around the CBO, which is the Congressional Budget Office, and how they look at these bills, and there are a lot of issues with how they do it. In one specific case, which Sacks just mentioned and Stephen talked about, they have these arcane rules about the way that they score things. And what they were assuming is that the tax rates would flip back to what they were before the first Trump tax cuts, which obviously would be higher than where they are today. What that would mean in their financial model is that we were going to get all that money. Now, to maintain the tax cuts where we are, they would look at that and say, "Oh, hold on, that's a loss of revenue." Why is all of this important? I downloaded the CBO model, went through it, and what I would say is that at best it's Spartan, which means that I don't think a financial analyst, or somebody that controls a lot of money, will actually put a lot of stock in their model. I think what you'll have happen is people will build their own versions bottom-up. Do
you trust it, the the CBO's version of this, or do you largely trust it? I don't think the CBO really knows what's going on to be totally honest with you. Okay. I think that there are parts of what they do which they're also opaque on. Nick, I sent you a tweet from Goldman Sachs. So, here is what Goldman put out. Now, the point is when you build a model, what you're trying to do is net out all of these bars, okay? You're trying to add the positive bars and the negative bars, and you figure out
what is the total number at the end of it. Now, in order to do that, when you see the bars on the far right, those are 2034 dollars. Those are very different from 2025 dollars. The CBO doesn't disclose how they deal with that. They don't disclose the discount rate, so you can question what that is. The CBO makes these assumptions that, you know, as Stephen pointed out, are very brittle with respect to the tax plan. That's not factored in here. So those are the issues with the way the CBO scores it. So you have
to do it yourself. Now, Peter Navarro published an article which I think is probably the most pivotal article about this whole topic. Peter of tariff fame. Yeah. Yeah. Here I think he nails it right in the bullseye, which is that the bond market needs to make a decision on one very critical assumption when they build their own model. Okay, so let's ignore the CBO's kind of brittle math and the Excel that they post on their website. People are going to do their own, because they're talking about managing their own money. But Navarro basically points to the
critical thing, which is: listen, those CBO assumptions also include a fatal error, which is they assume these very low levels of GDP. What you're probably going to see in Q2 is a really hot GDP print. If I'm a betting man, which I am, I think the GDP print's going to come in above three. Not quite four, but above three. And so what Peter is saying here is, hey guys, you're estimating 1.7% GDP. Why don't you assume 2.2, or why don't you assume 2.7, or any number? Or really, what he's saying is, why don't
you build a sensitivity so that you can see the implications of that? And I think that is a very important point. Okay, so where do I net out a week later, Jason? It's pretty much summarized in the tweet that I posted earlier today. So over the last week, as people have digested it, I think that there are small actors in this play and big actors. The biggest actor is obviously President Trump. But the second biggest actor is the long end of the bond market. These are the central bankers, the long bond holders, and
these macro hedge funds. Why? Because they will ultimately determine the United States' cost of capital. How expensive will it be to finance our deficits, irrespective of whatever the number is? It could be a dollar or it could be a trillion dollars. That doesn't matter right now. The point is, what is going to be our cost of capital? And what's happened over the last little while is that they've steepened the curve and they've made it more expensive for us to borrow money. That's just the fact. So how do we get in front of this? I think
the most important thing, if you think about what Peter Navarro said, is this plan and the bill can work if we get the GDP right. Okay. So how do you get the GDP right? And this is where I have one very narrow set of things that I think we need to improve. And the specific thing that I'll go back to is, today, America is at a supply-demand standstill on the energy side. What does that mean? We literally consume every single bit of energy that we make. We don't have slack in the system. We are
growing our energy demand on average about 3% a year. So I think the most critical thing we need to do is to make sure the energy markets stay robust, meaning there's a lot of investment that people are making. On Tuesday I announced a deal that I did, building a 1-gigawatt data center in Arizona. This is a lot of money. This is little old me. But there are lots of people ripping in huge, huge checks, hundreds of billions of dollars. I think the sole focus has to be to make sure that the energy policy
of America is robust and it keeps all the electrons online. If there's any contraction, I think it'll hit the GDP number, because we won't have the energy we need, and that's where things start to get a little funky. So I think where I am is: I think President Trump should get what he wants, I think the bill can work if we narrowly address the energy provisions, and I think we live to fight another day. So Freeberg, the cynical approach might be, we're working the refs here. The CBO is not taking GDP growth into account. This GDP forecast has a magical unicorn
in it: AI and energy are going to spur this amazing growth. But the bond markets don't believe it either. So, are we looking at just a GOP, a party, I'll put the administration aside, that is just as recklessly spending as the Democrats, and they want to change the formula by which they're judged in the future, that there's going to be magically all this growth, and growth solves all problems? And what we really need to do, to your point, I think, two weeks ago, is that it's just disgraceful to put up this much spending, and we
have to have austerity, and we need to increase, maybe, the discipline in the country, and both parties have to be part of that. I'm asking you, from the cynical perspective maybe, to represent, or steelman, the other side here. We had a conversation with Senator Ron Johnson after we recorded the pod last week, and he was very clear on a key point, which is that this bill addresses mandatory spending. Just to give you a sense, 70% of our federal budget is mandatory spending. 30% falls into that discretionary category. The mandatory spending is
composed of the interest on the debt, which is now well over a trillion dollars a year, on its way to a trillion and a half; Medicare, Medicaid, Social Security, and some other income security programs. And as Ron Johnson shared with us, over the years more and more programs have been put into the mandatory spending category, so you can get past the filibuster in the Senate to be able to get budget adjustments done. The key thing he's focused on, and Rand Paul is focused on, and I've talked about, is the spending level
of our mandatory programs. The big, beautiful bill proposes a roughly $70 billion per year cut in Medicaid. Okay, and that sounds awful. How could you do that to people? In 2019, the year before COVID, Medicaid spending was $627 billion. In 2024 it was $914 billion. So the $70 billion cut gets you down to about $844 billion. You're still, call it roughly 35%, above where you were in 2019. So is that the right level? And fundamentally there's the opportunity to cut those mandatory programs, which I know sounds awful, to cut Social Security and cut Medicaid, but the reality is they're
not just being cut from a low level. They're being cut from a level that's roughly 45% higher than it was in 2019. And I'll give you another example, which is the SNAP program, the food stamp program. Again, $15 billion of the $120 billion a year that we spend on food stamps is being used to buy soda, and a whole other chunk of that 120 is being used to buy other junk food. So they have proposed in this bill to cut SNAP down to 90, and it was 60 in 2019. So it's still 50% above
where it was in 2019. So the key point that's being made by Ron Johnson and others is that the spending on these mandatory programs, which account for nearly three-quarters of our federal budget, is still very elevated relative to where we were in 2019. And we are not going to get out of our deficit, barring a massive increase in GDP, without changes to the spending level. Now, I don't put the blame on the White House. This bill passed with one vote in the House. One vote. And so a key point to note, and I've said this
from day one, and every time I've gone to DC and every time we've talked about DOGE, I've said there's no way any of this stuff is going to change without legislative action from the Congress. And here we are seeing Congress, for whatever reason, you can listen to Ron Johnson, you can listen to Rand Paul, you can listen to others, say: you know what, we can't cut that deep. It is going to be too harmful to our constituents. We need to keep the programs at their current levels, or make no changes at all, or only modest changes.
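The spending arithmetic discussed above is easy to check. Here's a quick sketch using only the figures quoted on the show (all values in billions of dollars per year); the helper function is just for illustration:

```python
# Check the mandatory-spending levels discussed above, using the
# figures quoted on the show (billions of dollars per year).

def pct_above_2019(proposed: float, level_2019: float) -> float:
    """Percent by which a proposed spending level exceeds its 2019 level."""
    return (proposed - level_2019) / level_2019 * 100

medicaid_2019 = 627
medicaid_2024 = 914
medicaid_cut = 70                                  # proposed annual cut
medicaid_proposed = medicaid_2024 - medicaid_cut   # 844

snap_2019 = 60
snap_2024 = 120
snap_proposed = 90

print(f"Medicaid after cut: {medicaid_proposed}B, "
      f"{pct_above_2019(medicaid_proposed, medicaid_2019):.0f}% above 2019")
print(f"SNAP after cut: {snap_proposed}B, "
      f"{pct_above_2019(snap_proposed, snap_2019):.0f}% above 2019")
```

Run as-is, this puts the post-cut Medicaid level about 35% above 2019 and SNAP 50% above, which is the core of the "still elevated" argument.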
And that's where we are. That's the reality. Now, I do think that Navarro did an excellent job in his op-ed, for whatever criticism we may want to lay on Navarro for many other things. He pointed out that the CBO projections in 2017 for the next year's GDP growth were 1.8 to 2%, and it actually came in at 2.9%, a full point higher, because of the Tax Cuts and Jobs Act that was passed by the Trump administration in 2017. So the additional money that went into investments because lower taxes were being paid fueled GDP growth.
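That kind of one-point miss compounds quickly, which is exactly the sensitivity analysis Navarro is asking for. A minimal sketch across the 1.7%, 2.2%, and 2.7% growth assumptions mentioned earlier; the roughly $30 trillion starting GDP and ten-year horizon are illustrative assumptions, not figures from the episode:

```python
# Compound a starting GDP across the growth assumptions discussed above.

def compound(gdp_trillions: float, annual_rate: float, years: int) -> float:
    """GDP after compounding at a fixed annual growth rate."""
    return gdp_trillions * (1 + annual_rate) ** years

START_GDP = 30.0   # trillions of dollars, illustrative
HORIZON = 10       # years, illustrative

for rate in (0.017, 0.022, 0.027):
    final = compound(START_GDP, rate, HORIZON)
    print(f"{rate:.1%} annual growth -> ${final:.1f}T after {HORIZON} years")
```

The gap between the low and high assumptions works out to several trillion dollars of annual output a decade out, which is why the growth input dominates any deficit model.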
This is what some people call trickle-down economics. People ridicule it. They say it doesn't work, it's not real. But in this particular instance, they cut taxes and the GDP grew much faster than was projected or estimated by the economists at the CBO. So the argument that's being made is that we are not capturing many of the upsides in the GDP numbers that are being projected. And I will be honest about this: I don't think anyone knows how much the GDP is going to grow. We don't know the economic benefits and effects of AI. We
don't know the economic benefits and effects of the work that's being done to deregulate. Another key point, which is not talked about by Navarro or anywhere else: there's a broad effort to deregulate, standing up new energy systems, deregulating industry and pharma, deregulating banking. Bessent talked about this in our interview with him. All of those deregulatory actions theoretically should drive more investment dollars, because if you can get a biotech drug to market in 5 years instead of 10, you'll invest more in developing new biotech drugs. If you can stand up a new nuclear reactor in seven
years instead of 30, you'll build more nuclear reactors. Money will flow. If you can get a new factory working because it's a lot easier, faster, and cheaper to build the factory, you'll build more factories and production will go up. People were really taken, by the way, by your comment that you would shut up about the deficit if we had like a really great energy policy. We were dumping a lot on top of it. I want to build on the point that both Chamath and Freeberg made about growth rates. So, there's a very
important chart here from FRED, the Federal Reserve Bank of St. Louis database. This is federal receipts, so basically it's federal tax revenue as a percent of GDP, and this goes all the way back to, you know, the 1930s, 1940s. So if you look at the post-World War II period, you can see, just eyeballing it, that there's a lot of variation around this, but the line is around 17.5%, plus or minus 2%. And the interesting thing is that this chart reflects radically different tax rates. So, for example, during some of these periods, we've had 90% top marginal
tax rates. We've had 70% top marginal tax rates. So, yeah, under Jimmy Carter, the top marginal tax rate was, I think, 70%. We've had tax rates, you know, under Reagan or Clinton, in the 20s. So, the point is that the tax rate that you have and what you actually collect as a percent of GDP don't correlate. The most important thing by far is just how the economy is doing. If you look at the top tick, it's around 2000 there. If you just mouse over it, 1999 to 2000. Yeah. Yeah. We get like just
under 20% federal receipts as a percent of GDP, and tax rates were quite low back then. The reason why is we had an economic boom. So look, the point is, the most important thing in terms of tax revenue is having a good economy. And this is why you don't just want to have very high tax rates, because they clobber your economy. So this point that Navarro was making in that article actually makes sense. I mean, 1.7% is a pretty tepid growth assumption; we should be able to grow a lot faster. And if we have a favorable
tax policy, you can grow a lot faster. Now, if you go to spending, can you pull up the FRED chart on spending? What you see here is that, I mean, it's been kind of going up, but let's say that since the mid-1970s or so, federal net outlays as a percent of GDP, so basically spending, was around 20% of GDP. And then what happened is, during COVID, it went crazy, went all the way up to 30%, and now it's back down to, you know, the low 20s, but it's still not back down to 20. And what
we need to do is grow the economy. We have to grow GDP to the point where federal net outlays are back around 20%. If you could get tax revenue to the historical mean of around 17.5% or 17%, and you get spending to 20%, then you have a budget deficit of 3%, which is much more tolerable. And I think that's Bessent's target under his 3-3-3 plan, right? You get GDP growth back up to 3% and you get the budget deficit down to 3%. All right, Chamath, you had some charts you wanted to share. Well, I think
what's amazing is if you take last week and now again this week, we're all converging on the same thing. The path out of this is through GDP growth. And I just want everybody to understand where we are. And this is without judgment. This is just the facts. What this chart shows in gray is the total supply of power in the United States and the blue line is the utilization. So what you build for is what you think is a premium above the demand, right? You'd say if there's one unit of demand, let's have 1.2 units
of supply, we'll be okay. But as it turns out, historically in the United States, we've had these cycles where we didn't really know what the demand curve would look like. And so over the last number of years, we've stopped really building supply in power. But what happened with things like AI and all of these other things is that the demand just continued to spike. And so what this chart shows is we are at a standstill sitting here today in 2025. On margin we're actually short power, which is to say sometimes there are brownouts, sometimes there is a lack of power, because we didn't add enough capacity. So that's where we are today. So then we talk about all of these new kinds of energy, and this is just meant to ground us in the facts. If you tried to turn on a project today, sitting here in May of 2025, here's what the timelines are. We all talk about SMRs, small modular reactors. The reality is that if you get everything permitted and you believe the technology can be de-risked, you're still in a 2035-plus time frame. You're a decade away. If you have an unplanned nat gas plant today, the fastest you could get that online is four years from now. If we tried to restart a mothballed nuclear reactor, of which there are only three we can restart, that's a 2027 to 2030 time frame. So, to give us the benefit of the doubt, that's two years away. If we look at planned nat gas plants, there's already 24 gigawatts in the queue which can't get turned on. So where does this end up? And this is where I think we need to strip away all the partisanship and understand what we're
dealing with. We have a ready supply of renewable and storage options today. It's the fastest thing that you can turn on. It allows us to turn on supply to meet the demand and utilization. So I just think it's important to understand that we must not lose energy. We cannot lose the energy market, because that is the critical driver of all the GDP. All right. The Nippon Steel and U.S. Steel merger got cleared by President Trump. This was something that was being blocked by Biden, obviously, for national security reasons. Nippon is going to acquire U.S. Steel for $14.9
billion. Biden blocked that, as we had discussed. On Friday, Trump cleared the deal to go through, calling it a partnership that will create 70,000 jobs in the US. And on Sunday, Trump called the deal an investment. It's partial ownership, but it will be controlled by the USA. Chamath, there seems to be a reframing of this deal, that the United States is going to benefit from it, but it's not a sale. Let's set some context. The United States is always on the wrong side of these deals. Okay? We've been on the wrong
side for 20 years. Meaning, we show up when an asset is stranded or completely run into the ground. For example, we did the auto bailouts at the end of the great financial crisis. If it's not a company and it's toxic assets, we set up something called TARP. What do we get? Not much in return. In this, it's the opposite. And I think that this strategy has worked really well for many other countries. So if you look at Brazil, companies like Embraer and Vale, which are really big Brazilian national champions, have a partnership, a pretty tight
coupling with the Brazilian government. The Brazilians have a golden vote. If you look inside of the UK, there's a bunch of aerospace and defense companies, including Rolls-Royce, that have a very tight coupling with the UK government. They have a golden vote. If you look in China, companies like ByteDance and CATL have a very tight coupling with the Chinese government, and the Chinese government has a golden vote. And so what are all of those deals? Those deals are about companies that are thriving and on the forward foot. And so I think this is a really
important example of things that we need to copy. I've said this before, but one part of China that I think we need to pay very close attention to is that Hu Jintao, in 2003, laid out a plan. He said: we are going to create 10 national champions in China in all the critical industries that are going to matter for the next 50 years, including things like batteries and rare earths and AI. And they did it, and for those companies it allowed them to thrive and crush it. And I think that we need to do
that and compete with those folks on an equal playing field. So in all industries, or in very specific strategic ones? Because that would seem like corrupting capitalism and free markets, would be the steelman. Yeah, there's 10 industries that matter, and, you know, steel is one. Okay. I think the precursors for pharmaceuticals are absolutely critical. Got it. I think AI is absolutely critical. I think the upstream lithography, EUV, deposition, and chip-making capability, absolutely critical. I think batteries are absolutely critical, and I think rare earths and the specialty chemical supply chain, absolutely critical. If you have those
five, you are in control of your own destiny, in the sense that you can keep your citizens healthy and you can make all the stuff for the future. So I think if the president is creating a more expansive idea beyond U.S. Steel with this idea of U.S. support, maybe there'll be preferred capital in the future to U.S. Steel. But if he creates a category-by-category thing across five or six of these critical areas of the future, I think it's super smart and we should do more of it. Sacks, what do you think? Interventionism,
putting your thumb on the scale, golden votes, a good idea for America in very narrow verticals, or let the free market decide? What are your thoughts on this golden vote, having a board seat, etc.? Well, it depends what the free market, so to speak, produced. And the reality is, over the past 25 years, we exported a lot of this manufacturing capacity to China. And I don't think it was a free market, because they had all these advantages under the WTO that we talked about on a previous podcast. They were able to subsidize their national
champions while still remaining compliant with the WTO rules, because supposedly they were a developing country. It was totally unfair. And what they would do is, through these subsidies, they would allow these national champions to essentially dump their products in the global market and drive everyone else out of business. They became the low-cost producers. I think that, as the president just said recently, not every industry has to be treated as strategic. Clothes and toys, we don't necessarily have to reshore in the United States. But steel production is definitely strategic. Steel, aluminum, and I'd say the rare earths, we have to have that capacity. We cannot be completely dependent on China for our supply chain. So some of these industries have to be reshored, and if you need subsidies to do it, I think you do it for national security reasons first and foremost. There are other industries where the private market works just fine, and what we need to do to help those companies is simply not get in their way with unnecessary red tape and regulations. So, I would say empower the free market when America is
the winner. And then in other areas, where they're necessary for national security, you have to be willing to basically protect our industries. Freeberg, it seems like the great innovation here might also be the American public getting upside. When we gave loans to Solyndra and Tesla and Fisker and a bunch of people for battery-powered, you know, energy under Obama, we just got paid back, in some cases, by Elon. Other people defaulted, but we didn't get equity. What if, instead of getting our 500 million back in the loan from Elon, which he paid
back early and with interest, we got half back and we got half in equity, RSUs, whatever, stock options, warrants? This would be an incredible innovation. So, what are your thoughts here? Because people look to this podcast as, hey, the free market podcast, but this does seem to be a notable exception here, of maybe we should get involved and do these golden, you know, share votes, board seats, you know, maybe more creative structures in order to win faster. What are your thoughts, Freeberg? I don't like it. I don't like the government in markets. Keep
the government out of the markets. It creates a slippery slope. First of all, I think markets don't operate well if government's involved. It gets inefficient, and that hurts consumers. It hurts productivity. It hurts the economy. Second, I think it's a slippery slope. You do one thing... Question, though. If government non-intervention results in all the steel production moving offshore, if it results in all the rare earth processing and the rare earth magnet casting industries moving offshore, in fact, not just moving offshore, but moving to an adversarial nation, such that they can just switch off our supply
chain for pretty much every electric motor, is that an outcome of the quote-unquote free market that we should accept? Well, then I think that's where the government can play a role in trade deals to manage that effect. So you can create incentives that will drive onshore manufacturing, by increasing the tariff or restricting trade with foreign countries so that there isn't a cheaper alternative, which is obviously one of the plays that this Trump administration is trying to do. I'd rather have that mechanism than the government making actual market-based decisions and business decisions. You know how
inefficient government runs. You know how difficult it is to assume that that bureaucracy is actually ever going to act and pick any best interest, or any good interest at all. They're just going to [ __ ] it all up. So, I'd rather keep the government entirely out of the market. Create a trade incentive, where the trade incentive basically will drive private markets, private capital, to build that industry onshore here, because there isn't one and there's demand for it, because you've restricted access to the foreign market. That, I think, would be the best general solution. And then I think it's a slippery slope, because then you could always rationalize something being strategic, something being a security interest of the United States. So then every industry suddenly gets government intervention and government involvement. And then the third thing is, I don't want the government making money that the Congress then says, hey, we've got more money, we got more revenue, let's spend more money, because then they'll create a bunch of waste and nonsense that'll arise from having increased revenue. On the other side, I will say, one thing where I do think we do
a poor job is we don't do a good job, to answer your question, J-Cal, of investing the retirement funds that we've mandated through Social Security. We should be taking the $4.5 trillion that our Social Security beneficiaries have had deducted from their paychecks over many, many years. Those Social Security future retirees, or current retirees, are getting completely ripped off, because their money is being loaned to the federal government. It's not being invested. It's been loaned to the government to spend money and run a deficit and ultimately inflate away the value of the dollar. We should
have been investing those dollars in some of these strategic assets. So if ever there were to be shares or investment that the government does, it should be done through strategic investing through the Social Security or retirement program. Similar, by the way, to what's done in Australia, where these supers have created an extraordinary surplus of capital. Same in Norway, same in all the Middle East countries: incredible sovereign wealth funds that benefit the retirees and the population at large. That's where the dollars should be invested from. I do think the fundamental focus and priority right
now should be reforming Social Security while we still have the chance. We have until 2032, when Social Security will be functionally bankrupt, and everyone's going to get over-taxed, and kids are going to end up having to pay, through inflation, for the benefits of the retirees of the last generation. Freeberg's right. We're on a seven-year shot clock to when Social Security is not funded. And by the way, this opportunity to fix mandatory spending, it was an opportunity to introduce some structural reform in Social Security. Another reason why I think that there's a degree of dereliction in this bill, particularly with how Congress has acted, not addressing what is becoming a critical issue, because everyone wants to get reelected in the next 12 months, 18 months. They've got elections coming up. So, everyone's scrambling to not mess with that, because you can't touch it. It's like, you know what, guys? This is bankrupt in seven years. It's going to cost us 5 to 10 times as much when we have to deal with it, when everyone runs out of money. Deal with it now. Fix the problem. And by the way, we should flip all
that money, $4.5 trillion, into an investment account for the retirees, where they can own equities and they can make investments in the markets and they can participate in the upside of American industry and the GDP growth that's coming. Instead, they're getting paid 3.8% or 4 to 4.5% on average from treasuries that they own, which, by the way, now have a lower credit rating than they've ever had. You know, it's crazy. I'm in complete agreement with you, and I think it's a lack of leadership on Trump's part. If Trump is going to criticize
Taylor Swift and Zelensky and Putin and everybody, you know, all day long on Truth Social, he can criticize Congress and the Democrats and the Republicans on not cutting spending. I think he should speak up. I think he was elected to do that. It was a big part of the mandate. And he should tone down the tariff chaos and lean into intelligent immigration, you know, recruiting great talent to this country, and he should be pushing to make these bills control spending. That's just one person's belief. For the
chairman dictator Palihapitiya, your czar David Sacks in that crisp Brioni white shirt, very beautiful, and the sultan of science, deep in his Wall-E era: I am the world's greatest moderator and, as Freeberg will tell you, executive producer for life here at the All-In podcast. We'll see you all next time. Bye-bye. Jason.com. Love you, boys. Bye-bye. We'll let your winners ride. Rain Man, David Sacks. We open sourced it to the fans and they've just gone crazy with it. Love you, besties. [outro music]