I want to say something that you would never expect to hear from More Perfect Union. Elon Musk is right about something. Elon Musk is suing the maker of ChatGPT.
. . .
. . suing OpenAI.
. . suing OpenAI for breach of contract.
. . by putting profit ahead of benefiting humanity.
Elon's lawsuit against OpenAI, the most recognized developer of artificial intelligence, was big news. But when I heard it, I was a bit confused. When has Elon ever been against profit?
So I dug in. I read and I watched far too much of these guys talking about AI. I think that AI will be a technological revolution on the scale of the agricultural, the industrial, the computer revolution.
AI is about to revolutionize digital biology and genomics and transportation and retail. Artificial intelligence. .
. this is. .
. it is a renaissance. This is a world changing set of advances.
By the time these lawsuits are decided, we'll have Digital God. Digital God. I noticed three things: that the rise of AI is inevitable, that it will change everything about our lives and that there is a war brewing between a handful of billionaires to seize control of it.
NVIDIA became one of the most valuable stocks of all time producing chips for AI. Microsoft reached an over $3 trillion valuation based on its AI work, and OpenAI tripled in value in just ten months. But where do we fit in, you and I and everyone other than the dozen or so billionaires duking it out for control? We're already seeing plans to replace human nurses with AI, media magnates killing jobs to focus on cost cutting with AI, AI-written drivel filling the internet, and AI that does karaoke for you.
Well, let's look at the possible AI future through the rise of Sam Altman, CEO of OpenAI and just one contender in the billionaire battle for the future. Sam Altman is the ruler of Silicon Valley startups. And that's not me saying that; he thinks it himself.
I think the president of YC is sort of the unofficial leader of the startup movement. YC is Y Combinator, a significant venture capital firm and tech incubator that Altman ran from 2014 to 2019. He originally connected with YC in 2008, when the firm funded his app Loopt, back when he was 19 years old and wore two popped collars. It was a common Silicon Valley story.
They raised a ton of money, gathered a bunch of data, and got in trouble for texting everyone on your phone when you downloaded the app. Altman sold the company and it all but disappeared. Altman landed at Y Combinator, where he rose to president in two years.
Under his leadership, Y Combinator invested in massive companies most people have heard of, like Reddit, Airbnb, Coinbase, Dropbox, Stripe, Twitch, DoorDash and Instacart. In a lecture to Stanford computer science students on startups, Altman quoted Peter Thiel's advice: As Peter Thiel is going to discuss in the fifth class, you want an idea that turns into a monopoly, but you can't get a monopoly in a big market right away.
You have to find a small market in which you can get a monopoly and then quickly expand. Quick expansion is an inherent part of the way venture capital funded tech runs today. This is Tim O'Reilly.
He's been in tech forever. And if you know phrases like 'open source' or 'Web 2.0', it's because he popularized them.
One of the big problems with today's Silicon Valley is that it no longer really supports free market competition. In the early days of VC, you were really talking about funding insurgent companies that had an experimental idea. Most companies didn't actually raise massive amounts of capital. But at some point that changed.
You fast forward to 2010 and, in the wake of the super low interest rates, there's all this cheap capital and companies are just buying market share. And I call this the Uber problem. Yeah, we didn't see real competition with different business models, different pricing.
We see heavily capitalized companies driving everybody else out of business. By growing rapidly with a bunch of capital, Uber created an ecosystem where if you want a cab, you need to use Uber.
The old system of car services was pretty much pushed out of existence. Uber is not an Altman company, but DoorDash, Airbnb and other companies that used similar strategies totally are. In 2015, we got our first hint of OpenAI.
I actually just agreed to fund a company that is not even really a company, sort of a semi-company, semi-nonprofit doing AI safety research. OpenAI is announced not as a company but as a nonprofit focused on AI safety, funded and supported by Y Combinator, Elon Musk, Reid Hoffman (the LinkedIn guy), Peter Thiel, Amazon and Infosys: basically a who's who of the people who built the broken tech infrastructure of today.
The stated goal of OpenAI was to advance digital intelligence in the way that is most likely to benefit humanity as a whole with no shareholders to be beholden to. We wanted to build this with humanity's best interest at heart. That doesn't sound too bad, right?
A groundbreaking new technology, its power available to all without any responsibility to shareholders. But let's dig a little deeper. They say they want to make the world better and do it safely.
But what does that mean to them? Most of what Altman talks about in regards to what OpenAI products can do seems to center on productivity, efficiency and margins: boosting our ability to have amazing ideas, for our children to, like, teach themselves more than ever before, for people to be more productive. How does that make life better for the rest of us?
Well, Altman claims to have the answer. In his essay "Moore's Law for Everything," he explains it would simply require changing the entire economic system. Altman claims AI would drive down labor costs so everything would get cheaper, and the lost jobs would be offset by a universal basic income funded by corporate and property taxes, with no other taxes. Which sounds great.
Maybe, but how true is that? Would corporations seeing falling labor costs reduce prices, or just keep the profits for themselves? Would a flat tax and UBI mean more money for working people, or more tax cheating by the rich and social service cuts for everyone else?
And there was a running theme through all of what Altman says: the inevitability and the danger of artificial intelligence. Listen to this clip. You know, I think AI will probably, like, most likely sort of lead to the end of the world.
People like Altman benefit from the narrative that AI is this big, scary thing, even as they're the ones trying to build and profit from it. Here's Tim O'Reilly again. It feels a little bit like a kind of misdirection.
They're basically calling for a kind of regulation of an extreme risk to avoid the regulation of the many proximate harms we can see today. If they were really afraid of it, they would stop doing their research. Instead, they're racing to accelerate it so they get a monopoly.
It's a lot like the famous line from The Wizard of Oz. Pay no attention to the man behind the curtain. And a lot of where I spent my time in talking about regulation is this.
There is a man behind the curtain or a series of men who are making decisions for their business advantage. And those are the things that we need to be regulating. Why are they moving fast to break things?
I mean, in the Altman clip from before where he says the world-ending thing, he literally says this right after: You know, I think AI will probably, like, most likely sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning. See, it's right back to profit.
But hold on. Isn't OpenAI a nonprofit? Not anymore.
Just three years after the founding of OpenAI, they transitioned to something they call capped profit.
To do what we needed to go do. We had tried and failed enough to raise the money as a nonprofit. We didn't see a path forward there.
So we needed some of the benefits of capitalism, but not too much. And the new Board of OpenAI is rife with the profiteers who've been extracting value from working people for decades. They're also not open.
None of this stuff is just simply available. An old version of ChatGPT is free and kind of fun sometimes, but OpenAI's real product is enterprise software, partnerships with other giant tech companies, and loads of stuff we don't even know about.
Defense contracts included. OpenAI has a product to sell, a product they see as having inevitable near universal proliferation, which is a pretty damn good business plan. But it's dangerous for the rest of us.
Tim O'Reilly told us about the Uber problem. Here's how it applies to AI: all of the smaller AI firms are already starting to fall because there are a couple, in the form of OpenAI and Anthropic, that are incredibly heavily funded, that have tens of billions of dollars of capital. And we don't know that they're the best companies.
We just know that they're the ones that big investors picked up early. And so we have a defective kind of a market where, if you really believe in the wisdom of millions of people making independent decisions based on optimal information to work this free market magic, we don't have that. We have a central committee of deep-pocketed investors.
It all illuminates Elon's lawsuit against OpenAI, when he and Altman had once been such good friends. I'm looking for a new video game to play. Can you give me a recommendation?
Overwatch. Overwatch? Yeah. That's great.
Um, I named it OpenAI after open source. It is, in fact, closed source. It should be renamed Super Closed Source for Maximum Profit AI.
But he has his own for-sale, for-profit AI that he wants to be predominant. He made it open source in a seemingly empty gesture. But meanwhile, Nvidia, Microsoft and Google all want to take control of the tech that will change our future and destroy our jobs.
It doesn't matter that these guys say they're doing it for good. It matters what they are actually doing. Look at the dawn of the internet itself, or Web 2.0 (that's social media and user-generated content).
All of that seemed great. And yeah, the modern internet has obviously had a lot of clear benefits, but it was also pretty quickly ruined by giant corporations and the profit motive.
We're already starting to see the effects of the power grab. Nvidia, the main producer of the chips used for AI, is partnering with a company that wants to replace nurses with an AI that costs $9 an hour. Look at anything from the history of the American medical industry:
will a new low-cost tool actually help patients, or just pad investor pockets? Companies are using AI to do job interviews, and it's shutting out applicants unfairly. And on a simply annoying level, AI is spitting out content that's making the internet even more unusable than it already was.
AI-generated LinkedIn comments: why don't you just not post anything if you don't have anything to say? So what are we going to do about this?
Well, let's turn to Sam Altman for advice. He has what he calls the "more good guys than bad guys" approach. He wrote in that Moore's Law essay, "There are bad humans, but all humans are within a magnitude as powerful as one another, and the good humans band together to stop the bad humans.
It's been that way through all of history so far." But what if the bad humans are actually the people building the system, a system built entirely on past profits and monopoly, advancing as quickly as possible? Then we need more good guys.
And up there, they've only got like 12 people. What if we all stand up and say no? When electricity was first becoming widespread, it was still dangerous for the low-paid workers who were setting up the systems.
But those workers stood together and formed the International Brotherhood of Electrical Workers (that's the electricians' union) to demand safer jobs and better wages. It worked. And it didn't hamper progress.
We clearly have electricity today, right? That same concept applies to AI. Remember the writers' strike? The Writers Guild stood up and said no to AI taking their jobs.
Senator Bernie Sanders just introduced legislation for a shorter workweek. Do we continue the trend where the technology only benefits the people on top, or do we demand that these transformational changes benefit working people? And one of those benefits must be a shorter workweek.
If AI is going to make workers more productive and labor cheaper, workers should be able to take advantage of it. AI cannot exist without being trained on all that humanity has done before. That's literally how the AI most people interact with was built: by reading a bunch of stuff online and then synthesizing it so it can talk. If all of this is built on the labor of all of humanity, then it should be all of humanity that benefits.