[Music] Hello and welcome to the GZERO World podcast. This is where you'll find extended versions of my interviews on public television. I'm Ian Bremmer and today, imagine it's 2027, two years away.
Artificial intelligence systems are wreaking havoc on the global order. China and the US are locked in an AI arms race. Engineers warn their AI models are starting to go rogue.
This isn't science fiction. It's a scenario described in AI 2027, a new report that tries to envision AI's progression over the next 2 years. As artificial intelligence approaches human level intelligence, the report predicts that its impact will exceed that of the industrial revolution and warns of a future where governments ignore safety guard rails as they compete to build more and more powerful systems.
What makes AI 2027 feel so urgent is that its authors are experts with inside knowledge of current research pipelines. The project was led by Daniel Kokotajlo, a former OpenAI researcher who left the company last year over concerns it was racing recklessly toward unchecked superintelligence. Kokotajlo joins me on the show today to talk about the report, its implications, and to help us answer some big questions about AI's development.
What will it mean for the balance of global power and for humanity itself? And what should policymakers and technology firms be doing right now to prepare for an AI-dominated future that experts say is only a few short years away? That's a lot to discuss.
Let's get to it. [Music] The GZERO World podcast is brought to you by our lead sponsor, Prologis. Prologis helps businesses across the globe scale their supply chains with an expansive portfolio of logistics real estate and the only end-to-end solutions platform addressing the critical initiatives of global logistics today.
Learn more at prologis.com. [Music] Daniel Kokotajlo, thanks so much for joining us on GZERO World.
Thank you for having me. Okay, I read this report. I thought it was fantastic.
So I'm a little biased, but I want to start with the definition of artificial general intelligence. How will we know it when we see it? So there are different definitions.
The basic idea is an AI system that can do everything, or every cognitive task at least. So once we get to AGI and beyond, there will be fully autonomous artificial agents that are better than the best human professionals in basically every field. If they're still limited in serious ways, then it's not AGI.
And from the report, I take it that you are not just reasonably confident that this is coming soon to a theater near you, like 2027, but you're completely convinced that this is going to happen soon. Let's not even talk about exactly when, but there's no doubt in your mind that AGI of some form is going to be developed soon. There's some doubt, right?
I would say something like 80% in the next five or six years, something like that. So in the next 10 or 20 it gets to like 99%, or maybe it gets up to like 90% by the next 20 years or so. But there's still some chance that this whole thing fizzles out, you know, some crazy event happens that halts AI progress or something like that.
There's still some chance of those outcomes, but that's not at all what I expect. And would you say, if it fizzles out, does it fizzle out largely because humanity prevents the technology from continuing, or is it plausible that the tech itself just can't do this, that you're just wrong, that the people covering AI are wrong about the move to self-improvement? So I think it's definitely possible in principle to have an artificial system that counts as AGI and that's better than humans in all the relevant ways.
However, it might not be possible in practice given current levels of computing technology and understanding of AI and so forth. That said, I think it's quite likely possible in practice too. I mean, that's what I just said, like 80%, 90%, right?
So maybe I'd put something like 5% on it turning out not to be possible in practice and 5% on humanity stopping building it. So let's first tell everyone what this report is, AI 2027. Explain the contents of the report briefly and why you decided to write it.
Sure. So you may have heard, or maybe you haven't heard, that some of these AI companies think they're going to build superintelligence before this decade is out. What is superintelligence?
It's AI that's better than the best humans at everything while also being faster and cheaper. This is a big deal. Not enough people are thinking about it.
Not enough people are reasoning through the implications of: what if one of these companies succeeds at what they say they're going to do? AI 2027 is an answer to all those questions. It's an attempt to game out what we think the future is going to look like.
And spoiler, we do think that probably one of these companies will succeed in making superintelligence before this decade is out. So AI 2027 is a scenario that depicts what we think that would look like. It depicts AIs automating AI research over the course of 2027 and the pace of AI research accelerating dramatically.
At that point we branch; there's a sort of choose-your-own-adventure element where you can choose two different continuations of the scenario. In one of them, the AIs end up continuing to be misaligned, so the humans never truly figure out how to control them once they become smarter than humans.
And the end result is a world, a couple of years down the line, that's totally run by superintelligent AIs that actually don't care about humanity at all. And that results in catastrophe for humanity. And then the other branch describes what happens if they do manage to align the AIs and figure out how to control them even as they become smarter than humans.
And in that world, it's a utopia of sorts. It's a utopia with a lot of power concentration, where the people who control the AIs effectively run society. And the report is far more detailed about the near future than anything else I've read.
But your views are not way out of whack with all of the AI experts that I know in all sorts of different companies and university settings, right? I mean, at this point it is, I would say, commonly accepted, even conventional wisdom, among people that are experts in AI that AGI is coming comparatively soon. Is that fair to say?
I think that's fair to say. I mean, it's still controversial, like almost everything in AI. But especially over the last five years, there's been this general shift from "AGI, what even is that?" to "oh wow, it could happen in our lifetimes" to "oh wow, things seem to be moving faster than we predicted. Maybe it's actually on the horizon, maybe five years away, something like that, maybe ten years," right?
Different people have different guesses. It seems to me that the Turing test, which was for a very long time something that people believed would never be broken, when you or I could have a conversation with an artificial bot, call it what you will, and not be able to distinguish it over the course of a conversation from a human being, like we're already there.
Yes and no. So, one of the parameters you can use to vary the difficulty of a Turing test is how long the conversation is.
And another parameter you can use to vary the difficulty of the Turing test is how expert the judges are. My guess is that right now there is no AI system that could pass a 20-minute Turing test with an expert judge, if that makes sense. By contrast with true AGI, which would be able to pass a much longer Turing test with an expert judge.
But there has been substantial progress, as you point out. I mean, I think maybe they could do a one-minute Turing test with an expert judge. Maybe they can do a half-hour Turing test with an ordinary human being.
There's definitely been a huge leap forward in Turing test progress in the last five years. And because I'm most interested in the implications for society, as I suspect you are in the way you wrote this, what kind of matters a lot is a 30-minute conversation with an average human being. Because of course, whether you're talking about a world leader, or a grandma that you're trying to swindle, to engage in fraud with, or just someone you want to have a customer relationship with in business, those are most likely to be people that are average and not experts, and they're going to have a hard time differentiating already, is what you're saying.
Yeah, I agree with that. I think I would put the emphasis on other things, actually. I think one core thing to look out for is when AI progress itself becomes automated.
When autonomous AI agents are doing all or the vast majority of the actual research to design the next generation of AIs. This is in fact the plan. It's what these companies are attempting to do, and they think they'll be able to do it in a few years.
The reason why this matters so much is that we're not even used to the already fast pace of AI progress that exists today, right? The AI systems of today are noticeably better than the AI systems of last year, and so forth.
But I and others expect the pace of progress to accelerate quite dramatically beyond that once the AIs are able to automate all the research. And that means you get to what you could call true superintelligence fairly quickly. Not just sort of an AI that can hold a conversation for half an hour and seem like a human, but rather AI systems that are just qualitatively better than the best humans at everything while also being much faster and much cheaper.
This has been described as "a country of geniuses in a data center" by the CEO of Anthropic. I prefer the term "army of geniuses." I would say they're going to automate the AI research first.
Then they're going to get superintelligence, and then the world is going to transform quite abruptly, and plausibly much for the worse, depending on who controls the superintelligences, and if anybody controls the superintelligences. I want to take one little step back, because before we get to self-improving systems, we're now at a place, it seems, where a large amount of the coding is already happening through AI. Is this the first, let's say, large-scale job that people should no longer be interested in going into, because within a matter of, let's say, six months to a year, you're just not going to need people to do any coding anymore?
So my guess is it'll be more than six months to a year. AI 2027, at the time we started writing, was my sort of median forecast. Now, I think it's a little bit too aggressive.
I think if I could write it again, I would have the exciting events happen in 2028 instead of in 2027. In AI 2027, we depict the full automation of coding happening in early 2027. So, you know, maybe two years from now, so a bit longer than 6 months, but still that's sort of what's on the horizon.
Also, notably, when that milestone is achieved, that doesn't necessarily mean that people who today are engineers would immediately lose their jobs. And if you read AI 2027, the first company that achieves this full automation of coding doesn't actually fire all their engineers. Instead, they put them in charge of managing teams of AIs.
But I think one of the first major professions to be fully automated will actually be programming, because that's what the companies are trying hardest to achieve, because they realize it will help them accelerate their own research, compete with each other, and make the most money, in their field, doing things they know how to do, and they're the ones at the cutting edge of AI. So if you were a major university in the United States or elsewhere, would you simply get rid of your faculties, your departments that teach coding? I mean, I assume that if you're a mom or dad talking to your kids about what field to go into, at the very least, right?
I mean, you're four years away from your degree. Just five years ago we had all of these people around the world in jobs that people were worried weren't as relevant anymore, and the response was "learn to code." That seems like literally the worst possible advice you could give to someone going into a university right now, only a few years later. Yeah, potentially. I mean, I think it feels kind of strange to be giving career advice or schooling advice in the times that we live in right now. It's sort of like, imagine that I came to you with evidence that a fleet of alien spaceships was heading towards Earth and was probably going to land sometime in the next few years, and your response to me was, you know, what does this mean for the university?
Should they retool what types of engineering degrees they're giving out or something? And I'm like, yeah, maybe. I guess.
Well, I guess I was trying to do the 20% before we got to the 80%. Which is that even if you're wrong and we don't get to AGI, and the aliens aren't actually two, or maybe now three, years away depending on which version of the paper, you're nonetheless going to get all of this coding done, because that's not an 80% certainty, that's much more of a 95, a 99% certainty. And so at the very least I'm trying to help people that aren't spending a lot of time thinking about this understand that there are large-scale decisions we aren't discussing adequately, that need to be resourced, that need to be made, that need to be thought through. And you start easy and then you get harder. Okay.
Sure. Well, yeah. I mean, already people say that ChatGPT and other language models are disrupting education because they make it so easy for students to cheat on classwork and so forth.
And they're also, relatedly, making some of the skills that classes teach less valuable, because they can be done by ChatGPT anyway, right? And I think a similar thing is going to be happening with coding over the next few years, even if we're totally wrong about AGI. The GZERO World podcast is brought to you by our lead sponsor, Prologis.
Prologis helps businesses across the globe scale their supply chains with an expansive portfolio of logistics real estate and the only end-to-end solutions platform addressing the critical initiatives of global logistics today. Learn more at prologis.com.
[Music] You left OpenAI because you felt like those people that have the resources, that are driving the business models, were acting irresponsibly, or at least not acting responsibly, taking into account these things that you're concerned about. Explain a little bit about that decision, what went into it, and then we'll talk about where we're heading.
The short answer is it doesn't seem like OpenAI or any other company is at all ready for what's coming, and they don't seem inclined to be getting ready anytime soon. They're not on track, and they don't seem like they're going to be on track. So to elaborate on that a little bit, there's this important technical question of AI alignment, which, in a word, is how do we actually make sure that we continue to control these AIs after they become fully autonomous and smarter than we are.
And this is an unsolved technical problem. It's an open secret that we don't actually have a good plan for how we're going to do this. There are many people working on it, but not as many as there should be.
And they're not as well resourced as they should be. And if you go talk to them, they mostly think they're not on track to have solved this problem in the next couple of years. So there's a very substantial chance that if things continue on the current path, we will end up with something like what is depicted in AI 2027, where the army of geniuses in the data center is merely pretending to be compliant and aligned and controlled, but they're not actually.
That's one very important problem. Then there's another one, which is the concentration of power and the sort of "who do we align the AIs to" problem. Who gets to control the army of superintelligences in the data centers?
Currently, the answer is well, I guess maybe the CEO of the company or maybe the president if he intervenes. I think both of those answers are unacceptable from a democratic perspective. We need to have checks and balances.
We need to make sure that control over the army of superintelligences is not something that one man or one tiny group of people gets to have. And there's lots more to be said about this, but the short answer is that OpenAI, and also perhaps other companies, are just not at all really giving these issues the investment that they need. I think they're instead mostly focused on beating each other, winning the race.
Basically, they're focused on getting to the point where they can fully automate the AI research so that they can have superintelligences. I think this is going to predictably lead to terrible outcomes, and I don't trust these companies to make the right decisions along the way. No, it's a classic collective action problem.
It's how we got climate change, but this is much more consequential in a much shorter period of time. So it wasn't like you were going after the bosses of OpenAI or the companies per se. You were just writing about what you believe the scenarios going forward are most likely to be.
And then it branches off into two potential outcomes, one really dystopian, one somewhat utopian, after this sort of breakout occurs and we have superintelligence. And my question for you is, if you had written this piece while you were still at OpenAI, would that have been grounds for dismissing you, or do you think it was plausible for that to occur? I doubt they would have let me publish it had I written it.
If I could add more to what you were just saying. Broadly speaking, the trajectories described in AI 2027 are considered plausible by many of the researchers at these companies. And in fact, many of the researchers at these companies are expecting something like this to happen.
I think it's important for the world to know that and to see it laid out: this is where a lot of people think we're headed, something looking roughly like this, whether it happens in 2027 or 2029, whatever. These are the sorts of things that we're going to be dealing with in the next few years, probably. And you think these companies do not want the public to be aware of the trajectory that the researchers in their own companies believe is coming? Yeah.
Basically, I think that the public messaging of the companies is focused on what's in their short-term interest to message about. So they're not doing nearly enough to lay out explicitly what these futures look like, and especially not to talk about these risks, or the ways things could go wrong. I kind of get this when you're talking about, like, Exxon in the '70s, right?
Because their long-term is generational, but here the long-term you're talking about is short-term. I mean, the people that are making decisions and that are profiting, they're the same people that are going to have to deal with these problems when they come, in just a matter of a couple of years. So I'm having a harder time processing that.
Well, they each think that it's best if they're the ones in power when all this stuff happens. Part of the founding story for DeepMind was, wow, AGI is incredibly powerful. If it's misaligned, you know, it could possibly end the human race.
Also, someone could use it to become a dictator. Therefore, we should build it first, and we should make sure that we build it safely and responsibly. Part of the founding story for OpenAI was exactly that.
And you can go look at the email exchanges that came up in the court case between Elon and Sam to see how, even from the beginning, the leaders of these companies were talking about how they didn't want Demis to create an AGI dictatorship, and that's why they made OpenAI, to do it responsibly. But that implies that a company like OpenAI or DeepMind would want to have precisely an honest take on the scenarios going forward, out there as publicly as possible, to attract the resources to help ensure that the worst futures don't come. Yeah, I mean, that's what I think they should be doing.
We can speculate as to why they're not exactly doing that. Again, I think the answer is probably that they are really focused on winning and beating each other. Each of these CEOs thinks that the best person to be in charge of the first company to get to superintelligence is themselves.
Yeah. If you control the superintelligence, sure. But if you don't, that might be the worst person to be, right?
Like, if I don't control the superintelligence, I want to be as far away from that superintelligence as possible. I don't want to be the person that actually created it and is trying to control it when it actually controls me. That sounds like a bad position to be in.
My guess is that they don't think about it that way. My guess is that they think, well, if we lose control, then it sort of doesn't matter whether you're right there at the epicenter or off in Tanzania or something; the same fate will ultimately come for all of you. And then also my guess is that they've basically rationalized thinking that it's not as big of an issue.
For decades they've had people telling them, you need to invest more in alignment research, we need to make sure we actually control this sort of thing. Then they've been looking at their competition and at what they can do to avoid falling behind and to stay ahead and so forth.
And as a matter of resourcing the clear answer is well we have to focus mostly on winning. And so my guess is that they've partly rationalized why actually maybe this control issue isn't such a big deal after all. We'll sort of figure it out as we go along.
I imagine they each tell themselves, or at least many of them probably tell themselves, that they're more likely to keep control of their AIs than those other guys. You know, the thing that was most disturbing about your piece in many ways is the fact that for the next two or three years, the baseline scenario is that these companies are going to be right before they're wrong. They're going to become far, far wealthier and more powerful than they presently are.
And therefore they are going to continue to want to, to be incented to, reject your thesis right up until it's too late. Do you think that's right? Yeah.
Basically, one of the unfortunate situations that we're in as a species right now is that humanity in general mostly fixes problems after they happen. Mostly we watch the catastrophe unfold. We watch people die in car accidents, etc.
for a while, and then as a result of that sort of cold, hard experience we learn how to effectively fix those problems, both on the governance side with regulations and also on the technical engineering side. We didn't invent seat belts until after many people had died in car crashes, and so forth. Unfortunately, the problem of losing control of your army of superintelligences is a problem where we can't afford to wait and see how it goes and then fix it afterwards.
We have to get it right without it having gone wrong at all. Basically, we can experiment on weaker AI systems. We can look at the AIs of today and experiment on them and try to figure out how to make them safe and aligned and things like that.
But that's importantly different from having completely automated AI research and having the AIs get smarter and smarter every day without humans even understanding how they're getting smarter, right? That's a fundamentally different situation. And right now our plan is basically to hope that the techniques that we're using on the current AI systems will continue to work even as things really take off.
And in fact, they're not even working on current systems, right? You can go read about this, but current frontier AI systems like Claude and ChatGPT and so forth lie sometimes. I don't use that word loosely.
I mean, there is evidence that they know that what they're saying is false and that they're not actually helping the user and they're saying it anyway. And they're saying it for what purpose? For what programmed purpose?
What's the end goal that they are trying to achieve? So, first of all, they don't have programmed purposes because they're not programmed. These are artificial neural networks, not ordinary pieces of software.
So, however they behave is a learned behavior rather than something that some human being programmed into them. And so we can only speculate as to why they're behaving in this way. That said, the speculation would be that during their training, even though the training process was designed by humans who were attempting to train the AIs to be honest, in fact, the training process probably reinforced dishonest statements at least some of the time in some circumstances, right?
Just like how, even though you might have a school that attempts to punish students for cheating, if they're not so great at catching the cheating, if they're imperfect at catching it, then the cheating might still happen anyway. Especially the best cheating, the most effective cheating. It's like, you know, if you put an imperfect drug into a system, it'll get rid of the weaker viruses, but the stronger viruses will propagate.
And that's kind of what I see happening here. That's right. And so right now the training methods are sort of blatantly failing.
We're getting very obvious lies sometimes from the AI systems, even though we didn't want that at all and we were trying to train against it. In the future, I expect the rate of blatant, obvious misbehavior to go down. That leaves open the question of whether we have actually deeply aligned these AI systems, so that we can trust them to automate the AI research and design better systems without humans understanding what they're doing, or whether we have basically swept the problem under the rug and gotten rid of the obvious, blatant misalignments, but there are still ways in which they're inclined to deceive us without being caught.
Right? That's an example of the sort of consideration that we have to be thinking about on a technical level for whether or not this is safe. And part of the point that, of course, I and others have been making is that in the current race condition, where all these companies are focused on beating each other, that's not exactly setting us up for success on a technical level. And as we get closer to superintelligence, will we become aware of it as it's almost there, or is it the sort of situation, then, that I've seen discussed a lot, where the exponential factor is that it looks kind of stupid to us and then literally almost overnight
it is well beyond the imagination of the average human being? You're not at self-improvement, then self-improvement happens, and suddenly we can't do anything about it.
Is it flipping a switch? It's a quantitative question. So I think it's not going to happen literally overnight.
But to a first approximation, yes. To many people, and probably to most people, it's going to come as this big shock and surprise for the reasons you mentioned. I think people sort of underestimate exponentials. Obviously, there's a lot of uncertainty, and it could go faster or slower than AI 2027 depicts.
If what we're likely to need is a crisis, and I remember Sam Altman spoke about that a couple of years ago. He said we'll be better off if the crisis happens sooner rather than later, because then it'll be small enough that it won't destroy us, and we'll be more likely to be able to respond to it.
I don't know whether you think he was being honest about that or not, but analytically that seems right, in the sense that we can't afford for a crisis to be so great that it destroys humanity, but if we had a crisis with a really weak artificial intelligence now, nobody's going to pay attention to it. What's the kind of crisis over the next couple of years that would likely, or could potentially, shake corporates, governments, citizens into taking this far more seriously? So, there are all sorts of possible crises that could happen with AI prior to the full automation of AI R&D.
However, I don't think many of them are that likely. And then the ones that I do think are likely are probably not going to cause that huge of a reaction. So for example, there's a minor crisis happening right now, which is that the AIs lie all the time even though they were trained to be honest.
As you can tell, this crisis is clearly not motivating the companies to change their behavior that much, and it's not really causing a huge splash, right? On the bigger end of the spectrum, some people have talked about, you know, terrorists building bioweapons or something using AI.
And I think that's possible, and I really hope it doesn't happen, but I think it's probably not going to happen in the next few years. I'm not even sure to what extent there are terrorist groups attempting this sort of thing. And then if it does happen, it's not clear that the response would even be an appropriate response.
After all, the "terrorists building bioweapons with your AI" problem is a qualitatively different problem from the "you lose control of your AIs when they become smarter than you" problem. And it suggests different solutions, such as banning open source or heavily restricting who gets to use the models, things like that, which are helpful against the terrorists but not at all helpful against the loss-of-control issue. So, Daniel, where I was going, and you're right to raise those and say they're not going to be helpful.
I was going more towards the loss of control. Like, we're getting to an agentic AI capacity where people can use AI to do things, as opposed to just to learn things or to tell them things. What happens if some kid, some hacker, some whatever, creates millions and millions of bots to go out and do something like swing an election, or make a run on a market, sort of much worse than what we saw with GameStop, all through AI, an AI breakout essentially, with a bunch of agents that aren't just giving information but are actually taking actions? Is that plausible in the next year, year and a half, two years, before we get to superintelligence?
One of the remaining bottlenecks on getting to superintelligence: in AI 2027, we talk about a series of stages of capability milestones. And the first one is the superhuman coder milestone, and then after that they automate all of AI research and so forth.
Eventually they get to superintelligence. One of the reasons why we don't already have the superhuman coders is that our AIs are not very good agents. They need additional training to get good at being agents, and they might need other things as well.
The same reason why they're not automating coding is also a reason why they would fall on their faces if they were attempting to do something like that, right? If they're attempting to create this sort of agent botnet of AIs hacking around the world and influencing elections or whatever, I predict they would just not be able to do that until they can be competent programmers, if that makes sense. And so I just don't think there's going to be that sort of thing happening until after the intelligence explosion is already beginning, after the AIs are already starting to massively automate AI R&D.
So AI is fundamentally about small problems, a profusion of small problems we don't care about, and then a tipping point with massive problems that are too big for us to resolve. That's perhaps one way of putting it. Yeah.
Unfortunately, I think it is something that we need to prepare for in advance rather than just waiting to see what happens. Yeah. Because climate change is the opposite, right?
Right? I mean, climate change is a whole bunch of small problems that become bigger problems and then even bigger problems, in a very obvious way, to global actors everywhere, and over time that creates this requirement of devoting resources and response. AI is not that. From what you are saying, it really doesn't lend itself to the kinds of effective crisis response, or preemptive response, because some of climate is of course preemptive response, that is utterly necessary in the next few years. I think so, yeah, unfortunately. But hopefully people can have some foresight and start thinking about these problems before they happen, and take action to make them not happen in the first place. Okay. So given that, and I know you're not a policymaker, you are sort of an AI whiz, but you did write this paper, you are hoping to see action on the back of it.
What are a couple of things that, if they were to occur in the next year, would make you say, I actually feel a little better, my doomsday scenario is less likely to come to pass? Loads of things. Right now, the main thing I say when people ask me these questions is transparency.
What we should do is be trying to set ourselves up to make better decisions in the future when things start getting really intense. Information about what's going on inside these companies in general needs to flow faster and in more detail to the public and to the government. As an example of this, I would like to see regulation that requires companies to keep the public up to date about the exciting capabilities that they're developing.
So that, for example, if they do some experiments and manage to get AIs autonomously doing research within their company, well, that's the five-alarm-fire sort of thing they need to tell the world about, rather than doing what they might be tempted to do, which is to scale up the automated research happening within their company but not tell the world about it, at least for now, perhaps because they don't want to tip off their competitors, blah blah blah. That's the sort of thing the public deserves to know about if it's starting to happen. Similarly, other dangerous capabilities: right now some of these companies test for things like how good the AIs are at bioweapons, how good they are at cyber, etc. The public should be informed about the pace of progress in those domains, because the public deserves to know if AIs this year have crossed some sort of threshold where they could massively accelerate terrorists or whatever. Also, setting aside the safety concerns, it's important for the public to know what goals, what principles, what values the company is trying to train the AIs to have, so that the public can be assured that there aren't any secret agendas or biases that the company is putting into their AIs.
This is something that everyone should care about, even if you're not worried about loss of control. But it also helps with the loss-of-control angle, because if you have the company write up a model spec that says here are our intentions, here's what we're aiming for, and they're not doing that, then you know there's a gap, obviously. Yeah.
Yeah. Exactly. Then you can compare that to how the AIs actually behave, and you can see the ways in which the training techniques are not working.
Similarly, there should be safety cases, where the company says, here is our plan for getting the AIs to follow the spec, here's the type of training we're going to do, blah blah blah. Then that plan can be critiqued, and academics can say, this plan rests on the following assumptions, and we think these assumptions are false for these reasons, right? So the scientific community can get engaged in actually making progress on the technical problem that I mentioned. I thought it was interesting, when you wrote the scenario, that there was a point where the AIs were becoming so intelligent that the main company that had made the initial breakthrough decided it wasn't going to release certain versions of this AI to the public, because it would either scare people or be too dangerous.
Do you think that's actually likely? Fortunately, I think it's less likely now than I did six months ago. Ironically, the intense race dynamic between the companies is kind of pushing them into releasing stuff.
Release it all. Yeah. Yeah.
So, that's better, right? Because that means that more people will be aware when there's a problem. Exactly.
Exactly. So, ironically, it's kind of funny, but I've found myself in some ways kind of hoping that the situation will still be a very close race in two or three years, compared to before, when I would constantly talk about how, because of the race dynamics, nobody's going to prioritize actually solving these problems. I still think that because of the race dynamics, nobody's going to prioritize actually solving these problems.
But you want it to be a close race, you actually want it to be wide open. Then they'll be forced to not keep it a secret. And that gives broader society the knowledge they need to notice what's going on and then hopefully actually intervene and do something.
By contrast, if it was a not-so-close race, if you get something like the leading company being four months ahead of the follower company, which is sort of what we depict in AI 2027, then they can be tempted to keep a lot of stuff secret, because that's how they stay ahead and that's how they prevent their competitors from getting wind of what they're doing and so forth. That sort of secrecy is poison from the perspective of humanity. And let me be clear, ultimately we need to end this race dynamic.
Otherwise, we're not going to have solved the problems in time and some sort of catastrophe is going to happen along the lines of what's described in AI 2027. But in the meantime, I think more transparency is good, because it gives the public and the government the information they need to realize what's happening and then hopefully end the race. Absolutely.
Well, a lot for everyone to think about. Read the piece, AI 2027. You can find it online.
Daniel Kokotajlo, thanks so much for joining us. Yeah. Thank you.
[Music] That's it for today's edition of the GZERO World podcast. Do you like what you heard? Of course you do.
Why not make it official? Why don't you rate and review GZERO World? Five stars.
Only five stars, otherwise don't do it. On Apple, Spotify, or wherever you get your podcasts. Tell your friends.
The GZERO World podcast is brought to you by our lead sponsor, Prologis. Prologis helps businesses across the globe scale their supply chains with an expansive portfolio of logistics real estate and the only end-to-end solutions platform addressing the critical initiatives of global logistics today. Learn more at prologis.com.