So recently Sam Altman was interviewed by the Free Press, and it was a fascinating interview because, for the first time, he actually talked at length about superintelligence. Superintelligence is something we don't hear discussed very often, because it's usually seen as too far in the future to even imagine what it looks like, but recently there have been more and more claims around the industry, whispers so to speak, about superintelligence potentially arriving within a few thousand days. This article and interview are completely fascinating because not only do we get statements from Altman, we also get a statement from a former OpenAI researcher who gives us a little bit of insight into superintelligence as well. You can see right here the article is titled "Sam Altman: AI Is Integrated, Superintelligence Is Coming," and in it he talks about the interview and the general state of artificial intelligence in the world. Now, one of the things they actually asked Sam Altman about was the September manifesto.
If you're not familiar with the September manifesto, trust me, it's something you should read, because I'd say it's potentially the best look at how the future is going to unfold with all the rapid advancements in AI. It's not often that we get a blog post that is so detailed yet so simple to read and understand, one where we can truly grasp how society is going to move forward. What they're referring to is the post called "The Intelligence Age." It was published not that long ago, and I found it very insightful for looking into the future and understanding how society and everyone in it is going to be shaped by AI, AGI and, of course, ASI. One of the things he was asked here was to imagine what things will look like in 18 months, as we round out the summer of 2026: how does superintelligence emerge? He says you have to look at the rate of scientific progress, describing how advances might compound over the next few years.
This is really interesting, because when we look at superintelligence, one of the fields it will impact the most is scientific progress; that is the area where superintelligence is likely to have the biggest effect. You've got mathematics, which we know has been advancing rapidly with recent AI models: with test-time compute, the scores these models are now posting on benchmarks are simply incredible considering it's only the second iteration of the model. Then you've got science questions, which is going to be really fascinating for people who work in scientific fields, because when you have an AI you can essentially tell, "okay, I need you to go run this test or do that for me," that unlocks a remarkable amount of not only productivity but also scalability of research. I think one of the things people are discounting about AI is this: maybe the AI doesn't get to ten times smarter than us, but even if it doesn't, imagine you could, say, clone 10,000 scientific companies and tell them all to do research; imagine how quickly things could get done. Of course there are delays because of supply chains and physical constraints, but I still think pure software development is going to move at light speed, because there aren't any real bottlenecks to how quickly you can move. He actually talks about this in this short snippet right here:
"One thing that I use as my attempt at a mental framework for it is the rate of scientific progress. If the rate of scientific progress that's happening in the world as a whole tripled, maybe even 10x'd, if the discoveries and the technological progress that we used to expect to take 10 years happened every year, and then we compounded on that the next year and the next and the next, that to me would feel like superintelligence had arrived, and it would, I think, in many ways change the way that society and the economy work. What it won't change, and I think a lot of AI commentators get this wrong, is the deep fundamental human drives. In that sense, we've been through many technological revolutions before; the things we tend to care about and what drives all of us changed very little, or maybe not at all, through most of those, but the world in which we exist will change a lot."
Now, what's crazy about all of this, for those of you wondering when superintelligence is coming (many people are still asking when we'll get AGI), is that one of the key things he stated in his blog post, or the September manifesto as Forbes calls it, is that it is possible we could have superintelligence in a few thousand days. It may take longer, but he's confident we'll get there. That is super fascinating because "a few thousand days" is a very vague timeline and leaves a lot up to speculation, but he actually goes into a little more detail on the actual timeline. For those of you who wanted a date, I'm not sure if you remember the interview where he talks about it potentially being 3,500 days away, which is a little later than you might initially expect. But think about the level of impact that kind of technology would have, and about how discoveries compound on one another: the invention of the computer led to society being transformed, and then, because society became more connected, we got everything that followed. It really is fascinating when we try to picture how the future is going to look. From the interview: "In the essay you actually say a really big thing, which is that ASI, superintelligence, is actually thousands of days away." "Maybe. I mean, that's our hope, our guess, whatever." "But that's a very wild statement. Tell us about it; that's big, that is really big." "I can see a path where the work we are doing just keeps compounding, and the rate of progress we've made over the last three years continues for the next three or six or nine years or whatever (nine years would be about 3,500 days). If we can keep this rate of improvement, or even increase it, that system will be quite capable."
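Just to put rough numbers on that compounding idea, here's a quick back-of-the-envelope sketch. This is purely my own illustration, not anything from the interview: it converts the 3,500-day figure into calendar years and shows what an assumed 3x or 10x rate of progress would add up to if the rate itself kept compounding.

```python
# Back-of-the-envelope sketch (my own assumptions, not from the interview):
# convert the "3,500 days" figure into years, then see what "a decade of
# progress per year, compounding" stacks up to under assumed multipliers.

DAYS_PER_YEAR = 365.25

def days_to_years(days: float) -> float:
    """Convert a day count into calendar years."""
    return days / DAYS_PER_YEAR

def equivalent_progress(calendar_years: int, multiplier: float) -> float:
    """Years of 'old-rate' progress accumulated if each calendar year delivers
    `multiplier` years of progress and that rate itself compounds annually
    (a deliberately simple model of the compounding described above)."""
    total, rate = 0.0, multiplier
    for _ in range(calendar_years):
        total += rate
        rate *= multiplier  # the rate of progress compounds each year
    return total

if __name__ == "__main__":
    print(f"3,500 days is about {days_to_years(3500):.1f} years")
    for m in (3, 10):  # the 'tripled' and '10x' rates from the quote
        print(f"{m}x rate compounding over 5 years: "
              f"~{equivalent_progress(5, m):,.0f} years of old-rate progress")
```

Even at the lower multiplier, five calendar years of compounding works out to a few centuries of old-rate progress, which is why he frames that as superintelligence having effectively arrived.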
And for anyone who would call Sam Altman a speculator, or someone driving up hype to get more people to invest in his company: this is no longer something only Sam Altman is saying, or, for those paying attention to the wider AI community, something only Ilya Sutskever is saying. We also have to look at the broader community, because Logan Kilpatrick at Google AI is saying that a straight shot to ASI is looking more and more probable by the month, and that this is what Ilya saw. He says the success of scaling test-time compute, which is what Ilya saw early signs of, is a good indication that this direct path of just continuing to scale up might actually work. This is truly fascinating, because if you watch Ilya Sutskever's talk about superintelligence, he gives some key details on where we might head next. I'm astounded by this talk because it's so insightful, and I'm surprised we're getting all this information from these key industry leaders; his company is super, super secret at the moment, but he does give us a recent update on where he thinks things will head:
"We have our incredible language models and unbelievable chatbots, and they can even do things, but they're also kind of strangely unreliable: they get confused while also having dramatically superhuman performance on evals, so it's really unclear how to reconcile this. But eventually, sooner or later, the following will be achieved: those systems are actually going to be agentic in real ways, whereas right now the systems are not agents in any meaningful sense, just very, and that might be too strong, very slightly agentic, just beginning. They will actually reason. And by the way, I want to mention something about reasoning: a system that reasons, the more it reasons, the more unpredictable it becomes. All the deep learning that we've been used to is very predictable, because we've been working on replicating human intuition, essentially the gut feel. If you come back to the 0.1-second reaction time, the kind of processing we do in our brains, well, it's our intuition, and we've endowed our AIs with some of that intuition. But reasoning, and you're seeing early signs of this, reasoning is unpredictable, and one reason to see that is because the chess AIs, the really good ones, are unpredictable to the best human chess players. So we will have to be dealing with AI systems that are incredibly unpredictable. They will understand things from limited data. They will not get confused, all the things which are really big limitations today. I'm not saying how, by the way, and I'm not saying when; I'm saying that it will happen. And when all those things come together with self-awareness, because why not, self-awareness is useful, we ourselves are parts of our own world models, when all those things come together we will have systems of radically different qualities and properties than exist today. Of course they will have incredible and amazing capabilities, but the kind of issues that come up with systems like this I'll just leave as an exercise to imagine; it's very different from what we're used to. And I would say that it's definitely also impossible to predict the future; really, all kinds of stuff is possible." So that was a snippet from a longer 20-minute talk, and you can see that he basically says
superintelligence is going to be agentic, it's going to be able to reason, and it's going to be able to understand and be self-aware. Now, if you're wondering what superintelligence actually looks like when it arrives, I found a very interesting snippet from an ex-OpenAI researcher that gives us an insight into how the next 3,500 days might look, and this is truly fascinating because we don't often get insights from ex-OpenAI employees; this is one we should really pay attention to. "I personally would think that some combination of bigger models with more compute, and models that have been more specifically trained to operate in these types of environments, will succeed, and the question is just how long it will take to succeed. I think it could totally succeed in the next 12 months, but probably it will take a few more years than that, because everything always takes longer than you expect. That's one way of summarizing my view. I would imagine that if you could go to these companies and look at their optimistic timelines, it would be something like: within 12 months we will have something like the Claude 3.5 Sonnet computer-use agent, except it actually works really well and we can just delegate tasks to it and have it running autonomously in the background doing all sorts of useful things for us. I would bet they have a roadmap to try to get that this year, but realistically things take longer than you expect and there are going to be unforeseen difficulties and so forth. So where does the 2027 number come from? It's a combination of a bunch of different heuristics, trends, guesses, and so forth.
One thing I would say is that if you just do the obvious and very good thing of taking various benchmarks and extrapolating performance on them, some of them get to superhuman performance this year, some of them get to superhuman performance next year, but around 2027 is, in my subjective guess based on all the benchmark extrapolations I've seen, when it feels like I can say all the current benchmarks will be saturated."
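To make that kind of extrapolation concrete, here's a rough sketch of the simplest version of it. The yearly scores below are made-up placeholders rather than real benchmark data, and real forecasters fit fancier curves with uncertainty ranges, so treat this as a toy illustration of the method only.

```python
# Toy benchmark extrapolation (illustrative only; the scores are invented).
# Fit a straight line to yearly scores and solve for the year the trend
# crosses a chosen "saturation" threshold.

def linear_fit(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    return slope, mean_y - slope * mean_x

def saturation_year(years, scores, threshold=95.0):
    """Year at which the fitted linear trend reaches the threshold score."""
    slope, intercept = linear_fit(years, scores)
    return (threshold - intercept) / slope

if __name__ == "__main__":
    years = [2021, 2022, 2023, 2024]     # hypothetical release years
    scores = [30.0, 42.0, 54.0, 65.0]    # hypothetical benchmark scores (%)
    print(f"Trend crosses 95% around {saturation_year(years, scores):.1f}")
```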
So with that being said, what do you think about the future of superintelligence? Does it now seem closer, given that we've had many reputable industry figures actually talk about it as a reality, or does it seem like another pipe dream where we're all just in our own hype bubble? And of course, you can see here we have Gary Marcus, a notable AI critic. This is going to be really fascinating, because he's someone I think is very important to the AI industry; he helps keep everyone in check. He's basically asking: will AI be able to do these eight things by the end of 2027? Will it be able to watch a previously unseen mainstream movie without reading reviews, follow the plot, get the twists and know when to laugh, basically testing the long-form coherence of the model? I think that one's going to be fairly easy, considering we already have needle-in-a-haystack tests, so I'm not sure why it's on the list. Then he says, similar to the above: be able to read new mainstream novels without reading reviews and reliably answer questions about plot, character conflicts and motivations. Considering how large context windows are getting, I think that's going to be super simple.
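For anyone who hasn't seen one, here's a minimal sketch of what a needle-in-a-haystack test actually does. The `answer_fn` argument stands in for whatever model you would really query (that part is a hypothetical placeholder), and the filler text is a toy stand-in for a genuinely long document.

```python
import random

# Minimal needle-in-a-haystack sketch: bury one "needle" fact inside a long
# filler context, then check whether a model can retrieve it. answer_fn is a
# hypothetical placeholder for a real model call.

def build_haystack(needle: str, filler_sentences: int = 2000) -> str:
    filler = ["The sky was a pleasant shade of blue that afternoon."] * filler_sentences
    position = random.randrange(len(filler))
    filler.insert(position, needle)  # hide the needle at a random depth
    return " ".join(filler)

def run_needle_test(answer_fn) -> bool:
    needle = "The secret launch code is 4912."
    context = build_haystack(needle)
    question = "What is the secret launch code mentioned in the document?"
    answer = answer_fn(context, question)
    return "4912" in answer  # pass if the model surfaces the buried fact

if __name__ == "__main__":
    # Trivial stand-in "model" that just searches the text, so the sketch runs.
    def dummy_model(context: str, question: str) -> str:
        for sentence in context.split("."):
            if "launch code" in sentence:
                return sentence
        return "not found"

    print("passed" if run_needle_test(dummy_model) else "failed")
```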
Next: write engaging brief biographies and obituaries without obvious hallucinations. That one seems pretty easy too, so let's skim past it and look at the last ones, which are apparently the most difficult. With little or no human involvement, write Pulitzer-caliber books of fiction and non-fiction; I think that's going to be pretty difficult. And with little or no human involvement, write Oscar-caliber screenplays. I think if you had a model with some kind of framework where, let's say, it generates 10,000 screenplays, you have an audience of 20,000 AI agents view them, and you take the highest-rated one, essentially test-time compute for screenwriting, that could work. I know it sounds like a crazy framework, but if you had an AI agent audience that behaved like today's audiences, maybe one modelled on what real audiences like, I guess that could be something that could happen.
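Here's a rough sketch of what that generate-and-judge loop could look like, which is essentially best-of-N sampling scored by a panel of judges. The generator and the judges below are hypothetical placeholders standing in for a real screenplay model and real audience-preference models; the toy versions exist only so the sketch runs end to end.

```python
import random

# Best-of-N sketch: generate many candidates, score each with a panel of
# "audience" judges, and keep the highest-rated one. The generator and the
# judges are hypothetical placeholders for real models.

def best_of_n(generate_fn, judges, n: int = 100) -> str:
    best_script, best_score = "", float("-inf")
    for _ in range(n):
        script = generate_fn()
        # Average the panel's ratings to approximate audience reception.
        score = sum(judge(script) for judge in judges) / len(judges)
        if score > best_score:
            best_script, best_score = script, score
    return best_script

if __name__ == "__main__":
    # Toy stand-ins so this runs end to end.
    def toy_generator() -> str:
        return f"Scene {random.randint(1, 10**6)}: two rivals reconcile."

    toy_judges = [lambda s, seed=i: random.Random(hash(s) + seed).random()
                  for i in range(5)]

    print(best_of_n(toy_generator, toy_judges, n=20)[:80])
```

The trade-off here is just spending more test-time compute (more candidates, more judges) rather than waiting for a smarter base model, which is the idea behind the framework described above.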
And this one right here: with little or no human involvement, come up with paradigm-shifting, Nobel-caliber scientific discoveries. I think that one is pretty wild, because it would take a remarkable feat for AI to pull that off. And of course: take proofs from the mathematical literature written in natural language and convert them into a symbolic form suitable for symbolic verification. That's what he says in his newsletter, and of course Gary is a critic, so he's going to stake out these positions; it'll be interesting to come back in 2027 and see whether this stuff is done or not.
But at the end of the day, I want to pose a question to you: do you think this is going to get done, or not? And I will see you guys in the next video.