So Sam Altman basically said that everything is going to change, and he tweeted this tweet right here that says: "I always wanted to write a six-word story. Here it is: near the singularity; unclear which side." Now, to most people this might not have been the tweet that breaks the internet, but for us in the AI space it's a truly profound tweet, because number one, he's the CEO of the leading lab in artificial intelligence, and number two, the singularity is a very, very important concept that we need to understand, and when it does occur, things will change forever. The only problem with the singularity is that many people have a lot of trouble defining when it is going to occur. Now, if you don't know what the singularity is, or if this is your first time hearing the concept: it's a hypothetical future point where technological growth becomes uncontrollable and irreversible, leading to transformative changes in human civilization. The concept is often associated with the development of superintelligent AI that surpasses human capabilities, which could rapidly accelerate technological progress in ways that we can't predict or comprehend.
Now, the diagram you can see on screen essentially shows how this happens. As time moves along, human intellect stays pretty much the same; it increases slowly over time, with modern medicine and so on. But then we have the invention of the computer, and slowly but surely there comes a point where machine intelligence surpasses human intelligence and starts to rapidly accelerate, and that's where the intellectual-level curve on the graph goes completely vertical. This is something that many have theorized. Now, there are many different things that supposedly happen when the singularity arrives, and there's a lot of debate about this, but it's arguably the most important point in the future for us to understand, because after that point it's very hard to predict what happens. That's exactly why people call it the singularity: just as with a black hole in space, you don't really know what goes on beyond the event horizon.
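Just to make the shape of that curve concrete, here's a tiny toy model in Python. All of the numbers are invented; it's purely an illustration of why a compounding process eventually dwarfs a roughly linear one, which is the whole point of the diagram.

```python
# Toy sketch of the chart described above -- all numbers are invented.
# It only illustrates why a compounding process eventually goes "vertical"
# next to a roughly linear one.
human, machine = 100.0, 1.0             # arbitrary "capability" units
for step in range(1, 11):
    human += 5                          # slow, roughly linear human progress
    machine *= 7.5                      # compounding machine self-improvement
    note = "  <-- machine now ahead" if machine > human else ""
    print(f"step {step:2d}: human={human:7.1f}  machine={machine:16.1f}{note}")
```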
And when you start to truly understand that we're entering a period of time where the singularity could actually happen, I think we have to recognize that this is probably going to be the most important time, because we genuinely don't know what comes after it. Now, you might be thinking that nobody knows what occurs, but that's not exactly true, because some individuals, like Ray Kurzweil, have been trying to predict the rate of development with remarkable accuracy: things like AI, computing, and advanced technologies are exactly the kinds of developments Kurzweil has been forecasting for some time. He's a renowned American inventor, computer scientist, and futurist, known for pioneering advancements in pattern recognition technologies, AI, and human-computer interaction, and he has made significant contributions to the field. When it comes to his predictions about the singularity (and he's one of the most respected people on this topic), he predicts that the singularity will occur by 2045. So when we get statements from people like Sam Altman that clearly say "near the singularity," that's a wake-up call: if the future moment was 2045, we could think, okay, that's a long time away, we don't need to start worrying about it now; but then Sam Altman, the CEO of one of the leading labs, says, hey, we're actually near the singularity, and that's a really big wake-up call. One of Kurzweil's other key predictions is that we will get AGI by 2029, and as every year goes by, his predictions start to look a lot more grounded in reality as we see the technology develop.
It's really important that you understand what the singularity is, because it's something that pretty much changes everything. Here he is in an interview talking about 2045 (and of course, as the technology develops, that timeline may well change, though according to some sources his prediction accuracy is around 80 to 90%): "2045 is a little bit different than AGI. AGI is basically equivalent to one human, a very good human; it knows everything, but it's basically one human. We'll be able to multiply our intelligence about a millionfold by 2045, and that's what we call the singularity. It's not just becoming capable of being one human, but really having the knowledge of basically a million humans by 2045, and that's so difficult to grasp that we borrow this metaphor from physics and call it the singularity." Now, if you want to know some of the other wild things he thinks will happen at that time (and I'm going to get into more of Sam Altman's statement in just a second), you're going to want to watch this, because it's pretty crazy and hard to conceive of right now, but Ray Kurzweil has been predicting a lot of the things we're seeing today: "Nanobots will connect our brains to the cloud, just the way your phone does. It'll expand intelligence a millionfold by 2045; that is the singularity. We will be funnier, sexier, smarter, more creative, free from biological limitations. We'll be able to choose our appearance. We'll be able to do things we can't do today, like visualize objects in 11 dimensions. We'll be able to speak all languages. We'll be able to expand consciousness in ways we can barely imagine. We will experience richer culture with our extra years."
Now, Sam Altman actually followed up on his tweet, saying it was supposed to be about either (1) the simulation hypothesis, or (2) the impossibility of knowing when the critical moment in the takeoff actually occurs. I'm going to focus on point number two for a minute, because it's far more interesting. The point he's making is this: on the graph you can literally see a red line that goes all the way up, but the problem with the singularity is that we don't know when it occurs. Obviously, if a superintelligent AI just started doing incredible things, that would probably be the moment we'd think the singularity may have occurred, but it's very hard to pinpoint when that point is actually happening, and as Sam Altman says, there is the impossibility of knowing when the critical moment in the takeoff is occurring. Interestingly, Sam Altman has spoken about this before; back in 2023 he said, "short timelines and slow takeoff will be a pretty good call I think, but the way people define the start of the takeoff may make it seem otherwise." Essentially, what he's talking about here is how the takeoff actually occurs: we want a slow takeoff and a short timeline, because we don't want to be surprised by AI capabilities. He spoke about this in an interview (I say recently, but it was actually in 2023), where he explains why it's really important, for a variety of reasons that I'll get into in a moment.
"How far away do you think AGI is?" "I'll probably tell you sooner than you'd think. The closer we get, the harder time I have answering, because I think it's going to be much blurrier and much more of a gradual transition than people think. If you imagine a 2x2 matrix of short timelines until the AGI takeoff era begins versus long timelines until it begins, and then a slow takeoff or a fast takeoff, the world I think we're heading to, and the safest world, the one I most hope for, is the short-timeline, slow-takeoff one. But I think people are going to have hugely different opinions about when in there you declare victory on the AGI thing."
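To make the 2x2 he's describing easier to picture, here's a trivial sketch of the four quadrants. He only explicitly singles out "short timeline + slow takeoff"; the one-line glosses on the other three quadrants are my own paraphrase, not his words.

```python
# The 2x2 Altman describes: timeline-to-AGI x takeoff speed.
# He names "short timeline + slow takeoff" as the world he most hopes for;
# the glosses on the other three quadrants are my own shorthand, not his.
quadrants = {
    ("short timeline", "slow takeoff"): "the safest world, the one Altman says he hopes for",
    ("short timeline", "fast takeoff"): "AGI arrives soon AND capabilities jump suddenly",
    ("long timeline",  "slow takeoff"): "a long wait, then gradual change",
    ("long timeline",  "fast takeoff"): "a long wait, then a sudden jump",
}
for (timeline, takeoff), gloss in quadrants.items():
    print(f"{timeline:14} x {takeoff:12} -> {gloss}")
```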
Now, there's actually a blog post on LessWrong that explains this in a bit more detail, and like I said already, slow, continuous takeoff is safer than fast, discontinuous takeoff. Basically, for the transition from the world we have today to one with extremely powerful AI, we have to make sure we get there safely; you can't just drop an artificial superintelligence into the world, you have to get there gradually, in a way that makes sense. As the post puts it, to successfully navigate the transition to extremely powerful AI, we want AI safety and governance efforts to keep pace with AI capabilities, or ideally to exceed them, and compared to a fast, discontinuous takeoff, a slow, continuous takeoff seems much safer from this perspective. A fast takeoff is the scenario where we get rapid, almost discontinuous jumps in AI capabilities, and as I was saying, when you turn the crank and produce a new model, we really don't want to be largely surprised by the AI's capabilities. Ideally, we want to be able to predict those capabilities, have them be something we can control and, if needed, reliably constrain, and be able to forecast them for future models.
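As a concrete (and entirely made-up) illustration of what "not being surprised" could look like, here's a minimal sketch that extrapolates a trend from previous model generations and flags a new model whose measured capability lands far above the prediction. The benchmark scores and the threshold are invented for the example.

```python
import numpy as np

# Hypothetical benchmark scores for successive model generations (invented numbers).
past_scores = [22.0, 31.0, 40.5, 49.0, 58.5]

# Fit a simple linear trend to past generations and extrapolate one step ahead.
gens = np.arange(len(past_scores))
slope, intercept = np.polyfit(gens, past_scores, 1)
predicted_next = slope * len(past_scores) + intercept

measured_next = 91.0              # what the newest model actually scores (also invented)
surprise = measured_next - predicted_next

print(f"predicted ~{predicted_next:.1f}, measured {measured_next:.1f}, surprise {surprise:+.1f}")
if surprise > 10:                 # arbitrary threshold for a "discontinuous" jump
    print("capability jump is far above trend -- the kind of surprise a slow takeoff avoids")
```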
So the reason we don't want a fast takeoff is that it doesn't give society time to figure out how to respond to advanced capabilities, and to the destabilization that could come with them. If an advanced AI is suddenly able to do a billion different things, it's going to destabilize society. Many individuals have talked about what money even means once we get to the singularity; this is something I've personally covered for quite some time, talking about how this world is essentially changing, how there's going to be a point where AI does the majority of the work in the economy, and what life looks like for the average person then. If we look back 100 years, I'm pretty sure what counted as work then doesn't look much like the jobs we have right now. So the destabilization aspect is also very, very important (I've made tons of videos on that subject if you want to know more), and that's why capabilities need to move slower: if you get fired from your job, or certain industries get uprooted, you don't want that to happen overnight. You ideally want the transition to be fairly gradual, so people have time to change careers and figure out what businesses are going to be created, and companies have the ability to respond. A fast takeoff also doesn't give us the ability to coordinate once the danger is clearer and more imminent, and since these advanced systems are clearly going to be more capable and more dangerous, we need time to research the new capabilities and evaluate them thoroughly.
That's why, when Sam Altman talks about slow, continuous takeoff, he says it's going to be a pretty good call. These are the two timelines: in the short-timeline scenario, things are slow and continuous but happen relatively quickly, and I feel like that's what's happening right now. We're getting there quickly, but the capability jumps don't feel incredible, which is a good thing, because people and governments have time to react, we're able to conduct studies on how certain AIs are going to be integrated, and we're able to develop programs that are actually effective rather than having to radically implement new policies to help people navigate the new AI economy. This is, of course, the scenario we want, and it's the one I think we're in right now, because things are moving rapidly but also, in a sense, gradually.
The other scenario, which I don't think will occur, is the long-timeline one, where it takes a really long time to get to takeoff; but like I said, Sam Altman is saying the singularity is fairly close. OpenAI have actually spoken about why we need slow takeoff speeds: it gives us more time to figure out empirically how to solve the safety problem and how to adapt. One of the things they talk about is having less of a compute overhang, because if you have something like $7 trillion worth of compute sitting there and you then decide to go produce a new model, moving from, say, a $10 million training run to a billion-dollar training run, it's quite likely you'll see significantly more advanced capabilities come out of it, let alone out of a trillion-dollar training run, if that ever happens.
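To put rough numbers on the jump being described (these figures are just the ones mentioned here, not real training budgets), the arithmetic looks like this:

```python
# Rough orders of magnitude for the jump described above (illustrative only).
small_run = 10_000_000            # a $10M training run
big_run   = 1_000_000_000         # a $1B training run
overhang  = 7_000_000_000_000     # the ~$7 trillion compute figure mentioned above

print(f"$10M -> $1B is a {big_run / small_run:,.0f}x jump in training spend")
print(f"$1B -> $7T would be another {overhang / big_run:,.0f}x on top of that")
# The "compute overhang" worry: if that much capacity already exists before it is
# used, one decision to use it can produce a far bigger capability jump than
# anyone has had time to evaluate.
```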
Now, there are some other fascinating things going on here, because the second part of Sam Altman's tweet touches on something I haven't really explored much before, but it's a genuinely fascinating concept: the simulation hypothesis. Remember, he said the tweet was supposed to be about either the simulation hypothesis or the takeoff question. He doesn't go into much detail, but for those of you who don't know, the simulation hypothesis was proposed by the philosopher Nick Bostrom, and it suggests that our entire universe could be a computer simulation created by an advanced civilization. The core idea (and just listen to this, because it's completely wild, not in a bad way, but in an "I never thought about this before" way) is that if a civilization becomes advanced enough to create highly detailed simulated realities, it might create many simulations of its own past, and if that's possible, then statistically speaking we're more likely to be living in one of those many simulations than in the original base reality. As we develop more advanced AI and computing systems, we can't be certain whether we're truly experiencing these technological advancements for the first time or whether we're in a simulation recreating this historical period.
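The statistical step in that argument is really just a counting argument; here's a tiny sketch of it in Python (the number of simulations is obviously made up, the point is only how the probability behaves as it grows):

```python
# Bostrom-style counting argument reduced to arithmetic.
# If there is 1 base reality and N indistinguishable ancestor simulations, and you
# have no way to tell which one you're in, the chance you're in base reality is 1/(N+1).
for n_simulations in (0, 1, 10, 1_000, 1_000_000):
    p_base = 1 / (n_simulations + 1)
    print(f"{n_simulations:>9,} simulations -> P(base reality) = {p_base:.6f}")
```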
In the context of Sam Altman's tweet, the question is whether we're in a real universe approaching the singularity, or whether we're inside a simulation of that period, and that's why he said it's "unclear which side." Nick Bostrom himself has talked about how the period we're living in right now is a very interesting one, because over the next couple of years, maybe 10 to 15 years, the fate of humanity might potentially be decided, and he says this key juncture we're at is really, really important for the future. So it's going to be fascinating to see exactly what happens, and in his view it even lends some credence to the simulation hypothesis: "It is weird, because it looks like we are very close to some key juncture, which you might think is prima facie impossible. There have been thousands of generations before us, and if things go well there might be millions of generations after us, or people living for cosmic durations, and out of all of these people, that you and I should happen to find ourselves right next to this critical juncture where the whole future will be decided, it is striking. That seems to be what this model of the world implies, and maybe that is an indication that there is something slightly puzzling or impossible about it, that there are maybe some more aspects to understanding our situation than are reflected in this naive conception of the world and our position in it. And you might speculate about what that is; I mean, I have this earlier work on the simulation argument and stuff like that."
Now, interestingly enough, Sam Altman actually spoke about this at length on Lex Fridman's podcast, and it was a super interesting watch. What he talks about there is essentially how we can now easily create incredibly realistic virtual worlds, like in video games, and how easy it is to get completely immersed in them through VR headsets, and potentially at an even deeper level. In the clip he uses the square root function as a metaphor to explain a deeper idea: the square root of 4 equals 2, which is simple enough; the square root of 2 is 1.414..., which is already more complicated; and the square root of -1 is where you get imaginary numbers, which opens up a whole new world of mathematics. His main point is that sometimes simple things, like the square root function, can lead us to discover entirely new realms of reality that we didn't even know could exist. Just as the square root of -1 opened up the entire world of imaginary numbers, our ability to create increasingly realistic simulations might lead us to question the very nature of our own reality. He's basically suggesting that this pattern, where simple ideas lead to profound revelations about reality, might make more people take the simulation hypothesis more seriously than they did before, and considering he just tweeted about it, I'm guessing it's starting to look a lot more plausible to him.
"Given Sora's ability to generate simulated worlds, let me ask you a pothead question: does this increase your belief, if you ever had one, that we live in a simulation, maybe a simulated world generated by an AI system?" "Yeah, somewhat. I don't think that's the strongest piece of evidence. I think the fact that we can generate worlds should increase everyone's probability somewhat, or at least their openness to it somewhat. But I was certain we would be able to do something like Sora at some point; it happened faster than I thought, but I guess that was not a big update. The fact that you can generate worlds, and presumably it gets better and better, and they're novel, they're based on some aspect of the training data, but when you look at them they're novel, that makes you think: how easy is it to do this thing? How easy is it to create universes, entire video-game worlds that seem ultra-realistic and photorealistic, and then how easy is it to get lost in that world, first with a VR headset and then on a physics-based level? Someone said to me recently, and I thought it was a super profound insight, that there are these very simple-sounding but very psychedelic insights that exist sometimes. So the square root function: square root of four, no problem; square root of two, okay, now I have to think about this new kind of number. But once I come up with this easy idea of a square root function, which you can explain to a child and which exists just by looking at some simple geometry, then you can ask the question, what is the square root of negative one? And that is why it's a psychedelic thing: it tips you into some whole other kind of reality. You can come up with lots of other examples, but I think this idea that the lowly square root operator can offer such a profound insight and a new realm of knowledge applies in a lot of ways, and I think there are a lot of those operators for why people may think that any version they like of the simulation hypothesis is more likely than they thought before. But for me, the fact that Sora works is not in the top five."
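Here's the progression he's describing in a few lines of Python: math.sqrt refuses -1, which is exactly the point, while cmath steps into the "whole other kind of reality" of complex numbers.

```python
import math
import cmath

print(math.sqrt(4))    # 2.0 -- no problem
print(math.sqrt(2))    # 1.4142135623730951 -- messier, but still an ordinary number
# math.sqrt(-1) raises ValueError: math domain error -- the "old" number system gives up here
print(cmath.sqrt(-1))  # 1j -- the answer only exists in a whole new number system
```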
Now, interestingly enough, another person at OpenAI, Stephen McAleer, tweeted something rather interesting not long ago that relates to Sam Altman's tweet: he said that the universe is a computer. That's something I'll leave you with, because between him stating that the universe is a computer, Sam Altman suggesting we could potentially be in a simulation, and all the arguments surrounding it, I think it's genuinely fascinating. So with that being said, let me know what you think. Of course, some individuals, like Gary Marcus, think that Sam Altman is just hyping; Marcus wrote that Sam's ability to imply that we are near AGI without actually saying it or committing to it, despite all the people leaving his company, yada yada yada, makes him "an absolute legend; always be hyping." So, at the end of the day, it's going to be really interesting to see how this company plays out and whether or not the critics are right, but one thing I can say for sure: it will definitely be very interesting.