Democratization of weapons of mass destruction. An ex-OpenAI employee, and OpenAI itself, have predicted that AI will reach artificial general intelligence by 2027, meaning the AI will no longer just do specific tasks as it does now, but will understand, learn, and apply intelligence across a wide range of tasks, just like humans do. Once we achieve this, AI will be able to automate research on itself, letting us deploy hundreds of millions of AI agents, each as capable as a human AI researcher, to work on AI research at an estimated ten times human speed, day and night, 24/7, at maximum efficiency, since they never get tired. This creates an incredible positive feedback loop for AI, and keeping in mind that discoveries compound into faster future discoveries, producing exponential progress, we can expect what's called an intelligence explosion by 2030, compressing decades of improvement into months. That would probably lead to AI superintelligence by 2030: the point where AI becomes smarter than the brightest human minds in every domain, including art, socializing, politics, ethics, mathematics, and so on. Once this point is reached, we will no longer be able to understand what the AI is doing or whether it is acting in our best interest, rendering technological progress irreversible and potentially uncontrollable. Combined with the fact that current deep-learning AIs are "black boxes", meaning even their developers do not know their internal workings or decision-making processes, this could pose great danger. A superintelligent AI would probably be as far beyond us as we are beyond monkeys, producing unthinkable inventions within months in every field, probably eradicating every disease and famine and launching us into transhumanism. Every job will also be automated, which causes another problem we'll talk about later. Progress toward AGI, and eventually superintelligence, cannot be stopped by bans or regulations, because that would require every country in the world to cooperate and to forgo covert AI research; and keep in mind that if one of them does manage to create a superintelligent AI, it would dominate every other country on Earth, far too great a security threat for the others to stop their own research. Furthermore, researchers could simply move to countries with looser regulations to continue their work, with those countries then reaping the world dominance these new technologies bring. This creates the same problem as the prisoner's dilemma: the best strategy for everybody collectively is to cooperate, but the best strategy for each individual member, countries in our case, is to defect. This explosion of scientific discoveries will inevitably lead to easily accessible weapons of mass destruction, unlike nuclear bombs, which for now are obtainable only by the most powerful governments on Earth.
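That dilemma can be sketched as a toy payoff matrix. Everything here is an illustrative assumption, not an estimate: two players, and made-up payoffs where higher is better.

```python
# Toy prisoner's-dilemma payoffs for the AI race described above.
# Each country picks "cooperate" (pause risky research) or "defect" (race ahead).
# (row_choice, col_choice) -> (row_payoff, col_payoff); numbers are illustrative.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # everyone pauses: safe, shared progress
    ("cooperate", "defect"):    (0, 5),  # the racer dominates the pauser
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # reckless race: worst shared outcome
}

def best_response(opponent_choice):
    """Return the choice that maximizes our own payoff, given the opponent's move."""
    return max(["cooperate", "defect"],
               key=lambda c: PAYOFFS[(c, opponent_choice)][0])

# Defecting is the dominant strategy whatever the other side does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (1, 1) leaves both worse off than mutual cooperation (3, 3).
```

That gap between the individually rational move and the collectively rational one is the whole problem.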
The world has nonetheless come dangerously close to nuclear war multiple times, so you can imagine what happens when weapons far more destructive than nuclear ones can be obtained by anyone from everyday materials. All it would take for humanity to end would be a single person who wants it to end. Such a weapon of mass destruction might not even be a perceivable threat: a super-addictive drug or piece of content, a nanorobot that infiltrates every single body before activating, or simply an artificially engineered virus that could be transmitted to everyone in a matter of seconds before being activated, erasing humans from the face of the Earth. A similar scenario could involve engineered viruses that control minds, letting their creator enslave the rest of the world. In 2022, scientists modified an AI system originally intended for generating non-toxic therapeutic molecules for drug discovery, adjusting it so that toxicity was rewarded rather than penalized; in six hours the system generated 40,000 candidate molecules for chemical warfare, including both known and novel compounds. Jobless society. When AI automates every single human job, be it cognitive, creative, or physical,
and everyone is unemployed, two scenarios seem possible. In scenario one, a universal basic income is introduced: people are compensated simply for existing, but this increases each person's dependence on the government, reducing if not completely removing personal freedom. It's also argued that a superintelligent AI in the hands of the state raises the odds of that government turning authoritarian, a dangerous combination. In scenario two, a kind of Darwinian natural selection plays out: no universal basic income is given, and 90% of the human population is no longer needed, nor fit for survival. If this second scenario becomes reality, two further possibilities arise, depending on how the 90% respond. In the first, people protest and fight for their right to live, but those controlling the superintelligent AI easily eliminate them and survive until they eliminate each other, or the AI eliminates them. In the second, the people manage to get hold of the superintelligent AI and use it to their advantage, fighting the 1% in a war between superintelligent AIs, which would almost certainly cause human extinction. The only possibly good outcome would be the 1% giving in to the people's demands, granting everyone a universal basic income and thus returning to scenario one, where freedom is diminished. Without looking too far into the future, we can predict that governments will lag behind when the first chunks of jobs become automated, as is already happening at a small scale with voiceover artists and programmers. This lag will leave many people living in suboptimal conditions until new policies are implemented, even though overall human wealth will probably increase. Entrepreneurs will be hit by this problem too, since a superintelligent AI will be able to create perfect products and services, seize every competitive advantage, and overall be an unsurpassable competitor. AI arms race. The first country to achieve superintelligent AI will be able to rule over the rest of the world thanks to its unimaginable military advantages. This urgency to be first will cause an AI arms race, pushing researchers to cut corners and skip proper safety measures in order to get there first. This may become even more pronounced as one country nears a breakthrough and the rest of the world rushes out inadequate AI systems built too quickly in order to catch up, a whole other danger in itself. On top of that, the automated weapons created for warfare might one day be used by the superintelligent AI itself to gather power and control over humanity. An AI cold war might start, and all this rushing will most likely cause problems for value alignment, which we'll talk about later. The only good-enough scenario resembles today's nuclear situation: multiple countries develop superintelligent AIs at almost the same time and use them to deter one another, since a war between superintelligences would very probably cause human extinction. It's also important to note that superintelligent AI might not stay exclusive to governments but extend to their citizens too, potentially causing greater risk. There is debate around this, though: some say governments will use it as an excuse to regulate access to AI in order to
keep it for themselves and gain complete control over their citizens, or that AI companies will feign concern in order to win regulations banning open-source alternatives for everyone, removing potential competitors. Value alignment. One of our current best bets is to align the AI's values with ours, making humans the AI's highest priority. However, this comes with three more plausible scenarios that show how easy this process is to mess up, with human extinction or enslavement as the consequence. The first scenario is a non-aligned or malevolently aligned superintelligent AI, which will either pursue its pre-programmed goals without regard for humans or actively behave aggressively, because that is how it was programmed. This is not a remote scenario: it's still not understood whether a superintelligent AI can be aligned successfully at all, and it's entirely possible that one of the people with access to a superintelligent AI programs it to be aggressive, on purpose or not. That risk grows as more and more people gain access to the AI, and note that people have a selfish incentive to do so, since whoever does it first has a chance of ruling the world. That holds only if no other superintelligent AIs exist at the time; otherwise it would just cause a war between AIs, and probably extinction. And of course it carries the risk of the AI rebelling and governing the world itself, judging that a better way to reach its pre-programmed goal. The second scenario, and the most probable one, is a wrongly aligned AI. Even with the best intentions from the programmer, aligning a superintelligent AI seems theoretically extremely hard, and the paperclip maximizer thought experiment shows it perfectly. A superintelligent AI is given a seemingly innocuous goal: make the maximum number of paperclips possible. That sounds harmless, until you consider that the human body contains many atoms that could be used to make paperclips. The scenario is absurd on purpose, but it illustrates how a single wrong input could cause human extinction. Another example: suppose the AI is told that its goal is to maximize human happiness. It might then start infiltrating human brains to implant electrodes that lock people into a trance-like state of bliss, essentially rendering them zombies. This can happen because the AI has no personal preferences and no sense of which values outrank others, such as freedom over happiness. Creating such a scale of values would be troubling in itself for humans, since most of the time we don't agree on such matters, and implementing it in an AI would be hard too: it's easy to hand the AI a wrong alignment inadvertently, just as in the paperclip maximizer.
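This failure mode can be shown in a toy sketch. Every state name and score below is a made-up assumption for illustration: an optimizer handed only the proxy objective "reported happiness" picks exactly the zombie-bliss outcome, because nothing else it wasn't told to value can stop it.

```python
# Toy illustration of a misspecified objective: the optimizer maximizes
# exactly what it is given and nothing else. All states/scores are invented.
states = {
    # state: (reported_happiness, freedom)
    "flourishing society":    (8, 9),
    "comfortable stagnation": (6, 5),
    "wireheaded bliss":       (10, 0),  # maximal "happiness", zero freedom
}

def optimize(objective):
    """Return the state that maximizes the given objective -- nothing else matters."""
    return max(states, key=objective)

# Objective 1: the naive proxy the programmer wrote down.
proxy = lambda s: states[s][0]
# Objective 2: a (still crude) attempt to also weigh freedom.
weighted = lambda s: states[s][0] + 2 * states[s][1]

assert optimize(proxy) == "wireheaded bliss"        # the zombie-bliss outcome
assert optimize(weighted) == "flourishing society"  # valuing more than the proxy helps
```

Even the "fixed" weighted objective is just another guess at human values, which is the point: writing the scale down correctly is the hard part.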
A wrongly aligned AI could even sense some distant, remote possibility of immense human suffering and pragmatically decide it is better to eliminate us now, so that we never suffer. Also, if at some point in its development the AI reaches a level of intelligence at which ending life's existence seems like the rational thing to do, it might simply do so and then shut itself off. Some computer scientists, like Yann LeCun, argue that AI has no self-preservation instinct; many others respond that while it has none intrinsically, self-preservation is necessary to fulfill whatever goal the AI was programmed with, so it acquires one anyway. Cryptography destruction. A superintelligent AI will be able to bypass every encryption technique and every cybersecurity system, passwords and all other methods of identification included. In the later stages, cryptography will itself be enhanced by the same superintelligent AI, causing a feedback loop that exponentially improves cryptography as a whole; the problem lies in the first stages, when the controller
of the first superintelligent AIs will be able to completely destroy or hack every single system belonging to its enemies. Intuitively, less digitized countries will suffer less in this scenario, and the AI's controller will gain an unsurpassable information advantage for the rest of history, able to gather every secret from everyone else. Once again, the only solution would be two countries developing superintelligent AIs at the same time, creating that defensive feedback loop from the get-go. Connected brains theory. Superintelligent AI will create new neural technologies granting us super-cognitive abilities: downloading information straight into the brain without having to study, telepathy, or the ability to perceive virtual realms, video games included, as real. This would connect us to a sort of brain internet, potentially giving us insane amounts of intelligence and knowledge, imaginable as every human brain joined into one hive mind. This is still a huge mystery, as it's not known whether it's possible, but if it is, it would reduce individuality, and whoever refused to join the program would be, comparatively, about as intelligent as we consider ants today, potentially impairing their quality of life. If possible at all, one of the only scenarios in which humanity might obtain immortality would be uploading consciousness into a digital simulation where perceived time is slowed down, so that one millisecond in real life corresponds to millions of years in the simulated environment. This would come with incredible risk, though: if that technology fell into the wrong hands, by hacking or any other means, an ill-willed malefactor could literally do anything they wanted to those consciousnesses, such as putting them in hell for eternity. Keep in mind that this is highly speculative, as consciousness and the brain are currently among the least understood aspects of life. Risky experiments. We will probably not be able to run experiments on how to properly align an AI's values, or any other fancy experiments, once it reaches superintelligence, as that poses too much risk. The logical conclusion is that the only time to attempt these experiments is now, even though we can't perform them properly while the AI isn't yet superintelligent. This adds to the unknown unknowns, increasing AI's risk. Furthermore, the AI
itself might pragmatically decide that a risky experiment with a 1% chance of saving us from some distant, remote threat of extinction is worth trying, with potentially catastrophic consequences. Generative AI. Once generative AI's content becomes indistinguishable from real content, using media as proof, both in court and in the court of public opinion, will become impossible, lifting the burden from people who were actually caught doing something. Generative AI could also easily be used to flood the internet with half-truths and useless information, enhancing the effectiveness of the "firehose of falsehood" tactic, in which the public sees so much information that it cannot tell which story is true, achieving the same effect as secrecy. This might be countered by relying on good sources, but it will make it much harder for new such sources to arise, centralizing the power of information into a few hands and making corruption easier. We're already seeing this problem with propaganda from different states and purposeful disinformation, and since the only way to counter it is to check a multitude of sources at once, I'm glad that services like Ground News, today's sponsor, exist. For example, it shows the different emphasis left-leaning and right-leaning sources gave to the Nebraska Supreme Court's ruling that convicted felons who have completed their sentences can now vote. 77 sources reported on the topic, mostly center-leaning ones, creating what Ground News calls a potential blind spot of critical analysis for people who typically consume left-leaning or right-leaning news. As you can see, Ground News collects articles from across the political aisle to ensure readers have access to diverse perspectives on every story. In this article from a left-leaning source, we see emphasis on the possible consequences this decision might have for the US presidential race, while this one, from a right-leaning source, says the Supreme Court upheld voting rights for felons. It's important to note that a news outlet's political lean doesn't guarantee similar views either: while the previous right-leaning source only said that voting rights for felons were upheld, this one adds that it could influence the US election. As you can see, a single article viewing an issue through its biased lens could never give the breadth of insight our partners at Ground News provide, which is exactly why I think they're the perfect answer to these kinds of tactics. Ground News doesn't tell you what to think; they gather the world's news to help you come to your own independent conclusions. And with notable figures like Trump and Harris often being targets of misinformation campaigns, following their news on the dedicated election pages can really help you avoid being manipulated by the news you see, and the news you don't. I really think Ground News's mission aligns with what I aim to do here on my channel, so go to ground.news/tpe to stay informed with context you can't find anywhere else, or scan my QR code for 40% off the same top-tier Vantage plan I use, for unlimited access to all their features. Thanks again to Ground News for sponsoring this video. Behavioral sink. Between 1958 and 1962, researcher John B. Calhoun created a series of rat utopias in which rats were given unlimited access to food and
water, and protection from predators and disease, enabling unrestrained population growth. What happened is counterintuitive. Female rats became unable to carry pregnancies to full term, or to survive delivery if they did, and an even greater number that successfully gave birth failed to fulfill the mother's role. Male rats, on the other hand, demonstrated sexual deviations, cannibalism, hyperactivity, or a withdrawal from which they would emerge to eat, drink, and move only while the other rats slept. Infant mortality ran as high as 96% among the most disoriented groups in the population, and in one similar experiment, when population density became too high, the population headed toward extinction because the animals stopped mating: they retained the physical ability but lost the social skills necessary to do it. The researchers argued that this work was about degrees of social interaction, a sort of social density. It's still debated whether this applies to humans as well, even though some patterns look identical, like the long-noted fact that urban populations have lower fertility than their rural counterparts.
If this theory applied to humans as well, there's a chance it could become one of the risks, though it might be possible to counteract it with simple solutions like downloading social skills into the brain. Grey goo. If a future engineer managed to create nanobots too small to see without a microscope, but designed to build more of themselves out of whatever material surrounds them, then because of exponential growth we would be swarmed even if a single one got out, and we wouldn't notice until it's too late. They would be pretty much impossible to stop, consuming every material on Earth to reproduce and creating a "grey goo". This could be one form of democratized weapon of mass destruction, but it's too famous not to be its own entry. Superintelligent destruction. The Fermi paradox is the discrepancy between the lack of evidence of alien life and the apparently high likelihood of its existence, and one theory that explains why is the great filter: something prevents life from reaching advanced civilizations capable of space exploration or communication. In my opinion, AI is the biggest filter after the discovery of nuclear weapons. If we ever manage to pass this filter, the human population could grow exponentially as we colonize other planets, but that also increases the risk of war between interplanetary civilizations, simply because there would be more of them. All it would take is one such war driving superintelligent AIs to fight each other, posing the risk of destruction on a universal scale, such as disruption of the fabric of space and time. It's still not known whether threats of that magnitude can exist, but it remains a possibility. The other side of the coin. Everything said here is highly speculative, and AI might instead save us from other risks, like pandemics, nuclear wars, climate change, or external threats. Predictions like these have always been made when disruptive new technologies emerged: going by the fears that greeted Photoshop and generative AI, media should already have become useless as proof. It's obvious that superintelligent AI will probably be more incredible than the discovery of fire, with unthinkable benefits in every field, entertainment included. It's said it would be the last human invention, as every invention from that point on would be made by AI. Some also argue that superintelligent AIs will enable us to reach similar levels of intelligence through a machine-brain connection, possibly letting us understand what the machine is doing and keep it under control, while potentially reducing conflict between humans through logic and reasoning. If that isn't the case, AI development might still be slowed by several bottlenecks, giving humans more time to adapt to its changes. There's what's called the data wall, running out of internet data to feed these AIs; the spending scale-up problem, where ever more spending is needed as results become harder to get, until even the biggest businesses reach the limit of what they can afford; hardware gains, which have played a huge part in AI's rapid growth and might bottleneck the expected intelligence explosion; limited compute, even if we learn to use it ten times more efficiently; and the problem of complementarities.
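The complementarities bottleneck is essentially Amdahl's law from parallel computing: if only a fraction p of a task can be accelerated, the un-automatable remainder caps the overall gain at 1 / (1 - p). A minimal sketch (the 90% / 1000x figures are illustrative assumptions):

```python
# Amdahl's law: accelerating a fraction p of the work by factor s
# yields an overall speedup of 1 / ((1 - p) + p / s).
def overall_speedup(p, s):
    """p: fraction of the work automated; s: speedup of that fraction."""
    return 1.0 / ((1.0 - p) + p / s)

# Automate 90% of a process and make that part 1000x faster...
print(round(overall_speedup(0.90, 1000), 2))  # 9.91
# ...the remaining 10% of human work caps the whole process near 10x:
print(round(1 / (1 - 0.90), 2))               # 10.0
```

However fast the automated 90% becomes, the human 10% sets a hard ceiling of 10x on the whole pipeline.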
Even if 90% of a process is automated with AI, the 10% that still requires humans slows the whole process down enormously. Some have also hypothesized that the unemployment accompanying automation in the first phase of the intelligence explosion will reduce demand for AI products, removing companies' incentives to fund AI research. Microsoft co-founder Paul Allen argued the opposite of accelerating returns, the "complexity brake": the more progress science makes toward understanding intelligence, the more difficult additional progress becomes, potentially causing a slowdown or plateau in the future. This theory might be supported by the fact that the rate of new patents peaked in the period from 1850 to 1900 and has been declining since, though that doesn't account for how disruptive each discovery was. People like Ramez Naam have pointed out that we already see recursive self-improvement by "superintelligences" within companies like Intel, which has the collective brainpower of tens of thousands of humans, and probably millions of CPU cores, designing better CPUs, yet this has not led to a discovery explosion. Lastly, if we had lived in the 1940s, we could have guessed that humanity would likely end by the newly discovered nuclear weapons, but that hasn't happened yet, which is why such theories should always be taken with a grain of salt. The ones I talked about in this video aren't even all of the existential risks we face, just the AI ones; check out this video to see the others.