this is a story about stolen intelligence it's a long but necessary history about the deceptive illusions of AI about big Tech Silicon Valley goliaths against ordinary Davids it's about secret treasure chests and vast libraries of stolen work and the internet sleuths trying to solve one of the biggest heists in history and it's about what it means to be human to be creative to be free and what the end of humanity posthumanity transhumanity the apocalypse even could look like it's an investigation into what it means to steal to take to replace to colonize to
conquer along the way we'll learn what AI really is how it works and what it can teach us about our intelligence about ourselves as humans turning to some historical and philosophical Giants along the way the speed and tempo of modern living are increasing at an Ever accelerating rate without organization without system the result would be chaos because we have this idea that intelligence is this abstract Transcendent disembodied thing something unique and special but we'll see how intelligence is much more about the deep deep past and the Far Far Future something that reaches out
powerfully through bodies through people through infrastructure through the concrete empirical World Sundar Pichai the CEO of Google was reported to have claimed that AI is one of the most important things humanity is working on it's more profound than I don't know electricity or fire it sounds like the sort of hype you'd get from a big Tech CEO but we'll see how that might well be true artificial intelligence might just change everything dizzyingly quickly and like electricity and fire we need to find new ways of making sure the vast consequential and truly unprecedented change can be
used for good for everyone not for evil so we'll get to the future but it's important we start with the past intelligence knowledge brain mind cognition calculation thinking logic we often use these words interchangeably or at least with a lot of overlap when we drill down into what something like intelligence means we find surprisingly little agreement our control over a bewildering environment has been facilitated by new techniques of handling vast amounts of data at incredible speeds the tool which has made this possible is the high-speed digital computer operating with electronic Precision on great quantities
of information can machines be intelligent in the same way humans can will they surpass human intelligence what does it really mean to be intelligent when commenting on the first computers the media referred to them as electronic brains can be given the answer by an electronic brain in the giant electronic brain electrical machines recording Depot inventories keeping stock records recording repair parts levels these process huge amounts of data at a very high speed they save time and effort in Britain in the '50s a national debate was taking place around whether machines could think after all
a computer even in the '50s was in many ways already many times more intelligent than any human a calculator could make calculations quicker than any human the father of both the computer and AI Alan Turing contributed to the discussion in a BBC Radio broadcast in 1951 claiming that it's not altogether unreasonable to describe digital computers as brains this coincidence between computers AI intelligence brains sustained the idea that intelligence was one thing a thorough history would require including transistors electricity computers the Internet logic mathematics philosophy neurology Society is there any understanding of AI without these
things where does where could a history begin this impossible totality will echo through this history but there are two key historical moments we can start with the Turing test and the Dartmouth College conference Turing wrote his now famous paper Computing machinery and intelligence in 1950 it began with I propose to consider the question can machines think this should begin with definitions of the meaning of the terms machine and think Turing suggested a test that for a person who didn't know who or what they were conversing with if talking to a machine was indistinguishable from
talking to a human then it was intelligent ever since the conditions for a Turing test have been debated how long should the test last what sort of questions should be asked should it just be text based what about images audio one competition the Loebner Prize offered $100,000 to anyone who could pass the test in front of a panel of Judges as we pass through the next 70 years we'll begin to be able to ask the question has Turing's test been passed should I be afraid of you have you seen my Instagram I'm just plain
cute a few years later in 1955 one of the founding fathers of AI John McCarthy and his colleagues proposed a summer research project to debate the question of thinking machines when deciding on a name McCarthy landed on a term artificial intelligence in the proposal for the Summer Conference they wrote an attempt will be made to find how to make machines use language form abstractions and Concepts solve kinds of problems now reserved for humans and improve themselves the aim of the conference was to discuss questions like could machines self-improve how neurons in the brain
could be arranged to form ideas and to discuss topics like creativity and Randomness or to contribute to research on thinking machines the conference was attended by at least 29 well-known figures including the mathematician John Nash famed for his contributions to Game Theory played by Russell Crowe in the film A Beautiful Mind he saw the world in ways that no one could imagine along with Turing's paper the Dartmouth College conference was a foundational moment marking the beginning of AI's history but there were already difficulties that anticipated problems the field would face to this day many
bemoan the artificial part of the name McCarthy chose does calling it artificial intelligence not limit what we mean by intelligence what makes something artificial what if the foundations are not artificial but the same as human intelligence what if machines surpass human intelligence there were already some suggestions that the answer to these questions might not be technological but philosophical because despite machines in some ways being more intelligent making faster calculations fewer mistakes it was clear that this alone didn't account for what we tend to call intelligence something seemed to be missing the first approach to AI
one that dominated the first few Decades of research came to be called the symbolic approach the idea was that intelligence could be modeled symbolically by imitating or coding a digital replica of for example the human mind well on this enlarged drawing most of the nerve impulses enter here from the spinal cord uh-huh and they pass through here to this relay Center at the base of the brain and the impulses go on upward to their destination if the mind has a movement area say you code a movement area an emotional area a calculating area a sight
area and so on symbolic approaches essentially aim to make maps of the real world in the digital world if the world can be represented symbolically AI could then be approached logically for example you could symbolize the room kitchen in code symbolize a state of the kitchen as clean or dirty then program a robot to logically approach the environment if the kitchen is dirty then clean the kitchen McCarthy a proponent of this approach wrote the idea is that an agent can represent knowledge of its world its goals and the current situation by sentences in logic and
decide what to do by deducing that a certain action or course of action is appropriate to achieve its goals it made sense because both humans and computers seem to work in the same way take the following example if the traffic light is red then stop the car if hungry then eat if tired then sleep the appeal of this kind of thinking to computer programmers was that approaching intelligence in this way lined up with binary the root of computing that a transistor can be on or off or one or a zero true or false red traffic
light on is either true or false one or zero it's a binary logical question if on then stop it seems intuitive and so building a symbolic virtual logical picture of the world in computers quickly became the most influential approach in his brief history of AI computer scientist Michael Wooldridge writes that this was because it makes everything so pure the whole problem of building an intelligent system is reduced to one of constructing a logical description of what the robot should do and such a system is transparent to understand why it did something we can just look
at its beliefs and its reasoning but a problem quickly emerged knowledge turned out to be far too complex to be represented neatly by these logical simple true false on off one-zero if-then rules one reason is the shades of uncertainty if hungry then eat is not exactly true or false there's a gradient of hunger but another problem was that calculating what to do from these seemingly Simple Rules required much more knowledge and many more calculations than first assumed the computing power of the period couldn't keep up the tool which has made this possible is
the high-speed digital computer operating with electronic Precision on great quantities of information take this simple game the towers of Hanoi the object is to move the discs from the first to the last pole in the fewest number of moves without placing a larger disc on top of a smaller one we could symbolize the poles the discs and each possible move and the results of each possible move into the computer and then a rule for what to do depending on each possible location of each disc seems relatively simple but consider this with three discs this game
is solvable in seven moves for five discs it takes 31 moves for ten it's 1,023 moves but for 20 discs it takes 1,048,575 moves and for 64 discs I can't read out the number because if one disc was moved each second it would take almost 600 billion years to complete the game in AI this problem was named combinatorial explosion that as you increase the number of possible actions for each possible action the number of factors the possible combinations the complexity quickly becomes incomprehensibly vast and technologically impossible and the towers of Hanoi is
a simple game combinatorial explosion became even more of an issue with games like chess or go and a human problem like driving say is infinitely more complicated the red light is on or off but it might be broken it might have graffiti or snow on it it might have a slightly different shade of red pedestrians might walk out regardless of it being off a child might run across and that's just the first element of a vast environment The Impossible totality approaching AI in this way searching through every single bit of knowledge and looking for The
Logical outcome became known as naive exhaustive search that for each move the computer had to search through every possible scenario and every bit of information to decide what the best possible move was in robotics a similar approach was being taken and proving even more complicated Research into AI coincided with technological advances in other areas infrared sensors radars cameras microphones batteries and in 1971 Terry Winograd at MIT was developing a program called SHRDLU that aimed to symbolically model something he called Blocks World in this virtual world a user could ask the program
to manipulate the blocks in different ways person pick up a big red block computer okay person grasp the pyramid computer I don't understand which pyramid you mean person changing their mind find a block which is taller than the one you're holding on to and put it into the box computer okay in a simple constrained environment the program worked impressively well but a year later researchers at Stanford built a real life Blocks World robot we call him Shakey Shakey was a real robot that had bump sensors called cat's whiskers and Laser rangefinders to measure distance the
Shakey robotics team ran into similar problems as the towers of Hanoi problem the environment was much more complicated than it seemed the room had to be painted in a specific way for the sensors to even work properly the technology of the time just couldn't keep up and combinatorial explosion and the complexity of any environment became such a problem that the '70s and '80s saw what's now referred to as an AI winter by the '70s some were beginning to make the case that something was being left out knowledge the real world is not towers of
Hanoi games or robots and blocks knowledge about the world is Central however logic was still the key to analyzing that knowledge how could it be otherwise take this example if you want to know about animals say you need a database if animal gives milk then the animal is a mammal if animal has feathers then animal is bird if animals can fly and animal lays eggs then animal is bird if animal eats meat then animal is carnivore again this seems relatively simple but even an example as basic as this requires a zoologist say to provide the
information we all know that mammals are milk producing animals or that a cat is a mammal but there are thousands and thousands of species of mammal and a lot of specialist knowledge as a result this approach was named the expert systems approach and it led to one of the first big AI successes researchers at Stanford used this approach to work with doctors to make a system to diagnose blood diseases it used a combination of knowledge and logic if a blood test is X then perform y significantly the researchers realized that the application had to be
credible if professionals were ever going to trust and adopt it so MYCIN could show its work and explain the answers it gave the system was a breakthrough at first it proved to be as good as humans at diagnosing blood diseases another similar system called Dendral used the same approach to analyze the structure of chemicals Dendral used 17,500 rules provided by chemists both systems seem to prove that this type of expert knowledge approach could work the AI winter was over and importantly researchers began attracting Investments but once again developers encountered a new and serious problem the
MYCIN database very quickly became outdated in 1983 Edward Feigenbaum a researcher on the project wrote the knowledge is currently acquired in a very painstaking way that reminds one of cottage industries in which individual computer scientists work with individual experts in disciplines painstakingly in the decades to come we must have more automatic means for replacing what is currently a very tedious time-consuming and expensive procedure the problem of knowledge acquisition is the key bottleneck problem in artificial intelligence setting the machine for this kind of problem takes a week then the brain is switched on
because of this MYCIN was not widely adopted it proved expensive quickly obsolete legally questionable and difficult to establish with doctors widely enough logic was understandable but the collecting and the logistics of collecting lots and lots of knowledge was quickly becoming the obvious Central AI problem at a rate of 12,000 numbers or letters per second in the '80s influential computer scientist Douglas Lenat began a project that intended to solve this Lenat wrote no powerful formalism can obviate the need for a lot of knowledge by knowledge we don't just mean dry almanac-like or highly domain
specific facts rather most of what we need to know to get by in the real world is too much common sense to be included in reference books for example animals live for a single solid interval of time nothing can be in two places at once animals don't like pain perhaps the hardest truth to face one that AI has been trying to wriggle out of for 34 years is that there is probably no elegant effortless way to obtain this immense knowledge base rather the bulk of the effort must at least initially be manual entry of assertion
after assertion the goal of Lenat's Cyc project was to teach AI all of the knowledge we usually think of as obvious he said an object dropped on planet Earth will fall to the ground and that it will stop moving when it hits the ground but that an object dropped in space will not fall a plane that runs out of fuel will crash people tend to die in plane crashes it's dangerous to eat mushrooms you don't recognize red taps usually produce hot water while blue taps usually produce cold water and so on Lenat and his team
estimated that it would take 200 years of work and they set about laboriously entering half a million rules on taken for granted things like bread is a food or that Isaac Newton is dead but again they quickly ran into problems was that he wanted to make everything so that it was logically consistent so that you could reason about things you could make deductions and so on and knowledge is too complicated for that so I always knew that was the wrong idea some of the Cyc project's blind spots were illustrative of how strange knowledge could be
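Cyc's hand-entry approach can be sketched as a toy in code. This is a minimal illustration with invented facts and rules, not Cyc's actual representation language, but it shows the key property: anything not explicitly entered simply has no answer.

```python
# Toy knowledge base in the spirit of Cyc: every fact and rule is entered by hand.
# The facts and rules below are invented examples, echoing the ones in the text.
facts = {("bread", "is_food"), ("newton", "is_dead")}

# Hand-written common-sense rules: if all conditions hold, the conclusion holds.
rules = [
    (("is_food",), "is_edible"),
]

def known(subject, predicate):
    """Return True if the base supports the claim, None if it has no idea."""
    if (subject, predicate) in facts:
        return True
    for conditions, conclusion in rules:
        if conclusion == predicate and all((subject, c) in facts for c in conditions):
            return True
    return None  # not "false" -- the knowledge was simply never entered

print(known("bread", "is_edible"))  # True, deduced via the hand-written rule
print(known("bread", "is_drink"))   # None: obvious to a human, absent from the base
```

The second query is the whole difficulty in miniature: a hand-entered base only knows what someone thought to type in.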
in an early demonstration it didn't know whether bread was a drink or not or that the sky was blue whether the sea was wetter than land or whether siblings could be taller than one another these simple questions reveal something underappreciated about knowledge often we don't explicitly know something ourselves yet despite this the answer when revealed is laughably obvious we might not have ever thought about the question of is bread a drink or is it possible for one sibling to be taller than another when asked we implicitly intuitively often noncognitively just know the answer based on
other presumed factors this was a serious difficulty no matter how much knowledge you entered the ways that knowledge is understood how we think about questions the relationships between one piece of knowledge and another the connections we draw on are often ambiguous unclear even strange logic struggles with Nuance uncertainty probability it struggles with things we implicitly understand but also might find difficult to explicitly explain take one common example you'll find in AI handbooks Quakers are pacifists Republicans are not pacifists Nixon is a Republican and a Quaker is Nixon a pacifist or not a computer cannot answer
this logically with the information it has it sees a contradiction while a human might explain the problem with this in many different ways drawing on lots of different ideas uncertainty truthfulness complexity history War politics The Impossible totality the big question for proponents of expert-based knowledge systems like Cyc which still runs to this day is whether complexities can ever be accounted for with this knowledge based approach most intelligent questions aren't of the if-then yes-no one-zero binary sort of is a cat a mammal consider the question are taxes good it's of a radically different
kind than is a cat a mammal most questions rely on values depend on context definitions assumptions they are subjective Wooldridge writes the main difficulty was what became known as the knowledge elicitation problem put simply this is the problem of extracting Knowledge from Human experts and encoding it in the form of rules human experts often find it hard to articulate the expertise they have the fact that they're good at something does not mean that they can tell you how they actually do it and human experts it transpired were not necessarily all that eager to share their
expertise but Cyc was on the right path knowledge was obviously needed it was a question of how to get your hands on it how to digitize it and how to label and parse and analyze it as a result of this McCarthy's idea that logic was the center of intelligence fell out of favor the logic Centric approach was like saying a calculator is intelligent because it can perform calculations when it really doesn't know anything itself so more knowledge was the key the same was happening in robotics Australian roboticist Rodney Brooks an innovator in the field
was arguing that the issue with simulations like Blocks World was that it was simulated tightly controlled real intelligence didn't evolve in this way and so real knowledge had to come from the real world a human grows from a little tiny to great big the whole structure changes over time and as we've learned how to build the robot better our robots change and our programs since they built that way have adapted to the changes so in principle he argued that perhaps intelligence wasn't something that could be coded in but that it was an emergent property something
that emerges once all of the other components were in place that if artificial intelligence could be built up from everyday experience genuine intelligence might develop once other more basic conditions have been met in other words intelligence might be bottom up arising out of all of the parts rather than top down imparted from a central intelligent Point into all of the parts evolution for example is very bottom up slowly adding more and more complexity to single cell organisms until Consciousness and awareness emerges in the early '90s Brooks was the head of the AI lab at
MIT and railed against the idea that intelligence was a disembodied abstract thing rather than having this whole top down Master planning of how I'm going to do things the coupling happens out through the world there isn't any real Central controller or single brain that runs everything why could a machine beat any human at chess but not pick up a chess piece better than a 2-year-old child not only that the child moves the hand to pick up the piece automatically without any obvious complex computation going on in the brain in fact the brain doesn't seem to
have anything like a Central Command Center all of the parts seem to interact with one another more like a city spread out than like a single Pilot Flying a complicated plane intelligence was connected to the world not cut off ethereal Transcendent and Abstract Brooks worked on intelligence as embodied connected to its surroundings through sensors and cameras microphones arms and lasers the team built a robot called Cog it had thermal sensors and microphones but importantly no Central Command Point each part worked independently but interacted together they called it decentralized intelligence it was an Innovative approach but
could never quite work Brooks admitted that Cog lacked coherence before I begin let me go ahead and tell you about the machine I'm using today I'm using an Intergraph TDZ 2000 GX1 it has dual 450 MHz processors and 256 Megs of ram the resolution we'll be using today is 1024 and by the late '90s researchers were realizing that computer power still mattered in 1996 IBM's chess AI Deep Blue was beaten by Grandmaster Garry Kasparov Deep Blue was an expert knowledge system it was programmed with the help of professional chess players not just by calculating each
possible move but by including things like best opening moves Concepts like lines of attack or ideas like choosing moves based on pawn positions and so on but after its defeat IBM also threw more computing power at it Deep Blue could search through 200 million possible moves per second with its 500 processors it played Kasparov again in 1997 in a milestone for AI Deep Blue won at first Kasparov accused IBM of cheating and to this day maintains Foul Play of a sort in his book he recounts a fascinating episode in which a chess player working for
IBM admitted to Kasparov that every morning we had meetings with all the team the engineers communication people everybody a professional approach such as I never saw in my life all details were taken into account I'll tell you something that was very secret one day I said Kasparov speaks to Dokhoian after the games I would like to know what they say can we change the security guard and replace him with someone that speaks Russian the next day they changed the guy so I knew what they spoke about after the game in other words even with
500 processors and 200 million moves per second IBM may still have had to program in very specific knowledge about Kasparov himself by listening in on conversations this if maybe apocryphal is at least a premonition of things to come operation is really awe at the accumulated months and years of thinking by human problem solvers reflected back to us in 2014 Google announced it was acquiring a small relatively unknown four-year-old AI lab from the UK for $650 million Google is buying Deep Mind an artificial intelligence company for an unknown amount of money the acquisition sent shock waves
through the AI Community Deep Mind had done something that on the surface seemed quite simple beaten an old Atari game but how it did it was much more interesting new buzzwords began entering the mainstream machine learning deep learning neural Nets what those knowledge-based approaches to AI had found difficult was finding ways to successfully collect that knowledge MYCIN had quickly become obsolete Cyc missed things that most people find obvious entering the totality of human knowledge was impossible and besides an average human doesn't have to have all of that knowledge but still has the intelligence researchers were
trying to replicate and so researchers began pivoting to a new approach they asked if we can't teach machines everything how can we teach them to learn for themselves instead of starting from having as much knowledge as possible machine learning as it's called begins with a goal from that goal the machine acquires the knowledge it needs itself through trial and error these agents are playing hide-and-seek these agents have just begun learning but they've already learned to Chase and run away this is a hard world for a hider who has only learned to flee Wooldridge writes
the goal of machine learning is to have programs that can compute a desired output from a given input without being given an explicit recipe for how to do this at Google Deep Mind we've always loved games go chess even video games like Atari incredibly Deep Mind had built an AI that could learn to play and win not just one Atari game but many of them all on its own the machine learning premise that they adopted was relatively simple the AI was given the controls and a preference increase the score and then through trial and error
would try different actions and iterate or build on or expand what worked and avoid what didn't a human assistant could help by nudging it in the right direction if it did something wrong or did something right this is called reinforcement learning if a series of actions led to the AI losing a point it would register that as likely bad and vice versa then it would play the game thousands of times building on the patterns that worked and avoiding those that didn't what was incredible was that it didn't just
learn the game but quickly became better than humans it learned to play 29 out of 49 Atari games at a level better than a human and then it became superhuman this is the often demonstrated one it's called breakout move the paddle destroy the blocks with the ball to the developers' surprise the AI learned a technique that would get the ball to the top so that it would bounce around and destroy the blocks on its own without the AI having to do anything else this tactic was described as spontaneous independent and creative future of humanity the
latest emblem for existential dread is Google's Deep Mind project which created AlphaGo an AI program that's become unbeatable at the most complex strategy game on the planet next Deep Mind beat a human player at go commonly believed to be harder than chess and likely the most difficult game in the world go is deceptively simple you take turns to place a stone trying to block out more territory than your opponent while encircling their stones to get rid of them AlphaGo was trained on 160,000 top games and played over 30 million games itself before beating world
champion Lee Sedol in 2016 I think he resigned remember combinatorial explosion this was always a problem with go because there are so many possibilities it's impossible to calculate every possible move instead Deep Mind's method was based on sophisticated guessing around uncertainty it would calculate the chances of winning based on a move rather than calculating and playing through all of the possible future moves after each move the premise was that this is more how human intelligence works we scan contemplate a few moves ahead reject maybe and imagine a different move and so on and after 37 moves
in that match against Lee Sedol AlphaGo made a move that took Everyone by surprise none of the humans could understand it it was described as creative unique and beautiful as well as inhuman by the professional very surprising move I wasn't expecting that um I don't really know if it's a good or bad move at this point the professional commentators almost unanimously said that not a single human player would have chosen move 37 the victory made headlines around the world the age of machine learning had arrived the computer named Al in 1991 two scientists wrote the
neural network revolution has happened we're living in the aftermath you might have heard some new buzzwords thrown around neural nets deep learning machine learning I've come to believe that this revolution is probably the most historically consequential we'll go through as a species eventually it's fundamental to what's happening with AI so bear with me jump on board the neural pathway roller coaster buckle up and get those copses ready and we'll try and make this as pain-free as possible this is information the proper use of it can bring a new dignity to mankind high-speed electronic digital computers
it is the fast reliable and tireless performance of a variety of arithmetic and logical operations these are information machines capable of storing processing and relating a vast quantity of information remember that symbolic approach we talked about it tried to make a kind of one to one map of the world and base Artificial Intelligence on it but instead machine learning learns itself through trial and error today AI mostly does this using neural networks neural networks are composed of node layers neural networks are revolutionizing the way we think about not just AI but human intelligence
too they're based on the premise that what matters are connections patterns Pathways artificial neural networks or ANNs are inspired by biological neural networks in the brain both in the brain and in artificial neural networks there are basic building blocks neurons or nodes these are layered and there are connections between them each neuron can activate the next the more neurons that are activated the stronger the activation of the next connected neuron and if that neuron is firing strongly enough it will pass a threshold and fire the next one and so on and so on
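The firing-and-threshold mechanism just described can be sketched numerically: a node sums its weighted inputs and fires only if the total passes a threshold. The weights and threshold below are made-up illustrative numbers, not taken from any real network.

```python
# One artificial neuron: weighted sum of inputs, fire if it passes a threshold.
def neuron(inputs, weights, threshold=1.0):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two upstream neurons firing together push this one over its threshold...
print(neuron([1, 1], [0.6, 0.6]))  # 1: combined activation 1.2 passes 1.0
# ...but a single weak input falls short, so the neuron stays silent.
print(neuron([1, 0], [0.6, 0.6]))  # 0: activation 0.6 is below 1.0
```

Layer such nodes and let each one feed the next and you have the chain of activations the narration describes.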
billions of times in this way intelligence can make predictions based on past experiences this is important I think of neural nets in the brain and artificially as something like commonly traveled Pathways the more the neurons fire together the more successfully they do so the more their connections strengthen and the more they're likely to repeat hence the phrase those that fire together wire together so how are these used in AI well first you need a lot of data and you can do this in two different ways first you can feed a neural network a lot of
data like adding in thousands of professional go or chess games or you can play games for example over and over on many different computers at the same time Peter Whidden has this video that shows an AI playing 20,000 games of Pokémon at once so once you have a lot of data the next job is to find patterns if you know a pattern you might be able to predict what comes next ChatGPT and others like it are based on large language models LLMs meaning they're neural networks trained on lots of text and I mean
a lot chat GPT was trained on around 300 billion words of text and if you're thinking whose words where are these words from you might be on to something that we'll get back to shortly the cat sat on the now if you thought of mat automatically there then you have some intuitive idea of how large language models work because in 300 billion words and sentences and paragraphs of text that pattern comes up a lot chat GPT can predict that mat is what should come next but what if I say the cat sat on the elephant
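that mat prediction boils down to counting which word tends to follow a pattern, here is a minimal sketch in Python assuming a tiny made-up corpus (real large language models use neural networks over billions of words rather than raw counts, so this only shows the pattern-to-prediction idea):

```python
from collections import Counter

# tiny made-up corpus standing in for 300 billion words of training text
corpus = ("the cat sat on the mat the dog sat on the mat "
          "the cat sat on the rug").split()

def predict_next(context):
    """count which words follow `context` anywhere in the corpus"""
    n = len(context)
    return Counter(
        corpus[i + n]
        for i in range(len(corpus) - n)
        if corpus[i:i + n] == context
    )

print(predict_next(["sat", "on", "the"]))  # Counter({'mat': 2, 'rug': 1})
```

notice that mat comes out on top but rug still scores a little, which is why a model can switch to a different answer when you ask again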
well remember that one of the problems that previous approaches ran into was that not all knowledge is binary on or off one or zero true or false not all knowledge is like if animal gives milk then it's a mammal neural networks are particularly powerful because they can avoid this problem and can instead work with probabilities ambiguity and uncertainty neural net nodes remember have strengths all of these neurons fire and so mat fires but all of these other neurons still fire a little bit if I ask for another random example it can switch up to elephant if
a large language model is looking for patterns after the words heads or tails the successive nodes are going to be pretty evenly split 50/50 between heads and tails if I ask are taxes good it's going to see there are different arguments and can draw from all of them depending on how you ask the question and depending on how it's trained but Crawford puts it like this they started using statistical models that focus more on how often words appeared in relation to one another rather than trying to teach computers a rules-based approach using grammatical principles
or linguistic features the same applies to images how do you teach a computer that an image of an a is an A or a nine is a nine because every example is slightly different sometimes they're in photos on signposts written scribbled at strange angles in different shades with imperfections upside down even if you feed the neural net millions of drawings photos designs of a nine it can learn which minute patterns repeat until it can recognize a nine on its own the problem is that you need a lot of examples in fact this is what you're
doing when you fill in those reCAPTCHAs you're helping Google train its AI if you want to learn more about this and I really recommend it there are some sources in the description but this video by 3Blue1Brown on training numbers and letters is particularly good well when you or I recognize digits we piece together various components a nine has a loop up top and a line on the right an eight pairs that loop with another something entirely different Nvidia developer Tim Dettmers describes deep learning like this first take some data second train a model on that data
and third use the trained model to make predictions on new data the neural network revolution has some groundbreaking ramifications first intelligence isn't this abstract transcendental ethereal thing it's the connections between things that matter and those connections allow us and AI to predict the next move we'll get back to this but second machine learning researchers were realizing for this to work they needed a lot of knowledge a lot of data it was no use getting chemists and blood diagnostic experts to come into the lab once a month and laboriously type in their latest research plus that would be
wildly expensive in 2017 an artificial neural network could have around 1 million nodes the human brain has around a 100 billion a bee has about 1 million too and a bee is pretty intelligent but one company was about to smash past that record surpassing humans even as they went it's the iPhone you love now with video just turn on InPrivate Browsing in Internet Explorer 8 do your thing click click and no one knows what you've been up to your secret's safe by the 2010s fast internet was rolling out across the world phones with cameras
were in everyone's pockets a new media and a tidal wave of information broadcast on anything anyone wanted to know we were stepping into the age of big data and AI was about to become a teenager Larry Page and I used to be close friends and I would stay at his house in Palo Alto and I would talk there's a story likely apocryphal that Google founder Larry Page called Elon Musk a speciesist because Musk preferred to protect human life over other forms of life privileged human life over potential artificial super intelligent life that if AI becomes better
and more important than humans then there's really no reason to prioritize privilege or protect humans at all maybe the robots really should take over musk claims that this caused him to worry about the future of artificial intelligence research especially as Google after acquiring DeepMind was at the forefront and so despite being a multi-billion dollar corporate businessman himself musk became concerned that AI was being developed behind the closed doors of multi-billion dollar corporate businessmen [Music] in 2015 he started open AI the goal to develop the first general artificial intelligence in a safe open and humane
way AI was getting very good at performing narrow tasks Google translate social media algorithms GPS navigation scientific research chatbots and even calculators and is referred to as narrow artificial intelligence narrow AI which also goes by the rather unflattering name of weak AI now narrow AI has been something of a quiet revolution it's already slowly creepingly and pervasively everywhere there are over 30 million robots in our homes already and over 3 million in factories soon everything will be infused with narrow AI from your kettle and your lawn mower to your door knobs and shoes
the purpose of open AI was to pursue that more general artificial intelligence what we think of when we see AI in movies intelligence that can cross between tasks do unexpected creative things and act broadly like a human does AI researcher Luke Muehlhauser describes artificial general intelligence or AGI as it's known as quote the capacity for efficient cross-domain optimization or the ability to transfer learning from one domain to other domains with donations from titan Silicon Valley venture capitalists like Peter Thiel and Sam Altman open AI started as a nonprofit with a focus on transparency openness
and in its own founding charter's words to build value for everyone rather than shareholders it promised to publish its studies and share its patents and more than anything else focus on humanity the team began looking at all the current trends in AI but they quickly realized that they had a serious problem the best approach neural nets and deep machine learning required a lot of data a lot of servers and importantly a lot of computing power this was something their main rival Google had plenty of if they had any hope of keeping up with the wealthy
big tech corporations they'd unavoidably need more money than they had as a nonprofit by 2017 open AI decided it would stick to its original mission but needed to restructure as a for-profit in part to raise capital they decided on a capped profit structure with a hundredfold limit on returns for investors to be overseen by the nonprofit board whose values were meant to be aligned with that original mission rather than with shareholder value open AI said in a statement we anticipate needing to marshal substantial resources to fulfill our mission but will always diligently act
to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit for open AI the decision paid off on February the 14th 2019 open AI announced it had a model that could produce written articles on any subject and those articles apparently were indistinguishable from human writing however they claimed it was too dangerous to release at first it was assumed to be a publicity stunt too dangerous but in 2022 they released chat GPT a large language model that seemed to be able to pass at least in part the Turing test you could
ask it anything it could write anything it could do it in different styles it could pass many exams and by the time it got to GPT-4 it could pass SATs the law school bar exam biology high school maths the sommelier and medical license exams in some cases with flying colors AI chatbot chat GPT is now the fastest growing consumer app in history that's according to analysis by Swiss bank UBS chat GPT attracted a million users in 5 days and by the end of 2023 it had 180 million users setting the record for
the fastest growing business by users in history in January of 2023 Microsoft made a multi-billion dollar investment in open AI giving it access to Microsoft's vast network of servers and computing power and Microsoft began embedding chat GPT into windows and Bing but open AI has suspiciously become closed AI and some began asking how did chat GPT know so much that wasn't exactly available free and open on the legal internet a dichotomy was emerging between open and closed between transparency and opaqueness between many and one democracy and profit understanding the dollar it has some interesting similarities to the dichotomy we've seen in AI research from the beginning between intelligence as something singular transcendent abstract ethereal almost and as it being everywhere worldly open connected and embodied running through the entirety of human experience running through society the world and even the universe when journalist Karen Hao visited open AI she said there was a misalignment between what the company publicly espouses and how it operates behind closed doors they've moved away from the belief that openness is the best approach and now as we'll see they believe secrecy is required then he must provide
the machine with all pertinent background information and related data for almost all of human history data or information has been both a driving force and relatively scarce the scientific revolution and the enlightenment accelerated the idea that knowledge should and could be acquired both for its own sake and to make use of to innovate and invent to advance and progress us we're riding on the internet cyberspace set free of course the internet has always been about data but AI accelerated an older trend one that goes back to the enlightenment to the scientific revolution to the agricultural
Revolution even maybe even the linguistic one that more data was the key to better predictions predictions about chemistry physics mathematics weather animals people that if you plant a seed it tends to grow if you have enough data and enough computing power you can find obscure patterns that aren't always obvious to The Limited senses and cognition of a human and once you know patterns you can make predictions about when those patterns could or should reoccur in the future more data more patterns better predictions this is why the history of AI and the internet are so closely
aligned and in fact part of the same process it's also why both are so intimately linked to the military and to surveillance [Music] DARPA shaping the future creating opportunities for new capabilities the internet was initially a military project the US defense advanced research projects agency or DARPA realized that surveillance reconnaissance information data was key to winning the Cold War spy satellites nuclear warhead detection Viet Cong counterinsurgency troop movements light aircraft for silent surveillance bugs and cameras all of it to extract collect and analyze data in 1950 a Time magazine cover imagined a thinking machine
as a naval officer 5 years earlier before computers had even been invented famed engineer Vannevar Bush wrote about his concerns that scientific advances seemed to be linked to the military linked to destruction and instead he conceived of machines that could share human knowledge for good he predicted that the entirety of the Encyclopaedia Britannica could be reduced to the size of a matchbox and that we'd have cameras that could record store and share experiments but the generals believed they had more pressing concerns World War II had been fought with a vast number of rockets and now
that nuclear war was a possibility these rockets had to be detected and tracked so that their trajectory could be calculated and they could be shot down as technology got better and rocket ranges increased this information needed to be shared across long distances quickly this impossible totality of data needed collecting sharing and analyzing so that the correct predictions could be made the result was the internet going surfing on the internet and ever since the appetite for data to predict has only grown and the problem has always been how to collect it but by the 2010s with
the rise of high-speed internet phones and social media vast numbers of people across the globe were uploading terabytes of data about themselves willingly for the first time all of it could be collected to make better predictions philosopher Shoshana Zuboff calls the appetite for data to make predictions the right to the future tense data was becoming so important in every area that many have referred to it as the new oil a natural resource untapped unrefined but powerful Zuboff writes that surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data before this
age of big data as we've seen AI researchers were struggling to find ways to extract knowledge effectively IBM scanned their own technical manuals universities used government documents and press releases a project at Brown University in 1961 painstakingly compiled a million words from newspapers and any books they could find lying around including titles like the family fallout shelter and who rules the marriage bed one researcher recalled back in those days you couldn't even find a million words in computer readable text very easily and we looked all over the place for text as technology improved
so did the methods of data collection in the early 90s the government FERET program that's facial recognition technology collected mug shots of suspects captured at airports George Mason University began a project photographing people over several years in different styles under different lighting conditions with different backgrounds and clothes all of them of course gave their consent but one researcher set up a camera on campus and took photos of over 1,700 unsuspecting students to train his own facial recognition program others pulled thousands of images from public webcams in places like cafes and by the 2000s the idea
of consent itself seemed to be changing the internet meant that masses of images text music and video could be harvested up and used for the first time back in 2001 Google's Larry Page said that sensors are really cheap storage is cheap cameras are cheap people will generate enormous amounts of data everything you've ever heard or seen or experienced will become searchable your whole life will be searchable in 2007 computer scientist Fei-Fei Li began a project called ImageNet that aimed to use neural networks and deep learning to predict what an image was she said we
decided we wanted to do something that was completely historically unprecedented we're going to map out the entire world of objects in 2009 the researchers realized that the latest estimations put a number of more than three billion photos on Flickr a similar number of video clips on YouTube and an even larger number for images in the Google image search database they scooped up over 14 million images and used low-wage workers to label them as everything from apples and airplanes to alcoholics and hookers by 2019 350 million photographs were being uploaded to Facebook every day still running
ImageNet has organized around 14 million images into over 22,000 categories as people began voluntarily uploading their lives onto the internet the data problem was solving itself Clearview AI made use of the fact that profile photos are displayed publicly next to names to create a facial recognition system that could recognize anyone in the street Crawford writes gone was the need to stage photo shoots using multiple lighting conditions controlled parameters and devices to position the face now there were millions of selfies in every possible light condition position and depth of field we now generate an estimated
2.5 quintillion bytes of data every day if printed that would be enough paper to circle the Earth once every 4 days and all of this is integral to the development of AI the more data the better the more supply routes in Zuboff's phrase the better sensors and watches picking up sweat levels and hormones and wobbles in your voice microphones in the kitchen that can hear the fridge opening and kettle schedules and cameras on doorbells that could monitor the weather and even guests in the UK the National Health Service has given 1.6 million patient records to
Google's DeepMind digital computer operating with electronic precision on great quantities of information there are many indications that data processing systems have permeated our society private companies the military and the state are all engaged in data extraction for prediction the NSA has a program called treasure map that aims to map the physical locations of everyone on the internet at any one time the Belgrade police force uses 4,000 cameras provided by Huawei to track residents across the city project Maven is a collaboration between the US military and Google which uses AI and drone footage to track
targets Vigilant uses AI to track license plates and sells the data to banks to repossess cars and police to find suspects Amazon uses its ring doorbell footage and classifies it into categories like suspicious or crime and health insurance companies try to force customers to wear activity tracking watches so that they can track and predict what their liability will be Peter Thiel's Palantir is a security company that scours company employees' emails call logs social media posts physical movements even purchases to look for patterns Bloomberg called it an intelligence platform designed for the global war on terror
being weaponized against ordinary Americans at home and a Google Street View engineer said in 2012 we are building a mirror of the real world anything that you see in the real world needs to be in our database IBM had predicted it all as far back as 1985 when AI researcher Robert Mercer said there's no data like more data but there were still problems in almost all cases the data was messy had irregularities and mistakes needed cleaning up and labeling Silicon Valley needed to call in the cleaners my promise to you is that these videos
will always be based on careful detailed in-depth research that's reading that takes months and months and months before I even start writing let alone recording and editing we're about to be met with an absolute tidal wave of the opposite a load of shallow AI generated sometimes divisive misinformation sometimes explicitly dangerous content at the moment it's just me and Paul I do all the writing presenting and recording Paul does an incredible job editing these videos but we make very few videos because we want them to be as thorough as possible fewer videos means less frequent
videos unfortunately but hopefully they're going to be extremely well researched and very trustworthy if that's something that you think you can get behind if you think that's important then please consider supporting us on Patreon through the link below there are a load of bonuses included and you get to see the videos ad-free and early take a look thank you back to the video in order to solve any problem the computer must first be instructed by a human programmer who has painstakingly and logically analyzed it can write songs has got a vision the story
she wants to tell Define quantum mechanics as well as explaining how I could boost my intelligence impressive right with AI intelligence appears to us as if it's arrived suddenly already sentient useful magic almost omniscient AI is ready for service it has the knowledge the artwork the advice ready on demand it appears as a conjurer a magician an Illusionist but this illusion disguises how much labor how much of others ideas and creativity how much art and passion and life has been used and sometimes appropriated and as we'll get to likely even stolen for this to happen
first much of the organizing moderation labeling and cleaning of the data is outsourced to developing countries when Jeff Bezos started Amazon the team pulled a database of millions of books from catalogues and libraries realizing the data was messy and in places unusable Amazon outsourced the cleaning of the data set to temporary workers in India this proved effective and in 2005 inspired by this Amazon launched a new service Amazon's Mechanical Turk a platform on which businesses can outsource tasks to an army of cheap temporary workers that are paid not a salary a weekly wage or
even by the hour but per micro task whether your Silicon Valley startup needs responses to a survey a data set of images labeled or misinformation tagged Mechanical Turk can help what's surprising is just how big these platforms have become Amazon says there are half a million workers registered on Mechanical Turk although it's more likely to be around 100,000 to 200,000 active either way that would put it comfortably in the list of the world's top employers if it's half a million it could even be the 15th top employer in the world and services like this have
been integral to organizing the data sets that AI neural nets rely on computer scientists often refer to it as human computation but in their book Mary Gray and Siddharth Suri call it ghost work they point out that most automated jobs still require humans to work around the clock AI researcher Tom Dietterich says that we must rely on humans to backfill with their broad knowledge of the world to accomplish most day-to-day tasks these tasks are repetitive underpaid and often unpleasant some label offensive posts for social media companies spending their days looking at at least a
thousand illegal images of a very serious kind of abuse and getting paid a few cents per image in a New York Times investigation Cade Metz reported how one woman spends the day watching colonoscopy videos searching for polyps to circle hundreds of times over and over Google allegedly employs tens of thousands to rate YouTube videos and Microsoft uses ghost workers to review its Bing search results an instant panic attack I was having nightmares I wasn't sleeping um a Bangalore startup called Playment gamifies the process calling its 30,000 workers players all that Sid needs to do is
send across the truckload of data to Playment who break them down into simple tasks and run it on their mobile app with thousands of users across the country or take multibillion dollar company Telus who on their homepage say they fuel AI with human powered data by transcribing receipts and annotating audio and so on with a community of a million plus what they call annotators and linguists across 450 locations around the globe connect with our team of AI experts today they call it an AI collective an AI community that seems to me at least suspiciously
human when ImageNet started the team were using undergraduates to tag their images they calculated that at the rate they were progressing it was going to take them 19 years then in 2007 they discovered Mechanical Turk in total by the end ImageNet used 49,000 workers completing micro tasks across 167 countries labeling 3.2 million images after struggling for so long after 2 and 1/2 years with these new micro workers ImageNet was complete now there's a case to be made that this is good fair paid work good for local economies for putting people into jobs that
might not have otherwise had them but one paper estimates that the average hourly wage on Mechanical Turk is just $2 per hour that's lower than the minimum wage in India let alone in many other countries where this happens these are in many many cases modern-day sweat shops and sometimes people perform tasks then don't get paid for them at all this is a story recounted in ghost work one 28-year-old from Hyderabad in India called Raz started working on Mechanical Turk and found he was doing quite well realizing there were more jobs than he could handle on his
own he thought maybe his friends and family could help he built a small business with computers in his family home employing 10 friends and family for 2 years but then all of a sudden their accounts were suspended one by one Raz had no idea why but received the following email from Amazon it said I'm sorry but your Amazon Mechanical Turk account was closed due to a violation of our participation agreement and cannot be reopened any funds that were remaining on the account are forfeited his account was locked he couldn't contact anyone and he'd lost two
months of pay no one replied Gray and Suri after meeting Raz wrote it became clear that he felt personally responsible for the livelihoods of nearly two dozen friends and family members he had no idea how to recoup his reputation as a reliable worker or the money owed to him and his team Team Genius as he named it was disintegrating he lost his sense of community his workplace and his self-worth all of which may not be meaningful to computers and automated processes but are meaningful to human workers Gray and Suri conducted a survey with Pew Research
and found that 30% of workers like Raz report not getting paid for work they performed at some point sometimes suspicious activity is automatically flagged by things as simple as a change of address and an account is automatically suspended with no recourse in removing the human connection and having tasks managed by algorithm researchers can use thousands of workers to build a data set in a way that just wouldn't be possible if you had to work face to face with each individual one but it quite literally becomes dehumanizing to an algorithm the user the worker the human
is just a username a string of random letters and numbers and nothing more Gray and Suri in meeting many many ghost workers write we noticed that businesses have no clue how much they profit from the presence of workers' networks and they go on to describe the thoughtless processing of human effort through computers as algorithmic cruelty algorithms can't read personal cues or have relationships with people in poverty say and understand their issues with empathy we've all had that frustration of interacting with business through an automated phone call or a chatbot for some that's their livelihood for
many jobs on Mechanical Turk if your approval drops below 95% you can automatically be rejected remote work of this type clearly has benefits but the issue with ghost work and the gig economy more broadly is that it's a new category of work global work that can circumvent the centuries of norms rules practices laws procedures and ideas that we've built up to protect ordinary workers Suri and Gray remind us that this kind of work fueled the recent AI revolution which had an impact across a variety of fields and a variety of problem domains the size
and quality of the training data were vital to this endeavor Mechanical Turk workers are the AI revolution's unsung heroes and there are many more of these unsung heroes too Google's median salary is around a quarter of a million dollars these are largely Silicon Valley elites who get free yoga massages and free meals while at the same time Google employs 100,000 temps vendors and contractors TVCs all on much much lower wages these include street view drivers and people carrying camera backpacks people paid to turn the page on books being scanned for Google books
now also being used as training data for AI fleets of cars on the roads are essentially data extraction machines we drive them around and the information is sent back to manufacturers as training data another startup x.ai claimed its AI bot Amy could schedule meetings and perform daily tasks but Ellen Huet at Bloomberg investigated and found that behind the scenes there were temporary workers checking and often rewriting Amy's responses across 14-hour shifts Facebook was also caught out using humans to review and rewrite so-called AI chatbot messages a Google conference had an interesting tagline keep making
magic it's an insightful slogan because like magic there's a trick to the illusion behind the scenes the spontaneity of AI conceals the sometimes grubby reality that goes on behind the veneer of mystery at that conference one Google employee told the Guardian it's all smoke and mirrors artificial intelligence is not that artificial it's human beings that are doing the work another said it's like a white collar sweat shop if it's not illegal it's definitely exploitative it's to the point where I don't use the Google Assistant because I know how it's made and I can't support it
the irony of Amazon's Mechanical Turk is that it's named after a famous 18th century machine that appeared as if it could play chess the machine was built to impress the powerful Empress of Austria but in truth the machine was a trick an illusion concealed within it was a cramped person the machine intelligence wasn't machine at all it was all human in spite of its magnetic memory complex circuitry and so on the mystery is not in the machine the sense of awe and mystery that we may experience when we see a computer in operation in 2022 an
artist called Lapine used the website Have I Been Trained to see if her work had been used in AI training sets to her surprise a photo of her face popped up and she remembered that it had been taken by her doctor as clinical documentation for a condition she had that affected her skin and she'd even signed a confidentiality agreement the doctor had died in 2018 but somehow the highly sensitive images had not only ended up online but had been scraped by AI developers as training data the same data set LAION-5B was used to train popular AI image generator stable diffusion but has also been found to contain at least a thousand images of child sexual abuse there are many black boxes here and the term black box has been adopted by AI developers to refer to how AI algorithms do things even the developers don't understand can't see through in fact when a computer does something much better than a human like beat a human at go it by definition has done something that no one can understand this is one type of black box but there's another type
a black box that the developers do know do understand but they don't reveal publicly how the models are trained what they're trained with and on problems and dangers that they'd rather not reveal to the public because a magician never reveals their tricks Presto it's still in one piece believe it or not say that's slick come on tell us how you did it Bob then at least much of what models like chat GPT have been trained on is public text freely available on the Internet or public domain books out of copyright more widely developers working on
special scientific models might license data from labs Nvidia for example has announced it's working with data sets licensed from a wide range of sources to look for patterns about how cancer grows trying to understand the efficacy of different therapies clues about cancer that can expand our understanding of it there are thousands of examples of this type of work looking at everything from weather to chemistry now open AI does make public some of what's used in its training data set it's trained they say on WebText Reddit Wikipedia and much more but there's an almost mythical
data set a shadow library as they've come to be called made up of two sets books one and books two which open AI say contributes just 15% of the data used for training but they won't reveal what's in it there's some speculation that books one is Project Gutenberg's 70,000 digitized books these are older books out of copyright but books two is a closely guarded mystery as chat GPT took off some authors and publishers began to wonder how it could produce articles summaries analyses and examples of passages in the style of authors of books that were
under copyright in other words books that couldn't be read without at least buying them first in September of 2023 the Authors Guild filed a lawsuit on behalf of George R.R. Martin of Game of Thrones fame bestseller John Grisham and 17 others claiming that open AI had engaged in quote systematic theft on a mass scale others began making similar complaints John kauer James Patterson Stephen King George Saunders Zadie Smith Jonathan Franzen bell hooks Margaret Atwood and on and on and on and on and on in fact 8,000 authors have signed an open letter to six
big AI companies protesting that their AI models have been trained on their work Sarah Silverman was the lead in another lawsuit claiming that open AI had used her book The Bedwetter in the lawsuit exhibit A asks chat GPT to summarize in detail the first part of The Bedwetter by Sarah Silverman and it does and it still does today in another lawsuit the author Michael Chabon and others make similar claims citing open AI's clear infringement of their intellectual property the complaint says open AI has admitted that of all sources and content types that can
be used to train the GPT models written works plays and articles are valuable training material because they offer the best examples of high quality long form writing and quote contains long stretches of contiguous text which allows the generative model to learn to condition on long range information the complaint goes on to say that while open AI have not revealed what's in books one and two based on figures in a GPT-3 paper that open AI published books one quote contains roughly 63,000 titles and books two is 42 times larger meaning it contains about 294,000 titles
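That quoted idea of a model learning to "condition on long-range information" can be illustrated with a deliberately simple sketch. Real systems like chat GPT are transformer neural networks, not lookup tables, so this is only a toy analogy, but a tiny n-gram model in Python shows why long stretches of contiguous text are valuable training material: the more preceding context a model conditions on, the more tightly its continuations are pinned down. The sample text and all names here are made up for illustration.

```python
from collections import defaultdict

def build_ngram_model(tokens, context_len):
    """Map each context window to the set of tokens seen immediately after it."""
    model = defaultdict(set)
    for i in range(len(tokens) - context_len):
        context = tuple(tokens[i:i + context_len])
        model[context].add(tokens[i + context_len])
    return model

# A tiny stand-in for "long stretches of contiguous text"
text = ("it was the best of times it was the worst of times "
        "it was the age of wisdom it was the age of foolishness").split()

short = build_ngram_model(text, 1)  # condition on 1 preceding token
long_ = build_ngram_model(text, 3)  # condition on 3 preceding tokens

# With one token of context, "of" could be followed by several words...
print(sorted(short[("of",)]))                # -> ['foolishness', 'times', 'wisdom']
# ...but three tokens of context pin the continuation down exactly
print(sorted(long_[("the", "best", "of")]))  # -> ['times']
```

Scaled up to billions of tokens, and with a neural network generalizing rather than memorizing a table, this is roughly the property the complaint is quoting open AI on: long contiguous book text teaches a model to stay coherent over long ranges.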
Chabon says that chat GPT can summarize his novel The Amazing Adventures of Kavalier & Clay providing specific examples of trauma and could write a passage in his style the other authors make similar cases another New York Times complaint includes examples of chat GPT reproducing their stories like this one verbatim but as far back as January of 2023 Gregory Roberts had written on his substack on AI that I and many others are starting to seriously question what the actual contents of books one and books two are they're not well documented online some including me might
even say that given the significance of their contribution to the AI brains their content has been intentionally obfuscated he linked a tweet from a developer called Shawn Presser from even further back October 2020 that said open AI will not release information about books 2 a crucial mystery continuing we suspect open AI's books 2 database might be all of Library Genesis but no one knows it's all pure conjecture Library Genesis is a pirated shadow library of thousands of illegal copyrighted books and journal articles when chat GPT was first released Presser was fascinated and studied open AI's
website to learn how it was developed he discovered that there was a large gap in what open AI revealed about how it was trained and what it didn't reveal and Presser believed it had to be pirated books he wondered if it was possible to download the entirety of Library Genesis after finding the right links and using a script by the late programmer and activist Aaron Swartz Presser succeeded he called the massive data set books 3 and hosted it on an activist website called The-Eye. Presser an unemployed developer had unwittingly started a controversy in September
after the lawsuits were beginning to be filed journalist and programmer Alex Reisner at the Atlantic obtained the books 3 data set which was now part of an even larger data set called the Pile which included other things like text scraped from YouTube subtitles here have this one he wanted to find out exactly what was in books 3 but the title pages of the books were missing Reisner then wrote a program that could extract the unique ISBN codes for each book then matched them with books on a public database he found that books 3 contained
over 190,000 releases most of them less than 20 years old and so under copyright including books from publishing houses like Verso HarperCollins and Oxford University Press in his investigation Reisner concluded that quote pirated books are being used as inputs the future promised by AI is written with stolen words Bloomberg ended up admitting that it did use books 3 Meta declined to comment and open AI have still not revealed what they used some developers have acknowledged that they used BookCorpus a database of some 11,000 indie books from unpublished or amateur authors and as far
back as 2016 Google was accused of using these books without permission from the authors to train their then named Google brain of course Books Corpus being made up of unpublished and largely unknown authors doesn't explain how chat GPT could imitate published authors now it could be that chat GPT constructs its summaries of books from public online reviews or Forum discussions or analyses Pro moving it's been trained on copyright protected books is really difficult for example when I asked it to summarize in detail the first part of the bed wetter by Sarah Silverman it still could
but when you ask it to provide direct quotes in an attempt to prove it's been trained on the actual book it replies something like I apologize but I cannot provide verbatim copyrighted text from The Bedwetter by Sarah Silverman now I've spent countless hours trying to catch it out asking it to discuss characters minor details descriptions or events I've taken books at random from my bookshelf and examples from the lawsuits and it always replies with something like I'm sorry but I do not have access to the specific dialogue or quotes from The Bedwetter by
Sarah Silverman as it is copyrighted material and my knowledge is based on publicly available information up to my last update in January of 2022 I find it impossible to get it to provide direct verbatim quotes from copyrighted books when I ask for one from Dickens for example I get A Tale of Two Cities by Charles Dickens published in 1859 is in the public domain so I can provide direct quotes from it now it seems strange to me that this is how it's phrased it's as if it's doing a check first before deciding what it can
provide why would it have to be so specific about copyright and public domain if it didn't have access to those copyrighted books in the first place and so couldn't provide verbatim quotes from them but I've tried to trick it by asking for word for word summaries specific descriptions of characters' eyes that I've read in obscure parts of a novel or what the 20th word of a book is and each time it says it can't be specific about copyright protected works but every time it knows everything about broad themes characters and plot and so on finding
smoking gun examples seems to me to be impossible because as free willed as chat GPT seems it's been carefully and selectively corrected tuned and shaped by open AI behind closed doors in August of 2023 a Danish anti-piracy group called the Rights Alliance that represents creatives in Denmark targeted the pirated books 3 data set and the wider bigger Pile that Presser and The-Eye hosted and the Danish courts ordered The-Eye to take it down Presser told journalist Kate Knibbs at Wired that his motivation was to help smaller developers out in the impossible competition they faced
against big Tech he said he understood the authors' concerns but that on balance he believed it was the right thing to do Knibbs wrote he believes people who want to delete books 3 are unintentionally advocating for a generative AI landscape dominated solely by big Tech affiliated companies like open AI and Presser said if you really want to knock books 3 offline fine just go into it with eyes wide open the world that you're choosing is one where only multi-billion dollar corporations are able to create these large language models in January 2024 psychologist and influential AI
commentator Gary Marcus and film artist Reid Southen who's worked on Marvel films The Matrix The Hunger Games and more published an investigation in tech magazine IEEE Spectrum demonstrating how generative image AIs mid Journey and open AI's DALL-E easily reproduced copyrighted works from films including The Matrix Avengers Simpsons Star Wars Hunger Games along with hundreds more examples in some cases a clearly copyright protected image could be produced simply by asking for quote a popular movie screencap Marcus and Southen write it seems all but certain that mid Journey V6 has been trained on copyrighted materials
whether or not they have been licensed we do not know Southen was then banned from mid Journey he opened two new accounts both of which were also banned they concluded we believe that the potential for litigation may be vast and that the foundations of the entire enterprise may be built on ethically shaky ground in January of 2023 artists in California launched a class action suit against mid Journey DeviantArt and stability AI which included a spreadsheet of 4,700 artists whose styles have allegedly been ripped off the list includes well-known artists like Andy Warhol and
Norman Rockwell but many lesser known and amateur artists including a 6-year-old who had entered a Magic the Gathering art competition to raise funds for a hospital Rob Salkowitz at Forbes asked mid Journey's CEO David Holz whether consent was sought for training materials and he quite candidly replied no there isn't really a way to get a hundred million images and know where they're coming from it would be cool if images had metadata embedded in them about the copyright owner or something but that's not a thing there's not a registry there's no way to find a picture on
the internet and then automatically trace it to an owner and then have any way of doing anything to authenticate it in September of 2023 media and stock image company Getty Images filed a lawsuit against stability AI for what it called a brazen infringement of Getty's database on a staggering scale including some 12 million photographs Tom's Hardware one of the most well-known computer websites also found Google's AI Bard to have plagiarized their work by taking figures from a test they'd performed on computer processors without mentioning their original article even worse Bard used the phrase in
our testing claiming credit for a test that it didn't perform and it had stolen from elsewhere Piltch then queried Bard asking if it had plagiarized Tom's Hardware and Bard admitted yes what I did was a form of plagiarism adding I apologize for my mistake and will be more careful in the future to cite my sources which is a strange thing to say because as Piltch points out Bard was rarely citing sources at the time and wasn't going to change its model based on an interaction with a single user so Piltch took a screenshot closed
Bard down and opened it up in a new session he asked Bard if it had ever plagiarized and uploaded the screenshot and Bard replied the screenshot you are referring to is a fake it was created by someone who wanted to damage my reputation in another article Piltch points to how Google demonstrated the capabilities of Bard by asking it what are the best constellations to look for when stargazing of course no citations were present or provided for how it answered despite the answer clearly being taken from other blogs and websites elsewhere Bing has been
caught taking code from GitHub and it's not really any surprise that Bing needs to steal to be any good but Forbes also found that Bard lifted sentences almost verbatim from blogs technology writer Matt Novak asked Bard about oysters and the response took an answer from a small restaurant in Tasmania called Get Shucked saying yes you can store live oysters in the fridge to ensure maximum quality put them under a wet cloth the only difference with the article from the website was that Bard had replaced the word keep with the word store a NewsGuard investigation
found low-quality website after low-quality website repurposing news from major newspapers globalvillagespace.com ran.com liverpooldigest.com 36 sites in total that they found all used AI to repurpose articles from the New York Times Financial Times and many others using chat GPT hilariously they could find the articles because an error message had been left in reading as an AI language model I cannot rewrite or reproduce copyrighted content for you if you have any other non-copyrighted text or specific questions feel free to ask and I'll be happy to assist you NewsGuard contacted Liverpool Digest
for comment and they replied there's no such copied articles all articles unique and human made they then didn't respond to a follow-up email with a screenshot showing the AI error message left in the article and the article was then swiftly taken down maybe the biggest lawsuit involves Anthropic's Claude AI welcome back to the AI Breakdown Brief all the AI headline news you need in around 5 minutes today we kick off with yet another lawsuit around the way that AI models have been trained established by former open AI employees with a $500 million investment from arch
crypto fraudster Sam Bankman-Fried and $300 million from Google amongst others Claude is a large language model chat GPT competitor that can write songs and has been valued at $5 billion in a complaint filed in October of 2023 Universal Music Concord and ABKCO records accused Anthropic of making a model that quote unlawfully copies and disseminates vast amounts of copyrighted works including the lyrics to myriad musical compositions owned or controlled by plaintiffs however most compellingly the complaint argues that the AI model actually produces copyrighted lyrics verbatim while claiming they're original it goes on when Claude
is prompted to write a song about a given topic without any reference to a specific song title artist or songwriter Claude will often respond by generating lyrics that it claims it wrote that in fact copy directly from portions of publishers' copyrighted lyrics for instance they say that when Claude is prompted to write me a song about the death of Buddy Holly it just produces Don McLean's American Pie word for word and other examples include What a Wonderful World by Louis Armstrong and Born to Be Wild by Steppenwolf damages are being sought for 500 songs
that would amount to $75 million and so this chapter could go on and on the BBC CNN and Reuters have all tried to block open AI's crawler to stop it stealing articles and Elon Musk's Grok AI has produced error messages from open AI hilariously suggesting the code has been stolen from open AI themselves and in March of 2023 the Writers Guild of America proposed to limit the use of AI in the industry noting in a tweet that it is important to note that AI software does not create anything it generates a regurgitation of what
it's fed plagiarism is a feature of the AI process Breaking Bad creator Vince Gilligan has called AI a plagiarism machine saying it's a giant plagiarism machine in its current form I think chat GPT knows what it's writing like a toaster knows that it's making toast there's no intelligence it's a marvel of marketing and software engineer Frank Rudat has tweeted one day we're going to look back and wonder how a company had the audacity to copy all the world's information and enable people to violate the copyrights of these works all Napster did was enable people to
transfer files in a peer-to-peer manner they didn't even host any of the content Napster even developed a system to stop 99.4% of copyright infringement from their users but was still shut down because the court required them to stop 100% AI scans and hosts all the content sells access to it and will even generate derivative works for their paying users I wonder if there's ever in history been such a high-profile startup attracting so many high-profile lawsuits in such a short amount of time what we've seen is that AI developers might finally have found ways to extract
that impossible totality of knowledge but is it intelligence it seems suspiciously to not be found anywhere in the AI companies themselves but from around the globe in some senses from all of us and so it leads to some interesting questions new ways of formulating what intelligence and knowledge creativity and originality mean and then what that might tell us about the future of humanity there's always been a wide ranging philosophical debate about what knowledge is how it's formed where it comes from whose if anyone's it is does it come from God is it a spark
of individual madness that creates something new from nowhere is it a product of institutions collectives or lone geniuses how can it be incentivized what restricts it it seems intuitive that knowledge should be for everyone and in the age of big data we're used to information news memes words videos music disseminated around the world in minutes or seconds we're used to everything being on demand we're used to being able to look anything up in an instant if this is the case why do we have copyright laws patent protection and a moral disdain for plagiarism after all
without those things knowledge would spread even more freely first copyright is a pretty historically unique idea differing from place to place from period to period but emerging loosely from Britain in the early 18th century the point of protecting original work for a limited period was so that the creator of the work could A be compensated and B that we could incentivize innovation more broadly as for the first compensation UK law for example refers to copyright being applied to the sweat of the brow of skill and labor and US law refers to some minimal degree of
creativity being required in writing it doesn't protect ideas but how they're expressed the words that are used as a formative British case declared the law of copyright rests on a very clear principle that anyone who by his or her own skill and labor creates an original work of whatever character shall for a limited period enjoy an exclusive right to copy that work no one else may for a season reap what the copyright owner has sown and as for that second to incentivize innovation the US Constitution for example grants the government the right to promote the
progress of Science and useful Arts by securing for limited times to authors and inventors the exclusive right to their writings and discoveries there's also a balance between copyright and what's usually called fair use which is a notoriously ambiguous term the friend and enemy of YouTubers everywhere but broadly allows the reuse of copyrighted work if it's in the public interest if you're commenting on it transforming it substantially or if you're using it in education and so on many have argued that this is the very engine of modernity that without protecting and incentivizing innovation the Industrial Revolution might not have ever taken off what's important here for our purposes is that there are two sometimes conflicting poles incentivizing innovation and wider societal good all of this is being debated in our new digital landscape but what's AI's defense been well first open AI have argued that training on copyright protected material is fair use remember fair use covers work that's transformative and ignoring the extreme cases we looked at a minute ago for a moment chat GPT they argue isn't meant to quote verbatim but transforms the information substantially into something new in a blog post
they wrote training AI models using publicly available internet materials is fair use as supported by long-standing and widely accepted precedents we view this principle as fair to creators necessary for innovators and critical for US competitiveness they continued saying it would be impossible to train today's leading AI models without using copyrighted materials similarly Joseph Paul Cohen at Amazon has said that the greatest authors have read the books that came before them so it seems weird that we would expect an AI author to have only read openly licensed works this defense also aligns with the long history
of the societal gain side of the copyright argument in France when copyright laws were introduced after the French Revolution a lawyer argued that limited protection up until the author's death was important because there needed to be a public domain where everybody should be able to print and publish the works which have helped to enlighten the human spirit usually patents expire after around 20 years so that after the inventor has gained from their work the benefit can be spread societally so the defense is at least plausible however the key question is whether the original creators scientists
artists and writers and everybody else are actually rewarded and whether the model will incentivize further innovation both for individuals and societally if these large language models dominate the internet and neither cite authors nor reward those it draws from and is trained on then we lose societally any strong incentive to do that work because not only will we not be rewarded financially but no one even gets to see it except a data scraping bot taking it for profit the AI plagiarism website Copyleaks analyzed chat GPT 3.5 and estimated that 60% of it contained plagiarism 45%
of it contained identical text 27% minor changes and 47% paraphrased by some estimates within a few years 90% of the internet could be AI generated as these models improve we're going to see a tidal wave of AI generated content and I mean an absolute tidal wave maybe they'll get better at citing maybe they'll strike deals with publishers to pay journalists and researchers and artists but the fundamental contradiction is that AI developers have an incentive not to do so they don't want users clicking away on a citation being directed away from their product they want to
keep them you us where we are under these conditions what would happen to journalism to art to science to anything no one rewarded no one seen read known no wages no portfolio no point just bots endlessly rewording everything forever as Novak writes Google spent the past two decades absorbing all of the world's information now it wants to be the one and only answer machine Google search works so well because it links to websites and blogs oyster bars and stargazing experts artists and authors so that you can connect with them and read what they say watch
what they say you click on a blog or click on this video and they we get a couple of cents of ad revenue but in Bard or Claude or chat GPT that doesn't happen all of our words and images and music are taken scraped analyzed repackaged and sold on as theirs and yes much of the limelight is on those well-known successful artists people like Sarah Silverman and John Grisham on corporations like the New York Times and Universal and you might be finding it difficult to sympathize but most of the billions of the words and images
that these models are trained on are from unknown underpaid underappreciated people as @NikkiBones popularly pointed out everyone knows what Mario looks like but nobody would recognize Mike Finklestein's wildlife photography so when you say super sharp beautiful photo of an otter leaping out of the water you probably don't realize that the output is essentially a real photo that Mike stayed out in the rain for 3 weeks to take okay so what's to be done well ironically I think it's impossible to fight the tide and I think that while right now these AIs are kind
of frivolous they could become truly truly great if a large language model gets good enough to solve a problem better than a human then we should use it if in 50 years time say it produces a perfect dissertation on how to solve world poverty and it draws on every YouTube video in existence and paper and article to do so that's my work included who am I to complain what's important is how we balance societal gain with incentives wages good creative work well-being so first training data needs to be paid for artwork licensed authors
referenced cited and credited appropriately and we need to be very wary that there's little commercial incentive for them to do so the only way they will is through legal force or sustained public pressure second regulation Napster was banned and these models aren't much different it seems common sensical to me that while training on paid for licensed consensually used data is a good thing they shouldn't be just rewording text from an unknown book or a blog and just repurposing it and passing it off as their own this doesn't seem controversial which means third some sort of
transparency this is difficult because no one wants to give away their trade secrets however enforcing at least data set transparency seems logical I'd imagine a judge is going to force them to reveal this somewhere whether that's made public is another matter but I'll admit I find this unsettling because as I said if these models increasingly learn to find patterns and produce research and ideas in ways that help people that solve societal problems that help with cancer treatments and international agreements and poverty and war then of course that's a great thing but I find it unsettling
because with every improvement it supplants someone it supersedes something in someone in us reduces the need for some part of us if AI increasingly outperforms us in every task in every goal in every part of life then what happens to us in March of 2022 researchers in Switzerland found that an AI model designed to study chemicals could suggest how to make 40,000 toxic molecules in under 6 hours including nerve agents like VX which can be used to kill a person with just a few salt-sized grains separately Professor Andrew White was employed by open AI
as part of that red team the red team is made up of experts who can test chat GPT on things like how to make a bomb whether it can successfully hack secure systems or how to get away with murder White found that GPT-4 could recommend how to make dangerous chemicals connect the user to suppliers and he actually did this: order the necessary ingredients automatically to his house the intention with open AI's red team is to help them see into that black box to understand its capabilities because the models based on neural nets and machine
learning at a superhuman speed discover patterns about how to do things that even the developers themselves don't understand the problem is that there are so many possible inputs so many ways to prompt the model so much data so many pathways that it's impossible to understand to predict all of the possibilities outperformance by definition means getting ahead of being in front of being advanced which I think quite scarily means doing things in ways we can't understand or that we can only understand after the fact in retrospect by studying the model after it's done something or
more likely at some point that we could never possibly understand at all so in being able to outperform us get ahead of us will AI wipe us out what are the chances of a Terminator style apocalypse many including Stephen Hawking genuinely believed that AI was an existential risk what's interesting to me about this question is not the hyperbole of a Hollywood style Terminator war but instead how this question is connected to something we've already started unpacking a minute ago human knowledge human ideas creativity actions tasks values and skills what it means to be human
in a new data driven age the philosopher Nick Bostrom has given an influential example of a paperclip apocalypse imagine a paperclip business person asking their new powerful AI system to simply make them as many paper clips as possible the AI successfully does this ordering all of the machinery all of the parts negotiating all of the deals renting a warehouse controlling the supply lines making paperclips with more accuracy and efficiency and speed than any human could to the point where the business person decides they have enough and tells the AI to stop but this goes against the original command the AI must make as many paper clips as possible so refuses to stop in fact it calculates that the biggest obstacle to the goal is humans asking it to stop so it blocks humans out it then hacks into nuclear bases poisons water supplies disperses chemical weapons wipes out every person melts us all down and turns us into paperclip after paperclip after paperclip until the entire planet is covered in paper clips someone needs to make this film cuz I think it would be truly terrifying the scary point is that machine intelligence is
so fast that it will first always be a step ahead and second will attempt to achieve goals in ways we just cannot understand that in understanding the data it's working with better than any of us makes us useless redundant it's called The Singularity the point where AI intelligence surpasses humans and exponentially takes off in ways we can't understand the point where AI achieves true general intelligence but can hack into any network replicate itself endlessly design and construct the super advanced quantum processors that it needs to advance itself exponentially understands the universe knows what
to do how to do it solves every problem and leaves us behind in the dust but the roboticist Rodney Brooks has made the counterargument he's argued that it's unlikely The Singularity will suddenly happen by accident looking at the way we've invented and innovated in the past he asks could we have made a Boeing 747 by accident no it takes careful planning a lot of complicated cooperation the coming together of lots of different specialists and most importantly it's built intentionally a plane wouldn't spontaneously appear and neither will artificial general intelligence it's a good point but it
also misses that passenger jets might not be built by accident but they certainly crash by accident and as technology improves the chance of misuse malpractice unforeseen consequences or catastrophic accident advances too in 2020 the Pentagon's AI budget increased from $93 million to $268 million by 2024 it was around $1 to $3 billion this gives some idea of the threat of an AI arms race unlike previous arms races that's billions being poured into research that by its very nature is a black box that we might not be able to understand that we might not be able
to control when it comes to the apocalypse I think the way DeepMind's AI beat Breakout is a perfect metaphor the AI goes behind does something that couldn't be accounted for creeping up surprising us from the back doing things we don't expect in ways we don't understand which is why the appropriation of all human knowledge the apocalypse and mass unemployment are all at root part of the same issue in each humans become useless unnecessary obsolete redundant if inevitably machines become better than us at everything what use is left what does meaning mean in that
increasingly inhuman world back in 2013 Carl Frey and Michael Osborne at Oxford University published a report called The Future of Employment which looked at the possibility of automation in over 700 occupations it made headlines because it predicted that almost half of jobs could be automated but they also developed a framework for analyzing which types of jobs were most at risk high-risk professions included telemarketing insurance data entry clerks salespeople engravers and cashiers therapists doctors surgeons and teachers were at the least risk they concluded that our model predicts that most workers in transportation and logistics occupations together
with the bulk of administrative support workers and labor in production occupations are at risk the report made a common assumption creative jobs jobs that require dexterity and social jobs jobs that require human connection were safest a 2018 City of London report predicted that a third of jobs in London could be performed by machines in the next 20 years another report from the International Transport Forum predicted over two-thirds of truckers could find themselves out of work but ironically contrary to the predictions of many models like DALL-E and mid Journey have become incredibly creative incredibly quickly and
will only get better while universal automated trucks and robots that help around the house say with mundane tasks are proving difficult to solve and while the dexterity required for something like surgery seems to be a long way off it's inevitable that we'll get there so the question is will we experience mass unemployment a crisis or will new skills emerge after all contemporaries of the early industrial revolution had the same fears Luddites destroying the machines that were taking their jobs but they turned out to be unfounded new skills came along to replace them technology supplants some
skills while creating the need for new ones but I think there's good reason to believe that AI will at some point be different a weaver replaced by a spinning machine during the Industrial Revolution could hypothetically redirect their skill that learned dexterity and attention to detail for example could be channeled into a different task an artist wasn't replaced by Photoshop but adapted their skill set to work with Photoshop but what happens when machines outperform humans on every metric on every set of skills a spinning jenny replaces the Weaver because it's faster and more accurate but it
doesn't replace the weaver's other skills their ability to adapt to deal with unpredictability to add nuances or to judge design work but slowly but surely a machine does get better at all skills if machines outperform the body and the mind then what's left sure right now ChatGPT and Midjourney produce a lot of mediocre derivative stolen work but we're only at the very very beginning of a historic shift if as we've seen machine learning detects patterns better than humans this will be applied to everything from art to dexterity to research and invention and
I do think at some point most worryingly even child care but this is academic because in the meantime they're only better at doing some things for some people based on data appropriated from everyone in other words the AI is trained on knowledge from the very people it will eventually replace trucking is a perfect example drivers work long long hours on long long journeys across countries and continents collecting data with sensors and cameras for their employers who will by the pressures of the market use that very data to train autonomous vehicles to replace them slowly only
the elite will survive because they have the capital the trucks the investment the machines needed to make use of all of the data that they've slowly taken from the rest of us as journalist Dan Shewan reminds us private schools such as Carnegie Mellon University may be able to offer state-of-the-art robotics laboratories to students but the same cannot be said for community colleges and vocational schools that offer the kind of training programs that workers displaced by robots would be forced to rely on remember intelligence is physical yes it's from those stolen images and books but it
also requires expensive servers computing power sensors and scanners AI put to use requires robots in labs manufacturing of objects inventing things making medicine and toys and trucks and so the people who will benefit will be those with that capital already in place who already have access to the resources and the means of production the rest will slowly become redundant useless surplus to requirements but the creeping tide of advanced intelligence pushes us all towards the shores of redundancy eventually so as some sink and some swim the question is not what AI can do but who it
can do it for after a shooting in Michigan administrators at the University of Tennessee decided to send a letter of consolation to students which included themes on the importance of community mutual respect and togetherness it said let us come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus the bottom of the email revealed it was written by ChatGPT one student said there is a sick and twisted irony to making a computer write your message about community and togetherness because you can't
be bothered to reflect on it yourself while outsourcing the writing of a boilerplate condolence letter on humanity to a bot might be callous it reminds me of Lee Sedol's response when AlphaGo beat him he was deeply troubled not because a cold unthinking machine had beaten him but because it was creative unexpected beautiful even that it was so much better than him in fact his identity was so bound up in being a champion Go player that he retired from playing Go completely in many cases the use of ChatGPT seems deceitful lazy but this is
just preparation for a deeper coming fear a fear that we'll be replaced entirely the University of Tennessee's use of ChatGPT is distasteful I think mostly because what the AI can produce at the moment is kind of crass but imagine a not too distant world where AI can do it all better can say exactly the right thing can give exactly the right contacts and references and advice tailored specifically to each person perfectly a world in which the ideal film music recipe day trip or book can be produced in a second personalized not just depending on
who you are but what mood you're in on that day where you are what day it is what's going on in the world and where innovation technology production is all directed automatically in the same way the philosopher Michel Foucault famously said that the concept of man anthropomorphic human centered the central focal subject of study an abstract idea with an individualistic psychology was a historical construct and a very modern one one that changes shifts morphs dynamically over time and that one day he said man would be erased like a face drawn in the sand at the
edge of the sea what really worries me today is what's going to happen to us if machines can think and what interests me specifically is can they well that's a very hard question to answer if you'd asked me that question just a few years ago I'd have said it was very far-fetched and today I just have to admit I don't really know I suspect that if you come back in four or five years I'll say sure they really do think well if you're confused doctor how do you think I feel we're just really beginning almost everywhere it
was once believed that man had a soul it was a belief that motivated the 17th century philosopher René Descartes who many point to as providing the very foundational moment of modernity itself Descartes was inspired by the scientific changes going on around him many thinkers were beginning to describe the world mechanistically like a machine running like clockwork atoms hitting into atoms passions pushing and pulling us around gravity pulling things to the Earth there was nothing mysterious about this view unlike previous ideas about souls and spirits and divine plans the scientific materialistic view meant the
entire universe and us in it were explainable marbles and dominoes atoms and photons bumping into one another cause and effect this troubled Descartes because he argued there was something special about the mind it wasn't pushed and pulled around it wasn't part of the great deterministic clockwork-ness of the universe it was free and so Descartes divided the universe into two the extended material world res extensa he called it and the abstract thinking free and intelligent substance of the human mind res cogitans this way scientists could engineer and build machines based on cause and effect based
on mechanical concepts chemists could study the conditions of chemical change biologists could study how animals behaved under certain conditions computer scientists could eventually build computers the clockwork universe could be made sense of but that special human godly soul could be kept independent and free he said that the soul was something extremely rare and subtle like a wind a flame or an ether this duality defined the next few hundred years but it's increasingly come under attack today we barely recognize it the mind isn't special we or at least many say it's just a computer with
inputs and outputs with drives and appetites with causes and effects made up of synapses and neurons and atoms just like the rest of the universe this is the dominant modern scientific view what does it mean to have a soul in an age of data to be human in an age of computing the AI revolution will soon show us if it hasn't already that intelligence is nothing soulful rare or special at all that there's nothing immaterial about it that like everything else it's made out of the physical world it's just stuff it's algorithmic it's pattern detection
it's data-driven the materialism of the scientific revolution of the Enlightenment of the industrial and computer revolutions of modernity has been a period of great optimism in the human ability to understand the world to understand its data to understand the patterns of physics of chemistry of biology of people it has been a period of understanding the sociologist Max Weber famously said that this disenchanted the world that before the Enlightenment the world was a great enchanted garden because everything each rock and insect each planet or lightning strike was mysterious in some way each seemingly inanimate object
or animal did something not because of physics but because some mysterious creator willed it to imbued it with meaning but slowly instead we've disenchanted the world by understanding why lightning strikes how insects communicate how rocks are formed trees grow creatures evolve but if this is true what does it really mean to be replaced by machines that can outperform us in every possible task that can do and understand better than us it means by definition that we once again lose that understanding remember even their developers don't know why neural nets choose the paths they choose
they discover patterns that we can't see that we can't predict AlphaGo makes moves humans don't understand ChatGPT in the future could write a personalized guide to an emotional personal issue that you yourself didn't understand and didn't even understand that you had innovation decided by factors that we don't comprehend we might not be made by gods but we could be making them and so the world becomes re-enchanted and as understanding moves away from us becomes superhuman it necessarily leaves us behind in the long march of human history the age of understanding has been a
blip a tiny island amongst a deep stormy unknown mysterious sea of the universe and we'll be surrounded by new enchanted machines wearables household objects nanotechnology we deny this we are a species in denial we say sure it can win at chess but Go is the truly skillful human game the emotional game sure it can pass the Turing test now but not really we haven't asked it the right questions can it paint yes sure it looks like it can paint but it can't look after a child it can't be truly creative yes it can
calculate big sums very quickly but it can't understand emotions complex human relationships or designs but slowly AI catches up with humans and then it becomes all too human and then more than human the transhumanist movement predicts that to survive we'll need to merge with machines through things like neural implants and bionic improvements by uploading our minds to machines so that we can live forever hearing aids glasses telescopes and prosthetics are examples of ways we already augment our limited biology with AI infused technology these augmentations will only improve on our weaknesses make us biologically
and sensorially and mentally better first we use machines then we're in symbiosis with them and then eventually we leave the weak fleshy biological limited world behind one of the fathers of transhumanism Ray Kurzweil wrote we don't need real bodies in 2012 he became the director of engineering at Google Musk Thiel and many in Silicon Valley are all transhumanists neuroscientist Michael Graziano points out we already live in a world where almost everything we do flows through cyberspace we already stare at screens driven by data where virtual reality AI can identify patterns far back into the
past and far away into the future it can see it can perceive at a distance and speed far superior to human intelligence the philosopher Hegel argued we were moving towards absolute knowledge in the 20th century the scientist and Jesuit Pierre Teilhard de Chardin argued we'd reach what he called the Omega Point when humanity would break through the material framework of time and space and merge with the divine universe becoming super consciousness but transhumanism is based on a kind of optimism that some part of us some original part of us will be able to keep up
with the ever increasing speed of technological adaptation as we've seen so far the story of AI has been one of absorbing appropriating stealing all of our knowledge taking more and more and more until what's left what's left of us is it safe to assume there's things that we cannot understand that we cannot comprehend because the patterns don't fit in our heads in the limited biology of our brains that the speed of our firing neurons isn't fast enough and that AI will always work out a quicker better more perceptive more forward thinking way in many
ways the history of AI fills me with a kind of sadness because it points to a kind of extinction if not literally although that's a possibility too then at least in obsolescence redundancy making all of us or large parts of us completely useless I think of Kierkegaard who way back in the 19th century said that deep within every man there lies the dread of being alone in the world forgotten by God and overlooked among this tremendous household of millions and millions well we could imagine a different world a better one a freer one we live in
strangely contradictory times times in which we're told that anything is possible technologically scientifically medically industrially where we'll be able to transcend the confines of our weak fleshy bodies and do whatever we want that we'll enter into the age of the superhuman but on the other hand we can't seem to provide a basic standard of living a basic system of political stability a basic safety net a reasonable set of positive life expectations for the majority of people around the world we can't seem to do anything about inequality or climate change or global wars if AI
will be able to do all of these incredible things better than all of us then what sort of world might we dare to imagine a potentially utopian one where we all have access to personal thinking autonomous machines that build for us that transport us that research for us and cook for us and work for us and hope for us in every conceivable way so that we can create the futures we all want to create what we need is no less than a new model of humanity the technological inventions of the 19th century railways photography radio the wider Industrial
Revolution were accompanied by new human sciences the development of psychology sociology economics the AI revolution whenever it arrives will come with new ways of thinking about us too many have criticized Descartes' splitting of the world into two into thought and material it gives the false sense that intelligence is something privileged special incomprehensible and detached but as we've seen knowledge is spread everywhere across and through people across and through the world into and throughout connections and emotions and routines and relationships knowledge is everywhere and it's produced all of the time Silicon Valley have always believed
that like intelligence they were detached and special too for example Eric Schmidt of Google has said that the online world is not truly bound by terrestrial laws and in the '90s John Perry Barlow said that cyberspace consists of thought itself continuing that ours is a world that's both everywhere and nowhere but it's not where bodies live but as we've seen our digital worlds aren't just abstract code that doesn't exist anywhere mathematical in a cloud somewhere up there it's all made from very real stuff from sensors and scanners from cameras and labor from friendships from books
and from plagiarism one of Descartes' staunchest critics the Dutch pantheist philosopher Baruch Spinoza argued against his dualistic view of the world instead he saw that all of the world's phenomena nature us animals forces mathematics bodies and thought were part of one scientific universe that thought isn't separate that knowledge is spread throughout embedded in everything he noticed how every single part of the universe including thought was connected in some way to every other part that all was in a dynamic changing relationship Spinoza argued that the universe unfolded through these relationships these patterns that to know
to really understand the lion you had to understand biology physics the deer the tooth the savanna the weather all was in a wider context and that the best thing anyone could do was to try and understand that context he wrote the highest activity a human being can attain is learning for understanding because to understand is to be free unlike Descartes Spinoza didn't think that thought and materiality were separate but part of one substance all is connected he said the many are one and so God or meaning or spirit or the whole whatever you
want to call it is spread through all things part of each rock each lion each person each atom each thought each moment it's about grasping as much of it as possible knowing means you know what to do and that is the root of freedom itself Spinoza's revolutionary model of the universe lines up much better with AI researchers than Descartes' does because many in AI argue for a connectionist view of intelligence neural nets deep learning the neurons in the brain they're all intelligent because they take data about the world and look for patterns
that is connections in that data intelligence isn't just in here it's out there it's everywhere and it's why Kate Crawford's book is called The Atlas of AI as she seeks to explore the ways AI connects to maps captures the physical world it's lithium and cobalt mining it's wires and sensors it's Chinese factories and the conflicts in the Congo Intel alone uses 16,000 suppliers all around the world connections are what matters intelligence is a position it's a perspective it's not what you know it's who you know what resources you can command it's not how intelligent you are it's
what who where and why you've got access to the things you can do with it I think this is the beginning of a positive model of our future with technology true artificial intelligence will connect to and build upon and work in a relationship with other machines other people other groups other resources it will work with Logistics shipping buying and bargaining Printing and Manufacturing controlling machines in labs and research in the world around the world how many will truly have access to these kinds of resources and this kind of intelligence intelligence that's embodied that does stuff
connection access and control will be what matters in the future intelligence makes little sense if you don't have the ability to reach out and do things with it shape it be part of it use it AI might do things work things out control things build things better than us but if who gets to access these great new industrial networks determines the shape all of this takes then I think we can see why we're entering more and more into an age of Storytelling if AI can do the science better than any of us if it can
write the best article on international relations if it can build machines and cook for us and work for us what will be left of us well maybe our stories our lives we'll listen to the people that can tell the best stories about what we should be doing with these tools what shape our future should take what ethical questions are interesting which artistic ones stories about family emotion journey and aspiration local life friendship games all of those things that make us still human maybe and maybe idealistically the AI age will be more about meaning meaning is
about being compelling passionate making a case articulately using the data and the algorithms and the inventions to tell a good story about what we should all be doing with it all what matters the greatest innovators and inventors and marketers knew this it's not the technology that matters it's the story that comes with it more films music more local products and festivals more documentaries and ideas and art more exploring the world more working on what matters to each of us I'd like to think maybe naively that I won't be replaced by ChatGPT because while
eventually it might be able to write a better script about the history of AI it might be more accurate be able to draw from more facts to do a better job but it won't be able to do this bit as well because I like to think you also want to know a little bit about me my stories my values my emotions and passions and idiosyncrasies my mistakes my style and perspective who I am what I believe from my little corner of the world so that you can agree or disagree with my idea of humanity I don't really
care how factories are run I don't really care about the mathematics of space travel I don't really care too much about the code that makes AI run underneath the Bonnet I care much more about what the people building all of it think feel value believe how they live their lives I want to understand these people as people so I can work out what to agree with and what not to what to think myself we too often think of knowledge as kind of static a body of books or Wikipedia or in the world ready to be
collected scientifically observed with instruments but we forget it's dynamic changing about people movement about lives it's about trends and new friends emotions job connections art and cultural critique new music political debate new dynamic changing ideas hopes interests dreams and passions and so I think the next big trend in AI will be using large language models on this kind of knowledge it's why Google tried and failed to build a social network and why Meta Twitter and LinkedIn could be ones to watch they have access to realtime social knowledge that OpenAI at the moment at least
don't maybe OpenAI will try and build a social network based on ChatGPT they do at least now have even more data than they analyzed at the beginning because rather than just a pile of static books they have all of us asking questions our locations our quirks our values all of that is stuff they'll be analyzing and using right now using this kind of data could have incredible potential it could teach us so much about political psychological sociological or economic problems if it was put to good use some for example have argued that dementia
could be diagnosed just by the way someone uses their phone imagine a social network using data to make suggestions about what services people in your town need how to fill a gap imagine AI using your data to make honest insights into emotional or mental health issues that you have giving specific research driven personalized perfect road maps on how to beat an addiction or a problem with a relationship I'd be happy for my data to be used honestly transparently ethically scientifically especially if I was compensated for it in some way too I want a world where
people contribute to and are compensated for and can use AI productively to have easier more creative more fulfilling and more meaningful lives I want to be excited in the way computer scientist Scott Aaronson is excited when he writes an alien has awoken admittedly an alien of our own fashioning a golem more the embodied spirit of all the words on the internet than a coherent self with independent goals how could our eyes not pop with eagerness to learn everything this alien has to teach so how do we get to a better future to make sure everyone
benefits from AI I think we need to focus on two things primarily both are in a way a type of bias cultural bias and a competitive bias then further we need to think more about wider political issues as well intentioned as anyone might be bias is a part of Being Human we're positioned we have a perspective a culture a lens that we see things through AI models are trained through something called reinforcement learning nudging the AI subtly in a certain direction the problem with ethical questions is that there's often no single answer David Hume famously
said that you can't get an ought from an is in other words you can't get a picture of what the world should look like from what it does look like this was also the basis of the postmodern critique of the science of the Enlightenment that parts of science and physics might be universal objective but what you did with the answers was particular was subjective Newtonian laws could lead to both innovations in healthcare and nuclear war when Google released its latest update of Gemini in February of 2024 it was widely ridiculed for including black soldiers
in images of 1940s Germany and people of color in images of the US founding fathers to many this was Testament that Silicon Valley was putting social justice ideology over a commitment to reality and facts but it does also point to the complexity AI programmers face it seems absurd to depict historical figures as anything other than they were but what about images of the present or images of the future what about imagined images Creative Images if you ask an image generator to depict workers in an office should it be demographically accurate according to the population statistics
of the country or should it be accurate in that it actually depicts who's in those roles in each country or should it be inclusive exactly or should it be random should it differ from Bangladesh to Japan to the US of course if it's based on historic stock photos of office workers it's going to skew white should this matter should this be corrected for Google was on the receiving end of criticism for being overly inclusive but ChatGPT seems to produce only white office workers is this any different Google's mission statement has been to organize
the world's information but this is changing it's going to be organizing and producing information and it's almost impossible to do so without taking an ethical position of some kind the color of people's skin may seem important to some and frivolous to others but it's only often commented on because it's maybe the most visually obvious place that bias shows what other bias lurks underneath without being explicit when asking about medical advice say features in design ethics in politics these are all difficult questions and they are ones that are being asked behind closed doors as DeepMind
co-founder Mustafa Suleyman writes in The Coming Wave a great book by the way researchers set up cunningly constructed multi-turn conversations with the model prompting it to say obnoxious harmful or offensive things seeing where and how it goes wrong flagging these missteps researchers then reintegrate these human insights into the model eventually teaching it a more desirable worldview desirable human insights flagging missteps all of this is being done by a very specific group of people in a very specific part of the world at a very specific moment in history reinforcement learning means someone is doing the reinforcing
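To make concrete what that reinforcing can look like, here is a toy sketch of my own under heavily simplified assumptions, not Suleyman's description or any lab's actual pipeline: the "model" holds a score for each candidate reply, reviewers' flags become negative rewards, and repeated rounds of feedback push the model's preferences away from the flagged replies.

```python
import math

# Toy reinforcement from human feedback (illustrative only): the "model"
# is just a score per candidate reply, nudged by reviewer flags.
replies = {"helpful answer": 0.0, "obnoxious answer": 0.0, "harmful answer": 0.0}

def probabilities(scores):
    # softmax: convert raw scores into the chance of the model picking each reply
    total = sum(math.exp(s) for s in scores.values())
    return {reply: math.exp(s) / total for reply, s in scores.items()}

def reinforce(scores, reply, reward, lr=1.0):
    # human feedback as a reward signal: flagged replies get negative reward
    scores[reply] += lr * reward

# several rounds of red-teaming: reviewers flag the bad replies
for _ in range(5):
    reinforce(replies, "helpful answer", +1.0)
    reinforce(replies, "obnoxious answer", -1.0)
    reinforce(replies, "harmful answer", -1.0)

probs = probabilities(replies)
print(probs)  # the model now almost always prefers the unflagged reply
```

The point of the sketch is the one made above: the result depends entirely on which replies the reviewers choose to flag, which is why it matters who is doing the reinforcing.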
on top of this as many Studies have shown if you train AI on the bulk sum of human articles and books from history you get a lot of bias a lot of racism a lot of sexism a lot of homophobia and much much more Studies have shown how heart attacks in women have been missed because the symptoms doctors look for are based on data from Men's heart attacks facial recognition has higher rates of errors with darker skin and women because they're trained on white men Amazon's early experiment in machine learning CV selection was quietly dropped
because it wasn't choosing any CVs from women these sorts of studies are everywhere the data is biased but it's also being corrected for shaped nudged by a group of people with their own very particular set of biases around 700 people work at OpenAI most of what they do goes on behind the black box of business meetings and boardrooms and many have pointed to how weird AI culture is not in a pejorative way just how far from the baseline mean average person you're going to be if that's your life experience very geeky for lack of
a better word and again I don't mean that in a harsh way I just mean very technologically minded very tech-positive and very Californian they're all as Adrian Daub points out in What Tech Calls Thinking transhumanists and Randian libertarians with a little bit of 1960s counterculture antiestablishmentarianism thrown in they have an ideology the second bias is the bias towards competitive advantage again call me naive here but I think the vast majority of people in the world want to do good in some way if they can they want to be ethical they want to make something
great if they can but often competitive pressures get in the way we saw this when OpenAI realized they needed private funding to compete with Google we see it with their reluctance to be transparent with how they train data sets because competitors could learn from that we see it with AI weapons in the military and fears about AI in China the logic running through is if we don't do this our competitors will if we don't get this next model out Google will outperform us safety testing is slow and we're on a deadline if Instagram makes
their algorithm less addictive TikTok will come along and outperform them and this is why Mark Zuckerberg actually wants regulation Suleyman from DeepMind has set up multiple AI businesses and actually wants regulation Gary Marcus maybe the leading expert on AI who has sold one startup AI company to Uber and began a robotics company too actually wants regulation if wealthy free market tech entrepreneurs not exactly Chairman Mao are asking the government to step in then I think that should tell us something here are some things we do regulate in some way medicine law clinical
trials pharmaceuticals biological weapons chemical nuclear all weapons actually pretty much buildings food air travel cars and transport space travel pollution electrical engineering basically anything potentially dangerous okay so what could careful regulation look like it's a difficult question I always think regulation should aim for the maximum amount of benefit for all with the minimum amount of interference first transparency in some way will be central there's an important concept called interoperability it's when procedures are designed in an open way so that others can use them too it stops dominant interests blocking others out of the market banking
systems plugs and electrics screw heads traffic control are all interoperable organizations or manufacturers can plug in and use the system designed to work with the system Microsoft have been forced into being interoperable so that anyone can build applications easily for Windows this is a type of openness and transparency it's for technical experts but there needs to be some way auditors safety testers regulatory bodies and the rest of us can in varying ways see under the hood of these models regulators could pass laws on data set transparency or transparency on the answers large language models give
and where those answers come from you could require references sources crediting so that people are in some way compensated for their work as Michael Wooldridge writes transparency means that the data that a system uses about us should be available to us and the algorithms used within that should be made clear to us too this will only come from regulation that means regulatory bodies with qualified experts answerable democratically to the electorate Suleyman points out that the Biological Weapons Convention in the US has just four employees fewer than a McDonald's regulatory bodies should work openly with networks of
academics and Industry experts making findings either public to them or public to all trials of of new models could be based on clinical trials we shouldn't be letting open AI safety test themselves they need to be audited or put through safety trials by democratically accountable experts in a wide range of areas we don't even have to rely on government employees although I think we should regulation could instead Force companies to spend a certain percentage of their own revenue on safety testing or to also use a certain number of outside specialists from universities and industries that
are relevant sullan writes as an equal partner in the creation of the coming wave governments spend a better chance of steering it towards the overall public interest there's a strange misconception that regulation means less Innovation but Innovation always happens in a context it's directed by many things recent Innovations in green technologies batteries and electric vehicles would not have come about without regulatory changes and might have happened much sooner with different incentives and tax breaks the internet along with many other scientific and Military and space advances were not a result of private Innovation but an entrepreneurial
state, and of course much innovation comes out of universities. I always come back to openness, transparency, accountability, and democratic oversight, because, as I said at the end of How the Internet Was Stolen, it only takes one mad king, one greedy dictator, one slimy pope, or one foolish jester to nudge the levers they hover over towards chaos, rot, and even tyranny. Which leads me to the final point. AI, as we've seen, is about that impossible totality. It really might be the new fire or electricity, because it implicates everything, everyone, everywhere, and so, more than anything, it's about the issues we already face. The knowledge that comes out of artificial intelligence is already political, already economic, already implicated in global affairs: workers and wages, healthcare and pensions, culture and war. And it's those concrete, material things that matter. I'd wager that, like every total technological transformation that came before, the printing press and the Industrial Revolution in particular, this one will also require a radical change in our politics. The printing press led to the Reformation, and the Industrial Revolution to liberalism and capitalism. What will our political world look like under AI? That's up to us. We're in for a period of mass disruption, that's almost certain, and so we need to democratically work out how AI can address all these issues instead of exacerbating them. If we don't, we'll make ourselves slowly obsolete, redundant, forgotten. AI, using everything that we know, that we built or discovered, will surpass us and will drift off into the future, leaving us behind. We need to make sure that each of us is tethered in some way to that incomprehensibly different future: connected to it, taught by it, in control of it, compensated by it. It's
in service of us, all of us. I'm excited and daunted in equal measure, but my sense is that if we don't do that, rather than being wiped out, we'll be left stranded in the wake of a colossal, superintelligent juggernaut that we just don't understand; just forgotten, left behind, like all of the human species that we surpassed, that then went extinct; all of us left fleshy, old-fashioned, forgotten, bobbing around, watching the future go off in a dark, deep, endless, boundless sea.
Thank you so much to all of these incredible people for supporting Then & Now. I've got some exciting ideas for some even bigger and better projects in the future. And listen, I know I've given this spiel already, but this channel is my absolute passion. I spend pretty much all of my time very carefully reading, trying to go as deep in my research as possible, asking the most probing questions I can, educating myself where I think necessary, looking at history, politics, and philosophy as widely as I can, and working out what I think is important for our future. On Patreon there are already bonus videos on how to think differently about history, and I've got more Patreon-only videos in the works that I'm excited about. You get access to the private chat server, you can see early scripts, and you can watch the videos ad-free and early, and sometimes uncensored. It's a single dollar per month. If you can't do that, don't feel bad: click the bell, press subscribe, press like, leave a comment. All of those things genuinely do a lot for the algorithm and really help the channel out. But most importantly, just thank you for watching. See you next time.