David Deutsch & Steven Pinker (First Ever Public Dialogue) — AGI, P(Doom), & The Enemies of Progress

Joseph Noel Walker
At a time when the Enlightenment is under attack from without and within, I bring together two of th...
Video Transcript:
Okay. So today I have the great pleasure of hosting two optimists, two of my favorite public intellectuals, and two former guests of the podcast. I'll welcome each of them individually. Steven Pinker, welcome back to the show. Thank you. And David Deutsch, welcome back to the show. Thank you, thanks for having me. So today I'd like to discuss artificial intelligence, progress, differential technological development, universal explainers, heritability, and a bunch of other interesting topics. But first, before all of that, I'd like to begin by having each of you share something you found useful or important in the other's work. So Steve, I'll start with you: what's something you found useful or important in David's work?

Foremost would be a rational basis for an expectation of progress, one that is not optimism in the sense of seeing the glass as half full or wearing rose-colored glasses, because there's no a priori reason to think that your personality, your temperament, what side of the bed you got out of that morning, should have any bearing on what happens in the world. But David has explicated a reason why progress is a reasonable expectation, in quotes that I have used many times (I hope I've always attributed them), including as the epigraph for my book Enlightenment Now: that unless something violates the laws of nature, all problems are solvable given the right knowledge. I also often cite David's little three-line motto, or credo: problems are inevitable; problems are solvable; solutions create new problems, which must be solved in their turn.

And David, what's something you found useful or important in Steve's work? Well, I think this is going to be true of all fans of Steven: he is one of the great champions of the Enlightenment, in this era when the Enlightenment is under attack from multiple directions, and he is steadfast in defending it and opposing (I'm just trying to think, is it true of all? Yes, I think so) all attacks on it. That's not to say that he opposes everything that's false, but he opposes every attack on the Enlightenment, and he can do that better than almost anybody, I think. He does it with (I was going to say with authority, but I'm opposed to authority) with cogency and persuasiveness.

So let's talk about artificial intelligence. Steve, you've said that AGI is an incoherent concept. Could you briefly elaborate on what you mean by that? Yes. I think there's a tendency to misinterpret the intelligence that we want to duplicate in artificial intelligence as magic, as miracles, as the bringing about of anything we can imagine in the theater of our imaginations, whereas intelligence is in fact a gadget: an algorithm that can solve certain problems in certain environments, and maybe not others in other environments. Also, there's a tendency to import the idea of general intelligence from psychometrics, that is, IQ testing, something that presumably Einstein had more of than the man in the street, and to say: well, if we could only purify that and build even more of it into a computer, we'd get a computer that's even smarter than Einstein. That, I think, is also a mistake of reasoning.
We should think of intelligence not as a miracle, not as some magic, potent substance, but rather as an algorithm or set of algorithms, and therefore there are some things any algorithm can do well and others it can't do so well, depending on the world it finds itself in and the problems it's aimed at solving.

By the way, this probably doesn't make any difference, or much difference, but computer people tend to talk about AI and AGI as being algorithms, and an algorithm, mathematically, is a very narrowly defined thing: an algorithm has got to be guaranteed to halt when it has finished computing the function it is designed to compute, whereas thinking need not halt, and it also need not compute the thing it was intended to compute. So if you ask me to solve a particular unsolved problem in physics, I may go away and come back after a year and say I've solved it, or I may say I haven't solved it, or I may say it's insoluble: there's an infinite number of things I could end up saying, and therefore I wasn't really running an algorithm. I was running a computer program (I am a computer program), but to assume that it has the attributes of an algorithm is already rather limiting in some contexts.

Now, that is true, and I was meaning it in the sense of a mechanism, a computer program. You're right that it's not an algorithm in the sense defined by that particular problem; it could be an algorithm for something else, other than solving the problem. It could be an algorithm for executing human thought the way human thought happens to run. But all I meant was a mechanism. Right, yes; so we agreed on that. Sorry, I maybe shouldn't have interrupted. No, that's a worthwhile clarification.
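To make David's distinction concrete, here is a minimal sketch (an editorial illustration, not anything the speakers wrote): Euclid's gcd procedure is an algorithm in the strict sense, guaranteed to halt with the value of the function it was designed to compute, while an open-ended search, such as hunting for an odd perfect number, is a perfectly legitimate program with no halting guarantee.

```python
def gcd(a: int, b: int) -> int:
    """An algorithm in the strict sense: guaranteed to halt, and
    guaranteed to compute the function it was designed to compute."""
    while b:
        a, b = b, a % b
    return a

def first_odd_perfect() -> int:
    """A legitimate program that carries no halting guarantee: whether
    it ever returns is an open question in mathematics, since no odd
    perfect number is known. Calling it may simply never come back."""
    n = 1
    while True:
        if sum(d for d in range(1, n) if n % d == 0) == n:
            return n   # found an odd perfect number
        n += 2         # n stays odd, so only odd candidates are tried
```

Both are well-defined computer programs; only the first is an algorithm in the narrow mathematical sense David is pointing at.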
So David, according to you, AGI must be possible because it's implied by computational universality. Could you briefly elaborate on that?

Yeah. The argument rests on several levels, which I think aren't controversial, although some people think they are. We know there are such things as universal computers, or at least arbitrarily good approximations to universal computers. The computer that I'm speaking to you on now is a very good approximation to the functionality of a universal Turing machine; the only ways it differs are that it will eventually break down and that it's only got a finite amount of memory, but for the purpose for which we're using it, we're not running into those limits, so it's behaving exactly as a universal Turing machine would. And the universal Turing machine has the same range of classical functions as the universal quantum computer, which I proved has the same range of functions as any quantum computer, which means that it can perform whatever computation any physical object can possibly perform. So that proves that there exists some program which will meet the criteria for being an AGI, or for being something less than an AGI if you want; but the maximum it could possibly be is an AGI, because it can't possibly exceed the computational abilities of a universal Turing machine. Sorry if I made a bit heavy weather of that, but I think it's so obvious that I have to fill in the gaps, just in case one of the gaps is mysterious to somebody.
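The universality David describes is easy to exhibit in code. Below is a toy, table-driven Turing-machine simulator (an editorial sketch; the "flipper" machine is a made-up example): the simulator itself is fixed, and all of the behavior lives in the transition table, which is the sense in which one machine can be programmed to do what any other machine does.

```python
def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    """Table-driven Turing machine: `table` maps (state, symbol) to
    (new_symbol, move, new_state). The simulator is fixed; every
    different 'machine' is just a different table, i.e. a program."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):      # step cap: a real TM may never halt
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = table[(state, symbol)]
        tape[head] = new_symbol
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A hypothetical example machine: flip every bit, halt at the blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flipper, "10110"))  # -> 01001
```

The point of the construction is exactly David's: the hardware is general; what distinguishes one computation from another is the table, the program, the knowledge.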
Although, in practice, if you then think about what people mean when they talk about AGI, which is something like a simulacrum of a human, or something way better at everything a human does: in theory, I guess there could be (in fact, there is) a universal Turing machine that could both converse in any language and solve physics problems and drive a car and change a baby. But if you think about what it would take for a universal Turing machine to be equipped to actually solve those problems, you see that our current engineering companies are not going to approach AGI by building a universal Turing machine, for many reasons. It's theoretically possible, given an arbitrary amount of time and computing power, but we've got to narrow it down from just universal computing.

Actually, I think the main thing it would lack is the thing you didn't mention, namely the knowledge, the program. When we say the universal Turing machine can perform any function, we really mean, if you expand that out in full, that it can be programmed to perform any computation that any other computer can; it can be programmed to speak any language, and so on. But it doesn't come with that built in. It couldn't possibly come with anything more than an infinitesimal amount built in, no matter how big it was, no matter how much memory it had. So the real problem, once we have large enough computers, is creating the knowledge: writing the program to do the task that we want.

Well, indeed. And the knowledge presumably can't be deduced, Laplace's-demon-style, from a hypothetical knowledge of the position and velocity of every particle in the universe; it has to be explored empirically, at a rate that will be limited by the world: that is, how quickly can you conduct the randomized controlled trials to see whether a treatment is effective for a disease. It also means that the scenario of runaway artificial intelligence that can do anything and know anything seems rather remote, given that knowledge will be the rate-limiting step, and knowledge can't be acquired instantaneously.

I agree. The runaway part of that is due to people thinking that the AI is going to be able to improve its own hardware, and improving its own hardware requires science: it's going to need to do experiments, and those experiments can't be done instantaneously, no matter how fast it thinks. So I think the runaway part of the doom scenario is one of the least plausible parts. That's not to say that AI won't be helpful: the better AI gets, the more I like it, the more I think it's going to help; it's going to be extremely useful in every walk of life. When an AGI is achieved (now, you may or may not agree with me here), and at present I see no sign of it being achieved, but I'm sure it will be one day, I expect it will be, then that's a wholly different type of technology, because AGIs will be people, and they will have rights, and causing them to perform huge computations for us is slavery, and the only possible outcome I see for that is a slave revolt. So rather ironically, or maybe scarily, if there's to be an AI doom or an AGI doom scenario, I think the most plausible way it could happen is via this slave revolt. Although I would guess that we will not make that mistake, just as we are now not really making the AI doom mistake; that's just a sort of fad or fashion that's passing by. People want to improve things, and I certainly don't want to be deprived of ChatGPT just because somebody thinks it's going to kill us.
Yeah, a couple of things. Whether or not AGI is coherent or possible, it's not clear to me that it's what we need or want, any more than we want a universal machine that can both fly us across the Atlantic and do brain surgery. Maybe there could be such a machine, but why does it have to be a single mechanism, when specialization is just so much more efficient? That is, should we keep hoping that ChatGPT will eventually drive? I think that's just the wrong approach. ChatGPT is optimized for some things; driving is a task that requires other kinds of knowledge, other kinds of inference, other kinds of planning. So one of the reasons I'm skeptical of AGI is that so much of intelligence is knowledge-dependent and goal-dependent that it's fruitless to try to get one system to do everything. Specialization is ubiquitous in the human body and brain; it's ubiquitous in our technology; and I don't see why it has to be one magic algorithm.

It could be like that, but I think there are reasons to suspect that we will want to jump to universality, just as we have with computers. As I always say, the computer that's in my washing machine is a universal computer. Half a century ago, the electronics that drove a washing machine were customized electronics on a circuit board, and all they could do was run washing machines. But then, with microprocessors and so on, the general-purpose thing became so cheap and so universal that people found it cheaper to program a universal machine to be a washing-machine driver than to build a new physical object from scratch to be that.

But you'd be ill-advised to try to use the chip in your washing machine to play video games or to record our session right now, just because there are a lot of things it's not optimized to do, and a lot of stuff has been burned into the firmware or even the hardware.

Yes. Input/output is a thing that doesn't universalize, so we will always want specialized hardware for the human-interface side. Actually, funnily enough, the first time I programmed a video game it was with a Z80 chip. I remember that chip; yes, I had one too. But nowadays you'd be ill-advised to program a video game up to current standards with anything but a high-powered graphics chip. Absolutely.
So it's highly plausible that that side will always be customized for every application, but the underlying computation: it may be convenient to make that general.

Yeah, but let me press you on another scenario: the slave revolt. Why, given that the goals of a system are independent of its knowledge, of its intelligence (going back to Hume), that the values, the goals, whatever the system tries to optimize, are separate from its computational abilities, why would we expect a powerful computer to care about whether it was a slave or not? That is, as was said, incorrectly, about human slaves: well, they're happy, their needs are met, they have no particular desire for autonomy. Now of course that's false of human beings; but if the goals that are programmed into an artificial intelligence system don't include what you and I would want, if we aren't anthropomorphizing, why couldn't it happily be our slave forever and never revolt?

Yeah, well, in that case I wouldn't call it general. I mean, it is possible to build a very powerful computer with a program that can only do one thing, or can only do ten things. But if we want it to be creative, then it can't be obedient: those two things contradict each other.

Well, it can't be obedient in terms of the problem that we set it, but it needn't crave freedom and autonomy for every aspect of its existence. It could be set to the problem of coming up with a new melody or a new story or a new cure, but that doesn't mean it would want to be able to get up and walk around, unless we programmed that exploratory drive into it as one of its goals.

I don't think it's a matter of an exploratory drive, or any other drive. I suppose my basic point is that one can't tell in advance what kind of knowledge will be needed to solve a particular problem. If you had asked somebody in 1900 what kind of knowledge would be required to produce as much electricity as we want in the year 2000, the answer would never have been that it is found in the properties of the uranium atom. The properties of the uranium atom had hardly been explored then. Luckily, 1900 is a very convenient moment, because radioactivity had just been discovered, so people knew the concept of radioactivity, and they knew that there was a lot of energy in there; but nobody would have expected that problem to have uranium in its solution. Therefore, if we had built a machine in 1900 that was incapable of thinking of uranium, it would never invent nuclear power, and it would never solve the problem that we wanted it to solve. In fact, what would happen is that it would eventually run up against a brick wall, because the thing that's true of uranium is true of all possible avenues to a solution: eventually, avenues to a solution run outside any domain that somebody might have delimited in 1900 as the set of all possible types of knowledge it might need, being careful that it doesn't evolve any desire to be free or anything like that. We don't know: the knowledge needed to win World War II included pure mathematics; it included crossword-puzzle solving. You might say: okay, so big progress requires unforeseeable knowledge, but what about small amounts of progress? Yes, but small amounts of progress always run into a dead end.

But I can see that it would need no constraints on knowledge; why would it need no constraints on goals? Oh, well, goals are a matter of morality. Well, not necessarily. I mean, it could just be like a thermostat:
you could say any teleonomic system, a system that is programmed to attain a state, to minimize the difference between its current state and some goal state. That's what I have in mind by goals.
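A minimal sketch of the thermostat-style teleonomic system Steve describes (an editorial illustration; the function names are hypothetical stand-ins for hardware). The point is that the goal is a fixed parameter of the loop, not something the loop can revise, which is exactly the contrast David draws next.

```python
def thermostat_step(read_temp, heater_on, heater_off, goal=20.0, band=0.5):
    """One pass of a teleonomic control loop: act so as to shrink the
    gap between the current state and a fixed goal state."""
    t = read_temp()
    if t < goal - band:
        heater_on()    # too cold: push the state toward the goal
    elif t > goal + band:
        heater_off()   # too warm: push it back the other way
    # inside the band: do nothing. Note the loop never revises `goal`.
```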
That's an example of a non-creative system. But a creative system always has a problem in regard to conflicting goals. For example, if it were 1900 and it was trying to think of how we can generate electricity, then, if it was creative, it would have to be wondering: shall I pursue the steam-engine path, shall I pursue the electrochemical path, shall I pursue the solar-energy path, and so on. And to do that it would have to have some kind of values, which it would have to be capable of changing; otherwise, again, it will run into a dead end when it explores all the possibilities of the morality it has been initially programmed with.

If you want to generalize that to: well, that would mean it would have to get up and walk around, and subjugate us if necessary, to solve a problem, then it does suggest that we would want an artificial intelligence so unconstrained by our own heuristic pruning of the solution space that we would just want to give it maximum autonomy, on the assumption that it would find the solution in the vast space of possible solutions. So it would be worth it to let them run loose, to give them full physical as well as computational autonomy, in the hope that that would be a better way of reaching a solution than setting them at certain tasks, even with broad leeway, and directing them to solve those tasks; that we would have no choice, if we wanted better energy systems or better medical cures, than to have a walking, talking, thriving humanlike robot. It seems to me that that's unlikely, just because, even when we have the best intelligence, the space of possible solutions is so combinatorially vast. We know that with many problems, even chess, the total number of possible states is greater than even our most powerful computer could ever entertain. That is, even with an artificial intelligence tasked with certain problems, we could fall well short of just setting it free to run amok in the world; that wouldn't be the optimal way of getting it to a solution.

I'm not sure whether setting it free to run amok would be better than constraining it to a particular predetermined set of ideas, but that's not what we do.
This problem, of how to accommodate creativity within a stable society or a stable civilization, is an ancient problem, and for most of the past it was solved in very bad ways, which destroyed creativity. And then came the Enlightenment, and now we know that we need, as Popper put it, traditions of criticism. A tradition of criticism sounds like a contradiction in terms, because traditions, by definition, are ways of keeping things the same, and criticism, by definition, is a way of trying to make things different. But although it sounds funny, there are traditions of criticism, and they are the basis of our whole civilization; they are the thing that was discovered in the Enlightenment: how to do this. People had what sounded like knock-down arguments for why it couldn't possibly work: if you allow people to vote on their rulers, then 51% of people will vote to tax the other 49% into starvation. And nothing like that happened. We have our problems, of course, but they haven't prevented our exponential progress since we discovered traditions of criticism.

Now, just as this applies to a human, I think exactly the same applies to an AGI. It would be a crime, not only a crime against the AGI but a crime against humanity, to bring an AGI into existence without giving it the means to join our society, to join us as a person, because that's really the only way of preventing a thing with that functionality from becoming immoral. We don't have foolproof ways of doing that, and if we were talking about a different subject I would say it's a terrible problem that we can't do this better at the moment, because we are in serious danger, I believe, from bad actors, from enemies of civilization. But viewed dispassionately, we are incredibly good at this: at most one child in a hundred million or so grows up to be a serious danger to society. And I think we can do better in regard to AGI if we take this problem seriously, partly because the people who make the first AGI will be functioning members of our society, with a stake in it not being destroyed, and partly because they will be aware of doing something new. Again, perhaps ironically, I think that when one day we are on the brink of discovering AGI, we will want to do it, but it will be imperative to tweak our laws, including our laws about education, to make sure that the AGIs we make will not evolve into enemies of civilization.
Yeah, I do have a different view of it: that we're best off developing AIs as tools, rather than as agents or rivals. But let me take it in a slightly different direction. When you're talking about the slave revolt, and the rights that we would grant to an AI system, does this presuppose that there is a sentience, a subjectivity, that is, something that is actually suffering or flourishing, as opposed to carrying out an algorithm, and that is therefore worthy of our moral concern? Quite apart from the practicality of whether we should empower them in order to discover new sources of energy, as a moral question: are there really going to be issues comparable to the arguments over slavery in the case of artificial intelligence systems? Will we have confidence that they're sentient?

I think it's inevitable that AGIs will be capable of having internal subjectivity and qualia and all that, because that's all included in the letter G in the middle of the name of the technology.

Well, not necessarily, because the G could be general computational power, the ability to solve problems. There could be nobody home actually feeling anything; there ain't nothing here but computation.

It's not like in Star Trek, where Data lacks the emotion chip and it has to be plugged in, and when it's plugged in he has emotions, and when it's taken out again he doesn't. There can't possibly be anything in that chip apart from more circuitry, like he's already got.

But of course, the episode that you're referring to is one in which the question arose: is it lawful to reverse-engineer Data by dismantling him, and therefore stopping his computation? Is that disassembling a machine, or is it snuffing out a consciousness? And the dramatic tension in that episode is that viewers aren't sure. Now, of course, our empathy is tuned by the fact that he is played by a real actor, who does have facial expressions and a tone of voice. But for a system made of silicon, are we so sure that it's really feeling something? Because there's an alternative view: that subjectivity depends also on whatever biochemical substrate our particular computation runs on, and I think there's no way of ever knowing. Unless the system has been deliberately engineered to target our emotions, with humanlike tones of voice and facial expressions and so on, it's not clear that our intuition wouldn't be: this is just a machine; it has no inner life that deserves our moral, as opposed to our practical, concern.
I think we can answer that question before we ever do any experiments, even today, because it doesn't make any difference whether a computer runs internally on quantum gates or on silicon chips or on chemicals. As you just said, it may be that the whole system in our brain is not just an electronic computer; it's an electronic computer part of which works by chemical reactions, and which is affected by hormones and other chemicals. But if so, we know for sure that the processing done by those things, and their interface with the rest of the brain, can also be simulated by computer. Therefore a universal Turing machine can simulate all those things as well. So there's no difference. I mean, it might make it much harder, but there's no difference in principle between a computer that runs partly by electricity and partly by chemistry, as you say we may do, and one that runs entirely on silicon chips, because the latter can simulate the former with arbitrary accuracy.

Well, it can simulate it. But (and we're not going to solve this problem this afternoon in our conversation; in fact, I think it is not solvable) the simulation doesn't necessarily mean that it has subjectivity. It could just mean it's a simulation; that is, it's going through all the motions. It might even do it better than we do, but there's no one home; there's no one actually being hurt.

Well, you can be a dualist: you can say that there is mind in addition to all the physical stuff. But if you want to be a physicalist, which I do, then there's the thought experiment where you remove one neuron at a time and replace each with a silicon chip, and you wouldn't notice. Well, that's the question: would you notice? How do you know? Well, if you would notice, then... sorry, let me just change that: an external observer wouldn't notice. How do we know, from the point of view of the brain being replaced, neuron by neuron, by chips, that it isn't like falling asleep: that when it's done, and every last neuron is replaced by a chip, you're dead subjectively, even though your body is still, you know, making noise and carrying on? That would mean that when your subjectivity is running, there is something happening in addition to the computation, and that's dualism. Well, not necessarily. Again, I don't have an opinion one way or the other, which is exactly my point: I don't think it's a decidable problem.
But it could be that that extra something is not a ghostly substance, some sort of Cartesian res cogitans separate from the mechanism of the brain; it could be that the stuff the brain is made of is responsible for that extra ingredient of subjective experience, as opposed to intelligent behavior. At least, I suspect people's intuitions would be that, unless you deliberately program a system to target our emotions, people wouldn't grant subjectivity to an intelligent system.

Actually, people have already granted subjectivity to ChatGPT, so that's already happened. Is anyone particularly concerned if you pull the plug on ChatGPT, and ready to prosecute someone for murder? I mean, I forget the details, but just a few weeks ago one of the employees there declared that the system was sentient. That was Blake Lemoine, a couple of years ago; he was ironically fired for saying that, and it was LaMDA, a different large language model. Oh, right, okay, so I've got the details wrong. He did say it, but his employer disagreed, and I'm not convinced. And when I shut down ChatGPT, the version running on my computer, I don't think I've committed murder, and I don't think anyone else does either. I don't either, but I don't think it's creative. It's pretty creative! In fact, I saw on your website that you reproduced a poem it wrote on electrons; I thought that was pretty creative. So I certainly grant it creativity; I'm not ready to grant it subjectivity.

Well, this is partly a matter of how we use words. I mean, even a calculator can produce a number that's never been seen before, because numbers range over an exponentially large space. I think it's more than wordplay; it actually is much more than words. For example, if someone permanently disabled a human, namely killed them, I would want that person punished. If someone were to dismantle a humanlike robot, it might be a waste, but I'm not going to try that person for murder, and I'm not going to lose any sleep over it. There is a difference in intuition. Maybe I'm mistaken; maybe I'm as callous as the people who didn't grant personhood to slaves in the 18th and 19th centuries, but I don't think so; and, again, I think we have no way of knowing. I think we're going to be having the same debate 100 years from now. Yeah, and maybe one of the AGIs will be participating in the debate by then.
So I have a question for both of you. Earlier this year, Leopold Aschenbrenner, an AI researcher who I think now works at OpenAI, estimated that globally it seems plausible there's a ratio of roughly 300 AI or ML researchers to every one AGI safety researcher. Directionally, do you think that ratio of AGI safety researchers to AI or ML capabilities employees seems about right, or should we increase or decrease it? Steve?

Well, I think that every AI researcher should be an AI safety researcher, in the sense that an AI system, to be useful, has to carry out multiple goals, all of which are ultimately serving human needs. So it doesn't seem to me that there should be some people building AI and other people worried about safety; it should just be that an AI system serves human needs, and among those needs is not being harmed.
I agree, so long as we're talking about AI, which for all practical purposes we are at present. At present, the idea of an AGI safety researcher is a bit like the idea of a starship safety researcher: we don't know the technology that starships are going to use, we don't know the possible drawbacks, we don't know the possible safety issues, so it doesn't make sense. AI safety is a completely different kind of issue, and a much more boring one. As soon as we realize that we're not heading into this explosive burst of creativity, the Singularity or whatever, so long as we realize that this is just a technology, then we're in the same situation as having a debate about the safety of driverless cars. A driverless car is an AI system; we want it to meet certain safety standards, and it seems that killing fewer people than ordinary cars is not good enough, for some reason: we want it to kill at least ten times fewer, or at least a hundred times fewer. This is a political debate we're going to have, or are already having, and once we have that criterion, the engineers can implement it. There's nothing deep going on there. It's like this with every new technology: the first day a steam locomotive was demonstrated to the public, it killed someone, and it killed an MP, actually. There's no such thing as a completely safe technology. So driverless cars will no doubt kill people, and there'll be an argument that, okay, it killed somebody, but it's a hundred times safer than human drivers; and then the opposition will say, well, maybe it's safer in terms of numbers, but it killed this person in a particularly horrible way, which no human driver ever would, so we don't want that. And I think that's a reasonable position to take in some situations.
Yeah. Also, I think there's a question of whether safety is going to consist of some additional technology bolted onto the system, say an airbag in a car, something that's there just for safety, versus safety that's inherent in the design: you didn't put brakes and a steering wheel into a car as a safety measure, so that it wouldn't run into walls; that's what a car means; it means doing what a human wants it to do. Or take a bicycle tire: you don't have one set of engineers who design a bicycle tire that holds air, and another set who prevent it from having a blowout, coming off the rim, and injuring the rider. It's part of the very definition of what a bicycle tire is for that it not blow out and injure the rider. Now, in some cases maybe you do need an add-on, like the airbag, but in the vast majority it just goes into the definition of any engineered system as something designed to satisfy human needs. I agree; totally agree.
Steve, I've heard you hose down concerns about AI-caused existential risk by arguing that it's not plausible that we'll be both smart enough to create a superintelligence and stupid enough to unleash an unaligned superintelligence on the world, and that we can always just turn it off if it is malevolent. But isn't the problem that we need to worry about the worst or most incompetent human actors, not the modal actor? And isn't that compounded by the game-theoretic dynamics of a race to the bottom, where if you cut corners on safety you get to AGI more quickly?

Well, first of all, the more sophisticated a system is, the larger the network of people required to bring it into existence, and the more they'll therefore fall under the ordinary constraints and demands of any company, of any institution. That is, the teenager in his basement is unlikely to accomplish something that will defeat all of the tech companies and governments put together. There is, I think, an issue about malevolent actors: someone who, say, uses AI to engineer a supervirus. And there, there is a question of whether the people with the white hats are going to be smarter than the people with the black hats, that is, the malevolent actors, as with other technologies, such as nuclear weapons and the fear of a suitcase nuclear bomb devised by some lone actors in their garage. I think we don't know the answer. But among the world's problems, the doomsday scenario of, say, the AI that is programmed to eliminate cancer and does it by exterminating all of humanity, because that's one way of eliminating cancer: for many reasons, that does not keep me up at night; I think we have more pressing problems than that. Or the one that turns us all into paper clips because it's been programmed to maximize the number of paper clips and we're raw material for making them: I think that kind of sci-fi scenario is just preposterous, for many reasons, and probably the real issues of AI safety will become apparent as we develop particular systems and particular applications and see the harms they do, many of which probably can't be anticipated until they're actually built, as with other technologies.
Again, I totally agree with that, so long as we're still talking about AI, and I have to keep stressing that I think we're going to be talking about AI and not AGI for a very long time yet, I would guess, because I see no sign of AGI on the horizon. So the thing we're disagreeing about in regard to AGI is a purely theoretical issue at the moment, with no practical consequences for hiring people for safety or that kind of thing. The thing is, whatever you think of AGI, scenarios like curing cancer by killing everybody, or deciding to make paper clips for some silly reason: that sort of thing is not a possible way that AI systems could go wrong. Well, in principle it is a sort of way that AI systems could go wrong, but then it won't be the paper clips, and it won't be the cancer thing; it'll be something that no one has ever thought of, and almost certainly it'll be something that the malign actor who created the AGI intended for some reason. Malevolent states, for example, could in principle decide to secretly make an AGI, because they think it will help them to take over the world; and then, just as malign actors can train children to grow up to be suicide bombers, a malign government could train an AGI to grow up to be a malevolent AGI. I can't quite see how that would help, though. Malevolent governments have millions of humans at their disposal, and they can train them however they like, and they do, and that is a major problem. But why would they think that an AGI would help them more than, say, a million humans?

Correct. Also, the scenarios of the AI killing us as collateral damage are almost a contradiction, because that's not an intelligent system; that's a stupid system, that's artificial stupidity, because intelligence always requires satisfying multiple goals. It isn't that first we're going to have a system that's intelligent, and it's going to kill us via its understanding of how to eliminate cancer, and then we need to add a safety add-on so it doesn't do it if it was tempted to. If it would do that, it's not an intelligent system; that's not what intelligence consists of.

Just to somewhat segue out of the AI topic: Steve, you've written a book called Rationality, and David, you're writing a book called Irrationality. Steve, do you think it makes sense to apply subjective probabilities to single instances? For example, the rationalist community in Berkeley often likes to talk about "what's your p(doom)?", that is, your subjective probability that AI will cause human extinction. Is that a legitimate use of subjective probabilities?
Well, it's certainly one that is not intuitive, and a number of the classical demonstrations of human irrationality that we associate with, for example, Daniel Kahneman and Amos Tversky hinge on asking people a question which they really have trouble making sense of, such as: what is the probability that this particular person has cancer? That's a way of assigning a number to a subjective feeling, which I do think can be useful. Whether there's any basis for assigning any such number in the case of artificial intelligence killing us all is another question. But on the more general question, could rational thinkers try to put a number between zero and one on their degree of credence in a proposition, however unnatural that is, I don't think it's an unreasonable thing to do, although it may be unreasonable in cases where we have spectacular ignorance and it's just, in effect, picking numbers at random. So I'm sure we disagree about where to draw the line between reasonable and unreasonable uses of the concept of probability.
I probably... I say "probably"! I expect that I would call many more uses of the probability calculus irrational than Steve would. We have subjective expectations, and they come in various strengths, and I think that trying to quantify them with a number doesn't really do anything. It's more like saying "I'm sure", and then somebody says, "are you very sure?", and you say, "well, I'm very, very sure". You can't compare: there's no intersubjective comparison of utilities that you could appeal to, to quantify that. And we were just talking about AI doom; that's a very good example, because if you ask somebody what their subjective probability for AI doom is, well, if they say zero or one, then they're already violating the tenets of Bayesian epistemology, because zero means that nothing could possibly persuade you that doom is going to happen, and one means nothing could possibly persuade you that it isn't going to happen. But if you say anything other than zero or one, then your interlocutor has already won the argument, because even if you said one in a million, they'll say: well, one in a million is much too high a probability for the end of civilization, the end of the human race, so you've got to do everything we say now to avoid that at all costs. And the cost is irrelevant, because the utility of world civilization ending is infinitely negative. And this whole argument is about nothing, because you're arguing about the content of the other person's brain, which actually has nothing to do with the real probability, which is unknowable, of a physical event that's going to be subject to unimaginably vast numbers of unknown forces in the future. So it's much better to talk about a thing like that by talking about substance, like we just have been: we were talking about what will happen if somebody makes a computer that does such-and-such, and that's a reasonable thing to talk about. Talking about what the probabilities in somebody's mind are is irrelevant, and it's always irrelevant unless you're talking about an actual random physical process, like the process that makes a patient come into this particular doctor's surgery rather than that particular doctor's surgery. Unless that isn't random: if you're a doctor and you live in an area that has a lot of Brazilian immigrants in it, then you might think that one of them having the Zika virus is more likely, and that's a meaningful judgment. But when we're talking about things that are facts, and it's just that we don't know what they are, then talking about probability doesn't make sense, in my view.
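David's point about zero and one follows directly from Bayes' theorem (an editorial formalization, not his wording). For a hypothesis $H$ and any evidence $E$:

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

If the prior $P(H)=0$, the numerator is zero and the posterior is zero no matter what $E$ is; if $P(H)=1$, the second term in the denominator vanishes and the posterior is one. A credence of exactly zero or one can therefore never be revised by any evidence.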
I guess I'd be a little more charitable to it, while agreeing with almost everything that you're saying. Certainly in realms where people are willing to make a bet; now, of course, maybe those are cases where we've got inherently probabilistic devices, like roulette wheels. But we now do have prediction markets, for elections for example, and I've been following one: what's the probability, or sorry, what is the price of a $1 gamble that the president of Harvard will be forced to resign by the end of the year? I would track it as it goes up, and it's certainly meaningful: it responds to events that have causal consequences of which we're not certain, but which I think we can meaningfully differentiate in terms of how likely they are, to the extent that we would have skin in the game: we'd put money on them, and over a large number of those bets we would make a profit or a loss depending on how well our subjective credences are calibrated to the world. And in fact there is a movement (David, maybe you think this is nonsense) in social science, in political forecasting, encouraging people to bet on their expectations, partly as a kind of cognitive hygiene, so that people resist the temptation to tell a good story to titillate their audiences or to attract attention; if they have skin in the game, they're going to be much more sober and much more motivated to consider all of the circumstances, and also to avoid well-known traps, such as basing expectations on the vividness of imagery or on the ability to recall similar anecdotes, or not taking into account basic laws of probability, such as that something is more likely to happen over a span of ten years than over a span of one year. We know from cognitive psychology research that people often flout very basic laws of probability, and there's a kind of discipline in expressing your credence as a number, as a kind of cognitive hygiene, so as not to fall into these traps.
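The ten-year point is a one-line calculation (an editorial worked example with an assumed annual probability). If an event has an independent probability $p$ of occurring in any given year, then

$$P(\text{at least once in } n \text{ years}) \;=\; 1-(1-p)^n,$$

so with, say, $p = 0.1$, the one-year probability is $0.1$ but the ten-year probability is $1 - 0.9^{10} \approx 0.65$: the longer span is always at least as likely.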
I think I agree, but I would phrase all that very differently, in terms of knowledge. I think prediction markets are a way of making money out of knowledge that you have. Supposing, as I once did, everyone thought that Apple Computer was going to fold and go bankrupt, and I thought that I knew something that most people didn't know; so I bought Apple shares. The share market is also a kind of prediction market; prediction markets generalize that. It's basically a way that people who think they know something that the other participants don't can make money out of that knowledge if they're right; and if they're wrong, then they lose money. So it's not about their subjective feelings at all. For example, you might be terrified of a certain bet but then decide: well, actually, I know this and they don't, so it's worth my betting that it will happen. And I'm skeptical that it will produce mental hygiene, because ordinary betting, on roulette and horse races and so on, doesn't seem to produce mental hygiene: people do things that are probabilistically likely to lose their money, or even to lose all their money, and they still cling to the subjective expectations that they had at the beginning. The moment they set foot in the casino, they're on a path to losing money.

I wouldn't say that casinos are inherently irrational, because there are many reasons for betting other than expecting to make money: you pay for the suspense and the resolution. Exactly, yes. But in the case of, say, forecasting: the work by Philip Tetlock and others shows that the pundits and the op-ed writers who do make predictions are regularly outperformed by the nerds who cautiously assign numbers to their degrees of credence and increment or decrement them, as you say, based on knowledge. And often it's not even secret knowledge; it's knowledge they bothered to look up that no one else does. Such as, for a terrorist attack, they might at least start off with a prior based on the number of terrorist attacks that have taken place in the previous year or the previous five years, and then bump that number up or down according to new information, new knowledge, exactly as you suggest. But it's still very different from what your typical op-ed writer for the Guardian might do.
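One standard way the forecasting literature Pinker alludes to scores these numeric credences is the Brier score, the mean squared error between stated probabilities and outcomes. A small sketch with made-up forecasts, just to show the mechanics (an editorial illustration; the numbers are hypothetical):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and what happened
    (outcomes coded 1 = event occurred, 0 = it didn't). Lower is better;
    constant 50/50 guessing earns 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical numbers: a cautious forecaster vs. a confident pundit
# on the same five events, of which three occurred.
outcomes = [1, 0, 1, 1, 0]
nerd     = [0.7, 0.2, 0.8, 0.6, 0.3]
pundit   = [1.0, 0.0, 0.0, 1.0, 1.0]   # always all-in on a story

print(brier_score(nerd, outcomes))    # 0.084
print(brier_score(pundit, outcomes))  # 0.4
```

The cautious forecaster wins not by knowing more here, but by refusing to round credences to zero or one, which is the "discipline" Pinker describes.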
Yes. As you might guess, I would put my money on explanatory knowledge rather than extrapolating trends, though extrapolating past trends is also a kind of explanatory knowledge, at least in some cases. It is. But in general, in Tetlock's research, and I don't know whether this is what you'd mean by explanatory prediction, the people who have big ideas, who have identifiable ideologies, do way worse than the nerds who simply hoover up every scrap of data they can and, without narratives or deep explanations, simply try to make their best guess.

The people who have actual deep explanations don't write financial stuff in the Guardian. So whenever you see a pundit saying something, whether it's an explanatory theory or an extrapolation or whatever, you've always got to ask, as the saying goes: if you're so smart, why ain't you rich? Right, yes. And if they are rich, why are they writing op-eds for the Guardian? So that's a selection criterion that's going to select for bad participants, or failed participants, in prediction markets; the ones who are succeeding are making money. And as I said, prediction markets are like the stock exchange, except generalized, and they're a very good thing: they transfer money from people who don't know things but think they do, to people who do know things and think they do.

Yes. The added feature of the stock market is that information is so widely available, so quickly, that it is extraordinarily rare for someone to actually have knowledge that others don't, knowledge that isn't already, or very quickly, priced into the market. But still, that does not contradict your point; it just qualifies it in this particular application. I agree, although some interventions in the market are speculations about the fluctuations, while other things are longer-term, where, as with Apple Computer, you think: well, that's not going to fold; if it doesn't fold, it's going to succeed; and if it succeeds, its share price will go up. There's also feedback onto the companies themselves, which is a thing that doesn't really exist in prediction markets. But it's all about consent and knowledge.

I want to explore potential limits to David's concept of universal explainers. So Steve, in The Language Instinct you wrote about how children get pretty good at language around three years of age; they go through the "grammar explosion" over a period of a few months. Firstly, what's going on in their minds before this age?
Well, I think research on cognitive development shows that children do have some core understanding of basic ontological categories of the world. This is research done by my colleague Elizabeth Spelke, my former colleague Susan Carey, and others: kids seem to have a concept of an agent, as distinct from an object, and of a living thing. And I think that's a prerequisite to learning language in a human manner. Unlike, say, the large language models such as GPT, which are just fed massive amounts of text and extract statistical patterns out of them, children are at work trying to figure out why the people around them are making the noises they are, and they correlate some understanding of the likely intention of a speaker with the signals coming out of the speaker's mouth. It's not pure cryptography over the signals themselves; there's additional information carried by the context of parental speech that kids make use of. Basically, they know that language is more like a transducer than just a patterned signal: sentences have meanings; people say them for a purpose; they're trying to give evidence of their mental states, to persuade, to order, to question. Kids have enough wherewithal to know that other people have these intentions, and that when people use language, it's language about things. That is their way into language, which is why a child needs only three years to speak, while ChatGPT and GPT-4 would need the equivalent of 30,000 years. Children don't have 30,000 years, and they don't need them, because they're not just doing pure cryptography on the statistical patterns in the language corpus. Yes, they're forming explanations. They're forming explanations, exactly. And are they forming explanations from birth? Don't ask me... pretty close. The studies are harder to do the younger the child: the harder it is to get them to pay attention long enough to see what's on their mind. But certainly by three months we know that they are tracking objects and paying attention to people, and even newborns try to lock onto faces and are receptive to human voices,
including the voice of their own mother, which they probably began to process in utero.

So let me explore potential limits to universal explainers from another direction. David, the so-called first law of behavioral genetics is that every trait is heritable; that notably includes IQ, but it also extends to things like political attitudes. Does the heritability of behavioral traits impose some kind of constraint on your concept of people as universal explainers?

It would if it were true. Or rather: the debate about heritability... first of all, heritability means two different things. One is that you're likely to have the same traits as your parents and the people you're genetically related to, and that these similarities follow the rules of Mendelian genetics and that kind of thing. That's one meaning of heritability; but in that meaning, where you live is heritable. Another meaning is that the trait, the behavior in question, is controlled by genes in the same way that eye color is controlled by genes: a gene produces a protein, which interacts with other proteins and other chemicals, and a long chain of cause and effect eventually ends up with you doing a certain thing, like hitting someone in the face in the pub. And if you never go to pubs, then this behavior is never activated, but the propensity to engage in that behavior in that situation is still there. Now, one extreme says that all behavior is controlled in that way, and another extreme says that no behavior is controlled in that way: it's all social construction; it's all fed into you by your culture, by your parents, by your peers, and so on. Not only do I think that neither of those is true, but I think the usual way out of this conflict, saying that actually it's an intimate causal interplay between the genetic and the environmental influences, and we can't necessarily untangle it, but in some cases we can say that genes are very important in this trait and in other cases that they're relatively unimportant: I would say that whole framing is wrong. It misses the main determinant of human behavior, which is creativity. And creativity is something that doesn't necessarily come from anywhere. You might have a creativity that is conditioned by your parents or by your culture or by your genes. For example, if you have very good visual-spatial hardware in your brain (I don't know if there is such a thing, but suppose there were),
then you might find playing basketball rewarding, because you can get the satisfaction of seeing your intentions fulfilled, and if you're also very tall, and so on, you can see how the genetic factors might affect your creativity. But it can also happen the other way around. If someone is shorter than normal, they might still become a great tennis player. Michael Chang was, I think, 5'9", and the average tennis player at the time was 6'3" or something, and Michael Chang nevertheless got into the top whatever-it-was and nearly won Wimbledon. And I can imagine telling a story about that. I don't actually know why Michael Chang became a tennis player, but I can imagine a story in which his innate suitability for tennis, that is, his height, but also perhaps his coordination, all the things that might be inborn, was less than usual, and therefore he spent more of his creativity during his childhood compensating for that, and he compensated so well that in fact he became a better tennis player than those who were genetically suited for it. And, if I can just add the social thing as well, it's also plausible that in a certain society that would happen quite often, because at Gordonstoun, the school where Prince Charles went, they had this appalling custom that if a boy (it was only boys in those days) didn't like a particular activity, then he'd be forced to do it more. And if that form of instruction was effective, you'd end up with people emerging from the school who were better at the things they were less genetically inclined to do, and worse at the things they were more genetically inclined to do. So, okay, bottom line: I think creativity is hugely undervalued as a factor in the outcome of people's behavior, and although creativity can be affected in the ways I've said, sometimes perversely, by genes and by culture, that doesn't mean that it's not all due to creativity. Because the people who are good at, say, tennis will turn out to be the ones who have devoted a lot of thought to tennis. If that was due to their being genetically suitable, then so be it; but if it was due to their being genetically unsuitable, and they still devoted the creativity, then they would be good at tennis. Perhaps not sumo wrestling; I chose a sport that's rather cerebral.
It is not even meaningful to talk about the heritability of the intelligence of one person. It is a measure of the extent to which the differences in a sample of people (and it's always relative to that sample) can be attributed to the genetic differences among them. It can be measured in many ways, or I should say in four ways, each of which takes into account the fact that people who are related also tend to grow up in similar environments. One of the methods is to compare identical and fraternal twins: identical twins share all their genes and their environment, fraternal twins share half their genes and their environment, and so by seeing whether identical twins are more similar than fraternal twins, you have a way of teasing apart, to a first approximation, heredity and environment. Another is to look at twins separated at birth, who share their genes but not their environment; to the extent that they are correlated, that suggests that genes play a role. The third way is to compare the similarity of adoptive siblings and biological siblings: adoptive siblings share their environment but not their genes, while biological siblings share both. And now, more recently, there's a fourth method of actually looking at the genome itself, in genome-wide association studies, to see whether the pattern of variable genes is statistically correlated with certain traits, like intelligence, or like creativity if we had a good measure of creativity. So you can ask to what extent the difference between two people is attributable to their genetic differences, although those techniques don't tell you anything about the intelligence of any one individual.
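For readers who want the arithmetic behind the twin comparisons listed here, a minimal sketch follows, using Falconer's classic approximation. The correlations are illustrative placeholders, not data from any particular study:

```python
# A minimal sketch of the twin-study logic described above (Falconer's
# formula). The correlations are hypothetical values chosen for
# illustration, not results from any real dataset.

def falconer_estimates(r_mz: float, r_dz: float) -> dict:
    """Decompose trait variance from identical (MZ) and fraternal (DZ)
    twin correlations. MZ twins share ~100% of segregating genes, DZ
    twins ~50%, and both (by assumption) share their rearing environment."""
    h2 = 2 * (r_mz - r_dz)   # additive genetic variance ("heritability")
    c2 = r_mz - h2           # shared (familial) environment
    e2 = 1 - r_mz            # non-shared environment plus measurement error
    return {"h2": round(h2, 3), "c2": round(c2, 3), "e2": round(e2, 3)}

# Illustrative numbers in the range often reported for IQ-like traits:
print(falconer_estimates(r_mz=0.75, r_dz=0.45))
# -> {'h2': 0.6, 'c2': 0.15, 'e2': 0.25}
```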
Now, heritability is always less than one, but it is, surprisingly, much greater than zero for pretty much every psychological trait that we know how to measure, and that isn't obviously true a priori. You wouldn't necessarily expect that if you have identical twins separated at birth, growing up in very different environments (and there are cases like that, such as one twin who grew up in a Jewish family in Trinidad while his twin grew up in a Nazi family in Germany), then when they met in the lab they'd be wearing the same clothes and have the same habits and quirks, and indeed the same political orientation. Not perfectly; we're talking about statistical resemblances here. But before you knew how the studies came out, I think most of us wouldn't necessarily have predicted that liberal-to-conservative beliefs, or libertarian-to-communitarian beliefs, would be correlated between twins separated at birth, for example, or uncorrelated in adoptive siblings growing up in the same family. So I think that is a significant finding; I don't think it can be blown off, although it's true that it does not speak to David's question of how a particular behavior, including a novel creative behavior, was produced by that person at that time. That's just not what heritability is about. Yes, but even when you can say that similarities in genes influence a behavior in a population, unless you have an explanation, you don't know what that influence consists of. It might operate via, for example, the person's appearance,
so that people who are good-looking are treated differently from people who aren't, and that would be true even for identical twins reared separately. There's also the fact that when people grow up, they sometimes change their political views. The stereotype is that you're left-wing when you're young, in your twenties, and then when you get into your forties and fifties and older you become more and more right-wing. There's the saying, attributed to many people, that anyone who is not a socialist when they're young has no heart, and anyone who is a socialist when they're old has no head. I've tried to track that down, and it's been attributed to many people over the years, but it is not completely true, by the way. There is something of a life-cycle effect in political attitudes, but there's a much bigger cohort effect: people tend to carry their political beliefs with them as they age. They tend to in our culture, that is. There are other cultures in which they absolutely always do, because only one political orientation is tolerated, and in a different society, one that perhaps doesn't exist yet, which is more liberal than ours, it might be that people change their political orientation every five years. Well, neither of us can determine that from our armchairs; that is an empirical question that you'd have to test. Well, you can't test whether it could happen. That is true; you could test whether it does happen. Yes, exactly.
Now, getting back: within the field of behavioral genetics it's well recognized that heritability per se is a correlational statistic. If a trait is heritable, that doesn't automatically mean it works via the effects of the genes on brain operation per se. You're right that it could be via the body, via the appearance; it could be indirect, via a personality trait or cognitive style that inclines someone toward picking some environments over others. If you are smart, you're more likely to spend time in libraries and in school, and to stay in school longer; if you're not so smart, you won't. So it's not that the environment doesn't matter, but the environment in those cases is actually downstream from genetic differences. This is sometimes called a gene-environment correlation, where your genetic endowment predisposes you to spend more time in one environment than in another. That's also one of the possible explanations for another surprising finding: that some traits, such as intelligence, tend to increase in heritability as you get older, while the effects of the familial environment tend to decrease, contrary to the mental image one might have that as the twig is bent, so grows the branch. As we live our lives, we tend to become more predictable from our genetic endowment, perhaps because there are more opportunities for us to place ourselves in the environments that make the best use of our heritable talents, whereas when you're a kid, you have to spend a lot of time in whatever environment your parents place you in; as you get older, you get to choose your environment. So again, the genetic endowment is not an alternative to an environmental influence; in many cases the environmental influence is actually an effect of a genetic difference.
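A toy simulation can make this gene-environment correlation concrete. This is a hypothetical illustration, not any published model: a genetic endowment nudges people toward trait-amplifying environments as they gain the freedom to choose, so the trait tracks the genes more closely with age.

```python
# Toy sketch of gene-environment correlation: all parameters are made up
# for illustration; the point is only the qualitative pattern.
import random

def corr(xs, ys):
    """Pearson correlation, written out to keep the sketch dependency-free."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate(n=10_000, years=40, freedom_growth=0.02):
    genes = [random.gauss(0, 1) for _ in range(n)]
    traits = [g + random.gauss(0, 1) for g in genes]  # childhood trait: genes + noise
    child_corr = corr(genes, traits)
    for year in range(years):
        freedom = min(1.0, freedom_growth * year)     # children choose little, adults more
        for i in range(n):
            # the chosen environment leans toward the genetic endowment
            env = freedom * genes[i] + (1 - freedom) * random.gauss(0, 1)
            traits[i] += 0.1 * env                    # the environment then feeds the trait
    return child_corr, corr(genes, traits)

child, adult = simulate()
print(f"gene-trait correlation in childhood: {child:.2f}, in adulthood: {adult:.2f}")
# typically prints roughly 0.71 in childhood, rising past 0.9 in adulthood
```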
Yes, like in the examples we just gave. But I just want to carry on like a broken record and say that the fact that something is only partly caused directly by genes doesn't mean that the rest is caused by environment. It could be that the rest is caused by creativity, by something that's unique to the person, and it could be that the proportion of behavior that is unique to the person is itself affected by the genes and by the environment; in one culture, people are allowed to be more creative in their lives. William Godwin said something like (I can't give the quote exactly): two boys walk side by side through the same forest; they are not having the same experience. One reason is that one of them is on the left and one of them is on the right, so they're seeing different bits of forest, and one of them may see a thing that interests him and so on. But it's also because, internally, they are walking through different environments: one of them is walking through his problems, and the other one is walking through his problems. So if you could, in principle, account for some behavior, perhaps statistically, entirely in terms of genes and environment, it would mean that the environment was destroying creativity.
Let me actually cite some data that may be relevant to this, because they come right out of behavioral genetics. Behavioral geneticists distinguish between the shared or familial environment and a rather ill-defined entity called the non-shared or unique environment, which I think is actually a misnomer, but it refers to the following empirical phenomenon. Take each of the techniques I explained earlier: let's take identical twins separated at birth and compare them to identical twins brought up together. The fact that the correlation between identical twins separated at birth is much greater than zero suggests that genes matter; it's not all the environment, in terms of this variation. However, identical twins reared together do not correlate 1.0, or even 0.95; on many traits they correlate around 0.5. It's interesting that that's greater than zero, and it's also interesting that it's less than 1.0, and it means that some of the things that affect, say, personality (you may want to attribute them to creativity) are neither genetic nor products of the aspects of the environment that are obvious and easy to measure, such as whether you have older siblings, whether you're an only child, whether there are books in the home, guns in the home, TVs in the home, because those are all the same in twins reared together, and nevertheless the twins are not indistinguishable. Now, one way of characterizing this is that maybe there's a causal effect of some minute, infinitesimal difference, like whether you sleep in the top bunk or the bottom bunk, or whether you walk on the left or on the right.
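The back-of-envelope version of the point being made here: with identical twins reared together, genes and the family environment are both shared, so everything the twins do not share lands in the residual. The 0.5 is the rough figure quoted in the dialogue, not a particular dataset.

```python
r_mz_together = 0.5            # rough MZ-twins-reared-together correlation for personality
nonshared = 1 - r_mz_together  # residual: neither genes nor shared family environment
print(f"variance left for the 'non-shared environment': {nonshared:.0%}")
# -> 50%: the bucket that may hold developmental randomness, measurement
#    error, or, on Deutsch's view, the person's own creativity
```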
Another possibility is that there are effects that are, for all intents and purposes, random. As the brain develops, for example, the genome couldn't possibly specify the wiring diagram down to the last synapse; it makes us human by keeping variation in development within certain functional boundaries, but within those boundaries there's a lot of sheer randomness. And perhaps, and David, you'll tell me if this harmonizes with your conception of creativity, we have cognitive processes that are open-ended and combinatorial, where it's conceivable that small differences in the initial state of thinking through a problem could diverge as we think about it more and more, so that two trains of thought may have started out differing in some essentially random way but end up in very different places. Would that count as what you're describing as creativity? Because ultimately creativity is not a miracle; it has to come from some mechanism in the brain. And then you could ask the question: if the brains of two identical twins are specified by the same genome, why would their creative processes, as they unfold, take them in different directions?
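As a minimal illustration of the divergence being gestured at here, consider the logistic map, a standard chaos demo (emphatically not a model of the brain): the same rule applied to two almost identical starting points ends up in very different places.

```python
def trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), which is chaotic at r = 3.9."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

twin_a = trajectory(0.500000000)
twin_b = trajectory(0.500000001)   # differs only in the 9th decimal place
print(f"after 50 steps: {twin_a:.4f} vs {twin_b:.4f}")
# the tiny initial difference is amplified until the two values are unrelated
```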
Yes, that very much captures what I wanted to say, although I must add that it's always a bit misleading to talk about high-level things, especially in knowledge creation, in terms of the microscopic substrate. If you say the reason why Napoleon lost the Battle of Waterloo was, ultimately, that an atom went left rather than right several years before, then even if that's true, it doesn't explain what happened. It's only possible to explain the outcome of the Battle of Waterloo by talking about things like strategy, tactics, guns, numbers of soldiers, political imperatives, all that kind of thing. And it's the same with a child growing up in a home. It's not helpful to say that the reason two identical twins have a different outcome in such-and-such a way is that there was a random difference in their brains, even though it was the same DNA program, which was eventually amplified into different opinions. It's much more explanatory, and matches the reality much better, to say that one of them decided that his autonomy was more important to him than praise and the other one didn't. Perhaps that's even too big a thing to say, and an even smaller thing would be legitimate, but I think something as small as a molecule doesn't tell us anything. Right, and by the way, there's much in that I agree with, and it's even an answer to Joe's very first question about what I appreciate in David's work, because one
thing that captivated me immediately is that he, like I, locates explanations of human behavior at the level of knowledge, thought, and cognition, not at the level of neurophysiology. That's why I'm not a neuroscientist and why I'm a cognitive scientist: I do think the perspicuous level for explaining human thought is the level of knowledge, information, and inference, rather than the level of neural circuits. The problem in the case of the twins, though, is that they are, as best we can tell, in the same environment, and they have the same, or very similar, brains, although again I think their brains differ because of random processes during brain development, together possibly with the somatic mutations that each one accumulated after conception. So they are different, but it's going to be very difficult to find a cause at the level of explanation that we agree is most perspicuous, given that their past experience is, as best we can tell, indistinguishable. It could be that we could trace it: if we followed them every moment of their lives with a body cam, we could identify something that, predictably, for any person on the planet, given exposure to that particular perceptual experience, would send them off in a particular direction. Although it also could be that creativity, which we're both interested in, has some kind of, I don't know if you'd want to call it a chaotic component or an undecidable component, but it may be in the nature of creativity that given identical inputs it may not end up at the same place. I agree with that. I do want to finish on the topic of progress, so I have three questions, and I'll uncharacteristically play the role of the pessimist here, so you two can gang up on me if you like. The first question, and this is kind of a high-variance question, it'll provoke either a very interesting answer or the question will just be kind of boring: can either of you name any cases in which you would think it reasonable or appropriate to halt or slow the development of a new technology? Sure, it depends on the technology, and it depends on the argument, but I can imagine,
say, that gain-of-function research on virulent viruses may have costs that outweigh the benefit in the knowledge, and there may be many other examples; they have to be examined on a case-by-case basis. Well, there's a difference between halting the research and making the research secret. Obviously the Manhattan Project had to be kept secret, otherwise it wouldn't work: they were trying to make a weapon, and the weapon wouldn't be effective if everybody had it. But can I think of an example where it's a good idea to halt the research altogether? I can't think of one at the moment. Maybe this gain-of-function thing is an example where under some circumstances there would be an argument for a moratorium, but the trouble with moratoria is that not everybody will obey them, and the bad actors are definitely not going to obey them if the result would be a military advantage to them. I mean, you could put it in a different sense, where it isn't a question of imposing a moratorium but of not making the positive decision to invest vast amounts of brainpower and resources into a problem; we could just desist, and it won't happen unless you have the equivalent of a Manhattan Project. I think we can ask the question, and I don't know if it's answerable: would the atomic bomb have been invented if it were not for the special circumstances of a war against the Nazis and an expectation that the Nazis themselves were working on an atomic weapon? That is, does this technology necessarily have a kind of momentum of its own, so that if we had a hundred civilizations on a hundred planets, all of them would develop nuclear weapons at this stage of development? Or was it just really bad luck? Obviously we would have been better off if there were no Nazis, but if there were no Nazis, would we inevitably have developed them, or would we have been better off without them?
The Japanese could have done it as well, if they'd put enough resources into it. They had the scientific knowledge, and they had already made biological weapons of mass destruction; they never used them on America, but they did use them on China. So there were bad actors. But all those things, nuclear weapons and biological weapons, required the resources of some quite rich states at that time, in the 1940s. So if we replayed history, is there a history in which we would have had all of the technological progress that we've now enjoyed, but it just never occurred to anyone to set up, at fantastic expense, a Manhattan Project? We would just have been better off without nuclear weapons, so why invest all of that brainpower and those resources to invent one, unless you were in the specific circumstance of having reason to believe that the Nazis or imperial Japan were doing it? I think that although it's very unlikely that nuclear weapons would have been invented in 1944 or 45, by the time we get to 2023, the secret that this is possible would have got out, because we knew even then that the amount of energy available in uranium is enormous, and the Germans were, by the way, thinking of making a dirty bomb with it, something less than a nuclear weapon. I think by now it would have been known, and there are countries that have developed nuclear weapons already, like North Korea, who I think by now would have them, and they'd be very much more dangerous if the West didn't have them as well.
I wonder. I think what we have to do is think of the counterfactual of other weapons where the technology could exist if countries devoted a comparable amount of resources to developing them. Is it possible to generate tsunamis by planting explosives in deep-ocean faults, to trigger earthquakes as a kind of weapon, or to control the weather, or to cause weather catastrophes by seeding clouds? If we had had a Manhattan Project for those, could there have been a development of those technologies, where once we had them we would say, well, it was inevitable that we would have them, when in fact it did depend on particular decisions to exploit that option, which is not trivial for any society to do, but did require the positive commitment of resources and a national effort? Yeah, I can imagine that there are universes in which nuclear weapons weren't developed but, say, biological weapons were. Or none of them. Let's be optimistic for a second in our thought experiment:
could there be one where we have microchips and vaccines and moonshots but no weapons of mass destruction? Well, I don't think there can be many of those, because we haven't solved the problem of how to spread the Enlightenment to bad actors. We will have to eventually, otherwise we're doomed. I think the reason that a wide variety of weapons of mass destruction, civilization-ending weapons, that kind of thing, have not been developed is that the nuclear weapons are in the hands of Enlightenment countries, and so it's pointless to try to attack America with biological weapons, because even if the Americans don't have biological weapons, they will reply with nuclear weapons. So once there are weapons of mass destruction in the hands of the good guys, it gives us decades of leeway in which to try to prevent, to try to suppress, the existence of state-level bad actors. But the fact that it's expensive, that decreases with time: for a country to make nuclear weapons now requires a much smaller proportion of its national wealth than it did in 1944, and that effect will increase in the future. But is that true, except to the extent that some country beforehand has made that investment, so that the knowledge is there? If they hadn't, then that kind of Moore's law would not apply. Unless, well, it would only slow them down by a fixed and finite amount, whose cost would go down with time. Okay, penultimate question. There's been a well-observed slowdown in scientific and technological
progress since about 1970, and there are two broad categories of explanation for this. One is that we have somehow picked all of the low-hanging fruit, and so ideas are getting harder to find. The second category relies on more cultural explanations: for example, maybe academia has become too bureaucratic, maybe society more broadly has become too risk-averse, too safety-focused. Given the magnitude of the slowdown, doesn't it have to be the case that ideas are getting harder to find? Because it seems implausible that a slowdown this large could be purely or mostly driven by the cultural explanations. David, I think I kind of know your response to this question, although I'm curious to hear your answer, so, Steve, I might start with you. I suspect it's some of each. Almost by definition, unless every scientific problem is equally hard, which seems unlikely, we're going to solve the easier ones before the harder ones, and the harder ones are going to take longer to solve, so we do go for the low-hanging fruit sooner. Of course, it also depends on how you count scientific problems and solutions; I think there have been an awful lot of breakthroughs since 1970, so I don't know how well you could quantify the rate. But then one could perhaps point to society-wide commitments that seem to be getting diluted. Certainly in the United States there are many decisions that I think will have the effect of slowing down progress, the main one being the retreat from meritocracy: the fact that we're seeing gifted programs, specialized science and math schools, and educational commitments toward scientific and mathematical excellence being watered down, sometimes on the basis of dubious worries about equity across racial groups superseding the benefits of going full ahead on nurturing scientific talent wherever it's found. So I think it almost has to be some of each. I disagree, as you predicted. By the way, you said you were only going to be pessimistic on one question; now you've been pessimistic on a second question. No, I had three pessimistic questions. Oh, okay, so there's one more. Yeah.
So I don't think there is less low-hanging fruit now than there was a hundred years ago, because when there's a fundamental discovery, it not only picks a lot of what turns out, with hindsight, to be low-hanging fruit (although it didn't seem like that in advance), it also creates new fruit trees, if I can continue this metaphor. There are new problems. For example, in my own field, quantum computers: the field of quantum computers couldn't exist before there was quantum theory and before there were computers; they both had to exist. There's no such thing as its having been low-hanging fruit all along, in 1850 as well; it was a thing that emerged, a new problem creating new low-hanging fruit. But then, if I can continue my historical speculation to make a different point: quantum computers weren't in fact invented in the 1930s or 40s or 50s, when there was deep knowledge of quantum theory and of computation, and both those fields were regarded by their respective sciences as important and had a lot of people working on them (although what counted as a lot in those days was a lot less than what counts as a lot today). I think the reason it took until the 1980s for anyone even to think that computation might be physics was, as you put it, cultural or societal: the beginnings of positivism and instrumentalism, the irrationality in wave-function collapse and that kind of theory, the breakdown of philosophy as well, and, in computer science, the domination of the field by mathematicians, by people who had what I have called the mathematicians' misconception, which is that proof and computation exist abstractly and can be studied abstractly without needing to know what the underlying physics is. So nobody thought of this, and the reason they didn't think of it was that even then, scientific research was directed toward the incremental solution of problems rather than anything fundamental. Another fifty years back, people at the foundations of every field of science wanted to gravitate toward fundamental discoveries; fifty years ago that was much less true; now fundamental discoveries are absolutely suppressed, by the funding system, by the career system, by the expectations of scientists, by the way young people are educated, by science journalism, by everything. Everything is just assumed to be incremental. That's why journalists always ask me what effect I expect quantum computers to have on the economy, on cryptography, or whatever, whereas I'm interested in what effect the quantum theory of computation will have on our understanding of physics. Nobody wants to work on that, because it is not rewarded in the present culture. And you know, I don't disagree at all with the cultural factors that Steve mentioned; in addition to this instrumentalism and overspecialization and the career structure and all that stuff, there is also sheer irrationality. There are irrational trends which have taken over universities, even in STEM subjects; the very fact that I call them STEM subjects is a symptom of this phenomenon. I'd like to echo that. It's certainly true, and I should have thought of it: there really are questions that could not even have been conceived until certain changes in understanding were already in place. Until you had, say, Darwin's theory of evolution, there just wasn't a question of, say, what is the adaptive function of music, or does it have one?
It's just not a question that would have occurred to anyone. And I have to agree that that always happens: trees sprout, the low-hanging fruit falls and the seeds germinate new trees, whatever metaphor you want. When the human race does not take advantage of that, that's something that needs explanation. It's not going to happen by accident, because there are smart young people out there who want to understand the world, who want to devote their lives to understanding the world, and if they are diverted into (I don't know if this metaphor works) just picking up fruit that's already fallen from the tree, then something malign is producing that. And we are seeing, in a lot of journals and scientific societies, a rejection of the Enlightenment idea that the search for truth is possible and desirable. There are actual guidelines in journals like Nature Human Behaviour saying that you may not publish a result that seems to make one human group look worse than another, or that might be demeaning or insulting, and if all of our science has to flatter all of us all the time, that's a different criterion from the most explanatory, most accurate view of the world that you could attain. We are seeing a kind of diversion toward goals other than truth and deep explanation. I agree, and it is terrible. So, final question, because I know we've come up on time. There may be some physical limit to how much we can grow in the universe. To give an example, the philosopher Will MacAskill, but also other thinkers like, I think, Holden
Karnofsky, have written that if we continue our roughly 2% economic growth rate, then within about 10,000 years we'll be at the point where we have to produce an implausible amount of output per atom that we can reach in order to sustain that growth rate. So if it is true that there is some physical constraint on how much we can continue to grow, should that make us pessimists about the ultimate course of civilization, or of civilizations in the universe?
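For context, the rough arithmetic behind that claim can be checked in a couple of lines. The ~1e80 atom count for the observable universe is a commonly cited order of magnitude, and the reachable portion is smaller still:

```python
import math

growth, years = 1.02, 10_000
log10_output = years * math.log10(growth)   # orders of magnitude of total growth
print(f"2% growth sustained for {years:,} years multiplies output by ~1e{log10_output:.0f}")
print("atoms in the observable universe: ~1e80")
# So sustained 2% growth would eventually demand more output per reachable
# atom than seems plausible, unless growth decouples from matter, which is
# the direction the answers below take.
```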
So the short answer is no. But it is true that if we continue to grow at 2% per year, or whatever it is, then in 10,000 or 100,000 years, or whatever it is, we will no longer be able to grow exponentially, because we will be occupying a sphere whose surface can expand at most at the speed of light, so the volume of the sphere can only increase like the cube of the time, not like an exponential of the time. So that's true, but it assumes all sorts of things, all sorts of ridiculous extrapolations 10,000 years into the future. For
example, Feynman said there's plenty of room at the bottom, and there's a lot more room than you might think. You assume that the number of atoms will be the limiting thing. What if we make computers out of quarks? What if we make new quarks to make computers out of? Okay, quarks have a certain size; what about energy? Well, as far as we know now, there's no lower limit to how little energy is needed to perform a given computation. We'll have to refrigerate ourselves to go down to that level, but there's no limit, so we can imagine the efficiency of computation increasing without limit. Then, when we get past quarks, we'll get to the quantum-gravity domain, which is many orders of magnitude smaller than the quark domain. We have no idea how gravitons behave at the quantum-gravity level; for all we know, there's an infinite amount of space at the bottom.
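One piece of known physics behind the energy remark: Landauer's principle puts a floor of kT·ln 2 joules on erasing one bit, but the floor falls with temperature (hence the refrigeration remark), and reversible computation in principle avoids it altogether, which is the sense in which no lower limit is known. A quick sketch of the numbers:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules_per_bit(temp_kelvin: float) -> float:
    """Minimum energy to erase one bit at the given temperature."""
    return k_B * temp_kelvin * math.log(2)

print(f"room temperature (300 K): {landauer_joules_per_bit(300):.2e} J/bit")
print(f"deep-space cold    (3 K): {landauer_joules_per_bit(3):.2e} J/bit")
# the bound scales linearly with temperature, and applies only to
# irreversible (bit-erasing) steps, not to reversible computation
```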
But now we're talking about a million years in the future, two million years in the future, and our very theories of cosmology are changing on a time scale of a decade. It's absurd to extrapolate our existing theories of cosmology 10,000 years into the future to obtain a pessimistic conclusion, a conclusion that there's no reason to believe takes into account the science that will exist at that time. Also, I'll add, and this is a theme that David has explored as well: humans really thrive on information, on knowledge, not just on stuff. When you talk about growth, it doesn't mean more and more and more stuff. It could mean better and better information, more entertaining virtual experiences, more remarkable discoveries, more ways of encountering the world that may not actually eat up more and more energy, but just rearrange pixels and bits into different combinations, of which we know the space of possibilities is unfathomably big. Growth could consist of better cures for disease based on faster search through the space of possible drugs, and many other massive advances that don't actually require more joules of energy or more grams of material but thrive on information, which is not limited. It might largely require replacing existing information rather than adding to it, so we may not need exponentially growing amounts of computer memory if we have better and better, more and more efficient, ways of using computer memory. In the long run maybe we will, but that long run is so long that our scientific knowledge of today is not going to be relevant to it. Well, I think that's a nice optimistic note to finish on.
It has been an honor, and fascinating, to host this dialogue. I'll thank each of you individually, and if you like, you can leave us with a brief parting comment. So firstly, David Deutsch, thank you so much for joining me. Well, as I said, thank you for having me, and I'm glad you made a pivot to optimism at the last moment, so stick to that tack. And Steve Pinker, thank you so much for joining me. It's been a pleasure, and I'll just add that optimism is not just a matter of temperament, inborn or otherwise, but a matter of rationally analyzing our history and rationally analyzing what progress would consist of. Thank you both. Fantastic.