[Opening montage]
"AI changes the payoff functions of different types of human activities."
"Learning, I think, over production; relationships over things; and then capital over labor."
"Priest, prostitute, politician: there might be a demand specifically that the service be provided by a human being as opposed to a machine."
"You think you want this fundamental novelty, this deep purpose, this world-historic sense of meaning. What you actually want is a lot of pleasure, social engagements, aesthetic experiences."
"We seem to be approaching this critical juncture in human history upon which the next billion years might depend. If you want real purpose, knock yourself out now, because there is never going to be more at stake than right now and in the next few years."
"What was striking to me was how theological our entire conversation today has been, and how similar to the Christian afterlife."
"Once you have a superintelligence explosion, you sort of zip fast-forward to technological maturity."
"Human capital is a depreciating asset in this world of advancing AI."

Jonathan B: When I used GPT-3 for the first time, it filled me first with awe and then with dread, because I could see how close AI was to surpassing me in research, writing, and philosophy, which I had trained so hard for. If the Copernican revolution took away our special place in the universe, and Darwin robbed us of our special place in nature, then AI threatens to undermine the last pride of the human race: our intelligence. Today, work gives many of us purpose and meaning, but AI is making more and more of that work obsolete, until one day perhaps all human activity may be redundant. Nick Bostrom's Deep Utopia is about that day and what to do about it. In this interview, you're going to learn where you can invest your time, effort, and resources so that they are most AI-proof, and what the good life looks like in a world where AI has made you obsolete. My name is Jonathan B. I'm a fellow at the Cosmos Institute researching philosophy and artificial intelligence. If you want to make sense of this new age of AI we're entering, please subscribe. Without further ado, Nick Bostrom.

Jonathan B: AI changes the payoff functions of different types of human activities. Certain activities are going to be made obsolete a lot sooner, and other activities seem to
be a lot more robust. And so the three I want to talk with you about are learning, I think, over production; relationships over things; and then capital over labor. By the first one, learning over production, what I was thinking about was this: in your book you gave a thought experiment about a technology that can help us learn, and you described this as potentially one of the most difficult technologies to create, because it requires reading and then changing billions if not trillions of synapses in our brains to be able to learn a new language, or to gain the experience of becoming a good father or a good mother. And so my thought is, for anyone engaged in intellectual work, your production may be rendered obsolete, maybe in just a few years' time. We can feed in your book Deep Utopia, and for a podcast interview, AI will generate a better script than the one we can have today, and then video AI will be able to create something a lot more beautiful than the conversation we're having.

Nick Bostrom: Yes, even more beautiful.

Jonathan B: However, if I wanted to understand Professor Bostrom's work, I would still have to do the hard work of sitting there and reading it, it seems, for a much longer time. So that seems to be one area of life that's going to be a lot more resistant to AI innovation.

Nick Bostrom: Yeah, if you think about what it actually would require, through some sort of neurotechnology, to directly download skills: first you would have to have the ability to read off your current synapses, how they are configured, and, as you said, there are millions or billions or trillions of them. Then you would need to interpret what all of those synapses actually encode currently, and then figure out how they would need to be changed in such a way that they now encode this additional knowledge that you want to download, without messing up what's already there or changing your personality too much. And then you would have to physically change all of these. So this definitely seems like a task for a mature superintelligence. But until that time, if you do want to have some sort of relatively fine-grained control over what goes on inside your brain, for now the best method is the traditional one of reading and thinking and talking and working on yourself, meditating, et cetera.

Jonathan B: I think that's actually quite optimistic, because it means, at least according to Aristotle in the beginning of his Metaphysics, where he says all men desire to know, that knowledge and the contemplative life are in some sense the most human life
for Aristotle; and that is the one that will be made obsolete at the end.

Nick Bostrom: Yeah, I always find it a little bit funny when philosophers are thinking about what the best and highest form of being a human is and the conclusion is: being a philosopher. But maybe they are right. Well, I've actually for a long time thought of philosophy as something that has a deadline. In fact, this occurred to me, I think, in my late teens. It seemed to me that at some point our current philosophical efforts would be rendered obsolete, either by AIs that could become superintelligent, or maybe by humans developing cognitive enhancements of various forms that would make subsequent generations much better at this. And so, rather than spending my time thinking about the eternal questions of philosophy, it seemed more useful to focus on that subset of the questions in philosophy where it might actually matter whether we get the answer now as opposed to in a few hundred years.

Jonathan B: Would you prefer to know the answers to those eternal questions?

Nick Bostrom: Well, one is certainly curious. But if we do end up on a trajectory where the human lifespan becomes extremely long, then maybe, rather than using up all these mysteries right away to immediately know the answers, you would want to spread it out a little bit, so that you have interesting things to learn and discover even hundreds of years, thousands of years into the future. Ignorance might become a scarce commodity. Maybe you put some on the shelf, like a bottle of champagne from a specific year of which only small numbers are left: you might want to save it for a special occasion. But yeah, there might be some other professions as well that are relatively immune, or at least variable, where it's not just the specific knowledge and skills we have, but the fact that these things are done by a human that is regarded as significant in its own right.

Jonathan B: Right. Could you give some examples?

Nick Bostrom: Well, I mean, the priest, prostitute, politician, where there might be a demand specifically that the service be provided by
a human being as opposed to a machine, even if a machine could have all the same functional attributes.

Jonathan B: I want to move on to the second type of human activity that is somewhat AI-resistant, and that seems to be relationships over things. What I mean by that is, in your book you gave a thought experiment: let's say we're at technological maturity and we have a better version of an AI parent in every domain. It's better at changing diapers, better at teaching, better at emotional support. I would still be willing to bet that most people wouldn't want to completely outsource their role as a parent. This could be even more resistant to even technologically mature AI, because what's constitutive of forming a relationship is that it requires, at least on our current view, two humans.

Nick Bostrom: Yeah, I think relationships are one of the more plausible places where we might find purpose that survives this transition to technological maturity. There is some value in honoring and continuing an existing relationship between two people, and even if you could teleport in a robotic surrogate that would be functionally superior, it wouldn't be as good. Or maybe it would be better in some respects, but it would also lose this value of continuing the current relationship. Even if the robot appeared indistinguishable from the original parent, in a thought experiment where nobody would actually notice any difference, you might still think that the reality is that there is now this different person who is playing the parenting role, and if you care about truth in relationships, that might already be a disvalue. And so, yeah, existing human relationships, to the extent that they partially consist in intrinsically valuing this connection to a particular other being, would potentially be resistant.

Jonathan B: What are things in a child's education that are potentially made more obsolete, given the current wave of innovation in AI?

Nick Bostrom: Well, potentially all of it. But we don't know how long that will take, so it makes sense to hedge your bets a bit. You don't want to find yourself at 18 or 20,
going on the labor market with no skills, and it turns out the whole AI transition has been delayed. So yeah, I would want to make sure to hedge your bets: get the broad basics, some useful stuff, but then also have fun while it lasts. I think it would also be a shame to have wasted your childhood.

Jonathan B: Perhaps we are too trigger-happy in declaring that certain things are made obsolete, given technology. For example, I went through the Chinese education system, and it had, when compared to the Western education system, an extreme emphasis on math, like rapid arithmetic computation, including memorization, the memorization of the classics. And I think it's tempting to think that, even with the printing press, memorization is no longer needed, since we have access to all these great books, and certainly with the internet and with calculators in our pockets, that we don't need to know these things. But even though machines can perform them better, there's a lot of alpha, I think, to be had in perfecting these skills, because it's not just that it cultivates your character; it enables you to see things that other people might not.

Nick Bostrom: I would also say general judgment, especially in these kinds of turbulent times, with all the memetic dynamics of social media that people are now exposed to. Having a kind of robust cognitive immune system, where you can reflect for yourself on what makes sense and what doesn't, and not get swept up in whatever latest fad or cult or memes are bombarding you: I think that certainly is something I also want to see.

Jonathan B: I want to explore the third and last potential activity that would be more resistant. Again, this is not at technological maturity, but just on the way there, and that's capital over labor. One anecdote you brought up is how shocks to the labor market throughout history have shifted the proportion of production going to capital or to labor. So, for example, the period after the Black Plague was one of the best times to be a peasant
in human history, because so much of the labor force was culled that peasants had a lot more bargaining or negotiating power. You hear these stories of them having holidays for half the year or something like that, because they had so much more bargaining power. It seems like AI, again on the way there, will do the reverse: it will make labor abundant. So practically, if I'm someone in law school, or someone in medical school, training ten or twenty years before I can start accumulating capital, and I'm building up my labor, building up a skill, that trade-off suddenly seems a lot less attractive.

Nick Bostrom: Yes, human capital is a depreciating asset in this world of advancing AI. And so investments with very long payback times, especially if you're doing them only because of the ultimate payoff of a higher salary twenty years into the future or something, should be discounted accordingly. I mean, you would have to have scenarios where AI development takes longer, or where it gets so regulated that AIs can't perform these particular jobs.

Jonathan B: Let me read you a quote from your book: "Keynes predicted that by 2030, accumulated savings and technical progress would increase productivity relative to his own time between fourfold and eightfold, and as a consequence the average working week would decrease to 15 hours. As we approach 2030, the first part of Keynes's prediction is on track to vindication. The second part of Keynes's prediction, on the other hand, would appear to be about to miss its mark if trends are extrapolated. While it is true that working hours have declined substantially, we are nowhere near the 15-hour work week that Keynes expected." What do you think? Has greed triumphed over sloth?

Nick Bostrom: I think the reason is twofold. First, status competition: we work hard to afford a whole bunch of things going well beyond our basic material needs. We don't just want a car; we want a car that's nicer than the cars everybody else has, et cetera. And so that provides potentially unlimited demand for more, because the bar keeps rising. And then the other factor, I think, is a kind
of work ethic that is a fairly ingrained norm: that it's virtuous not just to be lounging on the sofa but to exert yourself.

Jonathan B: It's interesting, because most people think that we've yet to approach this sort of post-scarcity world where all of our fundamental needs can just be met like that. But at least in the first world, North America and Western Europe, that point seems long past: most people are able to sustain themselves just fine. And so it seems like maybe it's less that greed has triumphed over sloth than vanity, given the reason that people now work. I do think it's mostly this kind of social drive. Rousseau, I think, talks about this in his second discourse, where he flips the view of the direction of scarcity. In Rousseau's second discourse, in his state of nature, before civilization is formed, there's actually a state of natural abundance, not because the productive capacities are greater, but because people don't have these vain desires (vain in Rousseau's perspective) that civilization has cultivated. It's actually in civilization that scarcity increases, again not because the amount of things has diminished, but because of the amount of new desires that have been born.

Nick Bostrom: Yeah, I mean, I think there is a form of scarcity that has been quite pervasive through human history and prehistory, even aside from civilization, in that there were many things that were really needed in some sense but not available to most people, or indeed any people. Say, quality health care: if you were a hunter-gatherer and you got sick, maybe you could rub on some leaf, but there were a lot of conditions that couldn't be fixed that way. And occasionally there would be periods of famine as well, maybe in the cold season or something else. And so I think it has to some extent been endemic until a couple of hundred years ago, and in many parts of the world until more recently, or still today: we've been in an approximately Malthusian condition. So yes,
there have been various advances in productivity and all kinds of technologies over thousands of years, but whenever the economy grew by 10%, the human population also grew by 10%, and average income still hovered around subsistence, with some fluctuations. And I think it was really only in special circumstances, like you mentioned, in the aftermath of the plague, where there had been a great culling, or maybe when people came to a new island or a resource where there were no humans yet, that for a period of time they could enjoy plenty. And then, since the industrial revolution, economic growth has been so rapid that population growth, although it has been high, hasn't been able to keep up, and you have been able to have increasing average incomes.

Jonathan B: So we've talked about AI right now and the current trends we're seeing, but now I want to move into the heart of your book, which is what you described as deep utopia: full technological maturity. As a brief overview for our listeners, this is where AI far exceeds human capacities in almost all tasks; where we can simulate virtual experiences that are indistinguishable from the experiences we have now; where the world is plastic, so that anything that can materially be done within the realms of physics, we have the technology to do; but also where we ourselves are plastic. In this deep utopia, you described how, for obvious reasons, work, other than the types of work you describe (priest, prostitute, and politician, jobs that constitutively require humans), is already made redundant. But I think the most interesting claim here on redundancy is that leisure, or a lot of human leisure, would also be made redundant as well. Why is that?

Nick Bostrom: If you go through those leisure activities one by one, you can, for many of them, cross them out, or at least put a little question mark on top of them, as to whether they would still have a point in this condition of technological maturity. So maybe some billionaire goes to the gym five times a week. Why? Because they want to be fit
and healthy, and maybe they feel more relaxed afterwards. But if you had a drug that would induce exactly the same effects without you having to go there and sweat, well, you could still go to the gym, but why? If you could just pop the pill and get exactly the same health effects, the same body, the same mental clarity afterwards, I think many people would just pop the pill. We talked earlier about child-rearing: maybe people would still want to do that, but a lot of the specific activities involved, like changing the diapers, are individually things you might be tempted to outsource. And so you can go through leisure activity by leisure activity, and I think many of them would lose their original purpose at technological maturity, and then, if you did them, it would have to be for some other reason. Generally speaking, I think you would have a kind of post-instrumental condition where, to a first approximation, there would be nothing you would need to do in order for something else to happen, because there would be a shortcut to the other thing: you could press a button, or request your AI or robots to do that thing, and you wouldn't have to exert effort yourself to get that outcome. And so all the things we do for those instrumental reasons would drop out of the picture, maybe with some class of exceptions.

Jonathan B: And so I think this is the most interesting insight in your book: even if we solve all of the tremendously difficult problems around alignment, and around all the other issues that you described in your various other books, and we get to the maximally good state, where politics, society, and AI are purely working for us, it's still in some sense difficult to imagine what a good life looks like, because it's so different from how we conceive of a good life today. And so, when so much of work and so much of leisure is made redundant, what is the
best type of life one can live in such a utopia?

Nick Bostrom: Well, it's actually quite challenging to really envisage a great utopia, because it would risk undermining a lot of the things that currently give meaning to our lives, and you would feel almost at a loss: do we just become these kinds of amorphous, drugged-out pleasure blobs, or what? And then it might seem quite alienating and unattractive. I think, though, that if you push through that, there is, on the other side, something that actually would be very worthwhile.

Jonathan B: So in your book you described five, let's call them, modes of defense for a life in deep utopia. And I found this way of framing it already very interesting, because you'd think that in utopia you're not defending things, but you are defending here, because of how difficult it is to imagine a good life: hedonic valence, experience texture, autotelic activity, artificial purpose, and social-cultural entanglement. These will become, in your view, the pillars of the good life once you push through, as you describe it. So can you give us an overview of what these mean, the first one being hedonic valence?

Nick Bostrom: So this is basically the observation that we could have a lot of pleasure in utopia, pleasure in the wide sense, not just physical but mental enjoyment. You could actually immensely enjoy every hour and every day of life, to degrees that would at least match the peak current human experiences, and perhaps go a lot beyond that. It's easy to dismiss this because it's a philosophically boring point; I mean, it's kind of trivial that when you have this advanced neurotechnology you could do this, and then we immediately jump to thinking, well, that's a kind of degenerate existence. We think of junkies, and that's not really it. But actually, I think this is super important, maybe the most important thing, and this on its own might make it very worthwhile to swap out the
current world for the utopian one. I mean, I should say there is also a sort of minus-one mode, which the book doesn't talk very much about but which is super important, which is just to get rid of the negatives that currently plague the human condition, and those are immense and terrible. But here we are thinking about what you could do beyond just getting rid of them. And so there, yeah, just this hedonic well-being: every day could actually be an immense delight. There are some people who are philosophical hedonists, who think pleasure is the only thing that matters, along with the absence of pain; so for them that's kind of case closed at this point. But there are other value systems that maybe think pleasure is a good thing but that there are other good things too, so let's see what we can add to mere hedonic well-being. And so the second one is experience texture. We observe that these utopians not only have a great level of enjoyment; rather than just being dazed-out junkies with a diffuse, confused sense of pleasure, they could attach this pleasure to valuable experiences, like, say, the appreciation of beauty, or the understanding of deep, important truths. So pleasure in learning and understanding the basic laws of physics, learning about human nature, appreciating great art and natural beauty, plays, and so on; that's how they get their fun. It's not the junkie's pleasure but a kind of connoisseurship that is also exquisitely joyful, maybe appreciating the moral virtue and goodness in various people and historical figures and so forth. That already seems to make it more attractive. And then you can add some further things: it's not the case that you need to imagine these utopians as mere passive recipients of these experiences of truth, beauty, and understanding that they take enjoyment in. This would be the autotelic stuff: they don't just sit there passively observing great beauty and feeling joy; they could go around and do things. Artificial
purpose is purpose that we create in order that we are then able to engage in purposeful activities. So you could set yourself some maybe arbitrary goal, and if you select that goal in a suitable way, you could then have instrumental reasons to engage in various efforts to achieve it. The key here is that the goal you set yourself has to be constitutively such that it calls on you to make an effort, rather than for you to press the button to have the robot do it. So you can bake into the goal that it needs to be achieved by your own efforts: if the goal is to achieve a certain thing with your own efforts, and you have that goal for whatever reason, then you now have purpose, because the only way you can achieve your goal is through your own efforts. We can think of the paradigm cases as various forms of games. Maybe you decide to play a game of golf. There was previously no reason why the ball would have to go into a sequence of holes; it's a completely arbitrary goal. But you adopt that goal, and now you have a reason to try hard to hit the ball in exactly certain ways to achieve it. And you could generalize that: you could have much more complex games, multiplayer, in multiple modalities, extending maybe over years, and that could then give you purposeful activity, not just activity. So those are the first four, and there is one remaining one, which is that there would be some natural purposes that could survive into technological maturity, though they would be softer purposes.

Jonathan B: What natural purposes would survive in this post-instrumental state?

Nick Bostrom: Well, take something like this: you currently have some value or goal, say you value the continuation of a certain tradition. That might be something you just happen to value; many people right now would have that as one of their values. And the tradition might be such that it just isn't continuing unless humans are
continuing to do it. It might be constitutively part of the tradition that humans do this thing every year in a certain way; imagine some ritual or something like that. Then those would survive, because, while you could create robots who would be doing this stuff, it wouldn't count as continuing or honoring your ancestors. Your value of honoring your own ancestors might not be served by building a whole ensemble of robots who go around paying visits to their graves or thinking about them; it might require you to do it for that value to be achieved. More broadly, I think various forms of social entanglements, and we touched upon this a little earlier when we discussed parenting: if there is this existing relationship between a child and a parent, and part of what is valuable about it is that these particular individuals are relating in a certain way, then that could also give you a natural purpose, even at technological maturity, to continue to do certain things and interact in various ways with your child. And this might at first seem weak compared to a lot of the reasons we have for doing things today, where there are very stark, tangible, immediate consequences if we fail to do them. Maybe somebody has to go into work every morning because otherwise they will lose their job, and then they can't pay their rent, and then they will get thrown out onto the street, and then it will be cold. This is a very hard set of consequences, and a lot of the stuff we do today is motivated by these kinds of hard consequences. In utopia, a lot of that would go away. But I think, just as when you walk outside in the daytime you see the sun and you don't see the stars (they are much fainter than the sun, but they're still there), I see these subtle values as already being there. There's a whole host of these more almost aesthetic reasons for
doing things that are blotted out from sight currently, because there are these more screaming moral and practical needs that we need to take care of in our lives. But if you imagine a scenario where all of that went away, as if the sun set, then it would make sense for our evaluative pupils to dilate, to take in more of the fainter light that comes from these subtler values.

Jonathan B: Right. So what I understand from these five rings of defense is this: a critic might say what's lacking from the first four is a kind of necessity, the sense that I feel I need to do X, Y, Z because it comes from some external source that gives my life purpose. And it seems like you're trying to rescue necessity in utopia by highlighting a subset of activities, like honoring one's ancestors, that constitutively require us humans, or maybe even stronger, you as an individual, to do them; and that's how you rescue necessity. So maybe an analogy here is LeBron playing in the NBA, bringing back a championship to his hometown, Akron, Ohio. There's nothing necessary about that; it's a set of rules that we've invented for ourselves, but it doesn't make it any less meaningful for the people involved. Is that a good way to understand it?

Nick Bostrom: Yeah, well, if he independently wants to do this, or if there is an independent value that says this is something that should happen. If it were the case that he just set himself this goal because otherwise what would he do all day long, and then he convinced himself to pursue it, then it would be an instance of the fourth mode, artificial purpose. But if there is some independent reason, one not just created in order that somebody has something to do, then it would count as a natural purpose.

Jonathan B: Right. And in this example, LeBron winning the championship for Cleveland, it would be something like the recognition and expectation of his family and friends and the whole city that he grew up in. Would that be a good example of something that would make this social-cultural entanglement instead of artificial purpose?

Nick Bostrom: Yeah, so
that that would give him a a real reason to do this if if people continue to want that to happen and if he cares about what other people want or what they will think of him so it seems like social cultural entanglement relies heavily on um uh the economy of recognition right what people desire what people give honor to and what people give esteem to do you think that humans will start esteeming the recognition of non-human agents and maybe we're already starting to see the the Genesis of this where there are these dating dating Bots
that have already existed and right now it's only appealing to people who are are struggling to form real human relationships but I remember there was one dating bot that after they changed the algorithm and the dating bot behaved nothing like what they previously behaved like the people were in tears as if a real family member had died so it seems like if eventually we think that we'd care as much about uh artificial recognition as as human recognition then a lot of these social cultural entanglements would be threatened because presumably we can uh tell with a
robot uh what what to what to recognize or or what to give AEM to um yeah that's right so you could have these uh future social um entanglements with with with various forms of digital minds but you have a kind of Legacy if if now you care about certain people like you you might not want even if if you had some brain technology that could sort of uh extra Pate that care and implant a different care or like you maybe like some somebody who is kind of uh faith of falling in love with the wrong
person, and it might have been better if they had fallen in love with somebody else, but once they are there, they are there. So we might have these legacy purposes that come from current commitments that we care about and that we will carry on into technological maturity. And I don't think that's the only source. There certainly are these social entanglements, but I think there might also be broader aesthetic reasons: you might think it would just be a more beautiful way to live your life if you did certain things yourself and upheld various practices and kept doing certain things, even while availing yourself of a lot of the conveniences. So I think those could survive, and spiritual and religious reasons for doing things could possibly survive as well. Right, I see. And so I want to go back to the beginning, to pleasure, because I want to defend your position a bit. There are entire schools of very serious philosophy, for example the Epicureans, who do see pleasure as the
supreme good, and so that literally might be enough. But even for someone whom we naturally take to treat pleasure as a secondary or tertiary thing, someone like Aristotle, when you read him closely, the virtuous man is someone who takes pleasure in doing the right thing; it's about correctly aligning one's pleasure. So I think that adds credit to your view that pleasure itself might be worth jumping into that state for, if nothing else. Yeah, I mean, especially in the second case, if the pleasure were coupled with the right objects, if you took pleasure in the right things, which could mean taking pleasure in contemplating the right things, or in engaging in the right kinds of activities, and so forth. So yeah, I think in general it's nice if we steer toward a future that will score high on a wide range of different values and according to many different people's preferences, if we
can do that by just compromising slightly on any one of the values. Because the future could be very big in terms of the resources available, there would be a bunch of different values that could be quite cheap to satisfy, and so we should make sure to satisfy all of those. There are certain values that are resource-hungry: if you're a utilitarian, for example, you could always make more happy people, so you just want more and more resources. But if there are values that need only a little bit before they are almost maxed out, then it seems like we should do that, and then, with the rest of outer space, maybe the utilitarians could have a bigger say about how we dispose of it. What about values that could compete with each other? I think this factors into the pleasure case, because in Christian theology, for example, a lot of the sins aren't about loving the wrong things; they're about liking the right things, but in the wrong degree. Lust and gluttony
are all about liking good things, but to such an immense degree that you ignore some of the more important things in life. So here's a concern, Professor Bostrom: you might be able to design a perfect super-drug, something like Molly plus mushrooms plus cocaine, all the pleasures of all the drugs we have right now with zero of the side effects, continuously. And there might be nothing objectionable about that in itself, except that such an immense pleasure would distract us from pursuing the less obviously good and interesting parts of our lives, which are no less important for a flourishing life in the long run. Yeah, I think that certainly would be the case today. With more mature technology, you might imagine having the ability to create more fine-grained experiences. That means, first, you could remove the addictive potential and the stupefying effect of some intoxicants, the adverse implications for your liver and blood pressure, all of these things. But also, psychologically, rather than just having
the kind of monotone, dumb pleasure that is the alternative to living a rich and engaged life in deep relationships with other people, you could weave these things together in a more integrated way, so that the pleasure comes from those virtues and appropriate activities and thoughts and experiences. Right, so to give an example that certainly will be outdated: it's like ingesting a drug, or undergoing biotechnical enhancements, that gives me immense pleasure when I brush my teeth, when I visit my friends, when I go to bed at the right time. It's exactly what Aristotle described: interweaving pleasure into what would also in itself make for a good life. Yeah, that seems right, though some of the specifics might need to change; you might not need to brush your teeth anymore. Exactly, that's why I said it would be outdated. But yeah, it would be focused more, perhaps, on intrinsically valuable activities and experiences, as opposed to these kinds of instrumental necessities. So right now there are
all these things we have to do in our lives, and so, yes, let's take pleasure in them, because then we do them more and it all works. But even in a condition where you didn't have to do all of these things, where you don't have to clean your house because you have a robot that cleans it, where your teeth are enhanced so they don't rot even if you don't brush them, and so on, then in that scenario it seems like what we should be spending more of our time on are activities that are intrinsically valuable, and taking pleasure in those would seem to be appropriate. Yeah, and that might be even better for someone concerned with virtue, because we would be able to program the virtuous activities to be more pleasurable. So for example, to respond to the Christian objection I just brought up, we might say: yes, we are going to make sexual activity ten times more pleasurable, but we can also dial up reading the Bible or the contemplation of God to be even more pleasurable, so
that pleasure naturally directs us to a good life in and of itself. Yeah, or the sexual pleasure could be specifically connected to the in-wedlock case, et cetera. I mean, we already have a little of this: Ozempic, for example, reduces people's vice of overeating. So there are these limited ways now. But there might be harder trade-offs. There might be certain values where, for example, maybe it would be appropriate for these utopians occasionally to remember the earlier times and the horrors and tragedies of history, and to feel sad and mournful when they contemplate that. Maybe they would have, I don't know, imagine an annual ceremony where you try to think of all the people who died before they ever got to experience this, some beautiful ceremony to honor them. And then maybe you don't want to have, I mean, certainly certain forms of pleasure would seem to be
inappropriate there, and maybe you would need to actually have sadness, or some kind of bittersweetness. I don't know; there's a huge design space here that hopefully we could work out. I see. One thing that seems to be threatened even with these five modes of defense is interestingness, not living a boring life. Because the challenge is, you could be immortal; you could potentially live forever, until something catastrophic happens to you. Is there any defense against a boring life, given how long we'll live? First of all, there's an even more basic distinction here, between subjective boredom and objective boringness. Subjective boredom is just the feeling of being bored; that certainly could be abolished in a solved world, through the same kind of neurotechnology we have already discussed. And we have to be careful that this doesn't infect our intuitions about objective boringness or interestingness; constantly remind yourself that the feeling is already a big chunk of what we normally associate with things being boring or interesting. All these people in Utopia could be totally fascinated
all the time, just completely immersed, finding with every fiber of their being: wow, this is cool and interesting and I want to dive in. So that's already there. But the objective notion of interestingness is a little bit harder to pin down. It's some notion that certain experiences or activities are such that it would be fitting to be interested in them, and for others it would be unfitting. Like just staring at, say, a blank wall: maybe you think that even if you could take a drug that would make that feel super interesting, there is some normative disvalue in it, because it's not the kind of object it would be appropriate to be super interested in. So then you can think about objective interestingness. To the extent that it involves something like complexity and richness and sophistication of your experiences or activities, they could just crank that up to eleven. But there is a slightly different version of objective interestingness, one that
calls for fundamental novelty, where you might run out. For example, if you think it's not nearly as interesting to learn about fundamental physics as it is to discover it for the first time, then eventually we will have figured out all the basic laws of nature and the fundamental truths and the big general concepts, and scientists will have to content themselves with finding smaller and smaller truths, more local truths that are less profound. So that's a form of interestingness that you would eventually run out of, or maybe you would set aside little pockets of ignorance and mysteries, as I alluded to earlier. If you think about human lives currently, taking this second notion of fundamental novelty within an individual person's life, I think a lot of it happens really early on, in the first couple of years of life. Think about it: you discover that there is a world; that's a pretty fundamental
discovery, right? Then you discover that it contains objects that continue to exist even when you look away: wow, that's a jaw-dropping cognitive revolution. And then you discover that there are other people in the world. Now, when you're grown up, what's the most profound thing you learn in a given year? It registers a lot lower on this Richter scale of fundamental novelty. So we are already suffering huge diminishing returns within our current lifespan. And if you look at planet Earth, as it were, from the outside, some alien comes here and looks at the average person's life: how much fundamental novelty is there? Maybe a few people are doing some interesting new things, but most people are doing the same old things; they have the same old thoughts, the same old fears, the same hopes. Boy meets girl, yada yada yada, get the job, get the paycheck, get old, somebody passes away, you're sad for a while, and then, yeah. And that's
to say, from a certain lens you might think that our current lives would be extremely boring already, just because it has pretty much all been done and only small details differ. In fact, I want to generalize this. It might be a caricature of your view, but this was my takeaway of how you are essentially defending life in Utopia. You're saying: look, you think you want this fundamental novelty, you think you want deep purpose, this world-historic sense of meaning; but look at your life now. How many of us have those things? And yet so many of us live great lives. What you actually want is a lot of pleasure, social engagements, aesthetic experiences. It's almost like the sophist's position in the Gorgias: let's stop talking about these high-falutin values with big fancy names like purpose, meaning, and interestingness, and let's get the basics done. And the example you gave was Nietzsche. You said something like: Nietzsche talks a big game about his higher men and the Napoleons of the world, yet he lived
like a bohemian, reading and writing books in the Alps. Is that a fair characterization of your view? Well, I think Nietzsche was one of those people who would have a relatively more plausible claim to interestingness in this latter sense, in that he thought big original thoughts and really dove into them. And I think that if you have an axiology, a theory of value, that is pluralistic, then it's quite possible there's some extra value in also having this third kind of interestingness, the kind that registers globally in a significant way. I'd say, though, that even there, if you just zoom out a little further, it's probably an infinite universe out there, with a lot of other planets and a lot of other civilizations that have already thought the same thoughts, and thought much better thoughts, and created better things. So depending on your scale, we might already be completely unable to realize any fundamental novelty in the world. But if you do focus on this mesoscopic scale, that is,
either an individual life or the globe as it is now, with 8 billion humans or so, and you think that's the scale where you want to make a significant difference, then I'd say that right now is the golden era of purpose. Right now there are immense stakes in the world; there are a lot of immediate, morally urgent causes where you could individually make a significant difference. Plus, we are seemingly approaching this critical juncture in human history upon which the next billion years might depend. So if you want real purpose, knock yourself out now, because there will never be more at stake than there is right now and in the next few years. And if you can't even be bothered to act now, then how much value do you really place on this kind of global purpose? So is my reading of your view of human nature right, which is that you're defending life in deep Utopia not by rescuing this deep, global sense
of purpose, meaning, and interestingness, but by suggesting people don't actually need or desire it? Yeah, I think it's one value of which we might have less in a solved world, but I think we could have a lot more of most of the other values, such that the net balance is an enormous positive. I see. I mean, yeah, okay, so maybe if there weren't people starving, you would be deprived of the purpose of feeding them, et cetera. But still, it's a trade I would be happy to take. And there is something lost there; there's something nice and glorious in somebody going out of their way to feed the starving. That's a little plus. But there's also this huge negative, and if we could all just have enough to eat without that, I think that would be better. And I think you can generalize that. When you looked back at history in the book, you said that the periods and the people worth dramatizing or talking about are rarely the good periods or the good lives that you would want to live.
Yeah, there's a big difference, and this is a fundamental thing to bear in mind when forming an opinion about this kind of utopian problem: you can evaluate a hypothesized condition from two different perspectives. What I think we often default to, if we're not careful, is the external point of evaluation. We look at this future utopian condition as if it were a stage play; we sit in the audience, we look at it, and then we form some kind of thumbs up or thumbs down. But from that perspective, I think we will tend to overvalue interestingness and drama. If you go to the theater, you want things to happen: there's a king and he gets killed, and then the assassin flees and they overtake him; likewise with the movies we write, or the novels, et cetera. Good stories often have a lot of suffering in them. But there is a different way, which I think is the right way to evaluate this:
not how good Utopia is from the outside to look at, but how good it actually is from the inside to inhabit, to live. And there, I think, the stories that are most interesting to read are not necessarily the stories that are best to live out in your own life, and we need to correct for that if we're actually trying to build a Utopia we would be moving into, if it's not just a fiction or a screenplay but an actual plan for where we want to spend the rest of our time. So you mentioned something like: there are no wars in history that were worth it, no matter what great art was produced by them. But I wonder: on the off chance that a war or conflict or some bad thing creates these civilizational, grounding pieces of art, would you still will it away? Here's the thought experiment. Ex ante, I think you would say no, no Trojan War. But ex post, given we know how fundamental the Trojan War was
to establishing not only Greek tragedy but Greek philosophy and Greek culture, if you could wave your wand one way or the other, would you say: spare those lives in the Trojan War, I don't want my Iliad and Odyssey, or even the responses to them? I don't feel entirely competent to make these judgments. I would think that at some point enough is enough. I mean, we've had a lot of wars by now, and we've had a lot of people dying for various causes and suffering horrible fates. And, you know, maybe there are certain kinds of value that can exist in human-style life. But if you think about it: okay, so maybe you want another few decades, another few hundred years, another few thousand years, but do we need a hundred thousand, a million more years of these two-legged creatures running around here, killing one another and getting cancer and having headaches and so on? At some point I think we want to unlock the next level. And I would say, even if
there is value in some of these tragic and beautiful things, it might not scale with the number of instances of the tragic and beautiful. Having ten tragedies doesn't create ten times as much beauty-value as one; it's the kind of value that seems to saturate. Whereas the value of a nice cup of tea, well, the thousandth cup might taste just as good as the first. So eventually you've ticked all these boxes; you've had your big life, the youth of humanity, you've had your adventures, and then you settle down a little bit. Though I think there might be different kinds of adventures that could be a lot more interesting in many ways; it's just that maybe they involve less suffering. So in this scenario, other than this global sense of purpose and meaning, this global sense of stakes, this global sense of novelty and interestingness, is there any fundamental set of human values that will not be fulfilled in this Utopia, or will be fulfilled worse than currently? I
think a lot of them have some connection with this. The sense of purpose and meaning seems particularly threatened by the affordances of a solved world. Depending on your more spiritual and religious views, there could, I guess, be additional constraints on what could be achieved in a solved world. And then there is this fundamental interestingness of the form that requires novelty on a global scale, not a cosmic scale, not a day-to-day scale, but the scale of planet Earth: that also might run out. There are only so many times you can discover things. You can discover relativity theory once, the theory of evolution once, and then there might be, say, fifty more discoveries of that magnitude you can make, and after that it starts to dwindle. So that would be another example. And I think that's a very interesting insight into human nature: humans are the type of creature such that one of our core values requires us to be in a
fallen world, or an imperfect world, meaning this global sense of scale, purpose, or novelty. Our natures have maybe been conditioned on the existence of problems. Throughout human history and prehistory, and indeed all the way back to our great ape ancestors and far earlier, there were various forms of scarcity and things that needed to be done all day long: you had to check for predators, you had to find food. And so a lot of our psychology simply assumes that these needs exist, and we see a little of how problems can arise when that is no longer the case today, with obesity. We have psychologically evolved in a way that assumes food is scarce: you need to find it and grab it when you can and stuff yourself as much as possible, because maybe tomorrow there won't be anything to eat. And we've removed that constraint from our external world, at least
in wealthier countries; there's plenty, the fridge is full of food, and now there's this mismatch between our environment and our psychology. As we move to technological maturity, that little crack could open up much wider, and there could be a huge mismatch between our evolved psychological nature and what the environment actually demands of us. That's what creates this problem in the first place: we might need to change ourselves quite fundamentally to become suited for life in Utopia. Right, so here's a proposal for rescuing the sense of at least perceived global purpose, global novelty, global meaning, global stakes: wipe our memories and then enter a VR simulation that is indistinguishable from reality. Now, obviously the philosopher's objection is laid out in the Experience Machine, where the conclusion is that you wouldn't want to enter such a machine even if you couldn't tell, because it would be unfitting; there's something objectively bad about it even if the experiencer doesn't know. But correct me if I'm wrong,
I believe there were subsequent variations on this Experience Machine thought experiment that asked: what if I told you that you live in an experience machine now; would you want to pull out? And I believe the intuition is that you wouldn't. So what people are really after isn't fittingness to objective reality but a kind of familiarity. If that's the case, what's wrong with wiping our memories and pretending to live like Achilles in Utopia for one life, and just keep doing this, to get that kind of global sense of meaning back? First, it's not clear how much meaning you would get from that; it would seem to fall, it seems to me, into the artificial purpose category. But you wouldn't know, right? That's the key part. You wouldn't know, but that's also the case in Utopia. I mean, if you want some partial amnesia, so that you forget about various things, that could easily be arranged. And I presume that
you might want to edit it, a sort of version 2.0. If you want to revisit history, there are a lot of parts of it that I think you would want to omit from your recapitulation, either because they are too horrible or because they are just a bit dull. But certainly you could imagine creating virtual worlds that you could interact with and inhabit and explore, in different configurations and with different variations. So I just want to draw a distinction here, because when I heard you say artificial purpose, I thought what you meant was, for example, going rock climbing: I could get helicoptered up to the cliff face, but I knowingly restrain my own set of available means in order to climb the rock face, hence making it artificial. But if I wiped my memory and entered the life of Achilles, phenomenologically I wouldn't be able to know, and so it would lose the artificial side of
that, at least from the subject's perspective, no? Yeah, so I think there is a continuum there. You could imagine the rock climbers: once they are halfway up the wall, there might not be a helicopter that could reach them in time. They didn't have to climb the wall, which was the artificial purpose part, but once they are there, they really have no choice other than to do their utmost to keep going, on pain of death. And similarly, you could imagine utopians, if they wanted to, creating little holes in Utopia, places where the world is not solved, where there is real need and constraint and stakes of various kinds, if you thought there was added value in being subject to these forms of risk. Now, you don't want to make too many holes, or you just destroy the Utopia. If you destroyed the whole Utopia, you would be back to scarcity and real need again, but that would also mean giving up all the good things about it. But
you might have little designed pockets, and maybe those would hold real stakes, but maybe not quite the a-child-dies-from-brain-cancer type of stake; more like: if you fail at this task, you will spend a month excluded from your normal fun gadgets and friends, and you have to work hard for a month to get back to where you were. Something with more human-scale stakes. Right, right. I think the interesting question about human nature here, of whether the picture you painted in Deep Utopia is an attractive one or not, has to do with how important necessity is for a good life. And for someone on the opposite extreme, someone like the Unabomber: he thought that even in our current technological society there are not enough necessary, primary actions that we have to perform, and so even now he wanted to take us away from the technological utopia we are already in. I found that to be
a very interesting extreme opposite end of this experiment. Yeah, although I think some of what he was thinking about were the contingent psychological effects of living our current lives, where there are various forms of psychological malaise, from overeating, but also the kinds of malaise that can arise when people live in this modern, artificial way. You get addicted to your social media feed. And can you really connect to anyone else, can you have real friends, if you've never been in a life-or-death situation where you saw that they were a true friend even though it risked their life? That might be a form of connection we're missing. There might be all sorts of ways in which the mismatch between what we evolved to do and the current world creates psychological ailments. Maybe these are outweighed by the comforts and benefits; I think that's possibly the case. But still, there is this psychological cost; probably some kinds of mental illness are more widespread because we're not perfectly adjusted to the modern world,
but those things could be fixed. You wouldn't have to overeat, or feel socially alienated. In other ways you might even get closer to nature: rather than living in concrete square buildings, you could imagine that at technological maturity you would actually be living in some sort of savannah, minus the bugs perhaps, and with the temperature always right. So in many ways you could, first, adjust the psychological apparatus so that it no longer has these negative symptoms, and second, to some extent, adjust the environment so that it better matches what we were naturally designed to interact with. What was striking to me was how theological our entire conversation today has been, and how similar the thought experiment you set up is to the Christian afterlife. Let me give you a few examples. In your deep Utopia, plasticity means that the material world
is not really a concern, but we also keep our individual bodies, and in the Christian afterlife individuality is likewise preserved. All kinds of social and political issues are resolved, and just as the saints in heaven supposedly spend their time contemplating God, a lot of the activities you describe are contemplation-based. And you delivered this book in a lecture format, where you construct this world in six days, six days of lectures, and then you rest on the seventh. What do you make of the theological aesthetic of your work? Yeah, I mean, the number of days, I thought it was kind of, well, it took six years to do the six days, and I felt like it was time for a wrap at that point. But I think in general there are strong parallels with the thoughts developed in religious and theological contexts, because it's in some sense the same fundamental question: what is the best possible future for a human being, if you abstract away
from various contingent limitations and constraints? More generally, I think that when you think through the full and ultimate implications of the standard physicalist worldview, you get, in many ways, to considerations traditionally developed in a theological context. I mean, we alluded to the simulation argument earlier; it's very striking. It starts from a different kind of assumption, but the end result is at least structurally, strikingly similar to many religious and theological conceptions. My most recent paper, AI Creation and the Cosmic Host, is another example: you start to think through, in this case, various ethical questions about how we should relate to the digital minds and AIs that we are building, and the possibility of levels above us, and again you arrive at similar considerations. And so it might be that, well, the philosopher Derek Parfit, who was a colleague at Oxford, had this metaphor in his work of a big mountain. He did work
on ethics, and he had this view that different approaches, consequentialism and deontology, were climbing the same mountain from different sides, and that when you thought each one through to its purest and clearest form, they would converge at the peak. I think maybe there is some similar phenomenon here, where there is a big mountain that people have been climbing from the theological side, and if you climb far enough, high enough, from the naturalistic side, maybe you get to a similar conclusion in the end. Thank you so much for a fascinating discussion. Thank you, Jonathan, I enjoyed this.