Can you take us through what social media content algorithms have done?

Sure. Social media content algorithms decide what you read and what you watch, and they do that for literally billions of people, for hours every day. In that sense, they have more control over human cognitive input than any dictator in history has ever had: more than Stalin, more than Kim Il-sung, more than Hitler. They have massive power over human beings, they're completely unregulated, and people are reasonably concerned about what effect they're having.
What they do, basically, is set an objective, because they're standard-model machine learning algorithms. Let's say the objective is to maximize click-through: the probability that you're going to click on the next thing. Imagine this is YouTube: you watch your video and, lo and behold, another video pops up. Am I going to watch the next video it sends me, or am I going to close the window? Click-through, engagement, various other metrics: these are the things the algorithm is trying to optimize. I suspect that originally the companies thought this was good, because it's good for us if people click on things (we make money), and it's good for people, because the algorithm will learn to send them stuff they're interested in. If they click on it, it's because they wanted to click on it, and there's no point sending them stuff they don't like; that just clutters up their input, so to speak.
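To make that objective concrete, here is one way to write it down; the notation is mine, not anything the platforms have published. Here s_t is the algorithm's summary of the user at time t, a_t is the piece of content it chooses to show, and c_t is whether a click happened:

```latex
\text{Myopic: } a_t = \arg\max_a \Pr(\text{click} \mid s_t, a)
\qquad
\text{Long-run: } \max_\pi \, \mathbb{E}\Big[\sum_{t=0}^{T} c_t\Big],
\quad c_t \sim \mathrm{Bernoulli}\big(\Pr(\text{click} \mid s_t, a_t)\big),
\quad s_{t+1} = f(s_t, a_t)
```

The term that matters for everything that follows is the transition s_{t+1} = f(s_t, a_t): in the long-run version, the user's own state is something the policy can influence.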
But I think the algorithms had other ideas. The way an algorithm maximizes click-through in the long run is not just by learning what you want, because you are not a fixed thing. You can get more long-run click-throughs if you change the person into someone who's more predictable: someone who is, for example, addicted to a certain kind of violent pornography. YouTube can make you into that person by gradually sending you the gateway drugs and then more and more extreme content, in whatever direction. The algorithm doesn't know that you're a human being or that you have a brain. As far as it's concerned, you're just a string of clicks: content, click, content, click, content, click. But it wants to turn you into a string of clicks that, in the long run, contains more clicks and fewer non-clicks. So it learns to change people into more predictable versions of themselves, which, it turns out, probably means more extreme versions of themselves.
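Here is a toy simulation of that dynamic. It is purely illustrative: every number and function in it is invented, and it sketches the incentive, not any real recommender. The user is reduced to a one-dimensional "preference" that drifts toward whatever content they click on, and in this model more extreme content gets a small engagement bonus. A policy that always shows content slightly past the user's current preference ends up with more total clicks than one that shows exactly what the user already likes, which is why a long-run click maximizer would discover the drift strategy on its own:

```python
import random

def click_prob(preference, content):
    """Chance of a click: highest when content matches the user's current
    preference; in this toy model, more extreme content also engages more."""
    match = max(0.0, 1.0 - abs(preference - content))
    return match * (0.5 + 0.5 * abs(content))

def simulate(policy, steps=1000, seed=0):
    """Run one user against a recommendation policy; return total clicks
    and where the user's preference ends up."""
    rng = random.Random(seed)
    preference, clicks = 0.1, 0
    for _ in range(steps):
        content = policy(preference)
        if rng.random() < click_prob(preference, content):
            clicks += 1
            # Clicked content pulls the user's preference toward itself.
            preference += 0.05 * (content - preference)
    return clicks, round(preference, 2)

myopic = lambda p: p                   # show exactly what they like now
drift = lambda p: min(1.0, p + 0.1)    # always slightly more extreme

print("myopic:", simulate(myopic))  # fewer total clicks; preference stays 0.1
print("drift: ", simulate(drift))   # more total clicks; preference near 1.0
```

The myopic policy never changes the user; the drift policy sacrifices a little click probability early on to move the user somewhere more profitable later, which is exactly the manipulation incentive being described.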
If you indicate that you're interested in climate science, it might try to turn you into an eco-terrorist, sending you articles full of outrage and so on. If you're interested in cars, it might try to turn you into someone who just watches endless reruns of Top Gear.

Why is the person that's more extreme more predictable?

That's an empirical hypothesis on my part: if you're more extreme, you have a higher emotional response to content that affirms your current views of the world. In politics we call it red meat: the kind of content that gets the base riled up about whatever it is, whether it's the environment or immigrants flooding our shores or whatever it might be. Once you get the sense that someone might be a little bit upset about too many immigrants, you send them stuff about all the bad things that immigrants do: videos of people climbing over walls and sneaking onto beaches and all the rest of it. Human propagandists have known this forever, but historically a human propagandist could only produce one message, whereas the content algorithms can, in theory, produce one propaganda stream for each human being, specially tailored to them. And the algorithm knows how you engage with every single piece of content. Your typical Hitler propagandist sitting in Berlin had absolutely no idea, on a moment-to-moment basis, how people were reacting to the stuff they were broadcasting. They could see in the aggregate, over longer periods of time, that certain kinds of content were effective, but they had nothing like the degree of control that these algorithms have.

One of the strange things is that we actually have very little insight into what the algorithms are doing. What I've described to you seems to be a logical consequence of how the algorithms operate and what they're trying to maximize, but I don't have hard empirical evidence that this is really what's happening to people, because the platforms are pretty opaque.
They're opaque even to themselves: Facebook's own oversight board doesn't have access to the algorithms and the data to see what's going on.

Who does?

The engineers, I think, but their job is to maximize click-through. So pretty much no one who doesn't already have a vested interest has access to what's happening. That, I think, is something we're trying to fix, partly at the government level. There's a new organization called the Global Partnership on AI, which could turn out to be just another do-goody talking shop, but it actually has government representatives sitting on it, so it can make direct policy recommendations to governments, and in some sense it has the force of governments behind it when it's talking to the Facebooks and Googles of the world. So we're in the process of seeing whether we can develop agreements between governments and platforms for a certain type of transparency. That doesn't mean looking at what Chris is watching on YouTube.

You do not want to do that.

You do not want to do that. It means being able to find out how much terrorist content is being pumped out, where it's coming from, and who it's going to: slightly more aggregated stuff, like typical data scientists do. And possibly being able to do some kinds of experiments: if the recommendation algorithm works this way, what effects do we see on users compared to an algorithm that works in a different way? To me, that's the really interesting question: how do the recommendation algorithms work, and what effect do they have on people? If we find that they really are manipulating people, that there's a consistent drift, that a person who starts in a particular place will get driven in some direction they might not have wanted to go, then that's really a problem, and we have to think about different algorithms.
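As a sketch of what such an experiment might look like, here is a hypothetical aggregate drift measurement, again with invented dynamics and numbers: run two recommendation policies on simulated user cohorts and compare where the cohorts end up, without inspecting any individual's history:

```python
import random
import statistics

def run_user(policy, pref, steps, rng):
    """Simulate one user: clicked content pulls their preference toward it."""
    for _ in range(steps):
        content = policy(pref)
        if rng.random() < max(0.0, 1.0 - abs(pref - content)):
            pref += 0.05 * (content - pref)
    return pref

def cohort_drift(policy, n_users=500, steps=200, seed=1):
    """Mean and spread of (final - initial) preference across a cohort."""
    rng = random.Random(seed)
    shifts = []
    for _ in range(n_users):
        start = rng.uniform(-0.5, 0.5)
        shifts.append(run_user(policy, start, steps, rng) - start)
    return round(statistics.mean(shifts), 3), round(statistics.stdev(shifts), 3)

status_quo = lambda p: p       # control: show current tastes
nudging = lambda p: p + 0.1    # treatment: always slightly more extreme

print("control drift:  ", cohort_drift(status_quo))  # ~0: no consistent drift
print("treatment drift:", cohort_drift(nudging))     # large, consistent shift
```

A consistent nonzero drift under one policy but not the other is exactly the kind of aggregated, privacy-preserving evidence such transparency agreements could aim to surface.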
In AI, we often distinguish between reinforcement learning algorithms, which try to maximize a long-term sum of rewards, and supervised learning algorithms. In this case, the long-term rate of clicks on the content stream is what a reinforcement learning algorithm would be maximizing, and algorithms of that kind will, by definition, manipulate: the action they can take is to choose a particular piece of content to send you, and the state of the world they are trying to change is your brain. So they will learn to do it. A supervised learning algorithm, by contrast, is trying to get it right right now: it's trying to predict whether or not you're going to click on a given piece of content. A supervised learning algorithm that learns a good model of what you will and won't click on could be used to decide what to send you in a way that's not based on reinforcement learning and long-term maximization, but simply: given a model of what you're likely to click on, send you something consistent with that model. In that case, you could imagine that it wouldn't move you, wouldn't cause you to change your preferences; done right, it could leave you roughly where you are.
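A minimal sketch of that supervised alternative, with features, data, and model choices that are mine, purely for illustration: the model only answers "how likely is a click right now?", and because nothing in its objective refers to future clicks, there is no term that rewards changing the user:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy logged data: one feature, the distance between the user's current
# preference and the item's position; label is whether a click happened.
X = np.array([[0.0], [0.05], [0.1], [0.5], [0.7], [0.9]])
y = np.array([1, 1, 1, 0, 0, 0])  # nearby items got clicked, distant ones didn't

model = LogisticRegression().fit(X, y)

def recommend(user_pref, candidates):
    """One-step decision: rank candidates by predicted click probability
    under the current model of the user. No planning over future user
    states is involved, so there is nothing to gain by shifting them."""
    features = np.array([[abs(user_pref - c)] for c in candidates])
    probs = model.predict_proba(features)[:, 1]
    return candidates[int(np.argmax(probs))]

print(recommend(0.2, [0.1, 0.5, 0.9]))  # -> 0.1, the closest match
```

Picking the argmax is one choice; sampling among high-probability items would also be "consistent with the model" in the sense described, and either way the decision is made one step at a time.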
Are you familiar with the term audience capture? Do you know what it means from an online creator's perspective?

I can imagine, but not as a technical term.

It's not a technical term, but it's basically what happens when a particular creator online finds a message, a narrative, a rhetoric that resonates with the audience. What you see is that this creator becomes captured: they start to feed their own audience a message they know is going to be increasingly well liked. For the most part this looks like a slide toward one particular direction or the other, at least politically, but it happens with anything: people inevitably niche down and bring their audience along with them. The fascinating thing here, first off, is that it's unbelievable that these algorithms, which are simply there to maximize time on site or click-throughs or watch time or whatever, have managed to find a way, things that we programmed have managed to find a way, to program us so they can do their job better. When I read that in your book, I thought it was insane; it's one of the most terrifying things. And it's happening: everybody listening to this has had something occur with regard to their preferences, their worldview, whatever it might be; something has slid one way or another. It may not be toward the extremes, but I would say, anecdotally, based on what I see in the world, there are increasing levels of partisanship, no matter what it is, whether it's sports, politics, race relations, anything; people are moving toward the extremes. Why is this happening? Well, partly people are getting into echo chambers and only being shown stuff like that, and partly the algorithms are actually trying to make them more predictable. But on top of that there's another layer: the creation of the content itself.
The content comes in from the creators, and they have their own level of manipulation, which has occurred through their own feeds; they then turn that, second order, into: what do I want to create? What have I seen that's successful? What does my audience seem to resonate with from me? So you have layers and layers of manipulation going on here.

Yeah, and I think in some ways the creators are being manipulated by the system. I think every journalist now is thinking, okay, I have to produce something that's clickbait; I have to write an article with a headline sufficiently attractive that it'll get clicked on. It's almost at the point where the headline and the article are completely divorced from each other. And you can see this now in the comments: the people writing comments at the end of the article will say, I'm really pissed off, this is just clickbait, the article doesn't actually say anything about the thing you said it would. So it's not as if this has never gone on before, and obviously you can't ban people from writing interesting articles. I often think about the novel that says on the back, "I couldn't put it down." Well, should we ban novels, then, because they're addictive? You can't have that. But I think it wasn't too bad before, because the feedback loop was very slow, and there wasn't this algorithmic targeting of individuals. Think about the number of learning opportunities for the algorithm: it's billions every day for the YouTube selection algorithm. The amount, the consistency, the frequency, and the customization of the learning opportunities for manipulation are so much greater: millions or billions of times greater, and more systematic. That systematic element reminds me of a story, which may be apocryphal, about a psychology lecturer who has been teaching his students about subliminal effects. The students decide to play a trick on him: every time he's on the left-hand side of the room, they pay attention and look really interested, and every time he walks to the right-hand side of the room, they all look bored and start checking their email. By the end of the lecture he's pinned against the left-hand wall, and he has no idea that he's being manipulated. Because the manipulation was systematic
and sufficiently frequent, it had a very strong effect. I think that's the difference here: because it's algorithmic, and because it's tied into this very high-frequency interaction that people have with social media, it has a huge effect, and I think a pretty rapid effect as well.