Hidden AI: How algorithms influence our daily lives, Bristol Data Week 2024

Jean Golding Institute
The AI and the Future of Society event was a public-facing day held on Thursday 6th June 2024 in Bri...
Video Transcript:
[Music] Hi everyone, my name is Hugh Day, I'm a data scientist at the Jean Golding Institute. I hope everyone's been enjoying Data Week so far, and in particular the lunch. So our next session is Hidden AI: how algorithms influence our daily lives. I'm really excited about this session. We're joined by Kieran Sharma and Riku Green. Kieran is doing a PhD here at the University of Bristol in synthetic biology, where he's using AI to try and figure out how to synthesize new biological compounds, and Riku is doing a PhD with the Interactive AI Centre for Doctoral Training, where he's looking at how you can make AI systems more explainable. So Kieran and Riku together have started the Artificially Ever After podcast, which is all about making AI more explainable. They cover lots of different topics where they try to demystify AI for the general public; they've done episodes on how Tesla self-driving cars work, can AI read your mind, and how does ChatGPT work. So today we're going to be doing a live recording of the podcast, all about
hidden AI. So without further ado, let's get on with it.

Yeah, hi everyone, thank you. I'm Kieran, and I'm Riku, and welcome to our podcast, Artificially Ever After. It's very strange doing this in front of a crowd of people; Riku and I are normally just sat in this tiny little box room up at the university where we record this podcast. But thank you, Hugh and the JGI, for having us. We just wanted to kick off with a really quick background on us and how we came to start this podcast. So Riku and I are long-time friends: we did our undergrad degrees here at Bristol together, then went on to do Masters, and we're both now doing PhD research in AI. And with this podcast as well, we can't seem to escape each other; we live together too, so we see each other very often. But after starting our research, we both felt that there was this significant gap opening up between the latest research and development going on in AI and public awareness and understanding, especially
at the rate at which AI was being integrated into society. So, motivated by that, we started this podcast where, as Hugh was saying, each time we take a different topic or application of AI. I think our last episode was 'An AI a Day Keeps the Doctor Away', looking at AI in healthcare. We try to demystify that technology, look forward to possible future directions, and think about the ethical and societal concerns. So we've spent a lot of time recently thinking about the future of AI, which is why we wanted to come here today. But we're going to be focusing on hidden AI, because that aligns quite well with the motivation of the podcast. So do you want to give us a bit of structure on what we're going to be discussing?

Yes, so this podcast episode is in two halves. In the first half we'll talk about what hidden AI is and about recommendation systems, and we'll have a little bit of time for listener questions, and then we'll have a little fun fact segment, which is kind
of part of our podcast gimmick, just to have a little breather. Then in the second half we'll talk about generative AI, a bit about the future, and then have some more time for listener questions.

Yeah, so to kick us off, I'm going to do what I normally do in the podcast, because we don't normally have an audience sitting in front of us and I play the role of the listeners. So I'm going to ask Riku: do you want to define what hidden AI is?

So hidden AI refers to the AIs that are integrated into our everyday systems, and specifically ones where it's not necessarily clear that you're actually interacting with an AI. Previously in these systems maybe a human was making some decisions for you, or the human was acting as the online agent, but now the AI is replacing those, and it's not clear that you're actually interacting with an AI. A really widespread usage of hidden AI is in the recommendation systems that we see in social media, but there are other use cases like credit assignment: if you're going to get
a loan now, an AI often makes that decision for you. Another one that's pretty serious, especially in the US, is parole assessment in the justice system. So these big decisions that affect your lives are now, a lot of the time, being decided by an AI when you're not even aware of it.

Yeah, and I guess the inverse of that is visible AI. Some examples there: you've got chatbots, so ChatGPT, Siri, Alexa, and self-driving cars as well. These are examples where, when we're using that technology, we very explicitly know that we're interacting with AI, so that's the inverse, isn't it? But do you want to generally define the task of recommendation, and we'll step back a bit?

Yeah, so hidden AI is a big field, but we'll talk about recommendation systems today, just because they're the most widespread hidden AI that we interact with. We think about recommendation systems as an AI concept, but we're actually going to take a step back and think about what
recommendation systems were before AI. The task of recommendation is: when there are too many options, you just need someone or something to tell you what to pick, and a lot of the time that's meant to be in your best interest. So we're going to play a bit of a game here with the lovely Hugh, because I've heard you've explored a lot of the Bristol cafés; a bit of a pastry connoisseur, one might say.

Yeah, exactly.

So if someone wants to ask you, 'which café would you recommend?', what kind of things would you consider before giving that recommendation?

Right, so I remember when you guys briefed me and asked me about this before. So, thinking about Riku: you don't need to be sitting near the front to see Riku is quite athletic and he likes to watch how much he eats, so we're looking for something maybe a bit more low-calorie, and whether he has any dietary requirements. And then he's on a PhD stipend,
so he's probably not going to go somewhere too expensive. And a little birdie told me that you like cinnamon twirls, so we'd look for somewhere with those. Again, also thinking about somewhere that's local to you both, not recommending you go to London for a pastry. So those are the kinds of things I would think of off the top of my head.

And do you also consider how happy we are with your recommendation?

Right, that's really important to me, because why would you ask, and why would you come back to me again, if I gave you a rubbish recommendation? Not that I'm making a business from recommending pastries to people, but you know.

Yeah, and I'd hope, as a friend, you would want that. So this is an example of what recommendation systems are and how they actually serve society: part of the equation is the satisfaction of the person who wants the recommendation. And a really intuitive example, at a bit of a bigger
scale: there are maybe 100 cafés in Bristol, but if you think about a library, there are probably thousands of books, and you want to find a book for you. It's not realistic for you to read all the blurbs and pick one yourself. And this is actually where the first instance of an AI-style recommendation system was implemented, called the Grundy system, in 1979. So these aren't actually that new. And the key thing there is that an expert librarian has thought up if-then rules to profile you, to find a book that you'll be satisfied with. So again, the original systems did consider the satisfaction of the people looking for a book.

Yeah, and I think since then, over the past several decades, as we've generated more data, there's more information online that we need to interact with, and that task of recommendation has got even more important; it's not feasible to look through all of that information to find what we want. So Riku and I actually set ourselves a task to go away
and think about whether we could list all the different areas of the digital world where we interact with a recommendation system. And for me, the list was definitely longer than I was expecting. But I'm going to put Hugh back on the spot here now and see if you can think of any recommendation systems you interact with.

Sure, so I guess search engines are the obvious one. And then I'm just thinking, every week Spotify is like, 'here are songs we think you might like', and some weeks they give me a list of songs that I hate, and sometimes they give me my new favourite song. So those would be the two examples that jump out at me.

For sure, yeah. I think entertainment is another big one, so you think of Netflix. A stat on Netflix, actually: over 80% of Netflix content is consumed through recommendation systems. And there's a similar one for YouTube: over 70% of YouTube videos are recommended to people rather than specifically searched for. Another one too is
Google Maps. That's a really interesting one because, firstly, if we're thinking about coffee shops, you might not know which coffee shop you want to go to; you might just want one in a certain area. So first you're getting recommended which coffee shop to go to, and then you might not know how to get there, so it can recommend a route. That's really interesting because we're not just being told how to navigate the digital world; it's actually influencing how we navigate through the physical world as well.

Yeah, and this is again why we're talking about recommendation systems now: it's hard to go anywhere, digitally or physically, where we don't come into contact with some recommendation algorithm. And what we've seen, from when we had the Grundy system to what we have today, is a change in the central goal of the algorithm. Historically, with searching for a book in the library, the central goal was very much to recommend you the
best content for what you were searching for, and to try and satisfy that specific need. Now, because we've got so many different recommendation algorithms working in conjunction in the digital world, the goal has changed to trying to maximize your engagement, to fight for your time on that specific website using that specific algorithm, hasn't it?

Yeah, and the key difference here is that when you maximize for engagement, all that matters is whether you click on the content that's recommended to you. So the satisfaction we had previously, with the Grundy system or with Hugh here, is not part of the equation anymore. Another thing is that Hugh is the expert on the bakeries, and with the Grundy system you have an expert librarian doing the recommendation, so they know what a good book is. But now the 'expert' in the AI is actually looking at your micro-behaviours: the way these systems profile you is not necessarily on the things that make you an individual, but on how long
do you spend on each post when you scroll, and maybe your reactions to it. So you get some crazy stories where some recommendation systems can actually predict if you're expecting a baby, sometimes before the person even knows it themselves. This is the level of profiling that these hidden AIs can do.

And to emphasize the point that they're maximizing engagement, and that the more time you interact with them the better they get, I think a really good example, which we probably all think of, is TikTok.
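To make that shift in objective concrete, here's a minimal sketch; the post titles and scores are entirely invented for illustration. The same three posts are ordered two ways: in the order they were posted, and by a model's predicted engagement, the goal described above. Satisfaction doesn't appear anywhere in the second ranking.

```python
# Toy illustration of the objective shift: the same posts, ordered two ways.
# All titles and scores below are made up for illustration.

posts = [
    {"title": "Friend's holiday photos", "posted_hours_ago": 1, "predicted_engagement": 0.20},
    {"title": "Outrage-bait headline",   "posted_hours_ago": 9, "predicted_engagement": 0.90},
    {"title": "Local news update",       "posted_hours_ago": 3, "predicted_engagement": 0.40},
]

# Chronological feed: newest first, no model involved.
chronological = sorted(posts, key=lambda p: p["posted_hours_ago"])

# Engagement-maximizing feed: whatever the model predicts you'll click on,
# regardless of whether it leaves you satisfied.
engagement_ranked = sorted(posts, key=lambda p: -p["predicted_engagement"])

print(chronological[0]["title"])      # Friend's holiday photos
print(engagement_ranked[0]["title"])  # Outrage-bait headline
```

Notice that the engagement-ranked feed surfaces the provocative post first purely because it scores highest on predicted clicks; nothing in that objective asks whether the time spent was time well spent.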
So I've got some worldwide usage stats for TikTok as of July 2023, nearly a year ago now. They had 1 billion monthly active users, over 90% of those users were using the app daily, and over 60% of them spent over 10 hours per week on the app. If you extrapolate that out over an average lifetime, which maybe involves a bit of an assumption, it equates to nearly 2.3 years of the average life spent on TikTok, which is just mind-blowing. And what's also concerning is that we see very clear trends that TikTok usage, and time spent on the app, is going up and up whilst other social media are going down. So people are spending more of their time on TikTok, it's getting more of this data, and it's hooking people in more; you can clearly see the effects of that goal being maximized.

And just for context, the NHS recommends about two and a half hours of exercise per week, but some surveys we found suggest that 25% of the UK population don't do one half-hour workout a week. A stat from the US is that the average reading time per week is less than 2 hours. So we're looking at about five times as much time spent on apps like TikTok as on these maybe more productive ways of using our time. So the big question is: do these systems serve us or persuade us?

Yeah, are they actually maximizing what we want, what is best for us? I'd
argue that if you're spending five times more time on TikTok, it's definitely not. And another stat; I'm sure you believe us at this point, but there was a really interesting study, well, four papers actually, published last year in the Science and Nature journals. They did this huge study on nearly 300 million Facebook and Instagram users during the 2020 US election cycle. They were looking at what happens when you switch users from a chronological feed, where you're just shown information in the order in which it was posted, to a recommendation algorithm; they wanted to compare the two groups. Loads of things were found, but one of the main things they identified was that when you switch to a recommendation system, people spend over 50% more time on the app, which is just crazy, isn't it? Wow.

So, taking a bigger picture and looking at expectations versus reality: we have to think about
why these systems were built. They were actually built for more connectivity, but we see a lot less time spent together physically. Or information sharing: the idea was to democratize being able to publish your opinions; before social media you'd have to rely on the big media companies, but now everyone has a voice. Yet in reality we see a lot of fake news and content addiction. Community building was another intention: you'd have access to all the communities in the world and maybe a bigger world view, but in reality you see a lot of polarization and echo chambers. Again, the goal wasn't to have these negative consequences, but in AI you always specify the goal and you can't specify the means; this is how machine learning works in general. So it's not that these systems were necessarily designed to be bad. The key lesson I think we learned from social media, when we look back, is about responsibility: the companies didn't take
the responsibility required to make sure that the goal was met by the right means, this fell on the individual, and that's led to these outcomes.

Yeah, there's a really good quote from the people that made The Social Dilemma, a really good Netflix documentary all about recommendation systems. They were saying that when you discover a new technology and start implementing it, you also uncover a whole new set of responsibilities. The reason we're talking about recommendation systems first today is because we've just discovered and uncovered this new technology of generative AI, we're rolling it out into society, and we really need to think long and hard about the responsibilities that come with implementing it. That's what we're going to come on to in the second half. So we've looked back, and we're going to be looking forward after a quick round of questions.

Yeah, it's super interesting, because in some of the panels this morning we were talking about how generative AI is coming into our lives, and how it feels like
there should be a responsibility on industries to regulate and think more carefully about how they're rolling out these new technologies, but oftentimes they're just not taking that responsibility, because they're not incentivized to, and it's falling on us as individuals.

Exactly, yeah. Okay, so now we're going to go into the Q&A section. I'm going to start with a question that was submitted through the Ticket Tailor signup, which I think will be nice to get the ball rolling, and then we'll have some opportunity for people in the audience to ask questions on the microphones. So, someone asked: how can I optimize the way algorithms influence me in a positive way? Any thoughts on that?

It's a very good question. As we were mentioning, we can't really get away from these algorithms now, so you do obviously want to maximize how positively you're engaging with them. But in order to do that, you need to know you're interacting with an algorithm in the first place, and I think, from
when we went away to think of all the different ones we interacted with, there are way more than you'd expect. So I'd say first, just sit down and have a think: what websites do you go to, what apps do you use, and is the content you're consuming something you explicitly searched for, or is it being recommended to you? If the answer is the latter, you want to ask: do your historical interactions with that website, and the data that they therefore have on you, represent you as an individual, and do you want to be recommended more content that aligns with that? And if the answer is no, you could delete your cookies and your history from that site, try and hit refresh, and then, moving forward, interact with that algorithm much more mindfully; when you're making decisions and clicking on things, be a lot more mindful about what information you're giving them. One really good thing you can do is actually leave feedback. So rather than let these algorithms just infer your behaviour and your preferences, you can leave feedback and leave ratings, to show them what you do like and dislike. A lot of these apps now also have the little three dots next to a post or a piece of information, which sometimes give you the option to say 'show me less of this content' or 'this isn't relevant to me'. So I'd strongly recommend actually guiding that algorithm in the direction that you want it to go, rather than just letting
it guess that for you.

Yeah, those are good practical tips, and I think a more perspective-level idea would be to treat our online content a bit like how we treat our diets: what stuff do we consume online? For me, those addictive funny cat videos are like chocolate. We've all heard about having a balanced diet; maybe your vegetables can be a documentary, and your carbs can be something that excites you. But I think a lot of the time, when you're just recommended things, it's going to skew you to always buy chocolate. So have that mindset of actually taking responsibility. Sadly, the companies should have taken it, but as an individual you do have the power to affect what content you consume online.

Ironically, I was thinking of the recommenders that would probably recommend me actual chocolate. Is there anyone feeling brave in the audience who would like to ask a question?

You were talking about book search, and I
was just thinking that when you go to a bookshop, you're after something, but actually so much of the process of rummaging through is where you come across something that's really significant to you, or that you weren't quite looking for. How do we balance that equilibrium? Efficiency is great, but there's also the process, the journey, the voyage; there's potentially something very lost in that as well.

Yeah, so it's almost like: what are we losing from the process of searching? It's a really interesting question and a good point. I actually get a lot of stick from my parents, because when I was doing my degree they'd always say, 'oh, you can just Google it now, you don't have to actually search the whole library to find that textbook and then find the relevant information in it'. So we are definitely losing something, but I think, as we were discussing,
just with the sheer amount of information that's out there now, especially in the digital world, there are certain things we could never feasibly search by hand, so there are definitely areas where we need those algorithms. Granted, where that's spilling out into the physical world, we're definitely losing something, but there's also a gain: there's a real productivity gain that comes with being able to find something very efficiently and quickly on Google. So it's pros and cons, isn't it, as with anything in AI. But a very interesting point.

I think one perspective is that it's not necessarily an AI problem but a human values problem. We need to identify which of these journeys we actually find value in. If you go to a book recommendation system, maybe the journey isn't valuable and you just want to find a book; although in this case I'd agree with you that when I look for a book, I love to read the blurbs and actually figure out
which book is the best. But maybe in some other situations, when you're looking for an article, if you don't use a recommendation system you'd have to sort all the websites from A to Z, or by when they were published, and look through them all, and maybe in that case there is no value in the journey. I think that is a human value: we need to figure out which journeys we actually want to keep.

And it's also now becoming a bit of a skill to search. I remember when I started my PhD, my supervisor said, 'go find a paper on this', so I just Googled it, and the first 100 results were not useful. I thought, maybe I should go on Google Scholar, and then maybe I should filter by how many citations a publication has, or something like that. And people were talking in previous sessions about prompt engineering: if people are using ChatGPT to search for stuff now, you've got to ask in a particular way, in the same way that when you use Google you can specify, 'okay, this word I'm mentioning in my search has to appear in the results'. So I guess in some ways that's putting some onus back on us and giving us some power and ownership over it.

For sure. It's a great question, thank you. Any other questions?

Hello, thank you very much for this discussion. I want to ask about what
I've been hearing the most about in terms of daily AI influence: this thing about posts with political content, on Facebook especially, being pushed and boosted when they feel more controversial, and also about being fed posts that are closer to the views we seem to have. I think this is very dangerous, at a level I cannot even fathom, but is there any way we could influence the algorithm in that area?

Really good question. One first thing I'd note, at the risk of sounding somewhat controversial: those studies I was mentioning, the ones published last year, were looking at political information being shown on people's feeds. And one really interesting thing that kept coming up is that, on people's Facebook and Instagram feeds, reshares, where you're not actually posting new information, you're just re-sharing it, made up a very small fraction of the total amount of content that people were consuming; yet the most polarizing, politically controversial and often untrue information was all in
those reshares. And this goes to a point that Riku and I often make, and it's what I meant about sounding a bit controversial: these algorithms are playing into human behaviour, and we are actually more drawn to some of those controversial statements, things which are more polarizing or which reinforce our existing views; you know, confirmation bias. So I think, if we generally agree that having that polarization on those platforms is bad, then I guess we could put in some hard-coded part of the algorithm which goes against that, but I don't know what your thoughts are as to what that could be.

Yes, it's a really interesting question, and exactly as Kieran is saying: remember, the goal is to maximize engagement, and there are many means to achieve that goal. If we look at human behaviour, a lot of the time we are attracted to things that produce a big emotional response, and a lot of that is anger, a lot of it is negative emotion. I think
humans have a bias towards negative emotion and anxiety, because in the wild it's better to assume there's a tiger in a bush when there isn't one than to assume there isn't one when there is. So we can understand that there's some human nature being leveraged here. That's the human side; the other side of the question is that a lot of these algorithms are behind closed doors; they're not transparent. A good example of the opposite is X, or Twitter: they've open-sourced their recommendation system, which means we can actually look at it. It's not that the everyday person needs to be able to understand the code, but at least with it out there, computer scientists can go through the code and see: does this system actually care more about your anger or your happiness? When you ask how we can change these systems, we'd actually have to change the code if they are coded that way, but
the first step to unveiling that is for them to be transparent.

Yeah, I guess the difficulty there is that the responsibility is on the people making the systems; we're limited in what we can do, since you and I can't go and change Facebook's recommender code. So there's a downside to that, but I think later we'll touch on other things that we can do as well.

Yeah, and just one last point before we move on: Twitter did open-source their algorithm, and that prompted a lot of discussion around whether we should try and change it, but that is by no means the status quo. Only last week, a big section of Google's search ranking algorithm was leaked, and there were so many instances in there where it completely contradicted public statements they'd made about what went into ranking pages. So we need that public discussion; we need to know what they're currently ranking on in order to push back.

Great, cool,
that wraps up the first round of questions, I think, so thank you, everyone. So, we do this segment in our podcast to break the flow a little bit. I know we've kind of done that with questions, but it's our fun fact segment. We often get a bit doomy and gloomy and pessimistic, so we've added this little segment where we just bring a random fun fact about AI, and we've also enlisted Hugh to give one. So do you want to lead us off with yours?

Yeah, sure. Usually in the podcast they play elevator music in the background whilst they're saying the fun facts, so you're going to have to imagine that's playing; our audio budget didn't stretch to that today. Okay, so my fun fact is about a YouTube video I watched by a YouTuber called Michael Reeves. He does kind of goofy engineering and programming videos; the first video of his I watched was one where he modified a Roomba so that every time it crashed into an object it would swear. That's the kind of project he does. And he got a sponsorship
from an app that does financial trading and investment, and it was like, 'make a funny video using our app'. So he actually developed his own trading algorithm with the help of Frederick, his goldfish. What he did is, each day he would randomly pick two companies that you could buy stock in. He then had a camera aimed at Frederick's fish tank, tracking where Frederick was during the day; one company would be assigned to the left of the fish tank and one to the right, and whichever side Frederick spent more time on, that was the stock he bought. He got money from the trading app company to test this, and Frederick did really well: he actually outperformed the NASDAQ, which is one of the top trading indexes. And the money the YouTuber made, he reinvested in making Frederick's fish tank nicer.

Oh, very nice. I think I need to buy a goldfish as well. So, my
fun fact is actually about sports. During the COVID-19 pandemic, there was a football team up in Scotland, Inverness FC, and when they were trying to limit the number of employees at the football ground, they enlisted an AI ball-tracking system for their cameras, so that the camera would operate itself and always have the perfect view of the football. Which sounds great, but in reality, when they actually started using it, whenever the football went out of bounds, for a throw-in or a corner, and then went back into play, the camera would lock onto the bald head of one of the linesmen and just keep filming him walking around. I think I've heard that three times now and I still find it funny.

I've got a really short and sweet one, just using my half-Japanese background: with AI, if you read the letters as a word, 'ai' means love in Japanese.

That's sweet; a bit more optimistic and positive sentiment. Right, now back to how generative AI is going to ruin the world.

Yeah, that sounds about right.
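As a playful aside, the goldfish stock-picking 'algorithm' from the fun-fact segment can be sketched in a few lines. This is only an illustration: the real version used a camera tracking the fish, and the ticker names here are arbitrary stand-ins.

```python
import random

def fredericks_pick(left_stock, right_stock, rng=random.random):
    """Buy whichever stock's side of the tank the fish favoured that day.

    `rng` stands in for the camera: it returns the fraction of the day
    the fish spent on the left side of the tank.
    """
    time_on_left = rng()
    return left_stock if time_on_left > 0.5 else right_stock

# Deterministic example: if Frederick spends 70% of the day on the left,
# the left-hand stock gets bought.
print(fredericks_pick("AAA", "BBB", rng=lambda: 0.7))  # AAA
```

The point of the joke, of course, is that a coin-flip strategy can briefly look like it beats an index; the sketch just makes the mechanism explicit.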
So, second half now. We're going to run through a little bit more content before the last round of questions, and we're going to be focusing on looking forward at generative AI.

Yes, so in the first half we were looking back, because we saw social media being integrated a number of years ago, and now we can see the outcomes of that. We refer to this as humanity's first contact with AI, because it's the first time widespread hidden AI usage was integrated into society, and now we can see some of the outcomes. The next contact is with generative AIs: we're seeing it massively integrated; I think everyone's heard of ChatGPT now. In fact, can we have a show of hands: who's used ChatGPT before, even just once? So, a lot of people in the room. And maybe who's used it this month, or this week? Who uses it every day? Okay, so that's quite a large number, so this really highlights
how transformative it is because you know it's already around everyone's already using it and it can do so many different things yeah uh and it is you know It's Tricky right like looking in the face of a new technology and trying to know what all those responsibilities will be I think it's easy what we've just done like in hindsight looking back at recommendation systems now we know the effects that they've had on society and it's much trickier to do that to do that looking forward but I guess the way that we wanted to try and
do that a little bit today. Ricky mentioned expectations versus reality in the first half, so I think it's worth asking why this new technology is so transformative, what promises the companies developing it are currently making, and what the flip side of those promises could be. Yeah, and we're seeing a lot around education, the democratizing of education: everyone now has access to an Einstein, so to speak, if they want to learn about physics. Surprisingly, a lot of people use it for emotional advice, and, as you'd expect, for writing text and code, just generally producing things faster. And in line with the topic of hidden AI, we need to appreciate that the more we integrate it into society, the more power it has. The "hidden" part isn't always clear cut either: if someone publishes content made with AI and you receive it without knowing AI was used, that's hidden AI too. Another thing, just as a prediction of possible outcomes, think about education: if we're all learning from ChatGPT, then whatever bias ChatGPT has, whatever worldview, we could see a homogenization of ideas. If we all learn from the same teacher, can we really expect to end up with completely different worldviews? So we have to be very conscious about our reliance on this technology. And alongside that homogenization of ideas, the information goes back and gets used to train new algorithms, and you get this sort of convergence. But yeah, reliance is a really important one, and I think when we were first
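That convergence effect can be illustrated with a toy simulation (everything here, the Gaussian model, the truncation rule, and the numbers, is invented for illustration and is not from the episode): each "generation" fits a simple model to the previous generation's output and favors its most typical samples, and the spread of the data collapses.

```python
import random
import statistics

def next_generation(data, n, rng):
    """Fit a Gaussian to `data`, then sample a new dataset from the fit,
    keeping only 'typical' outputs (within 1.5 sigma of the mean), a
    stand-in for generative models favoring their most likely outputs."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    out = []
    while len(out) < n:
        x = rng.gauss(mu, sigma)
        if abs(x - mu) <= 1.5 * sigma:
            out.append(x)
    return out

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]  # diverse "human-written" data
start = statistics.stdev(data)

for _ in range(20):  # 20 generations of models trained on model output
    data = next_generation(data, 1000, rng)

end = statistics.stdev(data)
print(f"spread of ideas: {start:.3f} -> {end:.3f}")
```

The spread shrinks sharply over the generations: once each model only sees the previous model's output, diversity never comes back.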
discussing that, it sounds as though it's a problem for maybe several years' time, when these machines are much more capable, so maybe it's not something we need to address now. But concerningly, we're already seeing generative AI getting integrated into that hidden AI category. Previously we'd go explicitly to ChatGPT, or to DALL·E for image generation, and we knew what we were signing ourselves up for: we were going explicitly to get generative AI. But now we're starting to see several examples where it's running in the background without our explicit awareness. One really good example of that: just last month Google released their AI Overviews feature. For those of you who haven't heard of it, when you Google search something, they added a generative component at the top of the search page that gives you a summary of what you were searching for. They explicitly said this wasn't the same technology as the large language models powering ChatGPT and therefore wouldn't hallucinate as much, but we saw some very weird and funny examples where it very much was hallucinating. I think someone trying to make homemade pizza Googled what to do if their pizza toppings keep sliding off, and the summary feature said, oh, you can just try supergluing them back on. I can guarantee that was probably a hallucination, so they've been reeling this feature back in. Another example, where the reliance and the possible downsides are very clear, is actually somewhat aligned with the research I do in synthetic biology. There's been some incredible research in the past few months on generative AI: there was work where these models produced a whole new class of antibiotics able to fight antibiotic-resistant bacteria, and the researchers and biologists using the tool said it did that in a way they would never have thought to do themselves. That's incredible. But as long as we prioritize performance, more instances of it generating brand-new things, and we don't equally prioritize understanding those systems, how they're making those decisions, we could get into some very sticky situations. Maybe in several generations' time that software is working so much faster and producing genetic modifications so far beyond any biologist that we just have to take it on trust, and then we move it into the physical world and it kills millions of people. So that's one doom-state scenario: take these drugs, the AI told me
to. Yeah, exactly. So we've got all this stuff now going on in the background, and just to reiterate that point, Europol released a big statement a couple of years ago on the current situation with generative AI and predictions for the future, and they estimated that by 2026 up to 90% of new content going online could be synthetically generated. And 2026 is scarily just around the corner now. Even earlier this week there was a BBC article which found that lots of young voters in the UK were being recommended content on TikTok and Instagram about the upcoming election that was generated by AI, featuring key politicians, misleading content, factually inaccurate content. So it's happening now, isn't it? Yeah, I had two examples off the top of my head. Yesterday at our Data Hazards session, one of the examples we saw was from elections in India, where people were using deepfakes of actually dead politicians to spread political discourse. But closer to home, in my own work, there have been lots of studies looking at the number of published papers containing the phrase "as a large language model", which, if you've not used ChatGPT before, is a key thing it will say: "as a large language model, I can't tell you how to do this, but here's how to do this". So someone says, hey, could you write the abstract for my paper, then blindly copies and pastes it in, and you're seeing a massive increase in papers mentioning this, with people publishing it because they didn't read it, and peer review somehow missing it. So that's not great. So, where is this going? I think we've all heard a lot on the internet about the existential risk of AI; that's more of a long-term view, and we're actually not going to talk about that, because there's a much shorter-term risk that we can all
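The phrase-counting studies mentioned a moment ago boil down to a very simple scan. Here's a minimal sketch of that kind of check (the regex and the toy abstracts are invented for illustration, not taken from any of the actual studies):

```python
import re

# Tell-tale boilerplate that a chatbot emits and a careless author
# may paste verbatim into a manuscript.
TELLTALE = re.compile(
    r"as (?:a|an) (?:large language model|ai language model)",
    re.IGNORECASE,
)

def flag_abstracts(abstracts):
    """Return the indices of abstracts containing chatbot boilerplate."""
    return [i for i, text in enumerate(abstracts) if TELLTALE.search(text)]

abstracts = [  # toy corpus, not real papers
    "We study protein folding with graph neural networks.",
    "As a large language model, I cannot provide medical advice, but...",
    "Our results improve on the previous state of the art.",
]
print(flag_abstracts(abstracts))  # → [1]
```

Real studies are more careful (they control for legitimate mentions of the phrase in papers about language models), but the core idea really is this simple.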
imagine, which is trust collapse. Recommendation systems curate what you're going to see, and generative AI creates what you will see, so those two things together are the complete picture now. And given that up to 90% of online content could become AI generated, there's this problem of trust collapse. A good analogy is money: the world relies on trust that the money we use is real, and there are so many regulations to stop counterfeit money, and because of that we can actually use its utility. The same can be said about the digital world: we see so many benefits from it, but this technology could undermine our trust, and then we lose those benefits. You start getting counterfeit information, almost? Right, exactly. So given all that, it's a very doomsday scenario, but what can we do in this situation? Yeah.
So, what can we do? There are three classic bins to separate this into: government action, industry action, and what individuals can do. To begin with government, there's a really good example in what the EU did with data protection, the GDPR laws. Every time you visit a website you can read what data the website is collecting from you and consent to it before you use the site; that's the idea of transparency. It would be great to have something similar enforced by governments for AI, where if you go on a website you can see exactly where AI is being used, and give your consent and be aware, so it's no longer hidden. And obviously we've got algorithmic transparency policies being written up and deployed, but as I'm sure we're all aware, we can't always wait for government, so there's definitely an onus on industry too. One reason we need to think about that is that we currently find ourselves in a very similar situation to the one that brought us recommendation systems, as we were discussing earlier. We've got a very select set of big companies, in a very small geographical part of the world, developing these systems, and we're again in an arms race to produce the most capable AI with the latest features. One of the many reasons that's concerning comes down to this idea of emergent abilities. These are capabilities of the models that weren't there in previous versions, but as the companies scale up and throw more money at the algorithms, the models become able to do new and wonderful things. The very scary part is that most of these emergent abilities have been identified by members of the public and research groups after the models were released into the wild. So we need some incentive for these companies to slow down and have a much more robust testing system, whether that's a regulatory body that sets guidelines for
how they need to test these algorithms before they're released into the wild. And on what individuals can do, I want to start off with some encouraging words, because I think there's a preconception about AI that it's not very understandable, and as researchers who've looked at the maths, we completely agree. But if we flip this on its head, we realize the maths reflects the concepts behind building these AI systems. Looking at just today, we haven't explained any of the maths of recommendation systems, but the concept is that they maximize clicks, and from that we can make some predictions about how they would affect society. That's the level we encourage people to try to learn at; it's definitely learnable, we can definitely learn the concepts underpinning this technology. Yeah, definitely, and, sorry, this is a perfect point for me to plug our podcast, but that is our central goal there, and why we've come here today to give you a taste of it. We very much believe this information is digestible, but no one seemed, at least in podcast format, to be trying to take all these different technologies and areas AI has been integrated into and demystify them. So we would love it if you could check out the podcast, because our central goal is to educate people in this non-technical way, and we think everyone needs to hear it: AI is going to affect everyone, our friends, our family. So yeah, we'd love it if you could spread the word. Cool, so I think we have time for a few more questions. Mike, would you mind bringing your microphone to the gentleman in the front row please? Thank you. I just want to bring up a scenario. I saw an article recently, and a screen grab of it, which I can share with you afterwards, about Meta AI. I think it's already in the US, and I think it's coming to the UK soon, but there was an example
where parent groups, so often on Facebook people get together to chat, and they're expecting to get recommendations from other people, but they received an unrequested AI response giving recommendations about a specific school. It was about autistic children, and it recommended specific schools and said, "I have a child and my child had a good experience there". So part of my concern, and my question about that, is: how much have we fed it, and how much is it going to keep feeding? And with Gemini and Gmail now, what does the GDPR mean in terms of what we've already shared and what it has historically had access to? Yeah, very interesting question. So you're saying this kind of recommended content is generated as well? Well, I didn't look into whether that was an actual school and whether it was true, but my assumption is the content probably is quite meaningful, because it's probably someone else's story: it's aggregated some stories and maybe anonymized them. But it's definitely wrong for it to say "I have a child" when there's nothing conscious saying it. Yeah, I think this falls into the category of misuse. It's a capability of these algorithms, and there are a couple of concerning things. One is that the companies developing these tools do put guardrails in place to stop you generating certain types of content that could be controversial, or that directly references data in the training set, like copyrighted or protected information. But I don't think enough time is being spent on those guardrails, because there hasn't been a single generative chatbot that hasn't been jailbroken, where people were able to fairly easily get around the guardrails and get it to say things it wasn't meant to. So we definitely need to incentivize the companies to focus more on guardrailing these systems, making them robust against replicating copyrighted information. But there's also something else which is really interesting, I don't know if you want to touch on this: we need to know when something has been generated versus made by a real human. That's a very difficult task, but there's something really interesting going on with this radioactive data, isn't there, as a possible way to trace the source of information? Yeah, I think this deviates slightly from the question, but one way to safeguard against fake AI-generated content is, imagine a watermark. I remember at school, trying to make a presentation, I'd copy and paste an image from Google and it would have a stock-image watermark across it, and everyone knows you stole the image. The whole point of that is that human eyes can see this person stole the image. You can do the opposite with AI: you can have a hidden watermark that our eyes don't see, so it doesn't affect the quality for us, but an AI can easily detect whether the image is genuine or not. But it's
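That hidden-watermark idea can be sketched in a few lines. This is a deliberately naive least-significant-bit scheme for illustration only; real systems such as Google's SynthID are far more robust to cropping, compression, and editing:

```python
MAGIC = 0b1010  # 4-bit tag marking "AI-generated" (illustrative)

def embed(pixels):
    """Hide MAGIC in the least-significant bit of the first 4 pixel values.
    Changing an LSB shifts a 0-255 brightness value by at most 1,
    which is invisible to the human eye."""
    marked = list(pixels)
    for i in range(4):
        bit = (MAGIC >> i) & 1
        marked[i] = (marked[i] & ~1) | bit
    return marked

def detect(pixels):
    """Read the LSBs back and compare against the tag."""
    tag = sum(((pixels[i] & 1) << i) for i in range(4))
    return tag == MAGIC

image = [200, 201, 57, 34, 120, 119]  # toy grayscale pixel values
marked = embed(image)
print(detect(marked), max(abs(a - b) for a, b in zip(image, marked)))  # → True 1
```

No pixel moves by more than one brightness level, so the eye sees nothing, yet the detector reads the tag back perfectly.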
very interesting, this idea of the AI authority: how much faith do we have in the outputs? Because this is about truth, like, did this AI accurately reflect that school recommendation? One tip to consider: the more general the concept, the more likely it is to be in the training data, but the more specific it is, if it's about one specific school, is it really going to be in the training data? The problem is these models are really good at telling stories; you can get Shakespeare to tell you about F1, and F1 wasn't a thing in Shakespeare's time. So it's really hard to navigate. That's the annoying answer, I don't know if it helped, but it was a good question if nothing else. There's a chap in a green shirt with brown hair over there at the back. You said at the start that a lot of models optimize click-through rate instead of satisfaction. How do you think you change that? Because clicks are way easier to measure, and satisfaction is really loose and hard to track. Very interesting question. Click-through rate is obviously one piece of the puzzle, but they do have all these different data points on engagement, like time spent on an image, what parts you're focusing on. One thing they're probably not going to do, but if we were to design an algorithm ourselves, there should definitely be some incentive to recommend you things where the data shows positive sentiment. We do actually know if you've thumbed something up or down, or heart-reacted, so rather than counting any engagement, it would be great to use the data that tells us whether it was positive engagement. But that's definitely not being prioritized; the algorithm, whether it's learned it or been tuned that way, actually favors things with a negative sentiment. One thing I'd say, based on the stats you gave earlier about how people using recommender systems
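What that positive-engagement idea could look like, very roughly (all the field names and weights here are invented for illustration; this is not how any real platform scores posts):

```python
def raw_engagement(post):
    """Rank purely on attention captured: outrage counts the same as joy."""
    return post["seconds_watched"] + 5 * (post["likes"] + post["angry_reacts"])

def sentiment_aware(post):
    """Rank on attention, but weight explicit positive reactions up
    and explicit negative ones down."""
    return post["seconds_watched"] + 10 * post["likes"] - 10 * post["angry_reacts"]

posts = [  # toy feed
    {"id": "cat_video", "seconds_watched": 40, "likes": 30, "angry_reacts": 1},
    {"id": "rage_bait", "seconds_watched": 90, "likes": 2, "angry_reacts": 50},
]

print(max(posts, key=raw_engagement)["id"])   # → rage_bait
print(max(posts, key=sentiment_aware)["id"])  # → cat_video
```

Same data, same posts: only the objective changes, and the feed flips from rage bait to the cat video. That is the whole argument in miniature.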
on social media are more likely to spend more time on social media, and I guess it links back to the earlier question about whether or not you enjoy the process. If there's a recommender system where we don't enjoy the process, there might be some incentive to optimize it to be as quick as possible and get you off the platform as quickly as possible. Now, I don't own Facebook, so that's not going to happen, but it's just an idea. What do you think, Ricky? I'm going to tie in a bit of philosophy here. It's a really good question: what is satisfaction, and can we have an equation that maximizes it? It seems like that's just too hard, so engagement is a proxy for it. We think we click on things we want, and we think the things we want are good for us, but as we can see from the outcomes of recommendation systems, a lot of the time we click on things that aren't good for us. So again, it's maybe about human values in this case. Awesome, thanks for that question. So, we do need to wrap up soon, but there was one question from the Ticket Tailor that I wanted to ask: where can I learn more about this? Which is quite broad, I know. Can I plug the podcast? Yeah, go on. I don't know if you guys have heard of this podcast... Books, yeah. Have you guys heard of reading? On the panel
this morning, someone mentioned my favorite book of all time, Weapons of Math Destruction by Cathy O'Neil. It was the first book I read on the topic of data ethics in general, and I'd also really recommend Automating Inequality by Virginia Eubanks. Both are at least five years old, but they're really interesting books and they get you into the subject in a non-technical way. Yeah, I was definitely reading a lot of books on all these topics before we started this podcast, and it's what nudged me to actually start it with Ricky. One that covers the full landscape in a very general, not-too-technical way, and which I always recommend to friends and family, is called Scary Smart, by Mo Gawdat. He was formerly Chief Business Officer at Google X; he stepped away because he was worried about the direction things were heading, and he's now much more human-centric. He also has a great podcast, not to steal our listeners, called Slo Mo, which is kind of doing what we're trying to do, giving the power back to the people a bit. But worse. But worse, sorry, yeah, we do it much better. I've only got two books. One is called The Shortcut, by my previous supervisor Nello Cristianini, who has done loads of talks in the EU. The cool thing about this book is that it covers a bit of the history of AI and touches on some technical points, but it's written for a general audience. Specifically, it's about this shortcut: previously AI was built from the bottom up, where we asked ourselves what intelligence is, but then there was a big switch in the AI field where we just copy intelligence instead, and that's the new paradigm. The other book is called The Alignment Problem, by Brian Christian, which also gives a bit of intuition about each type of AI concept and how it relates to society. That's great. Well, if anyone has any other questions, they can come and find us after the podcast. Yeah, please come and grab us. Thanks both for coming on and doing the podcast, and thank you very much for listening, everyone. [Music] [Applause]