The point is, if someone's younger or someone's changing careers, jump on this wave, right? There's no downside as far as I can tell. Because normally you might imagine there's a downside, like, what if this technology goes kaput? With AI, I think we can pretty much rule out the idea of it going away. Because it's just huge, it's getting better, and it solves so many problems. Even if it didn't get better, it's still super useful for solving problems as part of other pipelines. I think that an understanding of AI that you get in 2024 is going to last you a lifetime, because I just can't see how it would go away. It will look very different in a few years, but you'll be able to build on that knowledge you've already got. So I think there's no drawbacks. Outside of the core hype stuff you see in the media, there are thousands of examples of smaller AI and deep learning applications that are solving problems across the world. They just quietly happen behind the scenes. They don't get all the big hype, but they're doing a huge, transformative job. And so even if you don't go and work for the big tech company training the next ChatGPT, you might still be doing something with maximum impact and really, really worthwhile doing. I think that the roadmap for AI is... Hey everyone, it's David Bombal, back with the amazing Dr Mike Pound. Mike, welcome. Thanks for having me back. It's great to have you. You're the person I always talk to about AI, how it's changing our lives, the hype, the reality. What I really love about what you do: you tell
us not to fall for the hype, and you tell us what's actually going on. So what's the state of AI, or where do you see it in 2024? Yeah, I mean, it continues to move really, really quickly. I think there's still a huge amount of hype, so we can talk about that; I'm not going to change my tune on that front. The thing that we've seen change really quickly is the image generation stuff. So yeah, we can still talk to AI chatbots and we can still have good conversations, but now we're linking in images. We can produce images, it can describe images, we can say, remove this thing from the image and put something else in instead, and these images are just getting better and better. So we're fully into the era now where you can't always be sure whether an image is real or fake. Not always, anyway. I've seen a lot of YouTubers use it really well for generating thumbnails, and I've been using it as well. It's amazing what it can do. Exactly like you
said, just tell it with a quick sentence. My daughter's been playing around with it: create a hacker cat, a cute hacker cat, and it creates this thing. But like you said, some of it is so realistic you have to look really carefully to see what's real and what's fake. Yeah, and there are obviously going to be a lot of regulatory problems with that, because we're going to have a situation, maybe not tomorrow, but at some point, where there's an image in a court and someone's trying to decide if it's real, and it really matters to some case, right? That hasn't happened yet, but it's not far off. There's the famous example of an actor pretending to be Tom Cruise and putting out TikToks, and you have to look carefully. You do. And actually, I made a joke on one of my videos that I don't look enough like Tom Cruise for this to work. And the next day on Twitter there was a video of me with a deepfaked Tom Cruise face, just talking in my voice. Yeah, it was pretty good. The speed at which you can produce these things now is quite incredible. I want to talk about opportunities, because that's a big thing people are interested in, and also the worries, because I think people are worried that jobs are going to be taken away. There's this concern that in the old days technology was taking away perhaps factory jobs or basic jobs, and AI is going to take away a lot of knowledge-worker jobs. I mean, at the moment,
I don't think it is, in the sense that these AI tools are really impressive, but when you actually start putting pressure on them with something really difficult, they start to fall down. So I think your job's safe for now. But inevitably, eventually, we are going to have image generation that's so good it's comparable to an artist. We're going to have text generation that's so good it's comparable to a writer. As a society, we have to think about what we're going to do when that's the case. Now, there are huge other concerns, like who owns the writing that an AI has produced. Maybe it's the people that trained it, or maybe the people whose text went into training it. So there are a lot of unanswered questions. In the short term, though, there's not a huge threat to sort of office work and things like this. If your day-to-day job is coding, or your day-to-day job is data entry, or financial work with, let's say, spreadsheets or something like this, the ability of an AI to do those jobs is still limited. It's impressive sometimes, but it is limited. I've had feedback from people that it's great, for instance, to help with boilerplate, and I think you said exactly the same thing. Boilerplate code. It's great for doing certain things, but in our previous video you mentioned the problem that it could write code that has vulnerabilities in it. Yeah. So, you know, we
don't know, because we haven't been told, what all this data has been trained on. What's it been trained on? There's going to be correct code it's trained on, and there are also going to be bugs in the code it's trained on. We don't know to what extent it will overcome those bugs, whether they'll get caught in the wash, or whether it will just repeat those bugs back to you when you use it. I know a lot of people who work at companies that have policies in place about the use of these tools, just from a safety point of view more than anything: you've got to have a few sets of eyes on something. At the moment we're not in a position where we can fully trust what we're getting out. So if you code for a living, by all means use these tools to help you speed up, if that works for you. That's great. But the idea that you're going to be able to go home and just let it churn away all on its own,
we're not quite there, and that won't be coming for a while. I've heard people say that artificial general intelligence is also a long way away; we might not even see it in our lifetimes. Yeah, that's broadly what I believe. AGI is a term that's used far too much in the modern conversation. It's exciting-sounding, and it has sci-fi implications and things like this. But I think realistically, the majority of researchers think that in the next five or ten years we're just going to see better versions of what we have now. Nothing's going to substantially, completely change; we're not going to jump from where we are now to something that can think and act for itself. We'll see if I'm proved wrong, but I'm pretty firm in that belief. So I think we're just going to see much better chatbots and much better image generation. One thing that hopefully we'll see soon is more grounding in real data. So, you know, when
you use Bing, you can already see this: it does a kind of web search and then injects that into its own prompt to try and give you better-informed information. Now, that sometimes works and sometimes doesn't. But I think grounding the output is a good idea, because at the moment you're just getting a kind of hybrid amalgamation of training data that might be out of date or could be incorrect. I mean, correct me if I'm wrong, but I see that the industry's kind of moved on since we last spoke as well. We talked about the 2023 outlook, and it's interesting to see what's happening now in 2024. In the old days you had to get like a PhD, like you, if you wanted to use AI. But these days I can use APIs. So we've got the users using the front-end on ChatGPT, and that's one opportunity, perhaps, to create thumbnails. Then you've got the developers who can use the APIs. And then you've got the hardcore stats guys, maths guys,
people with PhDs like yourself. It seems like there are three groups now, and there's a lot of opportunity in each of them. There's opportunity everywhere, yes, you're absolutely right. So one of the terms you hear a lot is foundation models: a model that is incredibly large and has been trained with resources that most people don't have access to, so basically by a big tech company, and you can deploy these models on subtasks without too much expertise. A good example is the Segment Anything Model. It's a model where you feed it an image and it will just start picking out objects. Now, segmenting objects has for many years been a really difficult problem, but you can use this as part of your pipeline. You don't really have to know how the AI works to be able to use this tool. You just run it in Python, it produces some objects, and then you can decide which of those objects you want. Let's suppose you're writing a web app that, say, picks people out of an image. You can use a Segment Anything model, tell it to find people, and it'll go off and do that. You don't have to know how the network actually segments; you're just using the polygons. So in other words, there's a huge opportunity now, because it's open to a wider community of people who are perhaps not hardcore AI guys, but they can leverage the power of AI just by interfacing with APIs. Yeah, these companies are exposing these models via APIs that you can then make use of.
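That pipeline idea — run the model, get objects back, keep the ones you want — can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: the `masks` list just mimics the rough shape of a segmentation model's output (a label plus a polygon per object), so no model is actually run.

```python
# Sketch of consuming a segmentation model's output without knowing how the
# network works inside. The mask format is hypothetical: a list of dicts,
# each with a "label" and a "polygon", loosely in the spirit of what a
# Segment Anything-style pipeline hands back.

def pick_objects(masks, wanted_label):
    """Keep only the segmented objects whose label matches what we want."""
    return [m for m in masks if m["label"] == wanted_label]

# Pretend output from running a segmentation model on one photo.
masks = [
    {"label": "person", "polygon": [(10, 10), (40, 10), (40, 90), (10, 90)]},
    {"label": "dog",    "polygon": [(60, 50), (80, 50), (80, 70), (60, 70)]},
    {"label": "person", "polygon": [(100, 5), (130, 5), (130, 95), (100, 95)]},
]

people = pick_objects(masks, "person")
print(len(people))  # 2 people found; the segmentation itself stayed a black box
```

In a real app the `masks` list would come from the model; the point is that the downstream code only ever touches labels and polygons.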
And we see a lot of companies doing this. Many of these tech startups are essentially a wrapper around, let's say, ChatGPT's API, trying to use it in a clever way. To an extent there are risks in doing this, because of course you don't control ChatGPT, you don't know how long it'll be online for, these kinds of things. But whatever happens, it gives you access to models that you couldn't otherwise have trained, couldn't otherwise have done yourself; maybe you didn't have the expertise, maybe you didn't have the resources. And so I think it is low-hanging fruit for a lot of people. Yeah, so rather than me trying to get crazy data sets, because that seems to be the biggest problem, right, getting the data, and then the GPUs and all the power to run this, and obviously the expertise like yours, I can just leverage an API and I've got it. Yeah, I mean, it's difficult to overstate the resources needed to train ChatGPT. First of all, you need massive amounts of internet data, on the order of trillions of tokens, probably, or at the very least high numbers of billions, and sorting all of that out is itself a data nightmare. But then you've also got the hundreds or thousands of GPUs across hundreds of machines that train the model in a distributed fashion, and all the infrastructure to deal with that. You don't need to do any of it: just sign up, get your API token and away you go, and then you can talk to this thing and you can tell it what to do.
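The "startup as a wrapper" pattern Mike mentions is simple enough to sketch offline. Nothing below touches a real API: `fake_send` is a stand-in for the provider call (the real one would be an HTTP request carrying your API token), injected so the sketch can run anywhere.

```python
# A thin "wrapper product" around a hosted chat model, sketched with the
# network call injected as a plain function. Swapping fake_send for a real
# provider call is the main change an actual wrapper product would need.

class PromptWrapper:
    def __init__(self, send, system_prompt):
        self.send = send                    # callable: list of messages -> reply text
        self.system_prompt = system_prompt  # the "clever way" the product adds value

    def ask(self, user_text):
        messages = [
            {"role": "system", "content": self.system_prompt},
            {"role": "user", "content": user_text},
        ]
        return self.send(messages)

def fake_send(messages):
    # Offline stand-in for the hosted model.
    return f"[model saw {len(messages)} messages]"

bot = PromptWrapper(fake_send, "You generate video thumbnail ideas.")
print(bot.ask("a cute hacker cat"))  # [model saw 2 messages]
```

The injected `send` also makes the risk Mike raises concrete: the wrapper owns everything except that one function, which belongs to someone else.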
Do you have recommendations, for instance, of languages I need to learn, perhaps books? How do I get started? Because there's so much hype about AI in the news, but how do I practically use this, for instance to build a business or just change my life in some way? Yeah, I think one of the things you want to do is get yourself to a point where you can see around this hype, because that will help you not to be taken in by the flashiest tool, but to pick the one that's going to work best for your use case. And to do that, you need a little bit of expertise in certain areas. You don't need to do a PhD; by all means come and do a PhD with me, that'll be fun. But if you haven't got the time for that and you want to get in on AI more quickly, sure, there are lots of things you can do. The first thing is, you don't need a tremendous amount of mathematical knowledge to run AI;
you can even train networks without much mathematical knowledge at all. If you want to read the papers and understand the networks, you might need to know a little bit of linear algebra and things like this, but that's about the extent of it; you can get away with a little less. I don't mind maths. I use it when I have to, and I avoid it quite a lot of the time as well. So it's not so bad. But the primary entry point is going to be learning Python. That's the main thing you need to do. For whatever reason, and I've said this before, I have a love-hate relationship with Python. Some days it's my best friend, some days I can't stand it. But ultimately, this is what the AI community has settled on. If you're going to use machine learning, Python is going to be the thing that runs your network; Python is going to be the thing that handles the input and the output of those networks. Now, of course, these libraries are actually implemented
in C for speed, and CUDA on the graphics cards, but you interface with them in Python. What's the best place to get started? Do you have any courses, or is it just basic Python? How do you get started with Python? Really, I think any appropriate introduction to Python course is going to be fine, because what we're doing in Python is not actually the most complicated thing Python can do. If you're writing enterprise-level software, you might not be using Python; but if you were, you'd be using a lot of the more advanced stuff around the edges that you don't need at all to do machine learning. The main data structures you need to know are lists and dictionaries, and then, pretty quickly, once you've got a grasp of the fundamentals, you can actually start training up smaller networks to get your head around the quirks of each particular library. So, for example, PyTorch has a data structure it uses called a tensor.
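As a rough mental model, plain Python is enough to show what "tensor" means: an n-dimensional array with a shape. PyTorch's tensor is that idea done properly, with fast math, GPU support, and automatic gradients on top. (No PyTorch is needed to run this sketch.)

```python
# What "tensor" means, sketched with nested lists. A real PyTorch tensor
# adds contiguous memory, GPU support, and autograd on top of this idea.

def shape(nested):
    """Shape of a regularly nested list: [[1, 2, 3], [4, 5, 6]] -> (2, 3)."""
    dims = []
    while isinstance(nested, list):
        dims.append(len(nested))
        nested = nested[0]
    return tuple(dims)

batch = [            # a "2 x 3 tensor": 2 samples, 3 features each
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
]
print(shape(batch))  # (2, 3)
```

In PyTorch the equivalent would be `torch.tensor(batch).shape`.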
Actually, most deep learning libraries do. The tensor is kind of fundamental to how PyTorch works, so the sooner you get going on that and the sooner you play around with these things, the quicker you pick them up. So: start with Python, just a generic Python course, understand lists, dictionaries, basic Python knowledge, then look at PyTorch. Do I need to understand concepts like supervised and unsupervised learning, all this kind of AI terminology? What I
would say is that there's actually a nice progression. Supervised learning is perhaps, in some sense, the simplest and the easiest to get your head around, mostly because the majority of AI that we see, in both industry and research, is still supervised. It's supervised because that's the easiest kind: you've got some data, you want it to perform a task, so you throw the data in, you train it, and you go from there. If you download code off, let's say, GitHub that has a neural network in it and trains something up on some task, the chances are you'll be able to plug your own data in and go; you might have to fiddle around with the data loaders a little bit, and a bit of Python there, but really, that's the first thing you can do. Once you've done something like supervised learning, you can move on to more complicated topics like unsupervised learning and weakly supervised learning; there are hundreds of different configurations you could train with.
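That throw-the-data-in-and-train loop can be shown end to end in plain Python, with no library at all. The task here is deliberately trivial (learn the weight in y = w·x from examples, where w should come out near 2), but the shape of the loop — predict, measure error, nudge the weight by the learning rate — is the same one PyTorch automates.

```python
# A minimal supervised-learning loop: one weight, squared-error loss,
# gradient descent. Real networks have millions of weights, but the
# training loop has exactly this structure.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # targets are y = 2x

w = 0.0               # the single trainable parameter
learning_rate = 0.01  # too big and training blows up; too small and it crawls

for epoch in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # derivative of (pred - y)**2 w.r.t. w
        w -= learning_rate * grad   # step downhill on the loss

print(round(w, 2))  # close to 2.0
```

Watching what happens when you change `learning_rate` here is a miniature version of the debugging Mike describes later: if training "doesn't work", this is the dial you reason about first.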
And then you can also start to familiarize yourself with these large language models and big networks that are trained using a hybrid of different approaches. Where do I learn PyTorch? Are there GitHub places to go? So, there are loads of courses, but actually the PyTorch GitHub examples and the PyTorch tutorials are really good resources. There are a couple of tutorials at the beginning of the PyTorch tutorial set which cover things like tensors and automatic differentiation, its autograd framework. It's worth working through those, because you get a feel for what's going on under the hood. PyTorch does a lot of stuff behind the scenes that you don't see, and if you skip over that, you'll get it running fine, but you might not necessarily understand what's going on. If you have some background in what it's doing, it actually makes it quite easy to use. Just for everyone who's watching, I've linked a video below where Mike did a great example showing us how to use code that he's created, how to work with images and do some interesting stuff. So I've linked that video below if
you want to learn, and I'll also put links to PyTorch below. What about, last time you recommended Andrew Ng's course on Coursera that teaches the basics: should I do that? Yeah. So, Coursera I think costs more than it used to, so that's a choice people have to make. I think you can still do it for free, can't you? I think you can, at least briefly, if you're quick. Yeah, I do still recommend that course. There are two courses, really: there's the machine learning one and there's the deep learning specialization. I recommend first doing the initial introduction to machine learning course, because I think it teaches more of the fundamentals that you can't otherwise pick up. Things like: what does the learning rate do, and what effect does it have on the training of the network? What happens if your network doesn't train, and what do you do in that situation? Going from machine learning to deep learning is actually not that big a leap at all; they're very similar concepts, trained in much the same way. It's those initial machine learning concepts that take a little bit of time to get your head around. That's a good course. There are plenty of courses on all of the online learning platforms covering similar things, but I certainly think that if you're taking a course that talks about learning rates, how the training process works, how you prepare data, those are the kinds of things that are really worth learning. I think we need to convince you to create a course. Yeah, I mean,
when I find some time. I do sometimes think about it. I think it would be quite good, because I have a lot of fun teaching these topics to a lot of people. It would be easier if I just pre-recorded myself; I could just say, go over and watch that, and then come and talk to me after. That'd be great. Everyone, please vote below, put your comments below, if you want Mike to create a Udemy course or some kind of online course we can all learn from. I really appreciate you sharing this, Mike. It can be overwhelming; you've been doing this for a long time, and like I said, you separate the hype from what's actually reality. It's overwhelming to people working in AI as well, right? Because there are so many papers. We just submitted a paper to a conference where the number of submissions was about 12,000. Ten years ago there were not that many papers in computer vision across the whole world in an entire year, and we're talking
now about one conference. The amount of stuff happening, and the speed at which it's happening, is incredible. Sometimes papers are smaller, an incremental improvement that isn't changing much. But sometimes there are big, big things coming around: in graphics we've got Gaussian splatting, we've got neural radiance fields. These things didn't exist a couple of years ago and now we have to learn what they are as well. So it is overwhelming, but I think it can also be exciting, right? If you're willing to delve in and read some of these things, there are loads of resources that explain all of these topics. You don't necessarily have to read the maths in the original paper if you don't want to. I think it's like when there's a shift in technology: it opens up a lot of opportunities for anyone who's willing to jump in, right? Yeah, absolutely. It's like when Web 2.0 came along, right? It's the same kind of idea. I think there is a lot of opportunity,
and actually I know plenty of people who are working in AI and don't have a PhD in AI; they just got a sufficient amount of base-level knowledge that they could get in on the ground floor and work their way up, and then you learn these things as you go as well. Most of what I've learned about AI I've learned by doing, in my research, rather than just reading tutorials and things like that. So you get yourself that base level of knowledge and then pretty quickly you're ready to go: you can start training things, and you will learn quickly if you just keep training models. Do you think there's going to be a shift towards domain-specific AIs, where it's like an AI for cybersecurity, an AI for networking, an AI for XYZ technology? In a way, I hope there will be, actually, because things like ChatGPT and these big large language models are very interesting and very fun, but I actually find them quite hard to apply to my own research, because
my own research is so specific, right? I'm often looking at a medical image and trying to work out what shape something is, and ChatGPT doesn't know, because it's never looked at that kind of thing. So it's much easier for me simply to train my own thing to do it and ignore the large language model. I think that foundational models, big models trained in specific areas, such as Segment Anything, are more tailored to a specific task and slightly less general, and that means you might have a chance to be able to control them in a way that's helpful to you. So I hope it goes that way. At the moment I have fun playing with these tools, but the majority of the time I don't use them, because I've got my own models that I've trained up. What's Segment Anything? Segment Anything is a really interesting, let's say, foundational large model that's been released by Meta. It's kind of transformative in the segmentation space. Segmentation, prior to that, had always been a supervised task, pretty much. That is, you give it a
bunch of labeled regions in a load of images, and it learns the labels and reads them back. You might use it, for example, for labeling products as you're walking around a shop, something like this. Now, Segment Anything is a tool that just segments all the things it can see in a scene and gives labels as to what it thinks they are, but you can also prime it with points or boxes or text to say: I want you to find all the footballs, I want you to find all the laptops, and it will go off and segment them for you. Now, it's not perfect, because it's a very general model. It may underperform a very specific model trained on a very specific task, but of course you don't have to train it; you just need to run the thing and it's done it all for you. I've already started seeing papers that use Segment Anything as part of their pipeline. They just segment everything first, get rid of the things they don't want, and then they've got some useful data, without having trained anything
at all. So it works with images, right? It can pull stuff out of an image? Yeah. You can think of it as a bit like a reverse Stable Diffusion or DALL-E, right? Instead of producing an image, it's taking an image and finding interesting information in it. A lot of the tasks people do are image-to-text, trying to describe what's in the image; this is image-to-object, trying to find everything in the image for us. But in your specific use case, medical data, at the end of the day it's better to use your own AI, because that's a specific domain of knowledge, if you like, right? Yeah, I think at the moment it probably is. Also, we have the expertise, so it's easy for us to run our own technique. In the long term, certainly for the sort of scenes you see typically in day-to-day life, outdoor, indoor, normal photographs, I think it works pretty well. It might not find everything, but it will find things pretty quickly for you. For medical images or super-specific data sets, you might find you have to train your own specific AI. But what you could do, for example, is use something like Segment Anything, or a less powerful network that gets you the gist of what's going on, to start that annotation process for you; it can be used with you in the loop to get data really, really quickly. So from a jobs point of view, I'm just trying to think, how can I use
AI to put me in a different league, job-wise, or just in general use? There are lots of companies using AI, right? Some of them will be working on financial data, some of them will be working on networking and security data, some of them will be working on image data; it will depend on the company. There are going to be dozens of job types under that umbrella term. There will be people actually engineering the networks themselves, though there may not be anyone in the company that does that, because they may be using off-the-shelf networks. There are going to be people who work on the data: storing the data and moving the data around the site. Then there are people controlling the cluster and the GPUs and training the networks. And there are people putting a web interface on the front of the network. So I think wherever your expertise is, there's a place in that pipeline for you. Right. And if you have some understanding of AI,
that's going to make it a lot easier, because ultimately that's what's actually running under the hood. So Andrew Ng's course, I think, is an absolute recommendation for anyone who's technical in any capacity, right? Because it gives you a good understanding, and then you can take that knowledge onward. Yeah, that's absolutely right. If you want to learn how to run a model, then what you do is learn Python, and then you can basically start working through tutorials and you can run the model. But you won't necessarily understand exactly what it does. Now, it might even be that your job doesn't require you to understand exactly what it does. But I suppose what I would say is, at some point it's not going to work. At some point you're going to train it up, it's not going to work, and you're going to need to know why. And that's where some expertise in AI will help. You know: okay, it's doing this, so that means we need to adjust the learning rate, or that means we need to get more data. So Andrew Ng's machine learning course and similar
courses are a good starting point to get that base of AI knowledge. Right: knowledge of things like what actually happens when you train. What happens to your loss as training goes on? Is it good that it's doing that? How can we monitor this and make sure it's working? Correct. There is a bit of maths, so if you're absolutely not okay with doing maths, then I would suggest a more practical kind of PyTorch course that teaches you some of the fundamentals of running networks without obsessing too much about exactly how they're trained. Bearing in mind you can always pick that up later; it might be easier, in some sense, to learn the mathematics behind the networks after you've been training them for a year and you understand how they behave. So there are various angles you could take on this. Yeah, I just think, I mean, even if you just jump into the course, it gives you the terminology. You understand, if we talk about supervised and unsupervised, if we talk about transformers, at least you understand what that
means. Because if you're totally new to this, it means nothing, right? Sorry, go on. No, no, that's absolutely right. Learning the terminology, understanding a bit about what these things are, lets you work out what the best tool is for the job. So, for example, transformers, which you mentioned, are very common all over research and industry; actually, most big models use a transformer in some way, mostly to do text, and, you know, text-to-image and image-to-text. But we've had tasks where the transformers don't work, right? We try a transformer and it doesn't work: not enough data, the problem is different, it doesn't work. But we have other things we can use as well, because we know what they are. So I think understanding a bit about the different technologies will mean you apply the best one for the job. Sometimes these big models are overkill for what you need. Sometimes you just need to detect some object in some image, and your images are very consistent: the same picture is taken every day, nothing changes. That's a few hundred images and a small network, and you've got it. So sometimes you don't want to overthink these things; sometimes more basic is better. I just think we've got to the point, I've seen the shift in the last year, where ChatGPT was like this catalyst where it became mainstream. But since that time I've seen TikToks, I've seen Instagram videos, I've seen YouTube videos that are totally AI-generated. You can see it's an AI voice, it's an AI image. So this stuff's getting
applied more and more in a lot of places. I think the days of ignoring AI are over; you have to get involved at some point. Would you agree with that? Yeah, I mean, I work in AI, so I'd love everyone to do that too. But I think it's so general now, right? It is. Most people using the AI that we see in the media, on TikTok and social media, are just running it, without too much concern about how it works. I think you put yourself in a really strong position if you know a little bit about how it works as well. Particularly because at some point someone's going to present you with an image and say, this was AI, or this wasn't AI, and it's amazing, and the hype stuff is going to start to come out, and you can go: hang on a minute, I'm not so sure about that. Actually, I don't think that was done in that way, and I don't think that's quite as impressive a thing as it seems. Or maybe it is. But that understanding really, really helps. And you don't need to be able to write your own transformer paper; just understanding a little bit about how these things work and why they've been designed the way they have is a super useful thing to do. And then you can get on with the AI and start using it. I think, for someone who's not technical,
perhaps, and I hate to break it up like this, but I see it, like I mentioned in the beginning, as three groups. People who are not technical but can interact with an AI and learn how to leverage it just by talking to it. But that's just the beginning, right? Then you're going to go to the APIs, and you want to use Python and understand it. And the third is going and actually understanding more and more. The opportunities are in all three places, but you're going to be in a much better place, I think, if you're in the second group, where you're a programmer or someone interfacing with an API, or in your position, where you're actually doing much more with AI. So there's three groups, right? The first group is just people who have gone and generated some images online. They go on the website, there's the text box, you type in your prompt, and an image comes out. And you know, you are controlling AI in some sense, but only in a loose sense. Yeah. And I suppose the interesting thing about
that is, there's a lot of stuff here. You know, you hear this term prompt engineering. Yes. It's a really interesting term. Sorry, what did you say about it? Sorry. Yeah, I said it's a really interesting term, because in a way you're trying to control something that even the people who trained the network don't really understand, in terms of what it will and won't do with certain prompts, which is so interesting. It's like you've got someone working for you, but they don't speak your language, and you're just shouting words at them hoping they do something, and it kind of works. I think that's really interesting. But of course you run the risk that, if you build any kind of product based on something like that, the underlying network will be changed, or there'll be an API change in some way, and suddenly it doesn't work anymore, because when these networks get retrained, we can't guarantee they will do the same thing the second time. You know, some understanding of how these networks work lets you control things
a little bit, and you can be a bit more sure that what you do is going to be predictable. So perhaps the next level up would be someone who's using the API and doing something with it. For example, something that companies often do is prime a chat bot with some text that allows you to control exactly what it says. So let's imagine a hypothetical situation where I want to run tech support, but I don't want to pay anyone to do tech support. So what I'm going to do is write all the answers to the tech support questions into the chat bot, so it has that information, and then let it do my tech support for me. Now, that might work quite well at first, right? Because as long as people ask simple questions that you've given it the answers to, it will produce lovely answers for those things. But if they ask, what's your opinion on this conflict that's going on across the world, or what's your opinion on some really political hot topic, or what are the instructions for bomb making? It might tell you those things, because those are still in the original training set. So you have to be quite careful if you're going to create a business based on hoping that one of these large language models will act exactly how you think. That's a really interesting problem that hasn't been solved yet; we'll hopefully see people addressing it over the next few years. And then perhaps a little bit above that is: okay, you've got some task you want to solve, and a large language model or a
segment anything network will kind of work, but it's not good enough for your task. That's when you probably need to train the network yourself. And that's going to be a case of: you download the repository, you write your data loader, so your data is going in, you write your objective, so it knows what it has to learn, and then off you go. So I suppose there are different levels, and you can always start with one and move to the next as you build up your expertise. An example I've seen for the second group, and I'll link the video below, is someone who uses AI and Python to, for instance, say: go get this video off YouTube, give me a ten-bullet-point summary of it, and tell me how important it is that I watch it, based on my preferences. And he's put all that into a Python script. Ultimately, that's not training a network or changing the configuration of a network or anything like that. And you may find a time when your script doesn't work, because something changes in the API or the network that means it's not going to function the same way. It's a bit like using a library: if they change a feature, you're going to have to change your code to reflect those changes, otherwise your code breaks. So it's kind of a similar deal, except perhaps that the pace of change is very rapid. So you might find that your prompts from this month don't work next month, and there's a bit of a risk there. But if I want to go
to the third group, do I have to buy GPUs, or do I have to rent stuff in the cloud, or is there a way to actually do that, just to learn? No. I think if you're a company and you're looking to do proper training in AI, then cloud resources like Azure or AWS will have you covered. If you install your own local hardware, you obviously have to support it: you have to pay people to look after the machines and make sure they run. But ultimately it's probably marginally cheaper, depending on how much training you're doing; the cost calculations get quite complicated depending on what you're up to. But of course, if you're just playing around with these things for fun, to learn them, then something like Google Colab is perhaps the best place to go. Google Colab is what we call a Jupyter-notebook-style interface. Essentially you have lots of cells where you can put in Python code, but it also has access to GPUs, and it has all of the common libraries installed already, so you don't have to do any of that. You can just import PyTorch and off it goes. And actually, a lot of the big models, like Detectron2, which is a really good model for object detection, come with a link to Colab on the Git repo, so you can click on it, run it straight away, and see how it works. And actually, that's how I learned how Stable Diffusion works as well. When Stable Diffusion came out, I went on the GitHub, I went to the Colab, and I started playing around with the network to see what it would
do. And it's a really great way of learning. And that's all at low cost, right? It's very low cost: it's free at a sort of entry level. You'll just find that if you get over-excited and use it a lot, you might have to wait for a GPU, right? So you can pay, I want to say it's about eight or nine British pounds a month, for access that gets you pretty much all the GPU time you could need, unless you're sort of a ridiculous power user.
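To make that concrete, the "just import PyTorch and off it goes" experience Mike describes looks something like this in a fresh Colab cell. This is only a minimal sketch; it assumes a standard Colab runtime where `torch` comes preinstalled, and it falls back to the CPU if you haven't been allocated a GPU.

```python
import torch

# Colab preinstalls PyTorch, so this runs in a fresh notebook cell.
# Use the free GPU if one has been allocated, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A tiny computation to confirm everything is wired up:
# multiply a 3x4 matrix by a 4x2 matrix on whichever device we got.
a = torch.randn(3, 4, device=device)
b = torch.randn(4, 2, device=device)
c = a @ b

print(f"Running on {device}, result shape: {tuple(c.shape)}")
```

In Colab you would switch a GPU on via Runtime > Change runtime type; the code itself doesn't change, which is part of the appeal.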
But you know, there are lots of tiered pricing models. I think it's a good place to get started, because you may end up not using Colab in the long term; you may have your own systems, or you might use proper cloud compute. But for just running things and trying them out, it's a great place to go. I mean, that's where I wrote my demo, and I've actually written other demos in Colab as well, because I know that when a student clicks on that, they're going to get access to code they can run, which is, you know, reassuring. So for everyone watching: once again, I've linked a video below where Mike actually demonstrates this stuff. I'm asking all the questions that I hope a lot of you are thinking about, so I know the answers already, but that video is linked below. Great demo, so Mike, again, thanks for sharing that. Another question that always comes up is books. Do you have any recommended books or study resources, apart from going to PyTorch and GitHub? I'm a big fan of any not-too-long introduction to Python and
PyTorch. Yes, the problem is, if you buy, well, Yoshua Bengio and his colleagues wrote this incredible book called Deep Learning. Now, by all means get that book, I own a copy, but it's a big read. You know, there's a whole section on reinforcement learning, a whole section on unsupervised methods, and you'll be there for weeks. So it's there if you need a really good reference for one specific area. But if you're looking for something short, like if you're looking to pick up PyTorch and run some stuff, then really what you need is something that gets you there quicker. And I think something that teaches you the fundamentals of how PyTorch works, things like data loaders and the training loop, which just take a bit of getting your head around the first time, and how it trains the network, things like this, that's the place to go. So there are a few books that we can link to that have, you know, a not-too-high barrier to entry, I think, and that's where I'd start. Basically, I'll link those books below. So if anyone wants to get them, just be aware I use Amazon affiliate
links. So thanks if you want to support me, but I'll put those links below. Mike, thanks for sharing that, because, you know, some people learn by doing, some people like watching videos, some people like reading, so this gives us a whole bunch of resources. Now, another question: you train or teach a lot of people at university, and a lot of them are perhaps beginners. So talk to people who are starting out, or people who are changing careers. It might be a bit of a nasty question, but even advising yourself: what would you advise yourself to do in 2024? Is it jump into AI as soon as you can, do computer science? What would you advise if you were talking to yourself? Yeah, I spend my entire life telling people to do computer science, and I will continue telling them that until either I retire or I die, one of the two. I love telling people about computers, and AI is just one of the cool things you can do. You don't have to do it, right? But for the sake of this video, you absolutely do
have to do AI. And I think what I would say is, so, actually, as an example, I recently took up the drums, just for a bit of a laugh. I wanted to be good at the drums, and I was absolutely hopeless at them, but I kept doing it about an hour a day, and now I'm not too bad, right? And I think actually it's similar with everything. With AI, I have students who come to me for, let's say, project work. That's in the final year of university, where you basically do a sustained period of work and then write up a large dissertation, a big essay essentially, but you have to produce some software or something that works. And obviously I mostly supervise AI projects. So these are students who are decent at computer science but maybe haven't done any AI before at all. And within a few weeks they're training networks, and within a few months they've got proper tools going, and it's really, really impressive. So it isn't quite as much work as you'd think, right? Yes, becoming a sort of PhD-level expert will take you some number of years. But I think to get a baseline level of AI knowledge, it's only a few weeks or months, really. I'm also a big fan of learning by doing, but I do think that just clicking run on the Colab notebook isn't quite enough. You need a little bit of video, or a little bit of reading, just to make sure you understand what's going on under the hood. But it's not as much work as you'd think. I love that. It's really
motivational, because I think a lot of people would look at this and think it's a huge mountain: how many years of experience have you got, all these degrees and stuff, a PhD, I can't do that. And I'm glad you're saying it's not as difficult as that. No, I think doing a PhD is a very different set of skills. So, for example, one of the things that I can do well, because I've had a lot of training, is learn new things very quickly. If a new paper comes along with a new kind of network, I can pick that up in about an hour or so: I know what's going on, what's good about it, and what's not good about it. And that's useful for my job, because that is basically what my job is. Whereas if you're working in industry, that's less of a concern. You want to keep an eye on what's coming along, but a lot of it is: you've got some product to deliver, and it's going to use AI, so how do we go about doing that? There's a sort of set of steps that everyone is going to follow in that situation, and learning those steps doesn't require that many years. You just start off, get that baseline knowledge, and then you can learn a lot on the job. So, like you've been saying, and a lot of people say: start with Python. If you haven't learned Python already, you need to learn Python in 2024. Learn PyTorch. Go and look at Andrew Ng's course on Coursera if you want to get an
understanding of AI, but if you just want to jump straight into PyTorch, that's a great way to get started too; you just need Python before you begin. Yeah, absolutely: Python, PyTorch. There are other libraries, you could use TensorFlow, for example, but in my experience at the moment, the momentum of the research community and of industry is behind PyTorch as of today. Now, of course, who knows what will happen in a few years; some new fancy framework may come along and get everyone a little bit over-excited. But at the moment PyTorch has got a huge number of repositories, a huge number of tutorials, huge amounts of help. If you read a paper that does something cool, like detecting the objects you want to detect, the chances are there's a GitHub repo that implements that paper, and you can just run it, and in doing so learn how it works. Poke around in that code, fiddle about with it, and see what happens when you change some of the configuration. Yeah. And what I would say is, I think at the moment, because there's so much hype around AI, there's perhaps a tendency for people to leap into the shallow end of AI really, really quickly. Things like prompt engineering, just running models, look what AI can do. And that's a great place to start, but I would advise people to quickly try to get a little bit more of a feeling for what's going on underneath, because then you're better skilled when those things change, and better at adapting to them. Right. And also, for what it's worth, I think it's a really fun place to work. There's a great community around AI; it moves very fast, but the core
concepts don't move as quickly. Right. So, you know, supervised versus unsupervised learning is still pretty much the same as it was before, so you can learn these topics and they'll be good for a fair number of years, while you keep up with the faster-moving ones. But also, the thing is just money, right? I mean, a lot of people obviously want to get paid well for doing this, and the jobs pay really well. Yeah, they do. I mean, it's a constant complaint in academia, because, you know, all the best students go off to industry, right?
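Those slower-moving core concepts fit in a few lines. Here is a minimal supervised-learning sketch in PyTorch: give the model examples, let it learn from them. The data, the model, and the hyperparameters are all toy choices invented for illustration, not anything from the conversation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy supervised problem: recover y = 3x - 1 from noisy examples.
x = torch.randn(256, 1)
y = 3 * x - 1 + 0.05 * torch.randn(256, 1)

model = nn.Linear(1, 1)                  # one weight, one bias
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)          # how wrong are we on the examples?
    loss.backward()                      # gradients of the loss w.r.t. parameters
    optimizer.step()                     # nudge the parameters downhill

print(model.weight.item(), model.bias.item())  # should approach 3 and -1
```

The loop shape (zero the gradients, compute the loss, backpropagate, step) carries over almost unchanged to much bigger networks; they just have more code.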
They pay really, really well. And obviously it depends on the job. If you're self-taught, you aren't going to be able to jump straight into a top AI job; I mean, maybe, I guess, if you could really prove yourself, so don't rule it out. But you can move up quickly, right, if you show your potential and you can learn these techniques. If you have a PhD, you've obviously got a set of skills already there, written down on papers, so it's perhaps a bit easier. But I don't think it's out of the question for anyone to do this. It's just like any job: you don't start at the top managing, you start somewhere at the bottom and work your way up, and they all pay pretty well. I always advise people to ride the waves. I mean, you and I have been around the block a few times, and in tech I've ridden quite a few waves, like Voice over IP going back many,
many years, which opened a lot of doors, and then there was the network automation wave, etc, etc. The point is, if someone's younger, or someone's changing careers, jump on this wave, right? There's no downside as far as I can tell. Because normally you might imagine there's a downside, like: what if this technology goes kaput? If I'm learning something difficult, and I'm going to pick a bad example here: let's say you're a JavaScript programmer and you want to learn React, right? Now, React's great, but it might not exist in five years' time, right? It might not. And that's not a prediction. I've got a really good example: I learned OpenFlow, and that was hot for a while, and then it just died. Sorry, go on. Yeah, you can't rule that out, absolutely. With AI, I think we can kind of rule out the idea of it going away, because it just keeps getting better, and also it solves so many problems. Even if it didn't get better, it's still super useful for solving problems as part of other pipelines. Think about this: an understanding of AI that you get in 2024 is going to last you a lifetime, because I just can't see how it would go away. It will look very different in a few years, but you'll be able to build on the knowledge that you've already got. So I think there are no drawbacks. I love that, because, you know, some technologies are a solution trying to find a problem, but like you've said, there are so many problems really being solved by AI. Outside of the core hype stuff you see in the media, there are thousands of examples of
smaller AI and deep learning applications that are solving problems across the world, right? And they just quietly happen behind the scenes. They don't get all the big hype, but they're doing a huge transformative job. And so even if you don't go and work for the big tech company training the next ChatGPT, you might still be doing something with massive impact, right, and really, really worthwhile doing. I think that the roadmap for AI is: a smidgen of maths, sorry, right, but I'm not going to dwell on it; learn Python; learn PyTorch, and you could use another library if you want, but I recommend PyTorch; then get a supervised model going. So supervised learning, right? Get your dataset. It doesn't matter what it is: it can be a public one that already exists, or it can be something you went out and shot on your phone. It doesn't matter at all. Train a supervised model that solves that problem, and you're already a good chunk of the way there. And then you can start making the problem a little bit more complicated, or you can move from a simple classification problem to maybe a segmentation problem, or something like this. That's, I think, the right way forward. I wouldn't start with anything other than supervised learning, because it's the most intuitive, right? You give it examples, it learns from those examples. And once you've done that, you've also seen all the PyTorch you really need for the more complicated examples. They just have more code, right? That's the difference. It's still the same stuff. Mike, I really want to thank you for sharing. As always, you separate the hype from the reality and give
people hope, because that's always one of the worries I hear from a lot of people: if I'm 18, why would I bother even learning tech or going to do computer science, because AI is going to eat all the jobs? I really appreciate you giving us hope there. Yeah, it won't, right? And, you know, particularly if you know about AI, there's always going to be a need for this. I think whether or not something will take some of those jobs is something for governments to worry about; I'm not particularly worried about it at the moment. I don't think it's coming for us all, you know, any time soon. I train AI, and I'm hoping that I still have a job soon. We will see; if you keep inviting me back, as an associate professor, we shall see. But I think it is an exciting time. It can be overwhelming, but actually we have a huge opportunity, and it is really fun. You know, you start training, and there's actually nothing I find more satisfying than when you train that network and it actually does what you asked it to do. That's a cool moment when that happens. Even though it's ultimately just Python code you're running, when it actually works and you see that output, it is good. And you don't actually understand how it's got there, right? Because you're just giving it your data and you're teaching it. No, you don't understand it; it's essentially a really complicated function that has done something clever. We sort of have a gist, and we have an intuition as to what it's doing, but, you know, ultimately we don't really know what it does. But actually, I've stopped worrying about that a little bit. If
it works well, I'm kind of okay with it. I love it.
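As a footnote to the roadmap Mike lays out above (Python, PyTorch, a supervised classifier, then something like segmentation): the jump from classification to segmentation is smaller than it sounds, because it is mostly a change in output shape, one label per image versus one label per pixel. A hedged sketch with made-up layer sizes, just to show the shapes:

```python
import torch
import torch.nn as nn

# Classifier: image in, one vector of class scores out.
classifier = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),                  # 10 class scores per image
)

# Segmenter: image in, class scores for every pixel out.
segmenter = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 10, 1),               # 10 class scores per pixel
)

img = torch.randn(1, 3, 32, 32)        # one fake 32x32 RGB image
print(classifier(img).shape)           # torch.Size([1, 10])
print(segmenter(img).shape)            # torch.Size([1, 10, 32, 32])
```

The training loop, the loss, and the data-loading machinery stay essentially the same; as Mike says, the more complicated examples just have more code.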