So we are all eagerly anticipating 2025 in terms of AI. There are many themes we could expect, but one company all eyes are on is, of course, OpenAI, and recently Sam Altman ran a Twitter thread in which he explained exactly what's coming. So in this video I'll dive deep into what you can expect from OpenAI in 2025, and from the entire AI space. One of the things we're actually probably going to get is, somewhat surprisingly, a set of voice mode changes. One user says that when
he talks to Advanced Voice Mode, he wishes it had better memory of his previous conversations, both verbal and text, even if it were just a retrieval-augmented generation system. He explains that he enjoys conversing and brainstorming during his commute, but finds himself having to re-explain things in great detail, and Sam Altman says he really wants to figure this one out. This is interesting because it brings us not just to voice mode changes but to some key memory changes too. In another recent video I mentioned how, in 2025, memory is quite likely to be one of those things that gets solved, so the model just doesn't forget, which is truly transformative. To quote: "You talk about inflection points; memory is clearly an inflection point, because it means it's worth you investing the time. So it's that capability alone, which I expect to come online in 2025, that is going to be truly transformative." If you aren't familiar with who that was, that was Mustafa Suleyman, currently the head of Microsoft AI, and it's well known that Microsoft AI works closely with OpenAI to produce products and services. So it's quite likely we're going to get some form of OpenAI model that basically has infinite memory. And just today, roon, a member of technical staff at OpenAI, tweeted a reply to a user who said that the user interface is vastly improved, but that $200 per month with no increase in user memory is whack, and asked what the outlook on that is. roon replied: "soon, infinite memory." I'll also show you a clip from a recent video where I spoke about how 2025 is quite likely to bring infinite memory for these LLMs, which would be a remarkable achievement. Among the other voice mode changes, we should also get better turn detection for voice chats. For example, sometimes when you're still thinking about what to say, the voice mode AI will just start rambling about something random, which is not how real conversations work; in a real conversation you're allowed time to think about your response.
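To make the retrieval-augmented memory idea concrete, here's a minimal, hypothetical sketch of how past conversation snippets could be stored and then retrieved for a new prompt. Everything here is my own illustration, including the toy bag-of-words similarity; real systems would use learned embeddings, and this is not how OpenAI actually implements memory.

```python
from collections import Counter
import math

class ConversationMemory:
    """Toy retrieval-augmented memory: store past exchanges,
    then pull back the most relevant ones for a new prompt."""

    def __init__(self):
        self.entries = []  # list of (original text, bag-of-words Counter)

    @staticmethod
    def _bow(text):
        # Crude stand-in for an embedding: lowercase bag of words.
        return Counter(text.lower().split())

    @staticmethod
    def _cosine(a, b):
        common = set(a) & set(b)
        num = sum(a[w] * b[w] for w in common)
        den = math.sqrt(sum(v * v for v in a.values())) * \
              math.sqrt(sum(v * v for v in b.values()))
        return num / den if den else 0.0

    def add(self, text):
        self.entries.append((text, self._bow(text)))

    def recall(self, query, k=2):
        # Rank stored snippets by similarity to the new prompt.
        q = self._bow(query)
        ranked = sorted(self.entries,
                        key=lambda e: self._cosine(q, e[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

memory = ConversationMemory()
memory.add("We brainstormed video ideas about AI agents during my commute")
memory.add("I prefer short summaries over long reports")
memory.add("My dog is named Biscuit")

# The top matches would be prepended to the new prompt so the
# model "remembers" earlier conversations.
context = memory.recall("let's continue the AI agent brainstorm from my commute")
```

The point of the sketch is the shape of the pipeline, retrieve then prepend, which is exactly what the user in the tweet was describing as a stopgap until true long-term memory arrives.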
Now, one of the biggest questions the majority of people are still asking OpenAI is: where is the 4o replacement? You can see right here that we got a few very important questions from this user, Leo. He said the things he wants are a strong 4o replacement; GPTs, Sora, and everything seamlessly integrated into ChatGPT; perhaps a $50 to $70 a month plan as a middle ground; long context, which, as we already discussed, is basically infinite memory; and, of course, aggressive knowledge-cutoff updates, please. Sam Altman replied that this is a very good list and that hopefully we'll be quite happy with them over the next year. From this, I'm guessing we'll probably see a middle-ground monthly pricing plan. There is just such a disparity between $20 and $200 a month, and I think it would be really useful if OpenAI had something progressive in terms of payments. I know many people wouldn't want to pay a fixed rate but would rather pay for how much they talk to the AI. I know many people who use Claude would rather just pay an additional $10 or $20 in a pay-as-you-go kind of way for their conversations, and I'm pretty sure a lot of people would like to do the same in ChatGPT rather than just being rate-limited.
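To illustrate the flat-rate versus pay-as-you-go tradeoff, here's a tiny back-of-the-envelope calculation. The dollar figures and the per-token rate are made-up assumptions for the sake of the example, not real OpenAI pricing.

```python
# Hypothetical, illustrative numbers only -- not OpenAI's actual pricing.
FLAT_MONTHLY = 20.00        # flat subscription, dollars per month
RATE_PER_1K_TOKENS = 0.01   # assumed pay-as-you-go rate, dollars per 1K tokens

def monthly_cost(tokens_used, flat=False):
    """Cost for a month of usage under each scheme."""
    if flat:
        return FLAT_MONTHLY
    return tokens_used / 1000 * RATE_PER_1K_TOKENS

# Break-even point: the usage level at which pay-as-you-go
# matches the flat subscription fee.
break_even_tokens = FLAT_MONTHLY / RATE_PER_1K_TOKENS * 1000
print(f"Below {break_even_tokens:,.0f} tokens/month, pay-as-you-go is cheaper")
```

Under these assumed numbers, a light user comes out ahead on pay-as-you-go and a heavy user comes out ahead on the flat plan, which is exactly why people are asking for both options.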
And of course, one of the key things we're going to get is a strong GPT-4o replacement. GPT-4o is the successor to the GPT-4 series and is essentially an omni model, and it matters because it's the base model that most people use across a variety of applications, considering most people's tasks aren't demanding enough to require o1. You can see that when Altman responds to this other tweet, where the user Sully says that honestly a good thinking model is needed, that GPT-4o kind of sucks now, that even GPT-4o mini isn't that good, and that having both is important, he says they are definitely going to do this. We also know OpenAI is working on a new model; in fact, a recent article described how this new model is a lot better than the current GPT-4o, but the current cost of running it might not be worth the marginal gains. And considering OpenAI still doesn't have enough compute to serve its users across Sora, o1 Pro mode, and the o3 model they're currently working on, smaller models like GPT-4o mini may not be a priority right now. But I do think that sometime next year we'll get a successor to GPT-4o that's a very remarkable replacement, so that's definitely something to have on your list; I do remember that last year GPT-5 was what people were thinking about. Another thing we're potentially going to get is a liberated mode. This one is actually quite hilarious, because most people don't know who this person is, and when I say most people, I'm referring to people
outside the AI space. Pliny the Liberator is a notorious AI jailbreaker, in the sense that he jailbreaks every single large language model that comes out. Any time you think a large language model has passed all the safety requirements, this guy finds a way to jailbreak it, meaning that if you want to ask the AI a question it usually can't respond to, such as asking it to do something that is probably illegal, he finds a way every single time; I don't think I've ever seen him fail. He asked why OpenAI doesn't lose the guardrails, since it's cleaner, and Sam Altman said they're definitely going to need some sort of grown-up mode. I think this one is going to be super fascinating, because for a lot of people the main reason they use open-source and unrestricted models is that there's a lot of adult or grown-up material they want to access and explore creatively with large language models. I'm pretty sure you all know the direction I'm talking about, but maybe due to policy restrictions and so on, OpenAI has so far decided to keep those restrictions on. So it will be interesting to see whether in the new year we get some kind of grown-up mode where you can simply ask the AI anything, and maybe they just trust you to ask responsible questions. Another feature we're quite likely to get in 2025 is one that has basically already been released by Google. This user says: do a deep research
feature like Gemini, but better, and Sam Altman said: okay. If you don't know what Deep Research in Gemini is, the closest comparison is Perplexity's AI search engine, where you search the web using the LLM of your choice; it's remarkably effective at letting you scour the web for various pieces of information. Deep Research, however, takes this much further and is completely changing the game. It's something I did a tutorial on recently, and trust me, it's by far one of the most useful things in AI; I use it almost every day. You can basically ask it to research anything: it works through over a hundred different websites and creates a comprehensive report in absolutely incredible detail. If you haven't started using it, I'd really recommend trying it as soon as possible, because it gives you access to so much research and information at the click of a button. You can kick it off, come back while your AI agent is off researching, and then use that information to create content or develop marketing strategies and plans. It's just absolutely incredible; don't forget to check out that tutorial on my channel, and I'll leave a link to it. This is something OpenAI is basically saying they're going to build. Of course, OpenAI does already have SearchGPT, which essentially lets you browse the internet with ChatGPT. I do think that's good, but there's a depth that a lot of people want in certain use cases, which is why Gemini's Deep Research feature is so good. So OpenAI stating that they're going to work on this will surely be interesting, because we know their products are usually top-tier when it comes to user interface and positioning. Now, something rather fascinating OpenAI also spoke about is the fact that they're going to make a hardware play. You can see here that this user asked Sam Altman about a variety of things: a retrieval API of their own, a video input modality, agents, and of course a hardware play. A video input modality would indeed be nice; it's something OpenAI already revealed with GPT-4o, and I suspect it's just a bit expensive on the compute side, which is why we haven't seen it yet. But if we're
actually starting to talk about a hardware play, this is something OpenAI is apparently already working on. Essentially, Jony Ive has confirmed that he's working on a new device with OpenAI. This is someone who worked at Apple on the iPhone, and he's now working with about ten employees, including people who worked on the iPhone, in a 32,000-square-foot office in San Francisco on a new generative AI device. That's going to be super interesting, because this year we've seen many AI devices come out and some of them have failed; notably, the Humane AI Pin did not do well, while other AI products, like the Friend pendant, have done well, and there's a variety of others, like the Rabbit R1, which is really cool. So it will be really interesting to see whether these products succeed and whether OpenAI manages to pull one off. I don't know if this product will come out in 2025; I do know that making a hardware product is really, really hard. I wouldn't say we're at the limits of what LLMs can do, but I do think we're now in the implementation phase, where a lot of companies are figuring out the best ways to deploy these models, which is why we're getting products like NotebookLM, search products, deep research products, and agentic products. That's also why I think there will be relatively less research on making the GPT-series models themselves better, and a lot of explosive growth in products that use these models in really unique ways. I don't know how long this device will take, and ideally they'd want it to be really good, but it's going to be interesting; a lot can happen in a year, so that's something to look out for as well. Now, of course, one of the biggest things going on in the community is OpenAI's video generation model, Sora. One user asked Sam for more adherence to image and text prompts, plus a reasonable content restriction policy, and he said lots of Sora improvements are coming. This comes off the back of the community's response to Veo versus Sora: essentially, Google released a video model that was just far, far superior to OpenAI's Sora. He also responded to a person saying "just make Sora really, really good" by saying that this is definitely coming. If you aren't familiar with Veo, maybe you've been living under a rock or you're just not in the AI space, but Veo is Google's video model, and in this example from the Venture Twins you can see that, between Sora and Veo, Veo just excels in prompt adherence, physical motion, and all the things you'd really want from a video model. So with that said, it's quite clear there is a level above that
Google has managed to reach, and of course that's something OpenAI will be improving on. Another thing OpenAI is going to do in 2025 is, of course, a proper agent. One of the outlets that has consistently reported on what OpenAI is doing has written that OpenAI is going to release some kind of agent early next year, apparently as soon as January. Of course, timelines are subject to change based on AI developments and world events, but this should arrive in 2025, as that will be the year of AI agents. This is something I made a video on, and I'll include a segment of it with a lot more detail. The report says that OpenAI is preparing to launch a new AI agent, codenamed Operator, that can use a computer to take actions on a person's behalf, such as writing code or booking travel, according to two people familiar with the matter. Interestingly, one of the things I found quite fascinating was that this came out of a staff meeting on Wednesday, and that there are plans to release the tool in January as a research preview through the company's application programming interface (API) for developers, said one of the people, who was speaking on condition of anonymity to discuss internal matters. What's important to note is that leadership plans to release the tool in January as a research preview through an API for developers, and this phrasing usually suggests a limited release targeted at researchers and developers rather than a broad public launch. Typically, a research preview implies that access may be restricted to specific groups, like those in a developer or research program, rather than the general public, which makes sense considering they'll likely test it, gather feedback, and then release it broadly.
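We don't know how Operator actually works, but the general shape of a computer-use agent is an observe-decide-act loop. Here's a deliberately stubbed sketch of that loop; `plan_next_action` stands in for a model call, the "actions" are placeholders, and nothing here touches a real browser, operating system, or API.

```python
# Minimal observe -> decide -> act loop, the general pattern behind
# computer-use agents. All behavior below is stubbed for illustration.

def plan_next_action(goal, observation):
    """Stand-in for the model: map the current state to the next step."""
    if "search results" not in observation:
        return {"type": "type_text", "text": goal}
    return {"type": "done"}

def execute(action, observation):
    """Stub executor: a real agent would drive a browser or OS here."""
    if action["type"] == "type_text":
        return f"search results for: {action['text']}"
    return observation

def run_agent(goal, max_steps=5):
    observation = "blank browser page"
    for _ in range(max_steps):
        action = plan_next_action(goal, observation)
        if action["type"] == "done":
            return observation
        observation = execute(action, observation)  # act, then re-observe
    return observation

result = run_agent("flights to Tokyo in March")
```

The interesting design question for a real product is everything this sketch leaves out: how the agent perceives the screen, how actions are sandboxed, and when it stops to ask the user for confirmation.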
So for those of you hoping for an open release of agents in January, it's quite likely that this won't happen in January but further into the year, while January should be slated for a GPT-5/Orion release as they iron out the kinks. In response to this, you can see someone say "proper agents," and Altman replies "happy 2025." Now, one thing I think most people are completely missing about 2025 is that AI is probably going to move even faster. You might be thinking: what on Earth am I talking about?
Benchmarks are going to get smashed, and here's why: OpenAI researcher Jason Wei says that o3 is very performant, and, more importantly, that the progress from o1 to o3 took only three months, which shows how fast progress will be in the new paradigm of reinforcement learning on chain of thought to scale inference compute, a much faster cycle than the pre-training paradigm of a new model every one to two years. In other words, from o1-preview to o1 to o3, there were only about three months between those models. Previously you'd have GPT-3 one year and GPT-4 the next, with six months of collecting data and then months and months of training; this kind of model doesn't require that much time to improve. So it's going to be really intriguing to see just how many AI improvements we get. If o3 arrived recently, there are probably going to be around four iteration cycles of model improvement per year, which means by the end of 2025 we may well be past o3, perhaps on o4, o5, o6, even o7, and if we get to the seventh iteration, we probably can't even fathom how smart that model might be. Of course, that presumes everything goes smoothly and there are no interruptions; the world is a volatile place, maybe OpenAI might even implode, and we've seen some crazy things happen.
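The iteration math above is simple enough to write out. This is purely illustrative, assuming the three-month o1-to-o3 cadence holds through 2025 and that version numbers keep incrementing one at a time, neither of which is guaranteed.

```python
# Back-of-the-envelope: if each model jump keeps the ~3-month cadence
# observed between o1 and o3, how many iterations fit into one year?
MONTHS_PER_ITERATION = 3
MONTHS_IN_YEAR = 12

iterations = MONTHS_IN_YEAR // MONTHS_PER_ITERATION  # 4 cycles in a year
# Starting from o3, name the hypothetical successors.
versions = [f"o{3 + i}" for i in range(1, iterations + 1)]
print(iterations, versions)  # 4 ['o4', 'o5', 'o6', 'o7']
```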
But one thing is for sure: with many other companies working on thinking models as well, and with the age of pre-training said to be over, the AI space is about to move even faster. With that being said, let me know what your wishes are for OpenAI in 2025. Leave your comments down below on what you'd want from OpenAI and Sam Altman. What I'd want is probably some really cool AI agents that can listen to and do exactly what I say, which is probably coming next year. With that being said, I'm excited for the new year, and I will see you guys in the next video.