Let's dig into the AI news that you might have missed this week. Starting with the fact that OpenAI just made their deep research available on the free plan. Now, I found this deep research to be really helpful.
You click this on and it searches the internet, but it does a much deeper dive and pulls out a lot of information about whatever question or topic you're asking. Now, they say that this new Deep Research that's available on Plus, Team, and Pro is a lightweight version, which allows them to increase the current rate limits. If I jump into the free version of ChatGPT here, you can see that the Deep Research option is available on the free plan.
If I hover over it, you can see I've got five available until May 25th. So basically five uses over the course of a month on the free plan of ChatGPT. Now, reading the tweet, I'm a little uncertain about how it works for Plus, Team, and Pro users.
They do say, "We're expanding usage for Plus, Team, and Pro users by introducing a lightweight version of Deep Research in order to increase current rate limits. We're also rolling out the lightweight version to free users, but as a prouser, I've had access to Deep Research. So, what does it mean it's rolling out the light version to me?
" So reading through the thread here, the lightweight version of Deep Research is powered by a version of O4 Mini, nearly as intelligent as the Deep Research people already know and love, while being significantly cheaper to serve. Responses will typically be shorter while maintaining the depth and quality you've come to expect. Once limits for the original version of Deep Research are reached, queries automatically default to the lightweight version.
From what I gather from their tweet, if you're on the free plan, you get five deep researches. If you're on a Plus, Team, or Pro plan, you get some amount of normal deep researches, and once those are used up, you move to this lighter version. But I don't know how many we get on the higher-tier plans.
Do we have a limit on the normal Deep Research where we get downgraded to the lighter version? I don't totally know. And since we're already talking about OpenAI, let's talk about the open model they have planned.
Rumor has it it's going to come out sometime around June. This TechCrunch article says early summer, but the rumor mill is claiming June-ish.
But this model will be available to download at no cost and won't be gated behind an API, so it'll likely be one you run on your local machine. They also claim to be targeting performance superior to open models from Meta and DeepSeek. And if you remember, Meta's Llama 4 is a pretty big model with a 10 million token context window.
That's bigger than the context window of any of OpenAI's current closed models. So, we'll see if they match it on that as well. That would be really interesting.
Now, supposedly this new model will be able to call upon other models. So if your query, your prompt is a little too complicated for it and it knows that one of the closed models that has an API available will answer the question better, theoretically it will be able to hand off the question to one of those models. But this is all very speculative at the moment.
It says here, if this feature, as sources describe it, makes it into the open model, it will be able to make calls to the OpenAI API to access the company's other larger models for a substantial computational lift. It's still unclear whether we'll be able to do things like web search or image generation in this open model. So definitely something to look forward to.
I know one of a lot of people's big objections to using tools like OpenAI's is that you can't run them locally. They're closed source, they run in OpenAI's cloud, and people worry about plugging in information that OpenAI might train future models on. Well, with models that run locally, theoretically you should be able to turn off the internet, not be connected to a cloud at all, and still get solid responses, assuming you've got enough compute to run these models on your computer.
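To make that concrete, here's a minimal sketch of what running an open-weight model locally looks like with the Hugging Face transformers library. The model ID is just a placeholder, since OpenAI's open model isn't out yet; you'd swap in whatever checkpoint they actually release.

```python
# Minimal sketch of running an open-weight model locally with Hugging Face
# transformers. "some-org/some-open-model" is a placeholder -- swap in the
# checkpoint you actually want to run.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="some-org/some-open-model",  # placeholder model ID
    device_map="auto",                 # use a GPU if one is available
)

# Everything below runs on your own hardware -- no API calls, no cloud.
result = generator(
    "Summarize the main tradeoffs of running language models locally.",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```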
It was also announced this week that the Washington Post partnered with OpenAI for search content. So, when you use OpenAI search features, OpenAI's allowed to actually search Washington Post content. I've talked about this before.
I think this is all sort of preemptive, to minimize a lot of the lawsuits. If it were up to OpenAI, they would just share whatever they could. But because all of these lawsuits keep popping up against OpenAI, it kind of seems like their hand has been forced, and now they're making partnership deals with all these various news outlets so they don't get sued by them when they start feeding people these news sources' content.
Perplexity rolled out a cool new feature this week inside of their iOS app called the Perplexity Assistant. It kind of does what people have always hoped Siri would do. So, on top of asking basic questions, which you're able to do already, you could do things like play media, have it draft emails, move meetings for you, book rides, make reservations, set reminders, things like that.
More examples they give: you can ask it to play podcasts, hard-to-find videos, or favorite songs, review your day, schedule meetings, and send emails. Aravind, the CEO of Perplexity, does warn that some of these features are still a bit janky and they're kind of getting there, but it's pretty cool to see a lot of these features roll out and sort of become a better version of what Siri should be. Now, doing a little bit of testing myself, some things have worked well, others haven't.
Like, it will open podcasts, but only in Apple Podcasts. Open the latest episode of the Next Wave podcast. I have opened the latest episode of the Next Wave podcast in Apple Podcasts for you.
Would you like to continue with anything else or should I end our session? Book me a reservation at Island Prime in San Diego for tomorrow at 7:00 p.m.
I've opened the OpenTable booking page for your reservation at Island Prime in San Diego for tomorrow at 7:00 p.m. You can confirm your reservation there.
Tell me what events I have on my calendar for next month. Here's what you have on your calendar for next month. May 2nd, Bill's birthday.
May 5th, Cinco de Mayo. May 7th, Alfie's birthday. May 8th, Tracy's birthday.
Now, the funny thing is, a lot of these things I tested before I hit record and they weren't actually working properly for me. But when I hit record and tested them again, they did work the second time. So, all of my examples seem to work pretty well.
But I have had some slightly janky experiences: last time I asked it to pull up things on my calendar, it didn't find anything, and the first time I asked it to open the Next Wave podcast, it just opened Apple Podcasts but didn't open to the Next Wave podcast. This time, though, every time I've demoed it, it's worked fine.
So, it does seem to work pretty decently, at least when I'm recording a video about it. We got some updates out of Microsoft this week. They actually showed off their new updates to 365 Copilot with more agentic features.
For example, it has an AI-powered search to help you find information faster, a new create experience, Copilot Notebooks, and a new agent store to find and access agents. Now, they did demo some of this in a video here. For example, we can see that inside of Microsoft 365 Copilot they have various chat options like Analyst, Researcher, and Sales.
It can create things like bar charts from data you have inside your Microsoft account. There are various notebooks to keep things organized and, I believe, to keep the context of what's in that notebook when you're chatting with the AI. And then under agents, we can see some examples like Jira, Idea Coach, Skills Discovery, and Prompt Coach.
And then under all agents, there's this marketplace to get other agents, which look to be sort of like connections to APIs of other tools like monday.com, Dropbox, Trello, etc. It looks like the biggest deal here, though, is their Researcher and Analyst agents that are optimized to do those specific things.
One can go off and do deeper-dive research for you. The other, you can give a bunch of data from Excel and Word documents and things like that and have it actually analyze the data you give it. And it looks like most people will be able to start getting access in the spring, in late May.
Another update out of Microsoft: the Recall feature that has been announced, then pushed back, then rolled out, then pulled back, and then announced again. Well, it looks like it is finally coming for real this time. Now, this is a feature that's almost like a browser history, but for your entire computer.
So, you can go back and see what you were doing in DaVinci Resolve earlier, or Microsoft Word earlier, or pretty much any app on your computer. You can kind of go back and rewind time and see anything you were previously working on. And you can even use an AI prompt to find specific times and things that you were doing.
And a lot of people were really worried about this. There were some issues when they first announced it, like being able to go back through and see passwords that had been entered and things like that.
But they've seemingly addressed a lot of the problems people had with it, and now they're finally rolling it out. Reading from their announcement: "When we introduced Recall, we set out to address a common frustration: picking up where you left off.
Whether it's a project from last week or one of the countless browser tabs open while online shopping, tracking it down often means relying on vague memories while searching through folders, websites, and endless emails. Recall retrieves your digital memory, allowing you to retrace your steps in seconds to quickly and securely find and get back to an app, website, image, or document." Now, they made Recall an opt-in experience.
Before, I believe it was on by default and you had to turn it off, but now it looks like they're rolling it out as opt-in, so it won't be on by default. There are also controls and filters to specifically set what gets saved for later and what doesn't. And Recall data is processed locally on your device, meaning it is not sent to the cloud and is not shared with Microsoft.
Microsoft will not share your data because, well, they're not getting access to it. It's all being saved on your computer, not getting sent to a Microsoft cloud. They're also improving Microsoft search.
It uses the neural processing unit. Essentially, it's using AI to better understand what you're searching for and pull up anything you're looking for without you necessarily needing to know the exact file name. They're also rolling out the Click to Do functionality, which gives you the ability to summarize, rewrite, or copy and paste text and images from your screen.
We got an update out of xAI this week as well. The Grok chatbot now has vision capabilities similar to what we can get from the Gemini and OpenAI models, and it works in the Grok mobile app as well. So, if I open the Grok app here, I can click on the little white circle.
I'm in dark mode, so it might be a dark circle on your phone, but if I click on the little circle at the bottom right of the chat window, I can then click the little camera icon at the bottom left. And now it can see the world around me. And now, if I start to chat with it: what am I looking at right now?
Oh, it looks like you're set up with a nice workstation. You've got a monitor displaying a menu with options like news, movie pick, and NBA highlight. And there's a camera mounted above it along with some other tech gear on the desk.
I don't see the news or NBA highlights. There is a camera. You got that part right.
Oh, I see. You're right. Looks like the menu has shifted a bit.
Yes, the camera is still there above the monitor. And it seems you're on a different screen now. Maybe the assistant or translator option.
What can you see in the picture on the wall behind the camera? Oh, I can see a beautiful picture on the wall behind the camera. It shows a scenic coastal landscape with a beach, waves, and some greenery under a partly cloudy sky.
Are you able to tell where the picture was taken? Hmm, it's tricky to pinpoint the exact location from the picture alone. The coastal scene with rocky shores and lush greenery under a clear sky looks like it could be somewhere tropical, maybe like Hawaii or a similar Pacific island.
But I can't say for sure without more details. I mean, it's Maui. I've actually been fairly impressed with Grok.
I don't really feel like Grok gets enough credit, but it is a really, really impressive model. The Grok 3 model is great. I think one of the things that held it back for so long is that it didn't have an API, so other tools weren't actually integrating it.
But quite honestly, Grok is pretty good. We also got some news out of LTX Studio this week, which happens to be the sponsor of today's video. They added Google's Veo 2 video generation model into the LTX Studio platform.
Now, this is significant because it actually makes LTX Studio the least expensive way to generate videos with Veo 2. Using LTX Studio, it costs about $0.65 per 8 seconds of video. Compare that to Google's own cloud platform, where they charge you 50 cents per second.
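To put the math side by side, here's a quick back-of-the-envelope sketch in Python, using the prices as I understand them, so treat both numbers as approximate:

```python
# Rough cost comparison for one 8-second Veo 2 clip, based on the prices
# mentioned above -- both are approximate and subject to change.
google_price_per_second = 0.50   # Google Cloud: about $0.50 per second
ltx_price_per_clip = 0.65        # LTX Studio: about $0.65 per 8-second clip

clip_seconds = 8
google_cost = google_price_per_second * clip_seconds  # 0.50 * 8 = $4.00

print(f"Google Cloud: ${google_cost:.2f} per {clip_seconds}-second clip")
print(f"LTX Studio:   ${ltx_price_per_clip:.2f} per {clip_seconds}-second clip")
```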
So, what would cost you $4 over on Google directly now costs about $0.65 over in LTX Studio. So, let's go ahead and try out Veo 2 inside of LTX Studio. I'll click into the motion generator.
We can tell that it works in here because we've got the Google logo. Under video model, we can see they have a dropdown where we can select LTXV or Veo 2. So, let's go ahead and select Veo 2.
And we can start from an image prompt. I took this image of a bunch of sea turtles on a beach in Kauai. Let's see if it'll actually animate this image.
I'll upload the image, and for the prompt, let's say the turtles crawl up the beach in the sand, and generate the video. And as expected with Veo 2, it did a pretty good job.
We can see the waves moving, people walking, this turtle's moving right here. Now, let's go ahead and generate a video from scratch. I'll go back to the generate images module in LTX Studio.
Let's generate a new image of a wolf howling at the moon. I really like this one here. So, let's click add motion.
And once again, we'll select Veo 2, update our prompt here, and generate the video. And here's what we got out of it.
A really good-looking wolf howling at the moon. Probably the best generation I've gotten of a wolf howling at the moon. I love the fact that LTX Studio is becoming more model agnostic.
Yes, they have their own open-source model, LTXV, but now you can use Veo 2 as well. And I'll put a link in the description, because if you sign up for LTX Studio before May 3rd, you can get up to an extra $300 in Veo generation credits. Check it out in the link in the description.
And thank you to LTX Studio for sponsoring this video. Now, let's get back to it. If you have a pair of Ray-Ban Metas, Meta rolled out some new features for them this week, including live translation, so somebody can be speaking to you in a different language.
And the little headphones that are on the glasses will actually live translate them into your preferred language. So somebody could speak to me in Spanish, and as they're speaking to me, I'm actually hearing an audio translation in my ear in English. Super cool feature.
I got to test it at Meta Connect last year and was really, really impressed by it. Well, now it's rolling out. You can also download a language pack in advance so you can use the live translation feature even if you don't have access to the internet, which is pretty cool, because that is one of the issues with using the large language model on these glasses.
Like I was out in the Grand Canyon trying to use them and I was asking questions about it, but I was in an area that didn't have a good internet connection, so it wasn't able to answer my questions. It doesn't seem like that's going to be an issue with the live translation because you just download that translation pack in whatever language for, you know, the country you're going to. YouTube's testing out a new feature.
I haven't actually seen this or come across it myself yet, but they're testing this AI overview feature. So, you know how when you search with regular Google right now, they have like that AI overview that pops up above the search results? They're testing something similar on YouTube, but instead of like a text AI overview, when you search for something specific, it will do like a little clip from a video.
Again, I haven't seen this yet, so I'm not super clear on how it's going to work, but according to MacRumors here, there's going to be a carousel of video results, and they'll use AI to highlight clips from videos that will be most helpful for your search query, which essentially means it will take clips from videos and play them right in the results. So, people may not need to click into a video to find the information they're looking for. Google uses AI Overviews for Google Search, but the YouTube version will differ.
AI won't summarize videos and will simply pull clips from them. It's not clear if the AI selected clips will encourage users to watch a full video or cause fewer people to actually engage with videos. It's being tested with a small number of YouTube premium users in English.
But again, I haven't seen it. My assumption is that you could do a search on YouTube like "what feature did Ray-Ban Metas just roll out," and it would find the clip of me talking about the live translation feature and give you a carousel of clips from videos answering that question. That might make it easier to answer quick questions for people searching YouTube, but it could really disincentivize creators who need you to click on their videos. I don't know how this is going to play out.
If that's the way it works, I don't think a lot of people who make YouTube videos are going to be super happy about it. But again, I don't totally understand this yet.
We're just going to have to see how it plays out. Moving on to Anthropic. While we haven't been getting a lot of new models or crazy new updates from them, we have been getting a lot of essays and research from them.
This week, they put out a blog post called "Our Approach to Understanding and Addressing AI Harms." It's basically saying that Anthropic and other AI companies need to pay attention to more than just the giant doomsday scenarios people talk about around AI, and that we should also be looking at the physical, psychological, economic, societal, and individual autonomy impacts.
They claim they've tweaked Claude 3.7 so that it refuses 45% fewer harmless prompts while still maintaining guardrails on the really dangerous stuff. They also released an article called "Detecting and Countering Malicious Uses of Claude."
In this article, they essentially share a bunch of case studies about how Claude has already been used maliciously: things like running political bot farms, scouting for leaked passwords, and even helping hackers code malware. Now, in all the case studies they shared, they actually caught the accounts involved and banned them.
But really, the idea of this whole article is showing that these AI models can be used for a lot of harm. And here's stuff that we're already seeing. Now, the point of this was to say that look, this stuff is happening.
We are doing our best to constantly keep up with it, but it does seem to be a bit of a cat-and-mouse game, and everybody needs to be responsible for this stuff. It's also meant to make consumers aware that you should be watching out for it. I mean, it's getting harder and harder to trust emails and DMs, and even posts on social media could be bots generated using AI by bad actors. So, essentially, just try to be safe out there.
And then Dario, the CEO of Anthropic, put out an essay on his personal blog called "The Urgency of Interpretability." Now, this essay is quite long, but the TL;DR of it is that he feels these large language models are really smart, but also still very mysterious. People don't totally understand how they think yet.
So, a lot of the risk management going on right now is basically guessing. He believes they can build something like an MRI for AI to better map out how these models are actually thinking and get a better understanding of how they're working. But he really emphasizes the need for speed in figuring this stuff out, because he's concerned we might reach a point of no return if we don't address these issues now: really understanding how the models work, plus some of the other things covered in the previous two articles Anthropic put out.
So with all of this in mind, it's starting to make sense why maybe Anthropic isn't shipping quite as fast as OpenAI. They seem to be more worried about where all of this is headed, and believe we need to better understand how it all works before we try to push too much further, too much faster.
That seems to be Anthropic's stance, and probably why they're a bit slower to ship than a lot of the other companies. Now, if you're a developer, you've got a handful of new APIs to play with this week. OpenAI shipped their image generation model in the API.
Several weeks ago, we saw the image generation inside of ChatGPT where everybody was making the Studio Ghibli images and you were able to make YouTube thumbnails and things like that. That technology is now available in the API, so developers can actually use it. So you're going to start to see a lot more AI tools for generating images that use that same technology.
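If you're curious what that looks like from the developer side, here's a minimal sketch using the OpenAI Python SDK. I'm assuming the "gpt-image-1" model name and the example prompt here, so double-check OpenAI's docs before relying on it:

```python
# Minimal sketch of generating an image through OpenAI's API.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# "gpt-image-1" and the prompt are example values -- verify against the docs.
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="gpt-image-1",
    prompt="A YouTube thumbnail of a wolf howling at the moon",
    size="1024x1024",
)

# The image comes back base64-encoded; decode it and save it to disk.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("thumbnail.png", "wb") as f:
    f.write(image_bytes)
```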
There's also a new Grok 3 Mini API, which, according to these benchmarks here, shows Grok 3 Mini being better on pretty much all of these selected benchmarks than Gemini 2.5 Flash, o4-mini-high, DeepSeek R1, and even the Claude 3.7 Sonnet thinking model, with pricing quite a bit lower than all the other models.
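xAI's API is OpenAI-compatible, so a call to Grok 3 Mini looks something like the sketch below; the base URL and the "grok-3-mini" model name are what I understand xAI to document, but verify both before building on them:

```python
# Sketch of calling Grok 3 Mini through xAI's OpenAI-compatible API.
# The base URL and model name are assumptions based on xAI's docs -- verify them.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],   # your xAI key, not an OpenAI key
    base_url="https://api.x.ai/v1",      # xAI's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="grok-3-mini",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Summarize this week's AI news in one sentence."}
    ],
)
print(response.choices[0].message.content)
```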
Moving on to AI art news, Adobe released a new version of Firefly in their web app. They also added the ability to choose other models. So, if we log in here to the Firefly dashboard, we've got our models.
And sure enough, you've got the brand-new Firefly Image 4, their new model Firefly Image 4 Ultra, Imagen 3, and GPT Image. Here's what I get when I use Firefly Image 4 Ultra with the prompt "a wolf howling at the moon," and here's what I get when I use the standard Firefly Image 4.
I don't know about you, but I think Image 4 actually came out a bit better than the Ultra. But I haven't done a ton of testing with this model myself yet. Since we're talking about AI images, the company Krea AI rolled out the ability to edit images in chat using the ChatGPT image model.
We can see they Ghibli-fied and frog-ified images and did all the types of things we've seen other people do with the ChatGPT image model. It's just that now you can do it directly in Krea if you want to. Krea also rolled out a new feature called Stage, which lets you create 3D environments with AI from images or text.
They prompted "cowboy movie scene," and it generated a scene with a bunch of assets in it. They could change the assets and move them around, drop images into a scene and turn them into 3D objects, and even rig them up. Also this week, the company Tencent released a new model called Hunyuan 3D 2.5.
This one's a 10 billion parameter model, up from 1 billion parameters previously. It has high-quality textures and an animation boost, and if we look at their demo video, everything they're showing off here looks pretty dang impressive. Obviously, they always cherry-pick for these kinds of videos.
And I'm guessing it's only going to be a matter of time before Krea builds this in, because Krea seems to take just about every API and toss it into their tool. But what I'm seeing from this model looks really, really good. The company Character AI, which allows you to create fictional characters you can chat with and has been really popular among younger generations, just rolled out a new feature that can generate videos now.
They rolled out this avatar effects feature, which actually generates visuals for the characters you're chatting with. So now it feels less like talking with a regular text chatbot and more like talking with a character that has an animation. They're rolling it out now over at Character AI, and you can apply for early access.
Apparently, it's not available for everybody just yet. And since we're talking about AI avatars, this company Argil (I'm not actually sure how it's pronounced) just rolled out a new feature where you can have AI avatars hold up actual products.
This is going to be pretty big for e-commerce companies: you could create an AI-generated avatar or spokesperson for your brand, and they can actually hold up your product and talk about it. Here's another little screenshot they posted of Garry Tan in a Batman mask holding up a Y Combinator water bottle. I don't believe this one's fully rolled out yet, because they do say to comment "Argil AI" for testing, but we can see some other examples here on their website of AI-generated characters holding up various products.
Here's one of an avatar podcasting, and one of an avatar actually cooking. So, you can start to use these AI avatars for better product branding. I recently did a podcast episode with Nikola from Wonder Studio over on The Next Wave, and we were talking about how companies could now create their own GEICO Gecko or Tony the Tiger kind of mascot and then use that mascot in different scenes and shots to help promote their brand.
Well, now you can do that with AI-generated, real-looking spokespeople as well. The marketing and business implications of this are pretty massive. I have another episode of The Next Wave coming out where I talk with Adam Biddlecombe of the Mindstream newsletter, and that whole episode is us testing out different tools like Argil, Synthesia, and HeyGen, and comparing the differences between them all.
So if you like this kind of stuff, make sure you check out that episode when it comes out in a few days. The company Tavus dropped a new lip-syncing model, which is supposedly the best lip-syncing model available now. I still feel like the AI voice with the lip-sync feels a bit weird, but it does seem to match up pretty well.
Let's check out a demo here. I was obsessed, man. Optimizing, prompting, jailbreaking, chasing the high of the perfect output.
And after a thousand prompts like that, I started to ask, "What am I really chasing? Accuracy, alignment, or just some sense of control?" I realized I could prompt a million AIs and I'd still never be satisfied.
I mean, to me, it still looks a little uncanny. It still looks a little off, but it's getting a lot closer. See, even this one here with Donald Trump on mute, I can tell just by looking at the lips, it looks a little bit weird.
Like, the lips don't look like they're moving super naturally to me yet. But again, this is the worst it's ever going to get. And we've come a long way already.
Now, I thought this was really cool from Descript. Thanks to REI or the Moon Midas here for shouting me out on this one, but Descript is testing out some new AI agentic features. They're basically claiming that they're building the cursor for AI video editing.
So, let me open this video up here. I'm going to keep it muted and zoom into the demo, where we can see they have a chat window over on the left where they say, "Make a one-minute video with my top three tips on seeming natural on camera when reading a script." And then they give the tips, and it actually writes the script draft for them.
They also give another prompt where they uploaded a video and literally just said, "Can you edit this down?" And the AI responds, "Sure. It looks like this video is about Descript's create clips feature. There are several things I'd like to edit: repeat takes, off-script diversions, several abnormally long pauses." Then it goes on to say, "Okay, I made a total of 13 edits and reduced your video by about 2 minutes." So, it went and just made the cuts it thought it should make based on the prompt "Can you edit this down?"
Then they prompt it with, "Can you take a pass on the visuals? Add a few chapter titles and some stock overlays just to break it up."
They map out a plan, add a few chapter titles, insert some stock overlays. I notice there's a bunch of jump cuts. I'll do my best to mask them with zooms and other scenes.
There appear to be some places that call for screen recordings. I'll add those. All done.
I added four chapter title cards, six relevant stock overlays, two short screen recordings, and also masked the most noticeable jump cuts with subtle zooms. This is crazy. Now, this isn't publicly available.
I did apply to test this because I really want to test it out, but again, they're trying to build that cursor for editing. You just chat with a bot and it makes the edits based on you just talking with it. We're really close to agentic AI video editing, and I'm here for it.
I want to try it out. And since we're talking about AI video, this week the Oscars announced that they really don't care if you want to use AI in your films. Their exact statement was: "With regard to generative artificial intelligence and other digital tools used in the making of the film, the tools neither help nor harm the chances of achieving a nomination.
The Academy and each branch will judge the achievement, taking into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award." We're probably not going to see a fully AI-generated movie win awards anytime soon, but if you're using a little bit of AI here and there to help with scenes in your movies, it seems the Academy of Motion Pictures is fine with it. A couple of quick last interesting things.
In a court case against Google this week, OpenAI actually said they would bid to buy Google's Chrome browser if it were available for sale. The courts are currently trying to break up Google over monopoly concerns, and a handful of companies, including OpenAI and Perplexity, have come forward and said they'd be interested in buying Chrome if it were available.
So my assumption is that OpenAI might want to get into the browser game and build a fully AI-first browser on top of the Chrome bones that already exist. And then finally, to wrap up, Demis Hassabis, the CEO of DeepMind, was on 60 Minutes this week, and he had some interesting statements. I think my favorite part of the 60 Minutes interview was when they were talking about whether AI has consciousness or not.
Definitely check out the whole video, but here's a quick sound bite. Is self-awareness a goal of yours? Not explicitly, but it may happen implicitly.
These systems might acquire some feeling of self-awareness. That is possible. I think it's important for these systems to understand self and other.
And that's probably the beginning of something like self-awareness. So, Demis does believe these systems will have something similar to self-awareness in the future. And to me, that's just really fascinating.
But that's what I got for you today. Believe it or not, this was actually a slower week in the world of AI. I'm sure this video was long enough that it felt like there was a lot of things, but compared to previous weeks, this was actually a little bit of a slowdown.
If you've watched my past videos, you'd probably agree with me. But if you do like videos like this and you want to stay looped in on all of the latest AI news, make sure you like this video and subscribe to this channel, and I will make sure more videos like this show up in your feed. I'm also working on some AI tutorials.
I've got some really cool additional interviews coming up. If you haven't seen my interview with Mustafa Suleyman, the CEO of Microsoft AI, make sure you check that one out as well. A lot more of that kind of stuff is coming up, where we're going to deep-dive into how to actually apply this AI stuff in your life, where it's going in the future, and the implications of it all.
We're going to have a lot of really, really cool, interesting, fun discussions on this channel all about what AI means for the world. And I couldn't be more excited to share those with you. So again, like the video, subscribe to the channel.
I'll make sure more stuff like this shows up for you. And if you haven't already, check out futuretools. io.
I've been in the middle of an overhaul. So the design's getting a little bit of a tweak here, but this is where I curate all the cool AI tools that I come across on a daily basis. I share all the cool AI news here, way more than I have time to share in these videos.
And I have a completely free newsletter where I'll just email you twice a week with just the most important AI news and coolest tools that I come across. It's totally free. And if you sign up, I'll give you access to the AI income database.
It's a cool database of various ways to make side income using all these AI tools that are available to you. Again, it's totally free. You can find it all over at futuretools.io.
Thank you so much for tuning in, hanging out with me, and nerding out with me today. I really, really appreciate you.
Hopefully, I'll see you in the next video. Bye-bye.