Well, it's been a heck of a week for AI news, especially from Google and OpenAI. Both of those companies have been shipping like crazy, and I'm going to do my best to break it all down for you right now. Google has entered what they call the Gemini era: their new Gemini 2.0 foundation model was released this week, and along with it came a ton of cool features and ways this Gemini model is being used. They started with the release of Gemini 2.0 Flash. The Flash model is a smaller model, but it actually performs much better than Gemini 1.5, which is a larger model trained on a lot more data.
We can see by the benchmarks here that it outperforms the much larger models that came before it in almost every single area, and it does it all at twice the speed. You can actually play around with the Gemini model yourself right now for free by heading over to gemini.google.com. Now, I am a paying Google member, so yours will probably look a little bit different, but if you click this dropdown here you will see 2.0 Flash Experimental. This is a really good model, and you'll probably notice it's pretty comparable to things like Claude and ChatGPT. But there's an even cooler way to use it that opens up even more possibilities, and that's Google's AI Studio.
If you go to aistudio.google.com, even as a free Google user you will see Gemini 2.0 Flash Experimental over here that you can use right now, and if we take a look at the pricing, well, it's currently free to use and test. You can also turn on some extra features like structured output, code execution, function calling, and grounding, and even adjust the safety settings here, essentially allowing you to make it completely uncensored. But here's where it gets really cool: if we click over on Stream Realtime, we have some additional options. You can talk to Gemini with a real-time voice conversation, you can turn on your webcam and Gemini can see what's going on on camera, or you can even share your screen and Gemini will be able to help you with whatever you're seeing on it. Let's go ahead and try talking to Gemini: "What are some of the best use cases of this new Gemini 2.0 Flash model?"
"The Gemini 2.0 Flash model is great for various use cases. It's particularly good for tasks that need quick responses, like real-time data analysis or generating content summaries. It is also useful in chatbot applications or in any situation requiring rapid information processing."

Let's go ahead and try showing Gemini my camera and see if it'll actually look at it, even though it's already using my camera for this recording. "Can you describe what I'm wearing right now?" "Sure, you're wearing a dark-colored T-shirt and a camouflage baseball cap. You also have headphones on and a beard." "Describe the background environment for what I'm doing right now." "Okay, the background appears to be a room with dark blue walls. There's a cabinet behind you, as well as some hanging stringed instruments like guitars and a banjo. Additionally, there seems to be a shelf with some decorations."

And finally, let's go ahead and share our screen. "Can you tell me what's going on on my screen right now?" "Sure, I can tell you what's on your screen right now. You are on a page titled Google AI Studio with a Stream Realtime section selected. The page has options for system instructions, a run settings menu, and other tools. There is also a chat box at the bottom with a start and stop sharing screen option, and a video of your screen in the bottom right corner of the page." "How many tabs do I have open in my browser right now?" "Based on the image, it looks like you have 10 tabs open in your browser right now." Well, it didn't quite get that right; there are quite a few more tabs open than that. But it seems to work and can see my screen pretty well. So theoretically, if I'm trying to work through something on my computer, or I run into a coding issue, or even if I'm playing a video game, I can have my screen share going and have the AI sort of talk me through what I'm supposed to do next. That's kind of the idea here. Now, this is an early prototype and it's still experimental, so as we saw, sometimes it can't do things very accurately, like counting tabs, but in most of my experimentation so far it's been pretty spot-on.

It's also got some additional functionality here, like spatial understanding, where you can upload a photo and combine 2D or 3D spatial analysis with reasoning. It's got some sample images here, so let's go ahead and use this image and say "label each one with the type of food it is." You can see it's doing its little calculations here; it created these bounding boxes, and if I hover over them you've got cheese, you've got a muffin, you've got a strawberry, etc. It labeled all of these things in the image for me. Here's another image I just uploaded from my computer; let's try a 3D bounding box: "detect the 3D bounding box of," let's do "the speaker." Okay, that wasn't the type of speaker I meant, but it did find the speakers on the stage. Let's do "the person on stage," and there you go, it made a bounding box roughly where I am. It's not perfect, but you get the idea of what they're going for here. You can also upload videos and have it analyze them; the videos don't even need to have audio, and it will know what's going on in the video and describe the scenes, so you can theoretically find very specific points in your video. There's also a Map Explorer. A lot of these features we'll probably play around with in a future video. In any case, the new Gemini 2.0 Flash is really, really impressive.
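As a side note, if you want to script that same spatial-understanding trick outside of the AI Studio interface, here's a minimal sketch of how it might look. I'm assuming the google-genai Python SDK, a free API key from AI Studio, the experimental "gemini-2.0-flash-exp" model ID, and a made-up local image file; the prompt wording and coordinate convention here are my own illustration, not an official Google recipe.

```python
# Rough sketch only: ask Gemini 2.0 Flash to label the food items in a photo
# and return 2D bounding boxes as JSON, similar to the AI Studio spatial demo.
from google import genai   # assumed: the google-genai Python SDK
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")   # free key generated in AI Studio
image = Image.open("snack_tray.jpg")            # hypothetical local photo

prompt = (
    "Label each item of food in this image. Respond with JSON only: "
    '[{"label": "<food>", "box_2d": [ymin, xmin, ymax, xmax]}], '
    "with coordinates normalized to a 0-1000 grid."
)

response = client.models.generate_content(
    model="gemini-2.0-flash-exp",   # experimental model name used in this video
    contents=[image, prompt],
)

print(response.text)   # a JSON-ish list of labels and boxes you can draw yourself
```

That's just a one-shot request to give you a feel for it; as far as I can tell, the Stream Realtime voice and screen-sharing features run over a separate live, streaming API rather than a single call like this.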
But that was just the beginning of Google's announcements this week. They also announced Project Astra, which, if you pay attention to my Shorts feed at all or follow me on X or Instagram, you'll know I shared my experience with, because Google actually flew me out to London, put me up in a nice hotel, and gave me a brand new Pixel 9 phone with Astra pre-installed on it so I could play around with Project Astra myself. Basically, Project Astra is designed to be an AI assistant on your phone, like an infinitely better version of Siri that can actually see what's going on around you and eventually connect to all of your Google apps, so it should eventually be able to connect to Maps and Calendar and Gmail and things like that. Right now it's not connected to all that stuff; it's just got the vision capabilities, it can search the web, and it can answer questions for you, and it is really impressive. In fact, let's go ahead and demo it right here in this video. I've got the phone open here, I'm going to open Project Astra, and now if I click Start: "Hey Astra, what am I looking at right now?" "Based on the view provided, you seem to be observing a setup with multiple screens, one of which is displaying a live feed from a smaller device, perhaps a phone or a monitor. There is also a smaller monitor mounted to the top of the larger monitors." "Can you tell me what kind of camera this is right here?" "Based on the logo, it appears to be a Sony camera. I'm unable to specify the model of the camera, though." "Can you tell me what kind of microphone this is?" "This is a Shure SM7B microphone. It is a popular choice for vocal recording and broadcasting." "Can you tell me where this picture on the wall was taken?" "Based on the appearance of the shoreline and vegetation, it's plausible that this image was captured in a tropical location with volcanic terrain, potentially in Hawaii or a similar region. It is only a guess." "Well, you're right, it's Maui." "Glad to have gotten it correct." Another thing this is really good at: you can take a book, like this book I've got here in my hand, open to any page, briefly flash the camera at that page, and it will know everything about that specific page. So check this out: "Can you tell me what's going on on this page of this book?" "The page is discussing negotiation strategies, specifically focusing on the topic of never splitting the difference. It mentions a scenario involving a CEO and an employee who is frustrated at his negotiation." "Can you tell me the name of the book?" "The title of the book is Never Split the Difference." "Do you know the author of the book?" "The author of Never Split the Difference is Chris Voss, a former FBI hostage negotiator. He outlines his strategies for negotiation throughout the book." Impressive. So that's Project Astra on the phone. Now, I am one of Google's trusted testers, so only a handful of people have access to this, and it's not going to roll out publicly until they feel comfortable that all of their tests and red-teaming have gone well and they're confident to put it out to the public, but this will eventually replace Google Assistant on your phone. That's not where Project Astra ends, though: the next phase of Project Astra is actually to put this inside of glasses, so I don't need to hold a phone up anymore. I can just look around, it will see what I see, and it will respond in my ear the same way we were just listening to it.
I actually got to test the glasses early. I'm not allowed to talk about it too much yet, but the glasses do exist and they are coming at some point. In fact, here's a little demo of what you might see while using the glasses, because they do have a little heads-up display in them. You can see an example here where it's actually giving directions right in front of your eyes as you're walking. Here's another example where they're looking at a menu, and right on the little heads-up display it's translating what that menu says. Here's somebody else using it to get instructions on how to hang floating shelves, and it's popping up right in front of their eyes inside the glasses. It is wild, honestly, and really, really cool. Before I forget, I need to mention some of the things that make Astra really interesting.
Again, it's using the Gemini 2.0 model, and Gemini 2.0 is multimodal from the ground up. When you're using something like ChatGPT and talking to it in the standard voice mode, it's still using Whisper to transcribe everything you say into text, and then it works from that text before giving you a response. That's not how Gemini works: Gemini actually understands what it's seeing and what it's hearing. It's not doing that transcription underneath; it just understands the audio coming into it. The other thing that makes Astra really interesting is its memory. It's got a visual memory of up to 10 minutes, so I can ask Astra: "What book were we just talking about a few minutes ago?" "As I recall, we were discussing the book Never Split the Difference." It actually remembers what we were talking about and what book we were looking at. It also remembers things from previous conversations. I was using this in London last week, so theoretically it should remember what city I was exploring last week. In fact, let's see if it does: "What city was I exploring last week?" "You mentioned a trip to London in our earlier conversation, so I assumed that you were talking about that. However, I apologize for assuming." So it's actually got the context and memory of previous conversations I've had.

Personally, I think everybody should have a website. I think it's great to have a place where you can express yourself online, and that's why for today's video I partnered with Hostinger. Hostinger is a platform where you can host your website, but you can also completely build your website using AI. If you head over to hostinger.com/mattwolfe and claim the special deal, you're going to want to choose the Business website builder, because this is the one that has all of these cool AI tools built into it. Select how long you want to set up your hosting for (the longer the period, the less it costs per month), and then see this little link that says "have a coupon code?" Click that link, type MATTWOLFE, and click apply, and you'll get an even larger percentage off the plan. Once you're inside Hostinger, you come over to Websites here, click on Add Website, and choose Hostinger Website Builder. You can see this is an AI-powered drag-and-drop website builder. Let's say, for instance, I want to make a website all about 3D printers. Let's call the brand name 3D Print World, and for the description I'll enter something like "a website that shares all the latest news and reviews on the newest and most advanced 3D printers." Once I give it that little bit of detail, I just click Create Website and let the AI go to work for me, and within less than a minute I have a fully fleshed-out website ready to go. It's got images of 3D printers, it's got pre-written articles already, it's got various page sections like blog and reviews; all I have to do is start swapping out the information with what I want on the site. If I click on Edit Site up in the top right, you can see I now have a drag-and-drop builder: I can drag my title around, drag these buttons around, and this whole website builder has AI baked into it. I can use AI to generate images, and over on the left I have a whole AI tools menu. Let's try out the AI heatmap: if I click here, you can see it's creating an attention map, and it uses AI to predict where people's eyeballs are going to go on the website. As I scroll the website, I get a pretty good idea of what's going to draw people's attention, so I can strategically place the most important text and images in the areas that are going to grab it.
This has to be the easiest way to get a website online and running with no technical knowledge at all. So once again, you can check it all out over at hostinger.com/mattwolfe and use the coupon code MATTWOLFE for an additional 10% off their already heavily discounted service. Thank you once again to Hostinger for sponsoring this video.

Google also announced Project Mariner this week. Project Mariner is kind of like the computer use feature we saw come out of Claude a few weeks ago, where you can actually give Google access to do things in your browser. From Google's demo: "I'm going to start by entering a prompt here. I have a list of outdoor companies listed in Google Sheets, and I want to find their contact information, so I'll ask the agent to take this list of companies, then find their websites and look up a contact email I can use to reach them. This is a simplified example of a tedious multi-step task that someone could encounter at work. Now the agent has read the Google Sheet and knows the company names. It then starts by searching Google for Benchmark Climbing, and now it's going to click into the website. You can see how this research prototype only works in your active tab; it doesn't work in the background. Once it finds the email address, it remembers it and moves on to the next company. At any point in this process you can stop the agent or hit pause. What's cool is that you can actually see the agent's reasoning in the user interface so that you can better understand what it is doing, and it will do the same thing for the next two companies, navigating your browser, clicking links, scrolling, and recording information as it goes. You're seeing an early-stage research prototype, so we sped this up for demo purposes. After the fourth website, the agent has completed its task, listing out the email addresses for me to use." This isn't something I've had the opportunity to play with yet, but as one of the trusted testers it might be something they put in my hands, and maybe we can experiment with it in a future video. They also showed off Jules, which is an agent for developers; Jules is essentially an assistant that can help with all of your coding tasks. And they showed off agents inside of games, where you can actually be playing a video game on your computer and have it help you with the game.
Another thing that's really interesting about this new Gemini 2.0 model, since it's multimodal from the ground up, is that it has native image output. You can upload an image like this car, tell it to turn the car into a convertible, and it will output an image of that car as a convertible. Now, that feature doesn't appear to be available yet; I haven't gotten it to work, at least. There are also some examples where they blend images, which I thought looked really cool: you can see they uploaded an image of a cat and a pillow and wrote "create a cross-stitch of my cat on this pillow," and it created a cross-stitch cat on that pillow. Here's another example where they asked for "a detailed illustration sticker of my cat on my skateboard," and it put a sticker of the cat on the skateboard. It's really exciting to be able to take two images and blend them together with a prompt like this. I'm super excited to get my hands on that feature, but again, I haven't gotten it to work yet, so I don't think it has rolled out just yet.

There's even more news from Google this week: they also rolled out the Deep Research feature inside of Gemini, which is kind of like a more advanced version of Perplexity in that it will actually search the web and do research on topics for you. If I head back over to my Gemini account (this one, I believe, is only on the paid plan, so I think you need Gemini Advanced to use it), Deep Research is borderline agentic: it will go and do multiple searches and deep-dive on a topic for you. I can give it a prompt like "research quantum computing and how far we are from being able to crack Bitcoin cryptography," and you'll see it gives us the steps it's about to take: research websites, find information on the basics of quantum computing, find research papers and articles on the potential of quantum computing to break cryptographic algorithms, etc. Then it's going to analyze the results and create a report, and it will be ready for me in a few minutes. If I go ahead and click Start Research, it's going to go through all of those steps, and we can literally watch it as it does all this research. It's researching 16 websites for me right now; now it's up to 45 websites. We can see all of the websites it's reading from, so it really goes deep: Perplexity might use four or five websites when it does research for you, and you can see Google is already up to 65 websites. After several minutes it came back with a really in-depth, detailed report all about quantum computing and its ability to crack Bitcoin's cryptography, and what's really cool is there's one button here to quickly open it in a Google Doc, so I can save the entire report it generated, along with all of its source websites, right inside my Google Doc to refer to later. Pretty dang cool.

This video is already getting long, and so far all I've talked about are the announcements Google made. As you've probably realized by now, OpenAI has been shipping like crazy as well. They've got their 12 Days of OpenAI announcements, and since I record these on Thursdays, we know what four of those announcements are this week, and by the time you're watching this video there's probably a fifth announcement that came out on Friday. But let's get into the four announcements OpenAI made this week, starting with the big one: on Monday they released Sora. After demoing it and showing it off nine months prior, they finally gave us a version of Sora.
The version that's out is actually called Sora Turbo, which lets you generate up to 20-second videos if you're on the Pro plan and, I believe, 10-second videos if you're on the Plus plan. And well, since OpenAI waited nine months between showing it to us and actually giving us access, a whole bunch of alternatives have popped up, and I don't think Sora made people as excited when it was released as OpenAI had anticipated. Not to mention that OpenAI basically crashed their servers when they opened up Sora, because everybody wanted in at the same time and it couldn't handle the load. They've since worked that out, and I believe people are able to get in again, but OpenAI's video generator is now finally available. I did a whole video where I tested some prompts; because it was so bogged down and taking so long for the videos to load, I basically had to cut that video shorter than I wanted to, and most of the prompts I had tested to that point were short prompts. What I've kind of learned since then is that Sora really likes long, detailed prompts. I couldn't get it to generate a wolf howling at the moon in my last demo, but after giving it a much more detailed prompt, I did manage to get a much better wolf-howling-at-the-moon video. It really struggles with people doing things like dancing or gymnastics, as you can see in these examples here, but for some reason it's pretty decent now at a monkey on roller skates. In the Sora video I released earlier this week, I went into more detail about some of the features Sora offers, like the ability to blend two videos together, as you can see here where I took a monkey on roller skates and blended it with clocks flying around. It's also got a cool storyboard feature where you can try to steer the video a little bit, but in my testing it wasn't steering in the direction I wanted; I still have to do some more testing. Again, if you want a deeper dive on Sora, check out the video called "You Can Now Use Sora, Here's How," where I dig into some of these features.

On day four of OpenAI's 12 days of announcements, they released canvas. Now, if you've had a Plus plan, you've had access to canvas already; now they've made canvas available to everybody, even on the free plan. They also added the ability to execute Python code inside of canvas. So if I jump into ChatGPT here, you can see there's a new icon that says "view tools," and it gives us the options for pictures, search, reason, and canvas. I can tell it to write a simple "hello world" in Python, and just by giving it that prompt you can see it will generate the code inside of this box. But if I switch to canvas and give it that same prompt, you'll notice it changes the user interface completely: it puts my chat over on the left and opens up a sort of code screen, and I also have a button that says Run so I can see the output. Down at the bottom, I can see in my console that it says "hello world." Now, obviously this is designed for much more complex code.
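Just for reference, the kind of thing canvas spits out for that prompt is only a few lines. Here's a minimal sketch of what that generated script might look like; the exact code ChatGPT writes will vary.

```python
# A simple "hello world" script, roughly what canvas generates for that prompt.
# Clicking Run executes it, and the console panel below prints the output.
def main():
    print("Hello, world!")

if __name__ == "__main__":
    main()
```

The point is less the code itself and more that canvas now runs it in place, so you don't have to copy it into your own environment to see the output.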
You can also use canvas for writing tasks and things like that. If I create a new chat, make sure I've got canvas selected, and say "write a poem about wolves on Christmas," you can see it opens up that same user interface with my prompts on the left and the poem it's writing on the right, along with some additional tools down in the bottom right, like suggest edits, adjust the length, adjust the reading level, add some final polish, or, if for whatever reason you want, add emojis as well. So again, canvas was already available if you were on a paid plan; now it's available for free, and it can also run Python. That was the day four update.

On day five, we learned that ChatGPT is now available inside Apple Intelligence. If you use Siri, you can turn on the ChatGPT extension inside of Siri and prompt Siri to use ChatGPT when asking it a question. Not only did it roll out on iPhones (and, well, you have to have an iPhone 16 or newer), it also rolled out on macOS. If you have a Mac computer, you can now use ChatGPT with Siri there as well: you can see they clicked the little Siri icon up in the top right of their Mac, and they can now type or talk to Siri, and apparently it can also see your screen. They were asking a question about this document, and it took a screenshot and shared the screen with ChatGPT using Siri on the actual Apple computer. Now, I feel like this was kind of a cheater announcement from OpenAI, because it was really more of an Apple announcement, but OpenAI made it part of their 12 days of announcements because on the same day the new iOS 18.2 rolled out to all of the Apple devices with Apple Intelligence, bringing the new Image Playground, Genmoji, writing tools, and seamless support for ChatGPT and Visual Intelligence. So it just seems like they timed their day five announcement to the same day they knew Apple was going to be releasing 18.2, making it kind of a joint company announcement. At this point, most people have probably already seen a lot of the new Apple Intelligence features, like Genmoji, where you can have it create emojis for you, like a DJ octopus or a rainbow cactus. It's got a new and improved Notes tool where you can use AI directly inside of your notes, plus new writing tools, the ability to use AI to help you with your text. We've pretty much seen all of these features from Apple already; they're just finally rolling them out in their latest 18.2 update.
And then finally, day six, which was Thursday, the day I actually record these videos: they announced that advanced voice mode now has video capabilities. This was a feature they showed off seven or eight months ago when they first demoed advanced voice mode; they showed it being able to take pictures, see what you were looking at, and have conversations with you about it. When advanced voice mode finally rolled out, this feature was absent. Now you've got it in your account. To access it, you can just use your ChatGPT account, go into voice mode by clicking the little white circle, and we can see some new options: we have the ability to turn on our camera, and we've got the three dots here, which let us upload a photo, take a photo, or actually share our screen from our phone. Let's go ahead and test it with some of the same types of things I was just testing Astra with. I'll turn on my camera here: "What am I looking at right now?" "It looks like you're showing your desk setup. I see a monitor, some sticky notes on the bottom of the screen, and a black smartphone on the desk. There's also a camera on top of the monitor. Anything specific you want me to focus on?" "Can you tell me where this photo was taken?" "That's a beautiful beach photo. It's hard to say exactly where it was taken, but it looks like a tropical or coastal location with those waves and rocks. Do you remember where you took it?" "I do. I wanted to see if you can figure it out by looking at it." "From the looks of it, it could be somewhere in Hawaii or maybe a rocky California beach. The landscape is definitely giving tropical vibes." So it got that right as well. Let's go ahead and open to a page of a book and see if it can do the same kind of thing we got out of Astra, same exact book: "Can you tell me what this page of this book is about?" "The book appears to be discussing negotiation tactics. The title Never Split the Difference suggests it's about negotiation strategies, and the page mentions keeping emotions in check during negotiations. It's likely from a chapter about managing emotions and staying calm to improve negotiation outcomes." "Who's the author of this book?" "The book Never Split the Difference is written by Chris Voss. He's a former FBI hostage negotiator, and the book shares his negotiation techniques for business and everyday life." "Can you describe my background right now?" "Your background looks like a neat office or studio setup. There's a black shelf or cabinet behind you with some framed items and a few decorative pieces. It looks professional and tidy, perfect for video calls or recordings." So I'll need to actually take that out into the wild and walk around with it like I did with Project Astra to really see how it compares, but on first impressions and first tests, it actually seems pretty on par with what I was able to do in the Project Astra demo. So maybe that's a future video: I'll just walk around with both of them out in the real world and see which one gives me the better responses. OpenAI also had a somewhat gimmicky rollout today as well: you can actually talk to Santa Claus inside of your OpenAI app. "How many houses do you have to visit this year?" "Every year I visit millions and millions of houses all around the world on Christmas Eve. It's a magical night filled with joy and excitement as I deliver presents to children who've been good all year." "What if I've been really naughty?"
"Well, everyone has their moments. Being a little naughty from time to time is normal. The important thing is to learn from it and try to be kind and thoughtful. Remember, Christmas is a time for forgiveness and spreading joy." So there you go: if you've always wanted to have a chat with Santa Claus, well, you can do it with ChatGPT now too.

All right, so that's probably all the biggest news of the week; most of it came from Google and OpenAI, but there's a handful of smaller items I think are important to touch on, just to make sure you're looped in on all of the progress AI is making. If you use Anthropic and you're a fan of Claude, well, they now have a new 3.5 Haiku model, which is their smaller model that's a little bit cheaper and a little bit faster to use. If you head over to Claude, you have this dropdown to change models: you've got Claude 3.5 Sonnet, Claude 3 Opus, and then under More Models you've got Claude 3.5 Haiku.
Before, this was just Claude 3 Haiku, so it's a slightly improved Haiku model, which is their smaller model, and Anthropic hasn't even made any announcements about it. I expect Anthropic will probably do a detailed write-up with benchmarks and things like that pretty soon, but they haven't yet.

If you use Grok, it actually got a new image generator this week as well. Before, Grok was using the Flux 1.1 Pro model, which was really good and really photorealistic, but likely costing X quite a bit of money in API credits. So now X and Grok have rolled out their own model: originally it was called Aurora, and now they just call it Grok image generation. If you head over to your X account and click into Grok, you can prompt it to create an image for you: "create an image of a wolf howling at the moon." One thing you'll notice as it creates the image is that it sort of scrolls in from the top down. That's because it's not actually a diffusion model; it's not using the same technology that tools like Midjourney, Flux, Stable Diffusion, and the various other AI image generators were using. This is a sort of new concept, a new way of generating AI images: instead of a diffusion model, it is an autoregressive mixture-of-experts network trained to predict the next token from interleaved text and image data (there's a rough sketch of what that means below). And the images look pretty good. The realism isn't the same as Flux, which was much more realistic, but aesthetically they're pretty pleasing: the colors are good, all of the anatomy on the wolf looks right, not a lot to complain about here. It also has multimodal input, allowing it to take inspiration from or directly edit user-provided images, but in my testing it hasn't been able to take images and modify them; apparently it's capable of that, but it doesn't seem to work quite yet.
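To make "autoregressive" a little more concrete: instead of starting from noise and denoising the whole picture at once like a diffusion model, an autoregressive generator decodes the image as a sequence of discrete tokens, one after another, which is why the result appears to render in from the top down. Here's a purely conceptual sketch of that loop; the model object and its predict_next and decode_to_pixels methods are hypothetical stand-ins, not Grok's or Aurora's actual code.

```python
# Conceptual illustration of autoregressive image generation (not Grok/Aurora internals).
# The prompt is tokenized, then image tokens are predicted one at a time and finally
# decoded back into pixels, so the picture fills in roughly row by row.
def generate_image(model, prompt_tokens, grid=(32, 32)):
    sequence = list(prompt_tokens)                 # interleaved text tokens come first
    image_tokens = []
    for _ in range(grid[0] * grid[1]):             # one discrete token per image patch
        next_token = model.predict_next(sequence)  # hypothetical next-token call
        image_tokens.append(next_token)
        sequence.append(next_token)                # condition on everything generated so far
    return model.decode_to_pixels(image_tokens, grid)  # hypothetical VQ-style decoder
```

A diffusion model, by contrast, refines the entire image over a series of denoising steps rather than committing to it token by token.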
Since we're on the topic of AI art, Midjourney got a little bit of an update this week with a new world-building tool called Patchwork. Patchwork is basically a giant canvas where you can generate images, put them on the canvas, and collaborate with other people who can also see the images on that canvas. We can see some examples here of images, comments, and text, so people can collaboratively storyboard things together directly inside of Midjourney. If you want to try it out, you can go over to patchwork.midjourney.com, and this is what it looks like. Let's say I want to generate an image inside of this canvas: I can click on this little image link, click somewhere on my board, type "a half wolf, half man," and click paint, and it's going to use Midjourney to create some variations of that half-wolf, half-man image. Let's say I like this one here; I can give it a character name and sort of organize it with some notes. It's just a way to lay stuff out, organize it, add notes, and then give other people access through the little share button, and then multiple people can collaborate on this giant canvas.

Adobe rolled out a new feature that lets you take reflections off of images. If you take a picture of something through a glass window and you've got that glass reflection on it, it uses AI to remove the reflection so it looks like you took the picture on the other side of the glass. Right now this only works with RAW images; you'll be able to use JPEGs and HEICs later, and it looks like it's in Adobe Photoshop and Adobe Bridge. This is something I really want to see in video, because a lot of times I'll shoot video through a window and I want to remove the reflection there, so hopefully they figure out this technology for video, because that would excite me way more than photos.

YouTube's new dubbing feature has rolled out more widely. This is really powerful, because now any video in any language can be translated and dubbed into any other language, opening up the possibility for anybody's YouTube videos to reach a much wider audience than before.

This week, Cognition Labs finally released Devin, their AI coding assistant. This is something that was shown off months and months ago, got people really excited, and then didn't get released for a while, and when they finally did release it, they priced it at $500 per month. I guess if it could really do the same things as a junior developer, then some people might find $500 per month worth it. I'll probably pay for it for one month to test it out and make some videos about it, and then I don't think I'd stick with it; I don't code enough, and I already pay for Cursor, which works really well for me at $20 a month, so I have a hard time getting behind Devin's pricing. But supposedly it's a lot better at seeing your entire codebase, and it's apparently a lot more hands-off than tools like Cursor and Windsurf and some of those other tools. I don't know, we'll see; I haven't tested it yet, but we'll put it through its paces and see what it's capable of in a future video. Unfortunately, it already ran into an issue: I think this guy's name is ThePrimeagen (I'm not sure if I'm pronouncing that correctly), but he was on a stream playing around with Cognition's Devin and surfaced a huge security concern. I don't know exactly what the concern was, because he stopped the stream and removed the video on demand so other people couldn't exploit the issue, but apparently there was some sort of big problem that would have been really easy to exploit in Devin, which isn't great if you're paying 500 bucks a month for it. I'm sure whatever it was is patched up by now, though; hopefully we've got nothing to worry about.

I actually thought this was pretty funny: an AI company put up signs in San Francisco that say "Artisans won't complain about work-life balance" and "the era of the AI employee is here." So they're actually putting up ads around San Francisco encouraging people to hire AIs because they won't complain.
I mean, the ad got attention; it got videos like mine talking about it.

I want to quickly talk about the world of virtual reality and augmented reality, because there have been some new updates in that world this week that are pretty exciting to me as well, including the fact that if you have a Meta Quest and a Windows PC, you can now connect them together, and just like the Apple Vision Pro with Mac computers, you can have a giant workspace in front of you on your Meta Quest. I haven't tested this yet (I do have a Meta Quest 3 and I'm excited to try it), but you can sync up your Meta Quest with your PC and have monitors all around you in this sort of virtual workspace. And Google is getting into the XR game on a much deeper level as well, because this week they showed off Android XR. Here's an example of somebody watching an immersive experience in VR inside this new Android XR headset and actually being able to look around; this is an XR experience directly available on YouTube. Here's an example of somebody using Circle to Search directly in XR, basically having this virtual desktop around them and circling things they're seeing on their virtual desktop to shop for them. Here's somebody looking through their Google Photos directly in virtual or augmented reality, flipping through their photos and even adding depth to them, similar to what you get with the volumetric pictures on Apple Vision Pro. And here's somebody watching Google TV directly inside this Android augmented reality experience, where you can just have a big floating screen in front of you. So basically, it looks like Android XR is trying to replicate pretty much what you get from the Apple Vision Pro, and they're working with Samsung and Qualcomm to do it: Qualcomm will likely be developing the processing chips that go into these devices, and Samsung will likely be developing the devices themselves that you actually wear, and they look to go head-to-head with the Apple Vision Pro and the Meta Quest. I'm excited to see how that plays out, because that's another area I'm really interested in; I go to all of the Augmented World Expos and keep close tabs on what's going on in the world of XR, so this is something that's really exciting to me.

And then finally, let's end with robots. Here's a video of a Tesla robot learning to walk down hills: in the first video it's kind of slipping down the hill, in the second video it got a little bit better, and in the third video it was able to easily and pretty effortlessly walk up and down bigger hills. These humanoid robots are getting better and better at traversing the world we live in, and I'm always excited to see that. I always like to end my videos with robots, because robots are fun. And that's what I've got for you today.

One little bit of housekeeping I wanted to share: I am going to start doing live streams. I've been getting a lot of requests on X to start streaming so I can show off tools like what I was showing off on my phone, and so we can test things like Sora in real time. I get early access to a lot of tools because companies want to give them to me to demo early, and I just thought it would be fun to start doing a fairly regular live stream to test out some of the tools, talk about some of the news, do an AMA, and chat with the community. So, starting on Monday, December 16th, at 11:00 a.m. Pacific,
I'm going to go live on this channel, and the plan is to make that a regular thing, going live every Monday at around 11:00 a.m. Pacific
to talk about whatever tools are out that week, put them through their paces, take requests from the chat on what I should try, and play with these tools in real time. You can see how they work and how fast they are; I won't be able to speed up the editing or anything like that. We'll be demoing this stuff live, taking your suggestions, trying to break the tools, and figuring out what they're good at and what they're bad at. You can ask me anything, ask me about a tool to solve problems in your specific business, and things like that. I just really think it would be fun to get in the habit of doing some live streams and nerding out about AI in a live environment. If you're interested, I'll link you to the reminder page where you can be reminded of the live stream; I'd love to have as many people as possible joining me and having fun nerding out about AI live with me.

But that's what I've got for you today. I hope you enjoyed this video; if you did, I'd love a like and a subscribe, and maybe even the little bell notification thing so you see my future videos. I'll make sure more cool AI news, tutorials, and the various live streams show up in your YouTube feed, and it makes me feel good and really helps out my channel if you do. Finally, if you haven't already, check out futuretools.io, where I curate all the cool AI tools I come across; these are a lot of the tools we'll be playing with on the live stream. I keep the AI news page up to date pretty much daily, and I talk about way less news in these videos than what I share there, because every week there's just way too much news to squeeze in. If you want to stay looped in on all of the latest AI news, you can check out the AI News tab and join the free newsletter, and I'll email you just the most important news you need to know and the coolest tools I come across, and it's totally free. You can find it over at futuretools.io.