This week has been one of the craziest weeks in the world of AI in a long time, and I know I sound like a broken record because I say that every week, but this week there was legitimately a ton of news coming from places like Anthropic, OpenAI, a whole bunch of image generators, a whole bunch of video generators, new robots, all sorts of really cool stuff. I don't want to waste your time, so let's get straight into it. I'll do my best to rapidly fly through all this news and make sure you're fully looped in on the biggest events happening in the world of AI right now.

Starting with the biggest news of the week, in my opinion, coming out of Anthropic's Claude: this week they gave us the ability to let Claude take over our computer and actually use the tools on it. I'm not going to go super deep into this, because I did an entire breakdown as well as a step-by-step tutorial on how to set it up yourself, so if you want a really deep dive, check out that video. But here's a quick example from one of their demo videos. They have a spreadsheet and a vendor form, and they give it the prompt: "Please fill out the vendor request form for Ant Equipment Company using data from either the vendor spreadsheet or search portal tabs in window 1. List and verify each field as you complete the form in window 2." Claude takes screenshots of whatever is going on on the desktop to orient itself, and then, based on each screenshot, it takes an action. You can see it searching the vendor database, finding the vendor's info, and then automatically pulling that data into the form on the right side. So here's what it's doing: you give it a command; it screenshots your desktop so it knows where to move the mouse or what step needs to be done; it does that step; then it screenshots your desktop again, both to confirm the step was completed correctly and to figure out what the next step is. It keeps repeating that do-a-task, take-a-screenshot cycle until it completes the action you originally prompted it to take. Again, I did an entire breakdown with a step-by-step tutorial on how to set this up yourself, so make sure you check out that video if you haven't already.
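If you're curious what that screenshot-act-screenshot loop looks like in code, here's a rough Python sketch against Anthropic's computer-use beta. The model name, tool type, and beta flag match what Anthropic documented at launch, but treat them as a snapshot of the beta; take_screenshot() and perform_action() are hypothetical helpers you'd implement yourself with something like pyautogui.

```python
# Sketch of the agent loop: prompt -> Claude requests an action ->
# we execute it -> we send back a fresh screenshot -> repeat.
import anthropic

client = anthropic.Anthropic()

def take_screenshot() -> str:
    """Return the current desktop as a base64-encoded PNG (stub)."""
    raise NotImplementedError

def perform_action(action: dict) -> None:
    """Execute a click/type/keypress on the real desktop (stub)."""
    raise NotImplementedError

messages = [{"role": "user", "content": "Fill out the vendor request form..."}]

while True:
    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        tools=[{
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        }],
        messages=messages,
        betas=["computer-use-2024-10-22"],
    )
    if response.stop_reason != "tool_use":
        break  # Claude considers the originally prompted task complete

    messages.append({"role": "assistant", "content": response.content})
    results = []
    for block in response.content:
        if block.type == "tool_use":
            # block.input is e.g. {"action": "screenshot"} or
            # {"action": "left_click", "coordinate": [x, y]}
            perform_action(block.input)
            results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": [{"type": "image", "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    # the new screenshot lets Claude confirm the step
                    # and plan the next one
                    "data": take_screenshot(),
                }}],
            })
    messages.append({"role": "user", "content": results})
```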
But that wasn't the only news out of Anthropic this week: there was also a new Claude 3.5 Sonnet model and a new Claude 3.5 Haiku model. We can see in the various benchmarks that they outperform the old versions of Claude 3.5 Sonnet, and in most cases the new Sonnet even outperforms GPT-4o.

And Anthropic still wasn't done; they also rolled out the analysis tool in Claude. Here's an example from their website: "Please visualize the sales funnel progression from awareness to purchase with a bar graph so I can identify conversion bottlenecks," along with an uploaded CSV file of sales funnel data. Claude then writes some JavaScript to do the analysis and create the visualization. If you already have a Claude account, to turn this feature on, click your name in the bottom corner, click the little "feature preview" button, and you'll see the analysis tool is currently off; flip it on. LaTeX rendering, which lets Claude render mathematical equations, is off by default too, so let's turn that one on as well, and then we can close out of this — we now have that functionality. I just created some dummy data over on Mockaroo: a CSV file with a name, an email, a gender, and an IP address. We'll use this data as a starting point for a quick test. I'll go to Claude, pull in our CSV file, and tell it to analyze the data and create a pie chart based on gender. I'm hoping this uses the new JavaScript feature, so let's find out. Yep — we can see it's analyzing the data and writing some code for the analysis: "Now that we have the data, I'll create the pie chart." It writes the code for our pie chart, and once it's finished processing, we can see our graph: 48.8% female, 42.4% male.
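For reference, here's roughly what that generated analysis boils down to. This is not Claude's actual output — the analysis tool writes JavaScript that runs in your browser — just a minimal Python equivalent of the same pie-chart analysis, assuming a CSV with a gender column like my dummy data had (the filename and column names here are assumptions).

```python
# Minimal local equivalent of the analysis Claude generated: count the
# "gender" column of the CSV and plot the shares as a pie chart.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("dummy_data.csv")  # columns: name, email, gender, ip_address
shares = df["gender"].value_counts(normalize=True) * 100

shares.plot.pie(autopct="%.1f%%", ylabel="")  # e.g. 48.8% Female, 42.4% Male
plt.title("Gender breakdown")
plt.savefig("gender_pie.png")
```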
Pretty cool. All I did was give it one short prompt and the CSV file, and it processed everything and gave me a really nice-looking visual. That's obviously just scratching the surface of what you could do with this new JavaScript analytics feature, but I wanted to run a quick test on video to see how it worked for myself.

As Anthropic inches closer and closer to true AI agents that can do our work for us, Microsoft claims they are now unlocking autonomous agent capabilities inside Microsoft Copilot Studio. There's a lot of fluff in Microsoft's article, but scrolling down, we can see the new capabilities coming to Copilot Studio, such as autonomous triggers: agents can automatically respond to signals across your business and initiate tasks; they can be configured to react to events or triggers, without human input, that originate from various tools, systems, or databases, or even be scheduled to run hourly, daily, weekly, or monthly. Each business process could take different paths, since agents create dynamic plans on the fly to handle and complete tasks, and users can view the underlying logic for each agent's path, including key details, steps, and systems involved. It's using the newest models, like OpenAI's o1, to help run the autonomous agents, and it looks like they'll actually be demoing them at the upcoming Microsoft Ignite event.

Meta this week showed off a whole batch of new AI research, including Segment Anything 2.1, Spirit LM, Layer Skip, SALSA, Lingua, Open Materials 2024, MEXMA, and Self-Taught Evaluator. I'm not going to get into the weeds on each of these, but I do want to hone in on Spirit LM. Spirit LM is a language model that can take both text and audio as input and give both text and audio back as output. In the examples on their demo page, they give it a text prompt, "one, two, three, four, five," and the model responds with audio: "six, seven, eight, nine." Or it starts with an audio prompt, "a, b, c, d, e," and gives back text: "f, g, h, i, j, k, l..." They gave it the text prompt "The largest country in the world is Russia; our country has about 150 million inhabitants," and another audio prompt, "Yellowstone National Park is an American national park located in the...," and you can see it went on to continue in written text. They do note that these are cherry-picked samples, so not everything out of this model will be as good as what we're hearing and seeing here. Those were from the Spirit LM base model, but there's also a Spirit LM expressive model. The prompt there is an excited voice saying, "I am absolutely thrilled to be embarking on this new journey; it's going to be an incredible adventure and I'm so excited to have you...," and the continuation comes back in that same fast-paced, excited energy (something about being here for twelve weeks with tons of incredible guests on the show). The audio quality isn't great, but the emotion carries through. Another example starts with the audio prompt "No, I think you're thinking about Days of Thunder, which is a Tom Cruise racing movie," and the model continues with a text response. So that's some pretty interesting research out of Meta. I'm not 100% sure what the actual use case is, but I'm just trying to keep up with the latest research, and I'm sure people in the comments will let me know plenty of ideas and ways this research could be used once it's actually available to us.
Meta also introduced their quantized Llama models, which are much smaller models designed to run on mobile devices. I'm going to really oversimplify this, but a quantized model is kind of like a compressed model: you take a large model with tons of data and strip out some of the precision it finds less necessary, shrinking the model's overall size. A better analogy I've heard: imagine a giant box of crayons with 40 shades of blue in it. A quantized model takes just a portion of those crayons and puts them in a new, smaller box, so instead of 40 shades of blue you now have 10. Blue is still in there, and you can still get blue into the pictures you color; you just don't have as wide a variety of shades. That's quantization: you remove a lot of roughly redundant detail in a way that, ideally, still lets the model reach the same end result. Meta did exactly that with some of their Llama models to help them run better on smaller devices like mobile phones.
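To make the crayon analogy concrete, here's a toy illustration of the trade-off: symmetric 8-bit quantization of a float32 weight matrix. The schemes Meta actually used for these models are far more sophisticated than this, but the basic size-for-precision trade is the same.

```python
# Toy "fewer crayons" demo: squeeze float32 weights into int8.
import numpy as np

weights = np.random.randn(4096, 4096).astype(np.float32)  # ~64 MB of fp32

scale = np.abs(weights).max() / 127.0            # map the value range onto int8
q = np.round(weights / scale).astype(np.int8)    # ~16 MB: 4x smaller on disk

dequantized = q.astype(np.float32) * scale       # what inference computes with
print("max round-trip error:", np.abs(weights - dequantized).max())
```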
Today's video is sponsored by Opus Clip. If you're not familiar, Opus Clip is a really impressive repurposing tool: you feed it long-form videos, it finds the clips with the most potential to go viral, and it converts them into shorts, with templates, text, captions, the whole works. They also just released a new Clip Anything feature, which lets it find any specific moment in your videos to clip. It skims the whole video and uses AI to tag everything going on in it, and all of those elements become searchable — people, objects, actions, emotions — and you can have it generate a short-form clip with the potential to go viral. When it creates these clips, it gives each one a score based on the hook, the flow, the value, and the trend. The top clip it found here is 57 seconds long and scored 99 out of 100, meaning it's very likely to do well on places like YouTube Shorts and Instagram Reels. Check out how easy this is: I can drop in a Zoom link, a Twitch link, a LinkedIn link, a YouTube link, whatever I want, or upload a video directly. I'll grab the link from my latest YouTube video and paste it into Opus Clip. Down here I select the clip length — I'm going with 30 to 59 seconds, since I like keeping them under a minute — we'll let the AI detect the style of content, and then we can give it any prompt we want. My video included news about the latest SpaceX launch, so let's use the prompt "find portions that include a rocket ship" and get clips. After a few minutes of processing, it found the clip from my recent video where I showed off a rocket ship, put subtitles on it with little finger arrows and everything, and gave it a pretty high score. The other thing I think is really cool: this was originally a horizontal video where I was off in the right corner showing something on my screen. Notice how it reframed it, putting my head at the bottom of the video and what I was showing on screen at the top — all that reframing happened automatically, which is really cool and really handy. So check it out at opus.pro (link in the description), and thank you so much to Opus Clip for sponsoring this video.

IBM, who I'm planning to make more videos about in the near future, just released some new large language models called Granite 3.0. IBM says the new Granite models are designed as enterprise workhorses for tasks such as retrieval-augmented generation, classification, summarization, agent training, entity extraction, and tool use, and that they can be trained with enterprise data to deliver the task-specific performance of much larger models at up to 60x lower cost. So they really seem to be aiming these models at internal enterprise use, where you can train them specifically on your company's data and get exactly what you need faster and cheaper than with something like Google's models. They also released them under the Apache license, meaning others can use the models, build on them, and iterate on them.

xAI, the creator of the Grok large language model, just launched an API, so developers can now use Grok inside their own programs. We can probably expect the tools that integrate with all the big language models to add Grok as an option, and we can probably also expect some tools that are a little off-the-wall and uncensored, because Grok is pretty dang uncensored.
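If you want to kick the tires on that API, it speaks the same chat-completions dialect as OpenAI's, so the OpenAI SDK pointed at xAI's endpoint works. A minimal sketch — the "grok-beta" model name is the one xAI listed at launch, so check their docs if it has since changed:

```python
# Call the Grok API through the OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_XAI_API_KEY",        # from the xAI console
    base_url="https://api.x.ai/v1",    # xAI's OpenAI-compatible endpoint
)

resp = client.chat.completions.create(
    model="grok-beta",                 # launch-era model name; verify in docs
    messages=[{"role": "user",
               "content": "Summarize this week's AI news in one sentence."}],
)
print(resp.choices[0].message.content)
```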
We did get a little news out of OpenAI this week: if you're a Plus user in the EU, Switzerland, Iceland, Norway, or Liechtenstein, you now have access to the new Advanced Voice Mode. But the bigger OpenAI news came from somebody who just left the company. OpenAI's senior adviser for AGI readiness, Miles Brundage, is leaving, and he wrote a blog post on the way out basically saying he doesn't think the world is ready for AGI. In the post he also commented that he doesn't believe what the research labs have internally is a whole lot better than what the general public has access to. In other words, we're more or less seeing the best OpenAI has right now; maybe they're keeping some models under wraps, but the gap between what we have and what they're holding back isn't that big.

Okay, let's talk about AI video, because there were a lot of announcements in this space this week, starting with Runway's announcement of Act-One. This is something we don't have access to yet, though they claim it's rolling out right now — I've been refreshing my Runway account like crazy and I still don't have it, so it's definitely not fully rolled out. What Act-One does is take the facial expressions, the emotions, and the words coming out of your mouth and sync them up with an animated character. Check out this little 20-second demo: "So let me get this straight: you came all the way down to the Department of Motor Vehicles and didn't bring your driver's license? Do I understand that correctly?" Here are some more demos, which I'm playing on mute because I don't know the copyright status of the music, but you can see these are just cartoon characters with the lips and the emotions synced: as the actors move their heads, as they show excitement or anger or fear or sadness, that expression comes across on the cartoon character, and the lips are synced to the person talking. This is really, really cool. Runway's Act-One page, which I'll link in the description, has all sorts of demos of what this is capable of, so you can take a peek at the other examples; fingers crossed we get it in our hands soon and can make our own videos with it.

Moving on to video generators we do have access to right now: let's talk about Genmo and Mochi 1. Mochi 1 is another open-source video generator, so if you have a strong enough GPU you can download the files and run it locally. And since it's open source, people are going to tweak it, iterate on it, and mess with it, so we're probably going to see some pretty uncensored video models in the near future — whether that's good or bad is yet to be determined. If you want to play with Mochi 1 yourself, you can.
The site fal.ai has it available — the same site I showed in previous demonstrations for setting up Flux.1. There is a slight cost; I have some credits in my fal.ai account. Under Explore, click on text-to-video, and you can see Mochi 1 is one of the options. Once inside, you can enter a prompt, give it a random seed, and enable prompt expansion, which I'm guessing just fleshes out whatever prompt you plug in. I ran my normal test of a wolf howling at the moon, and this was the video it generated: it kind of looks like a wolf sort of merging into itself and then walking toward the camera. It's not bad, but I wouldn't put it at the same level of quality we've seen from some of the other video generators yet. It was really fast, though — I believe it took under a minute to generate — and it costs 40 cents per video, so because I ran this little clip on fal.ai's servers, they charged me 40 cents. I also tested my other favorite prompt, a monkey on roller skates; once again it cost me 40 cents, and here's the result: we definitely see a monkey, and we definitely see roller skates. Whether that's a monkey on roller skates or a monkey playing with a roller skate, I don't know, but that's what it generated.
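You can also skip the web UI and hit the same model from code with fal's Python client. A sketch with caveats: fal_client.subscribe is the client's documented call, but the exact Mochi endpoint id, argument names, and response shape here are my guesses from the UI, so verify them against fal.ai's model gallery before relying on this.

```python
# Generate a Mochi 1 clip through fal.ai's Python client (pip install fal-client).
import fal_client

result = fal_client.subscribe(
    "fal-ai/mochi-v1",                   # assumed endpoint id
    arguments={
        "prompt": "a wolf howling at the moon",
        "enable_prompt_expansion": True,  # same toggle as the UI (name assumed)
    },
)
print(result["video"]["url"])             # response shape assumed
```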
Next up we have Haiper 2.0, another AI video generation model. You can see their demo over on their X account, and most likely these are cherry-picked videos — if you're making a demo reel or a commercial to show what your video model is capable of, you're going to cherry-pick your best results — but these are some of the videos that have come out of Haiper (that's H-A-I-P-E-R), and they all look pretty good. You can actually generate videos with this one for free right now over at haiper.ai. They give you 300 credits when you start; I don't believe those refresh monthly or daily — I think you just get 300 credits to play with, and once they're gone you either stop using it or buy more. A video costs roughly 30 credits, so that's about 10 videos to experiment with. Before my own results, here are some of the demos on their front page of people doing various dances; they look pretty decent — not the best AI video we've ever seen, but the model seems good at these little dance clips. Here are some cartoons it generated: a cat in a bathrobe, a cat carrying a suitcase, a flying tiger-dragon thing, a cat with a saxophone. They look pretty good, and I'm impressed with what it can do, but again, it's fairly on par with most of the other video models we've seen so far. Haiper offers both text-to-video and image-to-video, and here's what I managed to generate. For my wolf-howling-at-the-moon prompt, when I blow this one up, it looks a bit more like an animation of a wolf howling at the moon than the one I got out of Mochi. The moon moves a little more than I'd like, but otherwise the wolf looks pretty good. I also asked for a monkey on roller skates, and honestly, this is one of the better monkey-on-roller-skates videos I've gotten. You can't actually see the skates, but the people around him are clearly skating and he's got something strapped to his feet — I think you can tell it's a monkey roller skating. I don't know what that weird artifact is that appears and disappears on the left; that's kind of funky, but overall not bad at all.

All right, if you thought a lot was happening in AI video this week, wait until you see everything that happened in AI image generation, because that area was even crazier. Let's start with the fact that we got Stable Diffusion 3.5.
Ever since Black Forest Labs came out with Flux, Stability AI has been pretty quiet; we haven't seen much from them. Their previous Stable Diffusion model went viral for all the wrong reasons — it generated bodies that were just legs, a torso, and then more legs, or three heads; it was nightmare fuel. They seem to have sorted that out, because this image shows you can now prompt "a woman lying in the grass," which the last model absolutely could not handle. Stable Diffusion 3.5 is open source — free for both commercial and non-commercial use — and it runs on consumer hardware, so you can download it and run it on your own computer using something like ComfyUI or possibly Automatic1111. Two models were just released:
Stable Diffusion 3.5 Large, with 8 billion parameters, and Stable Diffusion 3.5 Large Turbo, a distilled version of 3.5 Large that is considerably faster. But since the Large model generates images at 1-megapixel resolution, you're definitely going to get better quality out of Large than Turbo, so you're essentially choosing between quality and speed — and even the fast model is supposedly still pretty decent. If you want to try them, you can play with both for free right now on Hugging Face: there's a Space for both Stable Diffusion 3.5 Large and 3.5 Large Turbo.
You can see here we've got an image of a capybara wearing a suit, holding a sign that reads "hello world," and as far as prompt adherence goes, it nailed all of those elements. Let me try one of the long prompts I've tested in the past and see how many of the elements it actually gets — my old go-to of a three-headed dragon wearing cowboy boots, watching TV, and eating nachos. Okay, that took about 40 seconds. We've got the dragon — though I'd say that's only a one-headed dragon; it maybe gave it three legs instead of three heads — plus the nachos, the boots, and the TV. So it got most of it; it missed the three-headed part, but not bad — better than it used to do. Now let's try a penguin in the snow holding a sign that says "Subscribe to Matt Wolfe," to see how it does with text. All right, not bad — it got all the text in there. Did it spell everything right? Yeah, it looks like it even spelled everything correctly. Pretty good. And again, you can download Stable Diffusion 3.5 and run it on your own computer: just go to github.com/Stability-AI/sd3.5, where you've got the download options and install instructions, and you should be good to go. It also looks like it works with ComfyUI, so if you know how to use ComfyUI, you can use SD 3.5 there as well.
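If you'd rather script it than use ComfyUI, the weights are also on Hugging Face and work with the diffusers library. A minimal sketch — you'll need to accept the model license on Hugging Face and log in first, and the Large model wants a fairly beefy GPU:

```python
# Generate an image locally with Stable Diffusion 3.5 Large via diffusers.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    torch_dtype=torch.bfloat16,  # halves memory vs. fp32
).to("cuda")

image = pipe(
    "a three-headed dragon wearing cowboy boots, watching TV and eating nachos",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("dragon.png")
```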
Ideogram also rolled out some new features this week, including Canvas, Magic Fill, and Extend. If I log into my Ideogram account, over on the left you can see a new icon that says Canvas. Clicking into Canvas brings up a whole new UI, so let's prompt an image: a wolf howling at the moon. I'm going to turn Magic Prompt on — it adds a little extra detail to your prompt to make the result look even better — and generate. There we go: four images of a wolf howling at the moon on this giant canvas, where I can scroll up and down and move around, and if I hold Ctrl and scroll, I can zoom way in or way out on the images, just like something you'd get out of, say, Figma. Let's say I like this one; I'll pull it out of the group and bring it down on the canvas a bit. Now let's extend it: I'll click the Extend button on the left and extend it downward. I can move and change however I want it extended, but I think I want it to go about there. I'm turning Magic Prompt off at this point so it leans on the image itself as reference, leaving my same prompt of a wolf howling at the moon, and clicking Extend. After a few seconds we've got our extended image — it filled in the rest of the wolf and more of the ground. But you know what, I think I want a UFO in space, so let's select Magic Fill. I can select areas with a rectangle, a brush, or a lasso tool. I'll reset all that, use the lasso tool to draw a big circle, click Next, give it the prompt "an alien spaceship," and click Magic Fill — and just as if we were using Photoshop's Generative Fill, we now have a spaceship flying by in the background. I'll do Magic Fill one more time with the rectangle, hover over the wolf's feet, and try a prompt to give the wolf cowboy boots. Okay, it put them on the front feet only — not bad, not bad. If I zoom out so we can see more of the canvas, here's the wolf image we just generated, and if I grab it and move it out of the way, you can see each iteration: the original wolf, the wolf we extended, the wolf with the UFO, and the wolf with cowboy boots. I can see each bit of progress, and that's what's really cool about Canvas: we can just keep experimenting with ideas. Say I don't like this version of the cowboy boots — I can always go back to an earlier image and edit from there while ignoring this one. There's also a Remix feature: if I click Remix, type "an astronaut on the moon," and click Remix, we get an image very close to the style of our wolf image — very similar colors, very similar moon — but now with an astronaut standing on the moon, somehow with the moon also in the background. You get the idea: it matches the style of the original image.
So some pretty cool new features out of Ideogram, but I'm not done yet — we also had Midjourney this week. Midjourney says, "We're testing two new features today: our image editor for uploaded images and image re-texturing for exploring materials, surfacing, and lighting." These are similar to the features we got out of Ideogram this week, but one cool thing Midjourney can now do that we've never been able to do before is use our own uploaded images. If I come over to Edit in the left sidebar, I can upload an image or edit from a URL. I'll click Edit Uploaded Image and pick an image of myself. With this image, let's mask out my head, give it the prompt "wearing a baseball cap," and click Submit Edit — and now I've got four images that look like I'm wearing a baseball cap. Let's select a big chunk over here and prompt "a fire-breathing dragon." There we go: I got a dragon that's more made out of fire than actually breathing fire, but it's pretty cool that Midjourney lets you upload your own images and then put Midjourney-generated assets inside them. I'm going to try one more thing I saw in their demo: I'll make my brush huge and erase almost everything except my head, so my head is just floating there, then give it the prompt "a man holding a self-portrait" and see what it does with that. All right, there's one variation that gave me a different outfit, and here's another, and another, and another. Not bad — the self-portraits don't really look like me, but I'm actually pretty impressed with some of these images. They also added the new Retexture feature. This is sort of like ControlNet, if you've ever used ControlNets with Stable Diffusion: it maintains the structure of the image but gives it a whole new style. Let's start with the one of me holding up my self-portrait, give it the prompt "a colorful psychedelic world," and submit the retexture. Just like that, it restyled the image into several variations; I think I like these two the best, but you can see it followed the same structure as the original image and just put a whole new style over the top.

Canva got some new AI image generation this week as well, and it actually uses the new Leonardo AI Phoenix model. Full disclosure: I'm an adviser for Leonardo, so keep that in mind, but I tend to talk about both the good and the bad with every company I cover, adviser or not. Let's jump into Canva: when you're logged in, there's a new button called Dream Lab. If I click on Dream Lab, I can give it a prompt — let's go with my old standby. I can leave it on Smart and let it figure the style out for itself, but there are also Cinematic, Creative, Macro, Illustration, Fashion, and so on. I'm going with Cinematic; I've always liked how Cinematic looks in the Leonardo tool. Click Create, and I've got four images of a wolf howling at the moon, and they all look pretty dang good.

This is actually news from last week, but I forgot to mention it, and since I'm talking about AI tools, I figured I'd cover it now: Playground AI released Playground v3, a new model focused on graphic design. It seems like Playground is going in a different direction from a lot of the other AI generators and niching down more for graphic designers.
If we jump over to Playground, we can see all the various categories they offer, and you'll notice the main ones toward the top are logo, t-shirt, social media post, stickers — things that lean more toward graphic designers than generating any random piece of art. Let's do a logo and pick a template we like; I'll use this one that says "fish" on it, tell it to change the text to "Wolf" and incorporate an image of a wolf, and hit Create. And there you go: it created an image with the new text, and in the "o" there's a little wolf head. Pretty cool.

In the final bit of image news, OpenAI showed off new research on their consistency models, which look like they do pretty much the same thing as a diffusion model, just way faster: one example here took 6.23 seconds to generate, while this one did it in milliseconds. We can see some really realistic images generated with this research, but it's also something we don't have access to, so we can't actually play around with these models yet. I don't know if this will eventually be part of DALL-E 3 or become a whole new image generation model from OpenAI, but the outputs are pretty realistic.

All right, let's talk about AI audio news this week. ElevenLabs introduced Voice Design, the ability to create voices from a text prompt. Here's one of their examples — "a large yeti with a deep, rumbling voice" was the prompt, and this is what it sounds like: "I have guarded these sacred peaks since before your ancestors first dreamed of climbing them." If you want to generate your own voices, head over to ElevenLabs, click Voices in the top left, click Add New Voice, and you'll see the new option for Voice Design: design an entirely new voice from a text prompt. I selected that and clicked the randomize button, which suggested the prompt "a sassy little squeaky mouse" along with a bit of text to preview it with. Let's see what it does: "I may be small, but my attitude is anything but! Watch it, big feet, or I'll give your toes a nibble you won't forget." It generated that voice, plus two variations of it. So that's a pretty cool new feature in ElevenLabs.

Grammy-winning producer Timbaland is now working with Suno to generate music, collaborating with them on what Suno can do. Most entertainers are fighting against AI, so I love seeing actual musicians with credibility embracing these tools and helping spread the word about how they can genuinely help people creatively — I love seeing stories like this.

All right, a few more things I'm going to blow through really quickly. Google DeepMind is open-sourcing SynthID, a text watermarking tool: theoretically, any text generated with one of Google's models should be detectable through this watermarking, and it looks like SynthID is designed to work across all modalities — image, text, audio, and video.
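The open-sourced text piece of SynthID ships as an integration in Hugging Face Transformers (v4.46+), where applying the watermark is a one-config change to generate(). A hedged sketch — the keys below are arbitrary placeholder integers standing in for a private watermarking key, and the Gemma model is just an example choice:

```python
# Watermark generated text with the open-sourced SynthID Text config.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

tok = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

wm_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160],  # placeholder private key
    ngram_len=5,
)

inputs = tok("Write a two-sentence weather report.", return_tensors="pt")
out = model.generate(
    **inputs,
    watermarking_config=wm_config,  # biases sampling to embed the watermark
    do_sample=True,
    max_new_tokens=60,
)
print(tok.decode(out[0], skip_special_tokens=True))
```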
Apple Intelligence has started rolling out to iPhones. My iPhone is too old for it to work, but if you have one of the newer iPhones running iOS 18.1, you already have some Apple Intelligence features,
and the new iOS 18.2 has even more, like Genmoji (the ability to create AI-generated emojis), Visual Intelligence (meaning it can actually see what's going on on your screen), and ChatGPT functionality. These are all rolling out in the 18.2 version, and some people on the developer beta already have access.

Perplexity launched a Mac app this week. I'm not currently on a Mac, so I can't really demo it, but it basically gives you a keyboard shortcut to quickly open a box, ask a question, and send it directly to Perplexity — just a really quick, easy shortcut. Hopefully they'll roll out a Windows version as well, but this new app is specifically for Mac right now.

Qualcomm's Snapdragon Summit was this week out in Hawaii, and they announced the new Snapdragon 8 Elite chips. These Snapdragon chips are mostly designed for mobile devices — Android phones, laptops, tablets, things like that — and they keep getting faster and more efficient, making it possible for more and more new devices to capably run AI on them.

Asana this week launched a no-code tool for designing AI agents. I'm not a big Asana user, but in the screenshot they shared, a "creative request triage" agent is automating a task that reads "confirm that requirements are met; if they are not met, ask for more details," and the Asana AI sent a message to one of the users saying, "Before I route this request to a designer, can you please clarify who the target audience is for this asset?" So, as I understand it, you add these tasks in Asana, it knows what prerequisites are necessary for each task, and if the prerequisites aren't met, it asks for more information. I haven't used this myself yet, but if you're an Asana user, it's something new you can go play around with.

And finally, I want to end with this, because it's really cool but also really creepy: the upper torso of a humanoid robot that uses simulated muscles to produce all of the movement you're seeing. They describe it as "a bimanual android actuated with artificial muscles," the first bimanual torso created at Clone, including an actuated elbow, cervical spine, and anthropomorphic shoulders with the sternoclavicular, acromioclavicular, scapular, and glenohumeral joints; the valve matrix fits compactly inside the rib cage, and bimanual manipulation training is in progress. I just wanted to show this off because I thought it looked really cool and also really creepy — it looks like the robots from Westworld to me, and this is one step closer to that reality. I wonder if the people making this saw how that show ended.

Anyway, that's what I've got for you today. Huge, huge week in the world of AI. So much happened, especially in the creative world of AI video, images, and audio, plus tons of news around autonomous agents and getting AI to just do the work for you. Really fun, really exciting stuff — things are picking up steam, there's a lot of momentum coming out of these companies, and I'm here for it. If you want to stay looped in on all the coolest AI tools and the latest AI news, make sure you check out futuretools.io.
That's where I share all the cool tools I come across, and I keep the AI news page up to date, even with news that doesn't make these videos — as long as these videos are, I still cut stuff out. There's also a free newsletter where I'll send just the coolest tools and most important news directly to your inbox, and if you subscribe, I'll send you free access to the AI Income Database, a collection of ways I've rounded up to actually make money with these AI tools. Check it out — it's all free over at futuretools.io.