It was a bonkers week in the world of AI filmmaking: the world's first AI feature film is here, Runway just announced the ability to control actor performance directly inside of their tool, there's a brand new AI video tool that basically connects all of the AI video generators together, and Anthropic released the ability for computers to control your mouse. Good gravy, there's a lot to cover. This is your AI film news of the week.

The first big piece of news we're going to talk about is from Runway, which kind of feels like they have big announcements every single week. The feature we are of course talking about is Runway Act-One. Essentially, this brand new tool allows you to control actor performance directly inside of Runway. Previously you would have to use a tool like LivePortrait to take a video of an actor performing and then map that onto an AI video or image. Runway is basically creating their own tool that looks like a modified version of LivePortrait and putting it directly in their platform, so you don't have to daisy-chain all of these third-party applications together.

They released a few demo videos showcasing the tool. Let's take a look: "You're going to fire me because there's hot milk everywhere on the floor of the coffee shop? I mean, it's one latte, it's not that big of a deal. Everything's fine, everything's totally great, this is going to be the best day ever... smashed my phone the other night, and you know how much I love that phone. Okay, all right, just breathe. Okay, maybe don't breathe that hard." As you can see, the videos that are generated generally require the subject to be looking at the camera, or at least oriented toward the camera. You're not going to get control over an actor who is, let's say, facing to the left or the right; you still have to have them generally looking at the camera if you want the best result.

One of the more impressive examples from Runway is this diner scene that kind of feels like it lives in the Breaking Bad universe. Let's take a quick look: "You really think you can just walk away from all this?" "I have to, there's no other choice." "There's always a choice, you just don't like the alternatives." "What would you have me do? Stay? Just watch it all crumble around me?" "That's what you signed up for." "It's what we all signed up for." Okay, so as you can see, this is kind of the beginning of being able to use AI for conversational scenes that have emotional depth. Now, even that diner scene, which looks pretty darn good and is one of the more performative and interesting AI videos we've seen up to this point, isn't perfect: if you look in the background behind the guy talking, there are these slow-motion characters that just ebb and flow and kind of fade into the distance. So with that being said, do I think this feature will be silver-screen ready? Probably not. But for online distribution, putting things on YouTube or social media, it seems like this tool could actually do a really good job.

I should also note that in the video with the older man, the lip syncing doesn't quite 100% match up. I'd say it's 98% of the way there, which is fantastic, but if you're looking for that extra refinement for a professional project, you're still going to have to go through the process or use some higher-end tools to make it happen. Still, the fact that it's going to be so easy to do this inside of Runway is a huge step forward. From the demo Runway put together, it also seems like this only works when you're using a character that is either human or humanlike; you probably aren't going to get dynamic mouth movements from animated characters that have, say, snout movement and things like that. Now, unfortunately, you don't have access just yet, but Runway is actually really good at announcing new features and then releasing them just a few weeks later, so I imagine we'll have access to this tool within the next two weeks.
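Act-One isn't publicly available yet, but if you want to experiment with the underlying idea today, the open-source LivePortrait project mentioned above works the same way: a driving video of a real performance gets mapped onto a source image. Here's a minimal sketch of invoking it from Python, assuming you've cloned the LivePortrait repo and installed its requirements; the two media paths are placeholders for your own files:

```python
# Sketch: drive a still character image with a recorded performance video
# using the open-source LivePortrait repo (github.com/KwaiVGI/LivePortrait).
# Assumes the repo is cloned and its dependencies are installed; the two
# media paths below are placeholders for your own files.
import subprocess

subprocess.run(
    [
        "python", "inference.py",
        "-s", "my_character.jpg",    # source: the still image to animate
        "-d", "my_performance.mp4",  # driving: the actor's recorded performance
    ],
    cwd="LivePortrait",  # run from inside the cloned repo
    check=True,
)
```

The output lands in the repo's animations folder; this is the daisy-chain workflow that Act-One promises to collapse into a single step inside Runway.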
There was also some pretty cool news from the people behind Krea. Krea is an incredible tool that has been used to generate images and to do upscaling, video upscaling, but they just came out with a new feature that is basically an API integration with most of the major AI video tools out there. Essentially, you now have the ability to generate a video from virtually any platform directly inside of Krea; this includes MiniMax, Runway, Pika, Luma, and more. To use this feature, all you have to do is go to the Krea website and select the video icon from the homepage. From here, just type in your prompt; for our example we'll type in "a cinematic live-action shot of a baby duckling sitting in a porcelain teacup." In the bottom left you can see that you can open the menu and switch to any of the video models listed there, so we have Luma, Runway, Kling, MiniMax (which goes by Hailuo), and Pika. When you're ready, all you have to do is hit generate.

I went ahead and generated results for all of the video models, so let's take a look. We have this shot from Runway of the duck: not as much movement, but you know, I like the camera shake. It does come across as a little realistic, although I think it's a little too contrasted and reads more as concept art than as a realistic shot. We have this shot from Kling, and I think the duck just shattered that cup, because his butt is sticking out the end of it, so that's a very strong little duckling. Here's MiniMax, and yeah, it looks pretty good; the feathers look a little extra sharp, coming across more as hair than as duck fuzz, but honestly a pretty good generation. And finally we have the Crush effect in Pika, and I should note that this one is actually a rubber duck, so, uh, no harm no foul there. The cool thing is all of those models can be used directly inside of Krea; you don't have to go to the individual tools to generate the video. And with your subscription to Krea you can select any of those models, so you don't have to have a bunch of different subscriptions to a bunch of different tools.
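Krea hasn't published details of how this works under the hood, but conceptually it's a thin routing layer: one prompt box, many provider APIs behind it. Here's a hypothetical sketch of that pattern in Python; every endpoint URL and payload shape below is made up for illustration, and each provider's real API differs:

```python
# Hypothetical sketch of the "one front-end, many back-ends" pattern.
# None of these endpoint URLs or payload shapes are real; each video
# provider's actual API has its own schema and auth scheme.
import requests

PROVIDERS = {
    "luma":    "https://api.example-luma.test/v1/generations",
    "runway":  "https://api.example-runway.test/v1/text-to-video",
    "kling":   "https://api.example-kling.test/v1/videos",
    "minimax": "https://api.example-minimax.test/v1/video",
    "pika":    "https://api.example-pika.test/v1/generate",
}

def generate(model: str, prompt: str, api_key: str) -> dict:
    """Route one prompt to whichever video model the user picked."""
    resp = requests.post(
        PROVIDERS[model],
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()  # typically a job id you poll until the video is ready

job = generate(
    "kling",
    "a cinematic live-action shot of a baby duckling sitting in a porcelain teacup",
    "sk-...",
)
```

The value of a tool like Krea is exactly this: one subscription and one interface, with the per-provider differences hidden behind the model picker.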
So text-to-video is cool, but what's even better is image-to-video, because when you're working on a film project you'll quickly realize that image-to-video gives you everything from character consistency to style consistency; it just gives you more control. Using Krea, you have the ability to set a first frame and a last frame if you're using Luma or Kling, and a first frame if you're using Runway. Let's look at a quick example. I have two shots here: the first shot is of this kid holding an ice cream cone, and then we have a wide shot of basically the same kid, and we want to interpolate and animate between those two frames. We'll run this through Luma and Kling. Here is our result from Luma: you can see the kid just standing there, and yeah, it did a pretty good job. There's some warping around the hand, and obviously the people in the background are a little odd; it's not 100% there, but not too bad. And here is the same result from Kling, and this one was actually pretty good. I think there's some weight behind the steps, and the overall physics look better, which is very characteristic of Kling, so I think it did a pretty good job. I should note that you do have the ability to adjust the camera movement inside of Krea if you're using Kling, but you don't have that ability with Luma; there you just have to go back to prompting camera movement the normal way. I think news like this is really important because it showcases that AI video tools are becoming increasingly user-friendly.

Now, the one question you probably have is: how much does this cost? Well, if you go to the Krea plans page and scroll down, you will find the section that says "video beta," and they break down how much each one of their plans allows you to generate per month. Their most expensive plan is $60 a month, and essentially that allows you to create anywhere from 163 videos all the way up to 571 videos; it just depends on the model you're using and ultimately how much the provider charges Krea to connect to the API (that works out to roughly $0.10 to $0.37 per clip). If you compare that to how much you'd pay to use each tool directly, like going straight to Runway or Luma, you're essentially paying about 50% more to use Krea. But to be fully honest, having a single subscription that covers all of these tools is pretty powerful. I still think the unlimited plan inside of Runway is probably your best bet if you want maximum bang for your buck, but using multiple video models is pretty normal: if you talk to somebody who creates AI videos professionally, they will tell you that they use certain models for different shots. You may use one model for one specific shot, and if it's not working you'll go to a different one and pick the right result for you. So it's cool to see Krea basically taking that experience and putting it in one place. Now, you're not going to get the very best version of every model directly inside of Krea either, so I recommend checking out their page to see some of the limitations; for example, if you're using Kling, generations can take up to 10 minutes, which is just kind of slow compared to using the actual platform itself.
The team at Midjourney also came out with a new feature that you should definitely know about. Up to this point, if you were working with Midjourney, you could use images as style references, image references, and character references, but you didn't have the ability to just directly upload your image and edit it with Midjourney. Well, all of that has changed: you now have a brand new editor inside of their platform. To use it, all you have to do is go to the Midjourney website, try not to get distracted by all of the incredible community generations, and go over to the edit button on the left. Here you have the ability to load an image from a URL, or you can just upload the image directly. I have this shot of a beach scene; we'll go ahead and import that directly into Midjourney. Similar to their inpainting feature, when you're editing an image with their generator you define where you want to add in your new information. So, say I want to basically place a photo of me at the beach here: what I'm going to do is erase the area where the character would be standing, plus a little bit of the ground, just to leave some room for shadows or any other details we might want to add in, and we'll prompt "a man at the beach." Easy peasy. The cool thing is you also have the ability to use style references, character references, and image references directly in this new tool, so we can select a character reference. I have this picture of me wearing a sweater, and what I'm going to do is add --cw for character weight and set it to zero, because I shouldn't be wearing a sweater at a beach; I should definitely be wearing a t-shirt. You also have the ability to go in here and change the aspect ratio and do all sorts of really interesting things. I'll tell you what, I'm going to add a little bit more information to the bottom here just to expand it so there's more room; Midjourney is always better when you give it more room to inpaint, just because it adds in more creative detail. I'm going to say "a man in a t-shirt at the beach," and we'll add "facing the camera." There we go, and we'll click submit edit.

Okay, let's take a look at our results. We have result number one; I think my head might be a little too big, but you know, not bad. We have this one here: yeah, that one's pretty good. This one here, I don't know if the lighting is quite right. And then finally we have this one, which does kind of look like a selfie. I will say the fidelity on the character's face isn't perfect, but it did a pretty good job. Now, this tool can do a wide variety of different things. For example, we also have the ability to extend the edges of images: if we wanted to change the aspect ratio, we can just grab the edges like this and move them. We also have the ability to move and resize the image if we wanted to make it larger. So let's say we wanted to generate with this new composition here: we click submit edit and it basically adds new information into the frame. Okay, cool, it did a good job at expanding the frame, and there aren't those hard seams on the edges that can come across as unrealistic; it did a really good job at smoothing that out, so great job from Midjourney there.
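For reference, the character-reference controls used above map onto Midjourney's ordinary prompt parameters. A full edit prompt along these lines might look like the example below, where the image URL is a placeholder and --cw 0 tells Midjourney to match the referenced face only, not the clothing:

```
a man in a t-shirt at the beach, facing the camera --cref https://example.com/me-in-sweater.jpg --cw 0
```

That --cw 0 trick is exactly why the sweater from the reference photo doesn't carry over into the beach shot.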
Now, I should also note that Midjourney has another feature called retexture that basically allows you to change the overall style of an uploaded image. For example, we can retexture this image here, and let's say we want to change it to a completely different style, like a line drawing. So I'm going to prompt "a line drawing of a man at the beach," there we go, and we'll get rid of the character weight parameter because we don't need that anymore. You also have the ability to upload a reference if you want; I'm not going to do that for this one. We'll go ahead and click submit retexture. Okay, let's take a look at our results. We have shot number one: yeah, it did a pretty good job, though I do think it broke down some of the character's look; it seems like that might be a different person compared to the original image, so it's really adding in a bit more stylization (if you wanted to turn that down, you could just lower the stylization setting). We have shot number two, which completely got rid of the face, which is, uh, great. Shot number three also removed the face; the composition is the same, but it generally changed the face. Then we have the final shot, where again the background is slightly different but the overall composition is pretty close. I think having the ability to use AI tools to edit images across a wide variety of applications is really important and really helpful when you're in the creative process, so it's cool to see Midjourney coming out with these new features.
The team at Kaiber also had a major update recently called Superstudio. It's a really interesting pivot for the company: you're probably familiar with Kaiber because of all of those crazy hallucinogenic videos over the years, but they are now basically trying to be a mixture between a whiteboard and some of the functionality we talked about with Krea earlier, where you have a creative space to throw a bunch of ideas together and organize them in a single spot. To use the new studio, all you have to do is go to the Kaiber website and click create. Now, the default canvas they give you inside of Superstudio is actually really confusing and hard to use, so I don't recommend starting there; you can just go to the side menu and click the plus icon, or click on a new canvas, to create a fresh one. So we're inside of our canvas here, and it's pretty interesting. It works through this little menu at the top called the flow menu: click that button and you'll see these flows, which are kind of like nodes that each perform a certain creative task. We have, for example, Flux for generating an image, we have Luma video and Runway video, and then all sorts of other Kaiber-related tools like image upscaling and video restyling, things you may be used to if you've worked with Kaiber before. I'm going to ignore those for now and focus on generating our image first. So let's select Flux, and you'll see we have the ability to type in a subject. I'm going to say "a cinematic still of a dog on a patio," real creative, right? You have the ability to adjust the aspect ratio, width, height, all that good stuff, and when you're ready to generate, all you have to do is select the little gummy guy here. And, uh, I love him: he kind of looks like a grape-flavored Dot, you remember those old candies that are super hard to get out of your teeth? He looks exactly like one of those with a smiley face, which is just delightful. You can see we generated an image of a dog on a patio, and it looks pretty good; you can of course just download that directly to your machine if you want.

Now, the other thing that makes this pretty interesting is the ability to bring in a node that lets you use Runway or Luma. For example, if we wanted to use Luma, we can select Luma here and drag and drop this image into the start keyframe, and you can see that it loaded into the start keyframe, which is pretty cool. Now you can basically type in a prompt just like you would inside of Luma, so we'll say "a dog looks around on a patio, handheld footage." You have the ability to select loop video if you want, and you can also set an end keyframe; we're not going to do that. We'll go ahead and select the gummy guy to generate our video. And our video has been generated, so let's take a look: we have our dog on the patio, he's kind of looking around, yeah, not too bad. I also have a few other examples to show you real quick. I wanted to test out the dual keyframe feature inside of Kaiber, so we have shot number one here and shot number two, and we want to interpolate between those shots. We get this scene here, which looks pretty good; the particle dynamics are not bad, not perfect, but it did a pretty good job. We also have this shot of this ghost character, which is perfect for a Halloween film, and there's actually quite a bit of fidelity there too, so not too bad. We also tried generating a video using Kaiber's video lab module, and this is the result we got: not great. It is interesting that it can generate a clip up to 17 seconds long from my prompt alone, but the quality is just not great, so I don't really recommend using it. It's cool to see different tools taking different approaches to the creative process. While I personally don't think we'll recommend Kaiber at Curious Refuge, just because it can get disorganized really quickly, I think having the ability to create images and videos in this whiteboard-style creative space is an interesting way to work. So I recommend checking out the tool: if you like it, use it, and if you don't, you can totally go back to traditional folder structures for organizing your content.
In the world of Hollywood, there was some pretty interesting news: the production company Blumhouse has partnered with Meta to give artists access to the new Meta Movie Gen tool. As part of the announcement, Meta released an AI short film called I Hate AI, created by Aneesh Chaganty. The film is pretty interesting because it takes an old film he made when he was 10 and uses AI visual effects to build out an entire world for that footage to live in. Blumhouse said they will integrate Movie Gen into their productions in 2025. This is of course really important news, but Blumhouse is not the first Hollywood studio to widely integrate AI into their productions; obviously at Curious Refuge we've trained thousands of people in the industry to use AI in their day-to-day work, and there was also the news from Lionsgate just a month ago, where they're working with Runway to train their own AI model. Very soon AI will be completely normal in the Hollywood production process, and artists like yourself who know how to use the tools to maximum creative effect will have many opportunities available to them.
In voice news, the team at ElevenLabs has released the ability to type in a prompt and generate a completely new voice. Obviously they already have the voice library, which has a ton of voices you can use on your creative projects, but this goes one step further: you can actually type in a prompt and hear a brand new voice. To use this feature, all you have to do is go to the ElevenLabs website, click voices on the left side of the screen, and click add a new voice. From here, just click the voice design button and type in a prompt. For our prompt we'll say "an angry old pirate with a British raspy voice," give it a line of text to preview, and go ahead and generate that voice. Here is our result: "I'll have none of your mutinous talk, ye scurvy rats! You've got two choices: go back to your chores, or find yourself a place at the bottom of Davy Jones' locker." Not too bad, honestly. We can take that and integrate it with Runway's lip sync (that's not Runway Act-One just yet) and generate this video: "I'll have none of your mutinous talk, ye scurvy rats! You've got two choices: go back to your chores, or find yourself a place at the bottom of Davy Jones' locker." Amazing. So you can see that having fine-tuned control, and basically being able to combine different voice qualities to create a completely new voice, is pretty interesting. I should also note that they announced the API will be available in about a week, so you'll be able to do real-time generation of brand new voices, prompting again and again to create characters in an entertaining experience. That's kind of the beginning of what I imagine will be prompt-to-entertainment, where you want to watch a story about a raspy British pirate and the voice powering that story is automatically generated. We're probably a couple of years away from that happening in real time, but we're probably only six months away from being able to prompt and watch something pretty interesting using technology like this.
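Once that API ships, calling it will presumably look something like ElevenLabs' existing REST endpoints. Here's a minimal sketch, with the caveat that the endpoint path, field names, and response shape shown are assumptions based on ElevenLabs' current voice design documentation rather than confirmed details of the new release:

```python
# Sketch: generate voice previews from a text description via ElevenLabs'
# voice design API. The endpoint path, payload fields, and response shape
# here are assumptions -- check the official docs once the API is live.
import base64
import requests

API_KEY = "your-elevenlabs-api-key"  # placeholder

resp = requests.post(
    "https://api.elevenlabs.io/v1/text-to-voice/create-previews",
    headers={"xi-api-key": API_KEY},
    json={
        "voice_description": "an angry old pirate with a British raspy voice",
        "text": (
            "I'll have none of your mutinous talk, ye scurvy rats! "
            "You've got two choices: go back to your chores, or find "
            "yourself a place at the bottom of Davy Jones' locker."
        ),
    },
    timeout=120,
)
resp.raise_for_status()

# Each preview is base64-encoded audio you can save and audition.
for i, preview in enumerate(resp.json().get("previews", [])):
    with open(f"pirate_preview_{i}.mp3", "wb") as f:
        f.write(base64.b64decode(preview["audio_base_64"]))
```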
The team at ComfyUI also announced that they are releasing a Mac version of their application that you'll be able to run directly on Mac machines. The reason this is important is that up to this point it's been very challenging to run ComfyUI locally on a Mac; you've basically had to use a cloud instance of the tool. The new version will come preconfigured with the Python environment and really everything you need to get going with ComfyUI. You can click the link below this video to join the waitlist, and of course you will be the first to know when it goes live, directly here on Curious Refuge.
And there is some wild news from the team at Anthropic this last week: they announced that they are releasing AI agents that have the ability to control the mouse and take actions on your computer. You can type in a prompt and the system will basically control the mouse and do whatever you ask it to do. There are some pretty interesting examples; in the announcement video, Anthropic had the system create an entire website from scratch, and there's also a really funny example of somebody using the feature to skip YouTube ads, which is just delightful. This is basically the beginning of AI digital agents: you can ask them to do a task and they will go off and do it for you. Now, it's not perfect at this point, and there's a lot of room for this to go wrong; I imagine that if you're working with sensitive information, or if the system goes haywire and deletes a bunch of files on accident, that could be problematic. But I can imagine that very soon, especially because OpenAI is already working on agents to do automated tasks, we will have AI systems that can do a lot of the grunt work we do whenever we're working on our computers.
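Under the hood this ships as a beta tool type in Anthropic's Messages API: the model returns mouse and keyboard actions, and your own agent loop executes them and feeds screenshots back. Here's a minimal sketch of the request side, based on the computer-use beta as documented at launch; the prompt and screen dimensions are just examples:

```python
# Sketch: request a computer-use action plan from Claude via the
# Messages API beta (tool type and beta flag as documented at launch).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[
        {
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1280,   # example screen size
            "display_height_px": 800,
        }
    ],
    messages=[{"role": "user", "content": "Open the browser and search for AI film news."}],
    betas=["computer-use-2024-10-22"],
)

# The reply contains tool_use blocks describing actions (screenshot,
# mouse_move, left_click, type, ...); your agent loop must execute them
# and return the results as tool_result messages to continue the task.
for block in response.content:
    print(block)
```

Note that the API only proposes actions; the part that actually moves your mouse is a loop you run yourself, which is also where you'd add the safety rails mentioned above.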
And on that note, I want to remind you that enrollment for the November session at Curious Refuge opens on October 30th, that is, next Wednesday at 11:00 a.m. Pacific time. This is the very first session where we are going to offer AI Animation: we partnered with Storybook Studios to create the most cutting-edge AI animation course in the world. Not only will you learn how to create animations from scratch, including how to create environments and custom characters, but you'll also learn essential techniques like rigging, character tracking, and scene development. This is the essential course if you want to learn how to create animated films, but also how to sell them, because we're going to show you how to train custom models that allow you to own the final pixels on screen. Be sure to go over to the Curious Refuge website if you want to learn more. I should also note that we will be opening up enrollment for AI Filmmaking, AI Advertising, and AI Documentary, so if you're interested in joining any one of those four courses, we would love to have you inside the program.

Voting has also closed for our second annual AI horror film competition. We had around 500 horror films uploaded for this competition, and our judges are reviewing the submissions. The winner will receive $3,000 in prize money thanks to the folks at Infinity AI, and of course you will also get a cursed potato, which is one of the best awards I have ever seen. We will announce the winners of this competition early next week.
There are also a ton of AI meetups happening around the world. I want to give a shout-out to the Curious Refuge meetup that happened in Stockholm just a few days ago. We also have a meetup in Los Angeles on November 6th, so come by if you'd like to say hello; there's a Curious Refuge meetup in West Palm Beach on November 7th; and we are hosting our first meetup in Chicago on November 14th. So if you happen to be in any one of those areas, be sure to stop by and meet some new friends in the world of AI.

And that brings us to our AI films of the week. The first film I want to highlight is truly a moment in the world of AI filmmaking, and I am of course talking about Where the Robots Grow by AiMation Studios. This is, from what we can tell, the very first AI animated feature film. The team put the film together using a combination of different technologies; from what we've seen so far, it looks like a mix of Wonder Dynamics with simple 3D environments, plus probably some other AI tools to add texturing and, you know, generally create the world the characters live in. They said they put the project together with a team of 10 people, and they even released it for free over on YouTube, really as a marketing play to get studios interested in their studio, which might be a pretty good strategy, because it has had a lot of people talking about it. I highly recommend checking out the film; I think they did an amazing job.
The next film I want to highlight is called Those Who Remain by Kavan the Kid. Kavan is one of the best AI creators out there, and the project was put together using Kling. It has zombies and robots and ninjas in this epic battle world, and I also think it's a wonderful example of fantastic sound design: sound goes a long way in enhancing your AI visuals and building out the larger world your project lives in, and Kavan really has a strong grasp of sound editing. The last film I want to highlight is called Last Stop by Mean Orange Cat. Mean Orange Cat is created by Don Teal, who created some bonus lessons inside of our AI filmmaking course. The project was created for the Culver Cup competition, and it really showcases the mixture of a really good story and script in conjunction with really sound AI technical skills.

Thank you so much for watching this week's episode of AI film news. If you want to get AI film news and tutorials directly here on YouTube, be sure to like and subscribe. You can of course also get AI film news sent directly to your inbox: not only will you get AI film news sent to your email each week, you'll also get free access to an intro to AI filmmaking course. If you ever have a project you would like to submit for our AI films of the week, we have a submit button on our AI film gallery page, and if you ever have news the community should know about, be sure to hit us up at hello@curiousrefuge.com.