Adobe's New AI Video Generator is Bonkers!

Curious Refuge
Adobe just entered the AI video world, Luma is bringing cinematography controls to Sora-level footage...
Video Transcript:
Adobe just entered the AI video world, Luma released cinematography controls on Sora-level generations, and California passed a new bill on AI filmmaking. We'll take a look at all of that and more in this week's episode of AI Film News. Thanks for joining us. We have 11 really important pieces of news that you need to know about, so let's get started.

The first big piece of news is that Adobe has announced Adobe Firefly Video. You may have used Adobe Firefly to create images up to this point; well, there's a suite of tools they're looking to integrate into video, and they're coming really soon. They have a press release page on their website with a ton of examples of the new video tool in action, so there's a lot to unpack here, and we'll cover it one by one.

The first thing you need to know is that the model can, of course, do text to video. For example, there's a shot of a reindeer at sunset, and it actually looks pretty good. There's a shot of a drone flying through lava: not too bad. There's also a shot of delightful yarn characters dancing and having an awesome party, which looks great, I guess. And finally there's a shot of drone footage with wind blowing over dunes, creating waves in the sand, and you can see that Adobe Firefly Video did a pretty good job of bringing that prompt to life.

Adobe Firefly Video will also give you the ability to upload an image and then animate that image, which is much more important. You hear it all the time on this channel: text to video is fine, but image to video is incredibly useful when you're actually trying to execute a filmed concept, so it's really cool to see that included.

They also have a really interesting feature that lets you take live-action video footage, pull a frame from it, and then generate additional shots in that same style. This is important because if you have clips that come from a specific camera or have a specific color grade, you can generate other clips in a very similar style. That would be incredibly useful inside Premiere Pro: you could just generate more clips to help tell your story. They have an interesting example on their website of a little girl looking at a dandelion through a magnifying glass. The first shot is real footage, and then they cut to generated footage with the magnifying glass and a close-up of the dandelion. It seems like this could be really helpful for filling in the gaps when you're in the middle of an editing project and don't have the shot you need.

They'll also give you the ability to control the camera movement, which is incredibly helpful. There's more news later in this episode about camera controls in some of the other AI video tools, but it's very cool to see that it will be available inside Adobe's tool. They also say you can use Firefly to generate assets to use inside your footage. That's interesting because Adobe already has Adobe Stock, which is essentially a tool designed to give you assets for your video projects, but now they're saying you can use Adobe Firefly to generate your own assets. It's interesting how this new feature makes them a competitor to tools like Envato.

The final feature you should know about is called Generative Extend. Essentially, it lets you take your footage inside Premiere Pro and, if the pacing calls for another one to two seconds, just grab the end of the clip and extend it, and it will generate the extra footage for you. We saw examples of this about six months ago, but the quality, at least in Adobe's demo, looks really impressive, so I can't wait to get our hands on it and start playing around. The team at Adobe said all of these tools will be available by the end of the year, and of course you can subscribe here and we'll let you know as soon as they drop.

So I was actually curious: how do generations from Adobe Firefly compare against other AI
video tools like Runway? Well, we did a quick comparison, so let's take a look. Here's the generation from Adobe of the reindeer, and honestly it looks pretty darn good; there's a little wonkiness in some of the physics with the snow, but not too bad. Here's the same generation from Runway, and it does look pretty good, though I'd say it's not super cinematic: it comes across as a little cartoony, and there's definitely some wonkiness with the antlers, but it's a pretty good generation.

Next up is the shot of a drone flying through lava. Adobe's footage looks fantastic. The footage generated inside Runway, though, did not understand the assignment: it really doesn't grasp that the camera is flying through the lava, so I think Adobe did a better job there.

Then there's Adobe's shot of the characters dancing in a 3D style, and it looks good. The characters' coloring keeps changing, so I'm not sure what's going on with that, but the physics look pretty good; it feels like there's some weight behind their steps. Here's the generation from Runway, and I have no idea what's happening: they look like cursed puppets. I'm more entertained by the Runway generation, but I'd say the Adobe generation is better.

And finally, for the stock footage of the sand dunes, the result from Adobe looks really good and followed the prompt very well, while the shot from Runway is more generic and doesn't really showcase the part of the prompt about wind blowing. So again, Adobe did a better job. That's just a quick comparison; of course, when the tools fully launch, we'll let you know which one is our favorite here at Curious Refuge.

Our next piece of news comes from China. You've probably seen the incredible new AI video generator called Minimax by Hailuo, and some of the results we've seen from the tool are absolutely mind-blowing.
For example, we came across a short film about hamsters that break out of their cage, climb into robots, and wreak havoc. It's a pretty cool film, but the crazy thing is that it was created by a father-and-son duo in less than an hour using Minimax, and the prompt adherence is really incredible.

To create a video with Minimax, all you have to do is click the link below this video. It's a very specific website, and for some reason it's hard to find when you Google it, so you'll find the link below. Just sign in (it's completely free) and you'll land on this screen. Now, if you're used to creating videos in, say, Runway or Luma, you may think you'll just type in your prompt and be good to go. The problem is that this is a Chinese-based model, and if you type in English alone, the generations are often not quite as good. So my quick tip is to go over to Google Translate and translate your prompt from your native language into Chinese. For example, we'll paste in our prompt: a cinematic handheld camera shot (so we're defining what the shot is) of a woman wearing a black dress, holding an axe, in a Victorian mansion. We're just getting ready for the Halloween season; there may or may not be a Halloween AI film contest coming very soon. Then we copy the Chinese version of the prompt, go over to Minimax, paste it in, and generate the video.
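If you're preparing a lot of prompts, you can script the translation step instead of round-tripping through the Google Translate website. Here's a minimal sketch using the third-party deep-translator Python package; that package choice is our assumption (the workflow above only shows the Google Translate site), and its Google backend is an unofficial wrapper, so treat it as a convenience, not a guarantee:

```python
# pip install deep-translator
from deep_translator import GoogleTranslator

# The English prompt from the example above.
prompt_en = (
    "A cinematic handheld camera shot of a woman wearing a black dress, "
    "holding an axe, in a Victorian mansion."
)

# Translate into Simplified Chinese before pasting into Minimax.
translator = GoogleTranslator(source="en", target="zh-CN")
prompt_zh = translator.translate(prompt_en)

print(prompt_zh)  # Copy this string into the Minimax prompt box.
```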
Unfortunately, you can't do image to video just yet; they say that's coming very soon, but it's a limitation to keep in mind. Generations take anywhere from three to five minutes, and you can only run one at a time, so it takes quite a while if you want to create a full film. But it's completely free at this point, so that's a small price to pay.

Let's take a look at our shot. We have the woman in a black dress in a Victorian mansion, holding an axe, and it adhered to our prompt direction incredibly well; you can see it even translated the prompt back into English. The Curious Refuge team was testing Minimax earlier and created a few really interesting examples. Here's a bunny in a flower bed, and yes, it 100% looks like a bunny in a flower bed; it's basically photorealistic. Here's a high-speed motorcycle chase, and honestly it looks pretty good; some of the physics on the cars don't look quite right, but not bad at all. Here's a mermaid floating through outer space; admittedly you could quibble about the style, because technically it is a mermaid and she is floating through outer space, but it's not very realistic: it looks like cheap, early 3D animation. Still, it did adhere to the prompt, so not too bad. And finally, here's a shot of dining room furniture floating above a lake, and yes, it really is dining room furniture floating above a lake. Kind of a weird shot, but honestly, kind of a weird prompt.

The generations you get from Minimax are six seconds long, 720p, and 25 frames per second. So if you're using them on a film project, you'll need to convert that 25 frames per second to 24, and then also use an AI video upscaler like Topaz Video AI to get maximum quality from the result.
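One way to handle that frame-rate conform is a small script around the ffmpeg CLI. Note the arithmetic: six seconds at 25 fps is 150 frames, and playing those same 150 frames back at 24 fps stretches the clip to 6.25 seconds instead of dropping frames. This is just a sketch under the assumption that ffmpeg is installed and on your PATH (the episode only mentions the conversion, not a specific tool), with hypothetical filenames; the upscaling step would still happen in Topaz Video AI afterward:

```python
# Conform a 25 fps Minimax clip to 24 fps by re-timing (no frames dropped).
import subprocess

def conform_25_to_24(src: str, dst: str) -> None:
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src,
            # Stretch each frame's timestamp by 25/24 so the same frames
            # play back at 24 fps (a 6.00 s clip becomes 6.25 s).
            "-vf", "setpts=25/24*PTS",
            "-r", "24",
            "-an",  # Minimax clips are silent, so drop any audio track.
            dst,
        ],
        check=True,
    )

conform_25_to_24("minimax_clip_25fps.mp4", "minimax_clip_24fps.mp4")
```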
This news from Minimax really illustrates that there are a ton of impressive AI video tools hitting the market. While we don't have access to Sora at this point, there are a lot of options that in many cases produce results that are just as good.

I'd like to extend a huge thank-you to the Venice International Film Festival, Reply, and Mastercard for flying Shelby, me, and ten incredibly talented AI filmmakers out to an AI filmmaking event that happened during the Venice International Film Festival. It was really incredible to meet the finalists and the judges, and it was really cool to see that many of the finalists were Curious Refuge students. I think this is just the beginning of some incredible AI film festivals that we'll see in the coming months and years.

The team at Luma also came out with the ability to control camera movements, so you don't have to rely on prompting alone: you can actually define how you want your camera to move. Let me show you how to use it. All you have to do is go to the Luma website and click on the picture icon. We'll select this shot of a werewolf and bring him into the scene. You can add an end frame if you want (I'm not going to), and I'm also not going to type in a prompt, though you totally could. Then just type the word "camera," and you'll see a drop-down list of all sorts of camera movements; scroll down and you'll see there are plenty more. I'm going to choose a slow pull-out and track with the werewolf as he walks, then click generate. After about two minutes we get this generation of the werewolf, and honestly it looks pretty good: the movement and the physics look really, really nice. There are a ton of other interesting examples of camera movements from Luma. This orbiting tracking shot was really interesting, with tons of dynamic movement; things obviously get a little strange toward the end, but it's pretty cool that we're beginning to get much more customizable control over the generations inside Luma. From our quick test, we
found that it sticks to the requested movement about 90% of the time, so it does a really good job of adhering to the direction you give it. I think this is important because it shows that traditional filmmaking techniques are increasingly being integrated into AI tools. Tools like Runway, Kling, and Luma are giving you more control over your outputs, which leads to better stories.

Speaking of Runway, the team released the ability to extend video clips up to 40 seconds long using Runway Gen-3. Here's a quick example we put together. We started with a shot that was 10 seconds long and then prompted for a hooded figure to appear; after just a few seconds it transitions, and there's the hooded figure. Then we stopped prompting and just kept extending the frame again and again, and things get really weird: the whole scene starts breaking down, he ends up in something like a half-dab, explosions start happening, and it gets wild. I definitely recommend trying to extend your clips just to see what happens, because it's a lot of fun.

To extend a clip inside Runway, all you have to do is go to the Runway Gen-3 website and drag and drop an image. I have this image of a woman in a cafe, so we'll drag and drop it into our scene, and for our prompt we'll say "a handheld camera shot of a woman talking in a cafe, 10 seconds long," then click generate. Okay, we generated our footage, so let's
watch it back. You can see this woman just talking in a cafe, really enthusiastic about that conversation, I might add; a ton of energy, like she's been drinking a lot of espresso. Let's say we want to extend this clip. All you have to do is click the extend button, and you can type in any prompt you want, so we'll say "a handheld shot of a woman slowly becoming sad in a cafe." The conversation is going one way, and then apparently she just becomes sad. Go ahead and click extend, and let's take a look at the results. Of course we have the footage of her talking, still really enthusiastic; I'm not sure what the conversation is about. Maybe they're talking about how she got access to Sora... but wait, they took away her access, and now, oh no, her hand is backwards. I don't know how she's drinking that coffee. Oh no. Oh, that's great. As you can see, the generations totally begin to break down.

It's actually pretty unusual to have a 40-second-long shot in a film; typically shots are just a few seconds long, so I'm not sure how practical this will be for day-to-day filmmaking. But it is cool to see AI tools beginning to think about ideas like extending footage and giving more creative control to the people generating the assets.

The team at Meshy released Meshy 4, which is essentially a tool that lets you type in text or upload an image and get a pretty decent 3D model.
Let me show you how to use it. This is Meshy's website; they have a free plan and, of course, paid plans, but you can do some really cool things on the free plan alone. I went over to Midjourney and typed "3D bear," so we have this 3D bear image, and let's say we want to turn that bear into a 3D model. All you have to do is go to Image to 3D, drag and drop the bear image into the scene, and name it "3D bear." You also have the ability to change the target polygon count: if you're working in a hyper-professional workflow and need maximum quality, you can subscribe to the Pro plan to get 100K. I'm going to stick with 30K for now, keep the quad topology, and click generate. Generations inside Meshy are actually pretty quick; it takes about a minute to produce each 3D asset.

Okay, let's take a look at the result. We have our 3D bear, and it's actually pretty darn good. It's not production-ready, but it gives you a great jumping-off point for taking it into a 3D application like ZBrush, 3ds Max, or Maya to really clean things up. The cool thing is you can export these models in a variety of formats: they have FBX, OBJ, even a Blender file, so you can download them in formats that are accepted across 3D applications.
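As a quick sanity check on an export, you can inspect the mesh before pulling it into a DCC app. Here's a small sketch using the open-source trimesh Python library (our choice for illustration, not something Meshy ships), with a hypothetical filename, that verifies the geometry landed near the 30K target set above:

```python
# pip install trimesh
import trimesh

# Hypothetical path to the OBJ exported from Meshy.
mesh = trimesh.load("3d_bear.obj", force="mesh")

print(f"vertices: {len(mesh.vertices)}")
# trimesh triangulates quads on load, so expect roughly 2x the quad count.
print(f"faces: {len(mesh.faces)}")
# Holes here mean cleanup work in ZBrush / 3ds Max / Maya.
print(f"watertight: {mesh.is_watertight}")
```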
I'm also curious because they give you the ability to add an animation, so let's see if we can animate this character. I don't have high hopes here, but let's click animate. Yes, it's a humanoid-style character. It looks like we need to position the character, so we'll turn down the height marker to the head height and offset the character so he's right in the middle. There we go; click next. Now we need to place markers, so we'll go ahead and find where all of these go. Okay, let's see the result. We have our bear, and yeah, that walk cycle is pretty cursed; I'm not sure what's going on there. He looks like a person wearing a bear suit, running around like one of those inflatable costumes. So not the best job here; it definitely seems designed more for humanoid-style characters. But I'm loving some of these results. What is even happening here?
Tools like this are important because AI is, of course, transforming 3D animation and modeling. Very soon we'll be able to just upload an image and have everything from the modeling to the rigging to the animation happen with artificial intelligence. It will always take humans to go in, finesse the results, and direct the specific movements to get exactly what you're looking for, but this dramatically streamlines the process and makes 3D animation easier than ever before.

There are also two big pieces of legislative news you should know about. The first is the Framework Convention on Artificial Intelligence. Since 2019, a group of countries has been working together on a shared commitment to global safety surrounding artificial intelligence, and this is the first global treaty on the subject that has actually been signed. It's mainly Western countries that have signed so far, but it really is a step forward in countries working together to make sure artificial intelligence benefits humanity.

The second bit of legislation is California Assembly Bill 2602. Essentially, this bill says studios cannot use the likeness of an actor unless they have permission from that actor, and the actor must have had legal help, or the help of a union, when signing over the rights to their likeness. The goal is to prevent people from signing documents they don't understand and then being taken advantage of and losing out on work opportunities, which totally makes sense. But the bill is vague and leaves a lot of room for interpretation, so I don't think it will be a great working document long-term for fighting deepfakes or for integrating AI into the industry. There are some real problems: if actors want to get paid for their likeness being used in artificial intelligence, they either need to hire a lawyer to help them sign the paperwork or be represented by a union. So if you're not represented by a union, it's going to cost you a lot of money just to have the right to sign your likeness away; that amounts to special treatment for people who are part of a union. The bill also says that if there's a shot or performance your project needs that could hypothetically be shot on location or in a studio, you have to do it that way unless agreed otherwise; you basically can't add an extra line or an extra shot without approval from the actor whose likeness you're using. We'll have to see how this plays out. The bill is honestly very vague, and frankly, there are big incentives for studios to move out of California right now; if the state pushes back too hard in this next era of filmmaking, it's only going to make things more challenging for the industry as a whole.
There are also a lot of AI filmmaking meetups happening around the world that I think you should know about. Just last week we had a Curious Refuge meetup in Berlin. We're also hosting a Curious Refuge meetup in Singapore on the 15th; Runway is hosting a Los Angeles meetup on September 18th; there's a Runway meetup in Quebec on September 19th; Curious Refuge is hosting a meetup in The Hague, in the Netherlands, on September 25th; and there's a collaborative event between LTX Studio and the Creative AI Network in London on September 27th, where Shelby will be a keynote speaker. If any of these AI filmmaking events are happening in your area, I highly recommend checking them out.

I also want to wish a huge congratulations to everyone who entered our AI advertising competition. Our judges are currently reviewing the submissions, and we'll announce the winners next week. Be on the lookout for the second iteration of our AI horror film competition next week as well.
And that brings us to our AI films of the week. The first film I want to showcase is called "Alone" by Daniel T. It's a really incredible film for a variety of reasons: it's essentially a tone poem exploring the idea of being alone, but I love the style they incorporated into it. It's all inside a square aspect ratio with film grain, the color grading is desaturated and very interesting, and some of the shots look pretty darn realistic.

I also want to highlight a film called "Seeing Is Believing" by Mark Wultz. Mark was the winner of our AI trailer competition, and here he put together a quick demo of Minimax in action. It's really just a montage of people with really expressive faces across a variety of filmmaking shots. Even though it's lacking a story, it does showcase that visually expressive AI characters are just around the corner, and the ability to type in a prompt and have control over your actor's performance is going to be here very soon.

I also want to shout out Alexandra Axel, who put together an AI spec ad for Tesla. It's a high-concept sci-fi commercial that takes place in space, and the shots are pretty cool: there are some realistic camera movements and some lip syncing, and I think they did a really good job curating this project.

The final project I want to highlight is called "Magic Coin." It was created as a release film for Minimax that came out just last week, and it follows a boy through fictitious lands, highlighting some of the more incredible generations you can get from this brand-new tool.

Thank you so much for watching this week's episode of AI Film News. Of course, you can get AI film news sent directly to your email inbox each and every week, with even more news than we cover in this episode, by subscribing over at Curious Refuge. When you subscribe, you'll get a free intro to AI filmmaking course that gets you up to speed on everything you need to know to get started with AI filmmaking. And if you want even more AI tutorials and training here on YouTube, you can subscribe to Curious Refuge; we'd greatly appreciate it. Thank you so much for watching this week's episode. We'll see you next time.