If you've been following the AI video scene lately, you know things are getting pretty crazy. It feels like every other day there's some new breakthrough, and honestly it's hard to keep up. Well, I've got something really exciting to talk about: ByteDance, the company behind TikTok, is making some serious moves in the AI video generation race. The same ByteDance that practically rules the short-form video world is now charging ahead in the AI space, and it's fascinating to see what they're cooking up. They've just rolled out two new AI models, PixelDance and Seaweed, and these could really shake things up when it comes to generating videos from text prompts. When a company like ByteDance, which already dominates social media, dives into AI video tools, it's a big deal.

All right, let's break down these two models. First up we've got PixelDance, which is still in private beta; right now only a select few users can try it out, but it could be released to the public next month depending on, get this, the outcome of the US general election. Yeah, you heard that right. According to YouTuber Tim Simmons, who focuses on AI tools, it could go live after November; apparently there's some political tension around the release, but I'll leave that discussion for another time.

So what makes PixelDance special? Well, for starters, it focuses on character animation. It can generate 10-second video clips with characters that move super realistically, things like walking, turning, and interacting with objects in ways that look so natural it's hard to believe they're AI generated. You can practically see the characters walking through a scene, picking up objects, or making gestures that look like they were captured from human actors. But where PixelDance really shines is its multi-shot capability. Most AI video generators struggle to keep visual consistency across different shots; things fall apart when switching angles or perspectives. PixelDance tackles that by keeping characters' appearances, proportions, and scene details consistent across multiple shots, which makes it ideal for creating complex scenes while maintaining visual coherence, a real game changer for AI video production. Not only that, PixelDance offers camera control on par with other major models like Pika and Runway's Gen-3. You can create impressive camera movements, things like 360° pans, zooms, and tracking shots, with just a text prompt. One of the demos ByteDance showed featured a woman in sunglasses. The prompt was something like: "In black and white, the camera is shot around the woman in sunglasses, moving from her side to the front, and finally focuses on a close-up of the woman's face," and the result was absolutely stunning. It's that level of camera control that puts PixelDance in a league of its own.
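By the way, since PixelDance is still in private beta and there's no public API to point at, here's just a toy Python sketch of how you might template that kind of camera-directive prompt yourself; everything in it is made up for illustration and isn't tied to any real PixelDance endpoint.

```python
# Illustrative only: a tiny helper for composing camera-directive text-to-video prompts.
# No real PixelDance API is involved; the function and field names are hypothetical.
def camera_prompt(style: str, subject: str, movement: str, final_framing: str) -> str:
    """Spell out the camera path explicitly, like ByteDance's demo prompt does."""
    return f"{style}. The camera circles around {subject}, {movement}, and finally {final_framing}."

prompt = camera_prompt(
    style="In black and white",
    subject="the woman in sunglasses",
    movement="moving from her side to the front",
    final_framing="focuses on a close-up of her face",
)
print(prompt)  # feed this to whichever text-to-video tool you have access to
```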
Now let's talk about its sibling, Seaweed. While PixelDance focuses on character animation, Seaweed takes on environmental generation. This model can generate longer videos, stretching up to 30 seconds or even as much as 2 minutes, which is pretty rare for AI tools right now, and just like PixelDance it keeps things consistent across shots. That makes it super useful for creating longer scenes or sequences where you need everything to flow together smoothly.

The timing of ByteDance's launch couldn't be better either. Everyone has heard about OpenAI's Sora, which was announced back in February. Sora was supposed to be this revolutionary AI model capable of generating up to 60 seconds of high-quality video from text prompts. People were hyped about it, but Sora still hasn't been released to the public, which is why ByteDance is swooping in to fill that gap with PixelDance and Seaweed.

Now, even though OpenAI's Sora is still nowhere to be seen, they've been busy working on some new tools that are definitely worth checking out. Just recently, OpenAI introduced a suite of new tools aimed at fast-tracking the development of AI voice assistants. These tools make it easier for developers to build voice applications with a single set of instructions, which is a big deal for anyone working on voice-enabled AI. Previously, developers had to chain together a multi-step process: transcribing audio, generating a response, and converting the text back to speech. With the new streamlined system, they can do it all in one go.
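To see why that's a big deal, here's a minimal sketch of the old multi-step pipeline using the OpenAI Python SDK; the model names (whisper-1, gpt-4o-mini, tts-1) and file paths are just illustrative assumptions, and the new tools aim to collapse these three round trips into one.

```python
# The old way: three separate API calls (speech -> text -> reply -> speech).
# Model names and file paths are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe the user's audio to text.
with open("user_question.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Generate a text response to the transcript.
chat = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": transcript.text}],
)
reply = chat.choices[0].message.content

# 3. Convert the reply back into speech and save it.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
speech.stream_to_file("assistant_reply.mp3")
```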
This saves developers a lot of time and simplifies the process of building more advanced, responsive AI voice tools. Not only that, OpenAI also unveiled a fine-tuning tool that lets developers improve their models by fine-tuning them with images as well as text. That opens up possibilities for better image recognition and more accurate object detection, which could be a game changer for industries like autonomous vehicles.
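As a rough idea of what that looks like in practice, here's a hedged sketch of a vision fine-tuning job with the OpenAI Python SDK; the JSONL layout, file names, and the exact base-model string are assumptions based on OpenAI's docs (and a real job needs far more than one example), so double-check the current documentation before relying on any of it.

```python
# Sketch only: file names, the JSONL layout, and the base-model string are assumptions.
import json
from openai import OpenAI

client = OpenAI()

# Each training example is a chat conversation; images are referenced by URL.
example = {
    "messages": [
        {"role": "user", "content": [
            {"type": "text", "text": "Which traffic sign is shown?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/sign_042.jpg"}},
        ]},
        {"role": "assistant", "content": "A 30 km/h speed limit sign."},
    ]
}
# Real fine-tuning jobs need many such lines, not just one.
with open("vision_train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")

# Upload the data and start the fine-tuning job.
training_file = client.files.create(file=open("vision_train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-4o-2024-08-06")
print(job.id, job.status)
```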
Another intriguing feature is prompt caching, which helps cut development costs by reusing previously processed text, making it more affordable for smaller companies to get in on the action. OpenAI's tools are pushing the envelope on efficiency, which is critical given the growing competition in the AI space.
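The practical trick, as I understand it, is simply to keep the long, unchanging part of your prompt at the front so it can be reused across requests. This sketch assumes OpenAI's automatic caching behavior and the usage fields described in their docs (prompt_tokens_details.cached_tokens), so treat those details as assumptions rather than gospel.

```python
# Prompt caching sketch: put the big static prefix first, vary only the tail.
from openai import OpenAI

client = OpenAI()

# A long, unchanging system prompt (product docs, policies, few-shot examples...).
STATIC_PREFIX = "You are a support agent for ExampleCo.\n" + ("FAQ entry...\n" * 400)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": STATIC_PREFIX},  # identical every call -> cacheable
            {"role": "user", "content": question},         # only this part changes
        ],
    )
    details = getattr(response.usage, "prompt_tokens_details", None)
    cached = getattr(details, "cached_tokens", 0) if details else 0
    print(f"prompt tokens: {response.usage.prompt_tokens}, served from cache: {cached}")
    return response.choices[0].message.content

ask("How do I reset my password?")
ask("What is your refund policy?")  # the long prefix should now hit the cache
```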
All right, now there are other big players trying to dominate AI video generation, and they're all going hard. Take Kuaishou, for example: they launched their model Kling AI back in June, and it's already one of the top tools out there. Kling AI is integrated into their video editing app and can generate 2-minute videos, just like Seaweed. But Kling AI has its limitations: it mostly generates single-shot takes, meaning it's not as versatile when it comes to complex scenes with multiple angles or camera movements. Still, Kling has racked up over 2.6 million users, and those users have already created over 27 million videos, which is serious volume.

Then there's Pika Labs, one of the OGs in AI video. They just upgraded their tool to Pika 1.5, and this thing is wild. Pika 1.5 comes with more realistic movement, big-screen shots, and these crazy special effects they call Pikaffects that practically break the laws of physics, think characters in your video getting crushed, exploding, or revealing hidden layers like a virtual cake. The tool is already live, and people are sharing their mind-blowing creations on social media.

But let's not get too sidetracked. ByteDance's PixelDance and Seaweed models are built on the Doubao family of foundation models, which are based on something called the Diffusion Transformer (DiT) architecture.
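For the curious, here's a tiny PyTorch sketch of the generic DiT idea, a transformer block whose normalization is modulated by a conditioning vector such as the diffusion timestep embedding. It's the textbook version from the DiT paper, heavily simplified, and definitely not ByteDance's actual implementation.

```python
# Simplified DiT-style block: a standard transformer block with adaptive LayerNorm
# (shift/scale/gate predicted from a conditioning vector, e.g. a timestep embedding).
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    def __init__(self, dim: int, heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.adaLN = nn.Linear(dim, 6 * dim)  # produces shift/scale/gate for both sub-layers

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        s1, sc1, g1, s2, sc2, g2 = self.adaLN(cond).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + sc1.unsqueeze(1)) + s1.unsqueeze(1)
        x = x + g1.unsqueeze(1) * self.attn(h, h, h)[0]
        h = self.norm2(x) * (1 + sc2.unsqueeze(1)) + s2.unsqueeze(1)
        return x + g2.unsqueeze(1) * self.mlp(h)

# Toy usage: 16 video-patch tokens of width 256, conditioned on a timestep embedding.
tokens = torch.randn(2, 16, 256)
timestep_embedding = torch.randn(2, 256)
out = DiTBlock(256, 8)(tokens, timestep_embedding)
print(out.shape)  # torch.Size([2, 16, 256])
```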
It's super technical, but essentially this architecture lets ByteDance optimize these models for things like business applications, which could lower the cost of producing AI-generated videos, great news for anyone who's been scared off by the high prices of AI tools in the past.

Speaking of costs, ByteDance's strategy of slashing prices has really paid off. Since May, they've cut prices so aggressively that they've triggered a price war with other Chinese tech giants like Alibaba and Tencent. That aggressive pricing has fueled ByteDance's growth, and as of now they're processing over 50 million images and 850,000 hours of speech every day. That's insane growth in just a few months.

Now let's switch gears a bit and talk about the hardware side of things. ByteDance is planning to develop a new AI model trained primarily on Huawei chips. This move comes after the US started restricting exports of advanced AI chips, like those from Nvidia, which has been the go-to supplier for many AI companies. ByteDance is now leaning on domestic suppliers like Huawei for its AI chip needs; specifically, they're using the Ascend 910B chip, which is mainly used for less computationally intensive tasks like inference, basically when an AI model is already trained and is making predictions. But here's where things get interesting: ByteDance plans to use these chips to train new AI models. Training an AI model is a much more demanding task that requires far more computational power, so this is a big step for them.
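If you're wondering why training is so much heavier than inference, this small PyTorch sketch shows the difference: inference is a single forward pass with gradients switched off, while training also has to compute a loss, backpropagate gradients, and update every weight, over and over again.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
x = torch.randn(32, 128)

# Inference: the model is already trained; one forward pass, no gradients kept.
model.eval()
with torch.no_grad():
    predictions = model(x).argmax(dim=-1)

# Training: forward pass + loss + backward pass + weight update, repeated many times.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
labels = torch.randint(0, 10, (32,))
for step in range(100):                     # real models run millions of such steps
    loss = nn.functional.cross_entropy(model(x), labels)
    optimizer.zero_grad()
    loss.backward()                         # gradients roughly double the memory and compute
    optimizer.step()                        # every parameter gets updated
```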
But they're facing supply issues: ByteDance has ordered over 100,000 Ascend 910B chips this year but had received fewer than 30,000 as of July, and that slow pace is making it hard for them to fully develop the new model. On top of that, the Ascend 910B chips aren't as powerful as Nvidia's GPUs, so ByteDance is still struggling to reach the performance levels it needs. ByteDance is also one of the largest buyers of Nvidia's H20 AI chips, which were specially designed for the Chinese market to comply with US trade restrictions, and it has been buying chips through Microsoft's cloud computing services, making it Microsoft's biggest client for Nvidia chips in Asia.

So, as we can see, ByteDance is definitely positioning itself to be a major player in the AI video generation space. With PixelDance and Seaweed, they're tackling some of the biggest challenges in AI video production, like maintaining visual consistency between shots and extending video length without sacrificing quality. But at the same time, they're up against some tough competition, especially from companies like Kuaishou, Pika Labs, and of course OpenAI with their yet-to-be-released model Sora. The race is on, and it's going to be fascinating to see who comes out on top.

Anyway, that's all for this video. If you're a content creator, these tools could be a game changer for you, so it's worth staying updated on all of this. Thanks for watching, and don't forget to like, subscribe, and drop a comment below about which AI tool you're most excited about. Catch you in the next one.