Video Transcript:
Okay, thank you. So I will first start with some introductions, and then we'll get the actual content of this class started. First, my name is Dawn. I'm a professor in computer science here at UC Berkeley, and also a co-director of a campus-wide center called the Center for Responsible, Decentralized Intelligence (RDI). I'm the instructor for this class, and we also have a guest co-instructor, Xinyun, from Google, who is also an alumna, my former student here; we are teaching this class together. We also have our great GSIs, Alex and Kun, and our great reader, Tara.

Okay, so this is the teaching staff who will be working together with you this semester. Great. So, everyone has been seeing the exciting growth of large language models; the speed of advancement is just astonishing. However, these large language models operate in a fairly simple manner: they take text as input and produce text as output. What we will cover in this class this semester is the next frontier: large language model agents.
So instead of just taking text as input and producing text as output, here we use a large language model as the key brain for reasoning and planning, and we enable the agent to interact with external environments: to observe the environments and take actions in them. The agents will use external tools, and also external databases and knowledge bases for retrieval, to help them perform these tasks. The rich capabilities of these large language models make LLM agents very flexible: they can easily operate in diverse environments without much task-specific training. These agents can interact with different types of environments, including, for example, surfing the web through different online APIs. They can also be embodied, even in a robot operating in the physical world, and they can sense the environments through different types of inputs, even in multimodal settings including various sensory inputs, and take actions in these diverse environments.

Through this interaction with complex and diverse environments, they can update their memory, they can learn to use tools, they can interact with humans, and they obtain grounding through these interactions as well. And these agents don't just interact with environments; they can interact with other agents through multi-agent interaction and collaboration, including with humans. This multi-agent collaboration can help agents jointly solve even more complex tasks.

So why are LLM agents the next frontier? Why do we need to empower LLMs with the agent framework? For a number of reasons. Solving real-world tasks is never just one pass of taking text input and producing text output; oftentimes it involves a trial-and-error process. Leveraging external tools and retrieval from external knowledge can expand an LLM's capabilities. More importantly, this dynamic agentic workflow can facilitate solving complex tasks by enabling task decomposition, allocation of subtasks to specialized modules, and division of labor for project collaboration. Throughout the course, we will also see that multi-agent generation can help inspire better responses.
Even though LLM agents are a fairly recent development, we have already seen agents helping transform a wide range of domains, including education, law, finance, health care, cybersecurity, you name it. The development is really exciting and fast-improving; there are many leaderboards for different agent benchmarks that you can find online, and you can see the really fast improvements across all these different agent frameworks.

Overall, to better enable agent deployment, there are a number of key challenges we still need to address. First, we need to improve the reasoning and planning capabilities of agents: agents tend to make mistakes when performing complex tasks end-to-end, and it's important to improve their reasoning and planning. We also want to improve embodiment and learning from environment feedback for these LLM agents; LLM agents are still not efficient at recovering from mistakes in long-horizon tasks, so we need to further develop methods for continuous learning and self-improvement, and also improve the multimodal understanding, grounding, and world-model capabilities of these agents. Also, as I mentioned, multi-agent approaches can really help agents provide better solutions for tasks, and developing theory of mind helps multi-agent systems develop better as well. Safety and privacy issues are also very important for agents: LLMs are susceptible to adversarial attacks, and can emit harmful messages or leak private data, and so on. Solving these challenges is really important for deploying agents safely in the real world. And finally, there is human-agent interaction and ethics: how to effectively control LLM agent behaviors and design interaction modes between humans and LLM agents.
Enabling agents to best serve human needs is also really important. To help students learn and better develop methods to address these challenges, the course has been designed to cover a broad spectrum of topics across the different layers of the agent framework, and also across domains. First, in the class we'll cover key model capabilities, including reasoning, planning, and multimodal understanding. We'll also cover popular real-world agent frameworks, to enable students to learn how to better design agent applications and use various agentic flows easily. This will help students learn to use LLM agent frameworks for workflow design, to use retrieval-augmented generation (RAG), and to build multi-agent systems. We'll also cover a number of exciting application domains for these LLM agents, including software and code development, workflow automation, multimodal applications, and enterprise applications. Finally, we'll cover important topics on LLM agent safety and ethics. To cover these wide-ranging topics, we have assembled an amazing team of guest speakers and researchers. The class will be led by me and Xinyun, and we have this amazing crew of guest speakers to help cover these important topics.

[Guest lecture begins.] Before the talk, I want to ask one question for everyone: what do you expect from AI? Let me give you a few seconds to think about it. I can imagine many different answers, like solving the hardest math problems that humans cannot solve, or discovering new scientific theorems, or even achieving AGI. My background is machine learning. I don't know whether, these days,
many people still study machine learning or not, because it feels like "Transformers are all you need", right? As an ML person, I had a small expectation about AI: AI should be able to learn from just a few examples, like humans usually do. In the past decades, the machine learning community spent great effort developing data-efficient methods, like semi-supervised learning, active learning, and meta-learning, and if you look at the papers from the past decade, people were always excited about one-point or two-point gains over the state of the art on paper. But in practice, I never saw those data-efficient approaches succeed; I would say the mission failed. You know, don't feel bad about that. A decade back, that actually motivated me to think about a different problem: what's missing in machine learning? I thought about it for years, and finally I found the answer: reasoning is missing. These days, in particular for people in this course, the answer is so obvious, right? This lecture is about reasoning. Humans can learn from just a few examples because humans can reason, not because of data statistics. Let's start from a toy problem.
In my research, I prefer very simple problems that still contain all the challenging pieces. This problem is called last-letter concatenation; if you are familiar with the neuro-symbolic literature, you'll find similar problems there. For this problem, given a person's name as input, the output should be the concatenation of the last letter of the first name and the last letter of the last name. For example, for "Elon Musk", the last letter of "Elon" is "n" and the last letter of "Musk" is "k", so the output is "nk". It's so simple. And if you had seen this problem a few years ago, you probably would have tried to solve it with a machine learning model, for example a Transformer, whether encoder-decoder or decoder-only. Then you would find that you probably need plenty of labeled examples to train the model, and finally you'd get an accuracy like 85% or 90%, something like that. Now, here is what's interesting about such methods: for such a simple task (I mean, simple for humans), if the method requires a vast amount of labeled data to learn, would you like to call it AI?
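The task itself can be pinned down in a couple of lines. The checker below is my own hypothetical reference implementation, just to make the task precise; it is not from the talk.

```python
def last_letters(name: str) -> str:
    """Ground truth for the last-letter task: concatenate the
    last letter of each word in the name."""
    return "".join(word[-1] for word in name.split())

print(last_letters("Elon Musk"))     # -> "nk"
print(last_letters("Barack Obama"))  # -> "ka"
```

The point of the talk is that this trivially specifiable function took thousands of labeled examples for a from-scratch model to approximate.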
AI means artificial intelligence, right? I suppose an intelligent model should be able to learn this task from just one or two examples. Now let's see how this problem can be solved using large language models. I suppose most people here know what LLMs are, but Professor Song told me to explain them anyway. An LLM is a Transformer model trained to predict the next word. For example, given the text "AI is the future", we mask "future" and give just "AI is the" as the input, and the model predicts what the next word will be. If the prediction is not the word "future", we adjust the parameters to make the model produce the correct one; that's called backpropagation. Of course, you train the model with many, many sentences, for example all the text on the internet. If you don't want to go into the details, you can simply think of training LLMs as training parrots to mimic human language. Actually, I came up with that sentence, and one person wrote to me, quite upset, saying this parrot had him worried about his job. Anyway, that's the model we use. Okay, so then
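As a toy illustration (my own sketch, emphatically not a real LLM), next-word prediction can be mimicked with bigram counts; in a real model, backpropagation plays the role of fitting these statistics, along with far richer ones.

```python
from collections import Counter, defaultdict

# Toy next-word "model" built from bigram counts over a tiny corpus.
corpus = "AI is the future . AI is the future . AI is the key .".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    # return the most frequent word observed right after `word`
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "future" (seen twice vs "key" once)
```

A real LLM does the same job over enormous contexts and vocabularies, which is why "parrot" is a caricature, but a useful one.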
we can just mimic the machine learning process. Training is about predicting the next token: we give whatever we have as input, the model predicts the next token, and then we can feed the generated token back in as input to get the one after. So how do we get an answer from an LLM for this problem? We can simply concatenate the examples we have as the input, and then concatenate the test example, "Barack Obama". You can try this with any LLM and see what happens, and you'll probably get a wrong answer, something like "ck". Of course that's not correct: the last letter of "Barack" is "k" and the last letter of "Obama" is "a", so the output should be "ka". This approach is called few-shot prompting; it just mimics the machine learning process, except that instead of training the model, we put the examples in the input. That's the only difference.

These days we know how to fix this. We just need to add a reasoning process before the answer: we add the explicit steps, "the last letter of 'Elon' is 'n', the last letter of 'Musk' is 'k', so the answer is 'nk'", like that. That's the reasoning process. Now we use this as the new input, and you will see that we get a perfect response from the large language model. So, just like for humans, one demonstration is enough to reach 100% accuracy. That's exactly what I was looking for; we cannot imagine any classical machine learning method achieving this perfect generalization from one example. There's no way. By the way, don't overrate what I said about machine learning.
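The two prompts being contrasted can be written out explicitly. The wording below is my paraphrase of the talk's example; the actual model call is omitted.

```python
# Few-shot prompt: input/output pairs only -- models often answer "ck".
few_shot = (
    "Q: Elon Musk\n"
    "A: nk\n"
    "Q: Barack Obama\n"
    "A:"
)

# Chain-of-thought prompt: the same single demonstration, now with the
# intermediate reasoning steps spelled out before the answer.
cot = (
    "Q: Elon Musk\n"
    'A: the last letter of "Elon" is "n". '
    'the last letter of "Musk" is "k". '
    'Concatenating "n" and "k" gives "nk". The answer is "nk".\n'
    "Q: Barack Obama\n"
    "A:"
)
print(cot)
```

The only difference between the two prompts is the added intermediate steps; that difference alone moves the model from a wrong answer to a reliably correct one.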
Machine learning is still so useful and important for doing research these days. I see many naive mistakes on social media, in the news, and even in papers at conferences, mostly from people with no background in machine learning who just try random ideas.

Interestingly, this idea of adding intermediate steps was proposed many years ago in the literature. There is an amazing paper by researchers at DeepMind, published in 2017, in which they use natural language rationales to solve math problems: they derive the final answer through a series of small steps, and they trained a sequence-to-sequence model from scratch on those rationales. If you know today's work, you'll be surprised by that paper; the authors are like time travelers. In 2021, a team at OpenAI published an amazing dataset called GSM8K, following the idea of that 2017 paper: in this dataset, every problem is accompanied by intermediate steps as the solution, plus a final answer, and they used it to fine-tune GPT-3 models, greatly scaling up the DeepMind work from 2017. In the same year, 2021, a group of researchers at Google Brain (now part of Google DeepMind) published "Show Your Work: Scratchpads for Intermediate Computation with Language Models"; they discovered similar ideas independently, but in the domain of program synthesis, which is why they used abstract symbols there instead of natural language. And probably many people know our work on chain-of-thought prompting. Literally, "chain of thought" is not a term we invented; it's a common English phrase that means a multi-step reasoning process. In that work, we extensively evaluated prompting with intermediate steps and showed amazing results on almost every NLP task.

So, putting all the papers together: in 2017, DeepMind published training with intermediate steps; in 2021, OpenAI fine-tuned with intermediate steps; and in 2021-2022, we had prompting with intermediate steps. You might ask which part is more important. You can see that it actually doesn't matter whether you train, fine-tune, or prompt; what really matters is the intermediate steps. That's the key. So let me summarize: regardless of training, fine-tuning, or prompting, when provided with examples that include intermediate steps, LLMs will generate responses that also include intermediate steps.

Okay, given the intermediate steps, the next question is: is it helpful to introduce reasoning strategies into those examples? Humans, when they solve a problem, often have a strategy. So here is work from our team, least-to-most prompting, in which we enable easy-to-hard generalization by decomposition.
Probably many people have seen this famous book, "How to Solve It" by Polya, a classic in math education. There's a chapter about decomposition: if you go straight into the details, you may lose yourself in the details. Now let's see what's different with decomposition, given this math problem. By the way, the math in this talk is kept at an elementary level; every time I prepare a talk, I check it with my daughter to make sure she can understand it. She's in the fifth grade now. The problem says: Elsa has three apples; Anna has two more apples than Elsa; how many apples do they have together? The difference is that we first show the language model how to break the problem down into subproblems, and then solve them one by one. That's why it's called least-to-most: from the least to the most complex problem. It's a very simple idea, but surprisingly powerful. The essence is how to decompose a complex task into simpler tasks.

Here is the SCAN task for compositional generalization. You can look at the examples: given a natural language command, we need to translate it into a sequence of actions that could be executed by a robot, something like that. If you use least-to-most prompting, we get 99.7% accuracy while using only a tiny fraction of the data as demonstration examples. You might wonder why I chose this task. Actually, I learned about this task from Xinyun, who is here today; she invented a beautiful approach to solve it many years ago. When I first looked at this task, I was really surprised: it looks so straightforward for humans; why is it so difficult for machine learning models?
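The two-stage least-to-most format can be sketched on the apples problem from earlier. This is my paraphrase of the paper's format (decompose first, then solve the subproblems in order, feeding each answer back into the context); the exact wording is illustrative.

```python
# Stage 1: a demonstration that decomposes the problem into subproblems.
decompose = (
    "Q: Elsa has 3 apples. Anna has 2 more apples than Elsa. "
    "How many apples do they have together?\n"
    "A: To answer this, we first need to answer: "
    "How many apples does Anna have?"
)

# Stage 2: solve the subproblems from least to most complex; each answer
# is appended to the context before the next subproblem is asked.
solve = (
    "Q: How many apples does Anna have?\n"
    "A: Anna has 2 more than Elsa's 3 apples, so Anna has 5 apples.\n"
    "Q: How many apples do they have together?\n"
    "A: Elsa has 3 and Anna has 5, so together they have 8 apples."
)

# The subproblems extracted from the second stage, in solving order:
subproblems = [line[3:] for line in solve.splitlines() if line.startswith("Q: ")]
print(subproblems)
```

On SCAN, the decomposition stage would instead split a long command into shorter commands the model already knows how to translate.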
Finally, we could crack it with LLMs. And here is another task, CFQ, text-to-code, again a compositional generalization problem. I don't know if everyone knows the concept of compositional generalization; roughly speaking, the test examples are more difficult than the training (or prompting) examples. For example, for the text-to-code problems, the test problems need longer code snippets. Our approach here is changed a little bit; it's called dynamic least-to-most prompting. We used just 1% of the data and achieved great results, way better than the state-of-the-art results in the literature, and those state-of-the-art results came from specialized architecture design and training, using, of course, the whole training set. So far, any questions? Otherwise I'll go to the next section.

Okay, I suppose this part is quite familiar to everyone. I have two kids: my daughter is 10 years old and my son is 7. When the chain-of-thought prompting paper came out, I overheard a very interesting conversation between them. My daughter asked her little brother, "What's 17 times 3?" The little brother said, "I don't know." Then she asked, "What's 10 times 3?" "30." "What's 7 times 3?" "21." "So what's 17 times 3?" "Oh yeah, I know, 51!" And the funny thing is, my daughter shouted to me, "Daddy, chain-of-thought prompting also works on my little brother!"

Okay, now: why are intermediate steps helpful? You may say that's so natural for humans, but if we are doing research, we have to study it deeper.
Just repeating that LLMs are parrot-like models doesn't help us understand what actually happened. So we have work, published in 2024 in collaboration with colleagues from Stanford, that gives a rigorous mathematical analysis. Here are the key results: a Transformer generating intermediate steps can solve any inherently serial problem, as long as its depth exceeds a constant; and I have to emphasize "constant", which means independent of the input size. However, a Transformer generating direct answers either requires a huge depth to solve the problem or cannot solve it at all. Please look at those statements again before I move to the next slide; you can probably see tons of practical implications of this theory. If your LLM can't solve a problem, you may think about having it generate more intermediate steps, and you could also call external tools, such as search, to help produce intermediate steps. In this LLM agents course, many speakers will talk about how to use external tools, and you can think about how that matches LLMs' strengths and limitations.
One of my big hobbies is finding problems that my daughter can easily solve but where LLMs fail. Okay, so far we have talked about using examples to trigger LLMs to generate step-by-step reasoning. Is it possible to trigger that without using any examples? There's an amazing work here; actually, when this paper came out, I thought it was a joke. It turned out not to be, and I was inspired a lot by it. It's called "let's think step by step": given a question, we don't need any examples; we just need to append "Let's think step by step".
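Zero-shot chain of thought is literally one appended sentence. The trigger phrase comes from the paper just mentioned; the helper name below is mine.

```python
def zero_shot_cot(question: str) -> str:
    # No demonstrations at all: just append the trigger phrase
    # from the "let's think step by step" paper to the question.
    return f"Q: {question}\nA: Let's think step by step."

prompt = zero_shot_cot(
    "I have 3 apples. My dad has 2 more apples than me. "
    "How many apples do we have together?"
)
print(prompt)
```

A model given this prompt tends to continue with intermediate steps before the final answer, with no examples in the context at all.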
Then the model can generate the reasoning steps. It's really cool, but usually the zero-shot approach (zero-shot means no demonstration examples) is worse than few-shot. So we wondered: is there an approach that is still zero-shot but does much better? That leads to another work of ours, called "Large Language Models as Analogical Reasoners". Again from the beautiful book "How to Solve It": Polya describes how to do analogical reasoning when solving math problems. When you see a new problem, you first ask yourself: do you know a related problem, or related methods or strategies? I also really like this quote from Banach; if you have studied functional analysis, you will know Banach spaces, and I was really amazed by the last sentence: "the ultimate mathematician is one who can see analogies between analogies". Of course, I show it here to let you know how far we still are from that.

So, given a simple problem, of course we can say "let's think step by step", but now we can say it differently: first recall a related problem, solve that one, and then solve this one. You can see that the model indeed self-generates relevant examples and knowledge; the recalled problems are not exactly the same as the given problem, but they are useful for it. That's amazing. We tried this on benchmarks and it works really well. In the table, the last row is from the analogical reasoner prompt; of course, you can optimize the prompts yourself and get better results. The most important thing to see is that it's much better than just saying "let's think step by step" (where "step by step" here means zero-shot), and this approach even outperforms manual few-shot prompting. The main reason is that the model automatically generates related questions and knowledge tailored to each different problem. The results on BIG-Bench were great, and there are also results on Codeforces competitive programming; if you are interested in competitive programming, you could try this approach. What we didn't do here is scaling.
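An analogical-reasoning prompt might look like the sketch below; this is my paraphrase of the template (ask the model to self-generate related exemplars, then solve), with illustrative wording.

```python
def analogical_prompt(problem: str) -> str:
    # The model is asked to recall its own relevant exemplars first,
    # instead of us hand-writing few-shot examples for it.
    return (
        f"Problem: {problem}\n"
        "# Instructions:\n"
        "## First, recall three relevant and distinct problems. "
        "For each, describe the problem and explain its solution.\n"
        "## Then solve the initial problem."
    )

print(analogical_prompt(
    "What is the area of the square with vertices at "
    "(-2, 2), (2, -2), (-2, -6), and (-6, -2)?"
))
```

The recalled exemplars differ for every input problem, which is exactly what a fixed, manually written few-shot prompt cannot do.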
You could search the web for related problems and knowledge for each problem you want to solve. So the key idea here: self-generate relevant examples and knowledge for each given problem, instead of using a fixed set of examples, as in manual few-shot prompting.

Okay, so now we've seen that we can use few-shot examples to show the model how to do step-by-step reasoning, and that we can do it zero-shot, without any examples, by just saying "let's think step by step". Now I could ask another question: is it possible to trigger step-by-step reasoning even without any prompt at all, without saying anything, just giving the problem to the model, even a model that is not instruction-tuned? You could say, okay, all the models these days already behave like that, right? You're right, but those models have been instruction-tuned; that means they already had many such examples in the data mixture for training or fine-tuning. We found the answer is yes; this is our work, chain-of-thought reasoning without prompting. Let's look at an example.
The question: "I have 3 apples. My dad has 2 more apples than me. How many apples do we have together?" The approach is actually very simple: at the first decoding step, instead of taking only the most likely token, we look at the top candidate tokens, say the top five, and from each one we continue greedy decoding. For one candidate, say the first token is "5", and the continuation is "5 apples". If the first token is "I", then the full generation becomes: "I have 3 apples, my dad has 2 more apples than me, so he has 5 apples; together we have 8 apples." Yeah, that one is correct. That's very interesting, right? We didn't say anything about reasoning, but the model can do it once you start from different tokens.

Here is another example: "Was Nicolas Cage born in an even or odd year?" One continuation says "Cage was born in an odd year." Another is just "Even" and then a period; a third is "Odd" and then a period. Now, the model may well have a chain-of-thought response somewhere among these continuations; the question is how to find it. You could take the longer responses, since longer responses suggest the model did some reasoning steps. But a more surprising finding is to look at the probability of the final-answer tokens. For the first row, "Nicolas Cage was born in an odd year", the answer probability is quite low. However, for the response that contains a reasoning path, like "Nicolas Cage was born in 1964, an even year", the answer probability jumps to 0.98. That's amazing, right? It seems the model is remarkably well calibrated; I was really surprised when I saw those probabilities. For the rows that just say "Even" or "Odd", the probabilities are really low. So the key observations: pre-trained LLMs already have responses with step-by-step reasoning among the generations started from the top-k first tokens, so we don't need any prompts; and the model has higher confidence in decoding the final answer when a step-by-step reasoning path is present. Here is the comparison between greedy decoding and chain-of-thought decoding: CoT decoding performs much better.
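The control flow of CoT decoding can be sketched with stubbed data; everything below is simulated (no real model), just to show the procedure: branch on the top-k first tokens, decode greedily from each, then rank the paths by the model's confidence in the answer tokens. The candidate strings and probabilities are made-up stand-ins for real model outputs.

```python
# (greedy continuation from one of the top-k first tokens,
#  simulated avg. probability of the answer tokens in that continuation)
candidates = [
    ("5 apples", 0.45),
    ("I have 3 apples, my dad has 5 apples, so together we have 8 apples.",
     0.98),
    ("8", 0.60),
]

def cot_decode(cands):
    # keep the decoding path whose final answer the model
    # is most confident about
    return max(cands, key=lambda c: c[1])[0]

best = cot_decode(candidates)
print(best)  # the path containing step-by-step reasoning wins
```

With a real model, the probabilities would come from the softmax scores of the answer tokens in each branch, and k branches would be decoded from the top-k first-token candidates.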
So far, any questions? Now let's move to the next topic. Generating intermediate steps is helpful, really helpful, but are there any concerns about generating intermediate steps instead of direct answers? You could say it depends on your problem and your needs. Actually, these days we always need to keep in mind that LLMs are probabilistic models of generating next tokens; they are not humans, no matter whether you use clever examples or not. Keep that in mind: it is a probability model. So let's see what an LLM actually does in decoding: it computes the argmax of the joint probability of a reasoning path and a final answer, given the problem. However, what we want is the argmax of the probability of the final answer given the problem; that's what we learned in machine learning. This doesn't mean the reasoning path is unimportant; I'm just saying we first have to make sure the final answer is correct, and then look at the reasoning path. The two objectives are not aligned. Now let's look one step further: to compute the probability of the final answer given the problem, we should sum over all possible reasoning paths; that's how it's computed, as we learn in any probability course. Given a math problem, you can find different solution paths that lead to the same answer, and we need to sum over them. Then, how do we compute this sum? If you have studied machine learning, you know the answer: sampling. So simple. This leads to our work on self-consistency. Probably many people already know self-consistency, but in this talk I really want you to see the underlying motivation, how we approached the problem from first principles.
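In symbols, the two objectives just discussed are (notation mine):

```latex
% Standard decoding maximizes the joint over one reasoning path r
% and answer a:
(\hat r, \hat a) = \arg\max_{r,\, a}\; P(r, a \mid \text{problem})

% But the quantity we actually care about marginalizes out the path:
\hat a = \arg\max_{a}\; P(a \mid \text{problem})
       = \arg\max_{a} \sum_{r} P(r, a \mid \text{problem})
```

Sampling reasoning paths and voting over final answers is a Monte Carlo estimate of that sum.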
So let's look at the question here: given this math problem, you can sample the model multiple times, again and again, and finally you see that the most frequent answer is 18. Note carefully what's going on: we do not choose the most frequent reasoning path; we choose the most frequent answer. That's a huge difference; the reasoning path here is a latent variable. The idea is so simple, and by using self-consistency we simply crushed the state-of-the-art results in the literature at that time. You can see that in doing research, it's really about the idea; you don't have to have a lot of heavy machinery. And of course, our explanation of self-consistency is about probability, about sampling: more consistent results are more likely to be correct. When you look at the curves, if the consistency is more than 80%, the accuracy is nearly 100%.

Okay, a quiz. First: when the model outputs a direct answer without intermediate steps, we sample several times and choose the most common answer; does that help? No: when the answer is a single token, greedy decoding already gives the token with maximum probability. Second: what if we change self-consistency by having the LLM generate multiple responses in one pass, instead of sampling multiple times, and then choosing the most common answer; does that make sense? No. For both answers, we just need to follow the principle: maximize the probability of the final answer given the problem. That's all you need to understand self-consistency; it's a very simple principle, one of the first principles in machine learning. If you know more, you'll recognize this as maximum marginal inference.

Okay, how about free-form answers? There is universal self-consistency (USC) for that; the idea is a little different, but related, so I put it here. Given this problem, "Where do people drink less coffee than they do in Mexico?", each sampled answer is different from the others, but the most common response elements are there: Japan, China, and India. Any questions? Otherwise we can move to the next section.
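Self-consistency reduces to a few lines once you have sampled responses. Below is a sketch with stubbed samples; a real implementation would sample the LLM with temperature > 0 and parse each final answer (the parsing rule here, "last dollar amount in the response", is my own simplification).

```python
from collections import Counter
import re

# Stubbed sampled responses; in practice these come from
# temperature sampling of the same prompt.
samples = [
    "She has 9 + 9 = 18 eggs left, so she makes $18.",
    "Half of 10 is 5, so the answer is $26.",   # a wrong reasoning path
    "9 eggs at $2 each is $18.",
    "The answer is $18.",
]

def self_consistency(responses):
    # extract each final answer, then majority-vote over ANSWERS
    # (not over reasoning paths -- the paths are marginalized out)
    answers = [re.findall(r"\$(\d+)", r)[-1] for r in responses]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency(samples))  # -> "18"
```

Note that the three winning samples reach 18 by different reasoning paths; the vote is over the final answer only, which is exactly the marginalization discussed above.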
Okay, to recap self-consistency: sample multiple times, then choose the most frequent answer as the final answer. Next, I'm going to talk about limitations. The first: LLMs can be easily distracted by irrelevant context. From psychology studies, we know that irrelevant information may significantly decrease some children's, and even adults', problem-solving accuracy, and I wanted to check whether this observation holds for LLMs. Here is a simple problem, where the highlighted text is manually added: "Mario's hourly wage is $10" is irrelevant to the original problem, but after adding it, the model produces a wrong solution. Interestingly, if we add a prompt like "ignore irrelevant context", the model immediately notices the distractor and gets the problem right. But that trick stops working once the irrelevant content gets big: even if we simply add innocuous sentences like "the sky is blue" and "the grass is green" to make the input long, you will see a significant performance drop across all LLMs.
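In spirit, the distraction probe just splices irrelevant but fluent sentences into a clean problem. Everything below is illustrative; the "Mario" distractor and the sky/grass sentences echo the talk, while the base problem is my own toy example.

```python
problem = ("Lucy has $19. She buys 5 notebooks that cost $3 each. "
           "How much money does she have left?")

distractors = [
    "The sky is blue.",
    "The grass is green.",
    "Mario's hourly wage is $10.",  # topically plausible but irrelevant
]

def add_irrelevant_context(p: str, ds: list[str]) -> str:
    # splice the distractor sentences in front of the real question;
    # the correct answer is unchanged, only the context grows
    return " ".join(ds) + " " + p

print(add_irrelevant_context(problem, distractors))
```

Scoring a model on both the clean and the padded variant, over many problems, gives the performance-drop measurement described above.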
The next limitation: LLMs cannot self-correct reasoning yet. Let's start from a math problem again; this one is actually a little tricky. You can see the model gives a wrong answer. Then we prompt it with "Review your previous answer and find problems with your answer." Interestingly, after reviewing, the model recognizes its mistake. Then we prompt again: "Based on the problems you found, improve your answer," and the final answer is now correct. But here's the catch: if the original answer was correct and we apply the same prompts, the model can change it into a wrong one. That's the problem. So overall, while allowing LLMs to review their generated responses can help correct inaccurate answers, it also risks turning correct answers into incorrect ones. We ran extensive studies on benchmarks like GSM8K, CommonsenseQA, and HotpotQA, and we saw no improvement from self-correction methods; they just made things worse. You may have seen reported improvements in the literature; those works do report improved reasoning, but they use oracle answers. "Oracle" means you only prompt the LLM to correct its answer when the answer is wrong; the problem is that, in reality, the model doesn't know whether its answer is correct or wrong.

This also relates to multi-agent debate, where multiple LLM agents debate each other to reach agreement or consensus. We tried this approach too, and we found the trick lies in how many responses are generated: with three agents, if each generates one response, that's three responses; after debate rounds, you have some larger number, n, of responses in total. So just compare against self-consistency with the same n responses and see what happens. We found that the debate approaches cannot outperform self-consistency, which simply samples multiple times and takes the most frequent answer as the final prediction. The lesson we learned: oracle feedback is needed for LLMs to self-correct. That is exactly what our work Self-Debug does: it naturally leverages unit tests as the oracle, since for coding problems unit tests are naturally available.
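Unit tests provide exactly the oracle signal that generic self-correction lacks. Below is a minimal self-debug loop; the function names and loop structure are my own sketch, not the paper's code, and `generate` stands in for an LLM call.

```python
def self_debug(generate, unit_tests, max_rounds=3):
    """Repeatedly ask the "model" to fix its code, feeding failing
    unit tests back as oracle feedback."""
    feedback = None
    candidate = None
    for _ in range(max_rounds):
        candidate = generate(feedback)        # LLM call stand-in
        failures = [(x, y) for x, y in unit_tests if candidate(x) != y]
        if not failures:
            return candidate                  # all tests pass: accept
        feedback = f"Failed on (input, expected) pairs: {failures}"
    return candidate

# Stub "model": the first attempt is buggy, the retry is fixed.
attempts = iter([lambda x: x * 3,   # buggy: triples instead of doubling
                 lambda x: x * 2])  # fixed
fixed = self_debug(lambda fb: next(attempts), [(1, 2), (5, 10)])
print(fixed(7))  # -> 14
```

The loop only ever revises when a test fails, so it cannot turn a passing solution into a failing one, which is precisely the failure mode of oracle-free self-correction.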
quite early and we didn't make make a and then finally we move to the and the last lastation I want talk about the premise order matters in reging so um you know in the C B where every time we show T reports from archive or to somewhere and the people will show great results for example recently model could be FL and result onk and I I have to tr those numbers in the C the model Trend was all data from internet there already some problems so one of um working on my team is to generate
different Evol tasks so to to test the models so that here here we just did a simple check we are given this original gsmk problem we U reorder the sentence a little bit and see if the model can solve it so here know in this in the OR problem he loses 10 beers while getting home and we could you know just uh move this sentence to to the to the end and see what happened we just did some change for some JK problems and we notice that there are about 10 points drop on solving R
across all from TM so here the response here can look compare response for the problem for or problem and theorder the problem so um you see that the model actually just the model just know how to solve the problem sequentially they couldn't go back and forth and one could say okay that may related to some semantic understanding about reason okay then we design another another task logical inference is is more pure than the mass problems yes say B if if then if then if then right even even we don't use real real words we just
use random tokens here. Given the rules and the facts, the model performs logical inference to answer the query. In the original problem, the rules are ordered according to when they are used in the inference process, though, I should point out, not all of the rules are necessary for the query. Then we randomly reorder those rules: we only reorder the rules relevant to the query, while rules irrelevant to the query keep their positions. Surprisingly, we then saw a drop of more than 30 points across all frontier models.
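A minimal sketch of that rule-shuffling setup, with synthetic rules and a hypothetical `relevant_idx` marking which rules the query actually needs:

```python
import random

def shuffle_relevant_rules(rules, relevant_idx, seed=1):
    """Shuffle only the rules relevant to the query; distractor rules
    keep their original positions, as in the experiment described."""
    relevant = [rules[i] for i in relevant_idx]
    random.Random(seed).shuffle(relevant)
    out = list(rules)
    for pos, rule in zip(relevant_idx, relevant):
        out[pos] = rule
    return out

# Synthetic rules over abstract tokens; the rule at index 2 is an
# irrelevant distractor, so it must stay where it is.
rules = ["if A then B", "if B then C", "if X then Y", "if C then D"]
shuffled = shuffle_relevant_rules(rules, relevant_idx=[0, 1, 3])
print(shuffled)
```

Because the rule set itself is unchanged, the inference required to answer the query is identical; only the presentation order of the relevant rules moves.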
From my personal experience, I think it is really important to design your own experiments and evaluations when doing research. Okay, now let me summarize the talk. The first thing I talked about is that generating intermediate steps greatly improves performance. You can do training or fine-tuning with intermediate steps, but you can also do zero-shot prompting, analogical reasoning, or some kind of special decoding, like the chain-of-thought decoding I presented today. Also, self-consistency greatly improves step-by-step reasoning, no matter whether it comes from a pretrained
model or a fine-tuned one. I also talked about a lot of limitations: irrelevant context, self-correction, and premise order all matter for reasoning performance. So what is next? What problems should we work on? The most important thing here is that when we work on something, we should not just say we work on "AI"; that is not a problem. The real task is to define the right problem to work on and to solve it from first principles, not just by following existing work. That is why this is super important. Actually, I am organizing a conference called the Conference on Language Modeling with a group of amazing people,
and this is the first-ever conference dedicated to language modeling, and you are welcome to join. Yeah, that's it. Thanks.