Say hello to OpenAI o1—a new series of AI models designed to spend more time thinking before they respond.
Video Transcript:
The first example I have is very simple: just counting the letter R's in a word, "strawberry." Let's start with the traditional existing model, GPT-4o. As you can see, the model fails on this: there are three R's, but the model says there are only two. So why does an advanced model like GPT-4o make such a simple mistake? That's because models like this are built to process text not as characters or words, but as something in between, sometimes called subwords. So if we ask a model a question that involves understanding the notion of characters and words, the model can easily make mistakes, because it's not really built for that.

Now let's go on to our new model and type in the same problem. This is o1-preview, which is a reasoning model. Unlike GPT-4o, it starts thinking about the problem before outputting the answer. And now it outputs the answer: there are three R's in the word "strawberry." That's the correct answer, and this example shows that even for a seemingly unrelated counting problem, having reasoning built in can help avoid mistakes, because the model can look at its own output, review it, and be more careful.
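The tokenization point above can be sketched in a few lines of Python. This is an illustrative example only: the subword split shown is hypothetical and is not GPT-4o's actual tokenizer output; it just shows why a model that sees token units rather than characters has no direct view of the letters inside a word.

```python
def count_letter(word: str, letter: str) -> int:
    """Character-level count -- what the question actually asks for."""
    return word.count(letter)

# Operating on raw characters, each 'r' is directly visible:
print(count_letter("strawberry", "r"))  # 3

# A subword tokenizer might split the word into opaque units such as
# (hypothetical split, not a real tokenizer's output):
hypothetical_tokens = ["str", "aw", "berry"]

# The model then sees three token IDs, not ten characters, so the
# number of 'r's inside each token is not directly exposed to it.
print(len(hypothetical_tokens))         # 3 tokens
print(len("".join(hypothetical_tokens)))  # 10 characters
```

The mismatch between the token view (3 units) and the character view (10 letters) is why a purely next-token model can miscount letters, and why pausing to reason about its own output helps.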