Hi everyone! Welcome to our new course, Introduction to Notebook LM. Notebook LM is a research assistant powered by Gemini 1.5 Pro and launched by Google. Notebook LM has taken the internet by storm and is now being used for all sorts of use cases, such as generating audio podcasts, which is a very interesting and viral feature. Today, in this course, we are going to dive into the best practices for how to use this powerful research assistant. We will touch on a variety of use cases and compelling ways to use Notebook LM in the workplace and for many interesting personal projects. Do stay tuned for all our demos and exercises! I am the instructor for this course. My name is Elvis Saravia, and I'm the CEO and co-founder of DAIR.AI, where we mostly focus on professional services and on helping AI startups build with large language models and generative AI. I have been an NLP researcher for the last 10 years, and my main focus has been on information retrieval systems and language models. I'm also an independent AI researcher and have worked with teams such as Elastic, Meta AI, Papers with Code, and many other AI startups. As I mentioned earlier, I do a lot of technical AI consulting, mostly with AI startups, and I am very passionate about delivering professional technical training such as this one. You can find me on X at @omarsar0. Now for the objectives of this course: you will learn how to effectively use Notebook LM as a research assistant. We will cover best practices and provide all sorts of tips and tricks on how to make the most out of Notebook LM for your
personal and professional use cases. You'll also learn how to unlock unique and advanced use cases with Notebook LM. We cover a wide range of projects in this course, from generating podcasts for newsletters to generating quizzes, exercises, and additional notes for students using Google Slides presentations. Here's the structure for this course: we're going to get started with Notebook LM and slowly get you familiarized with the tool and the different features available. Then, we are going to present different use cases and examples. For instance, how do you analyze PDFs, and what can you do with that? Then we're going to touch on probably one of the most popular ways Notebook LM is being used today, which is audio overviews: how you can generate audio overviews and what you can do with them. Then, we will cover an interesting use case for how to analyze YouTube videos and chat about those videos, similarly to how you can chat with PDFs; that's a feature that's supported in Notebook LM. Finally, we're going to provide you with an example of how to work with slides: how to add Google Slides as a source and demonstrate some of the capabilities, such as image understanding, understanding of tables, and different things you can do with those slides. In the end, we really want to give you a comprehensive set of use cases and ideas for how you might use Notebook LM on a day-to-day basis. We're going to start slowly, so we will show you first how to create your first notebook and then go through some of the features that are offered in Notebook LM. Later down the road, we will do deeper dives into the functionalities and some use cases that you can start to experiment with. Our hope is that
it inspires you to use it in your personal life and also in your professional work. To get started, you need to go to notebooklm.google.com; I'll provide a link down below so you can access that. Create an account using a Google account. There are some restrictions on who can access Notebook LM, so you have to follow the instructions in the link that I'm providing. Once you have set up an account, you will be presented with the following screen. When you're getting started, the first thing you see is "Create Your First Notebook." I don't have any notebooks here that I've created so far, but I do see some example notebooks. Now, I highly recommend that you look at the introduction to Notebook LM before you get started, because there's a lot of great documentation that they have posted in Notebook LM itself. The Google team has done a really good job documenting how you may use Notebook LM, what the limitations are, and so forth. In this course, we will provide you with ideas on how to use Notebook LM for different use cases and various kinds of documents you may have around.
We will also go through some best practices and follow some of the recommendations from the Google team. We'll go back to Notebook LM and start from the front page, and then create our first notebook. We can go here, click create, and we will be presented with this screen. Everything about Notebook LM is centered around sources. The idea is that you provide sources, and then you can use the Gemini model to interact with those sources. That's the overarching idea at a high level. Gemini 1.5 Pro is one of Google's best models available, so it's a very powerful model we can take advantage of to do all sorts of creative tasks around the sources we have. Sources can be audio files, Markdown documents, PDFs, Google Docs, Google Slides, websites, YouTube videos, and even notes that you can paste as text. Those are the types of sources that are supported by Notebook LM. What you need to do is drag a file here if you have one, or use one of the options below. We're going to go through all these different options throughout the course, so don't worry about that right now.
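Notebook LM itself is a web app with no public API as of this recording, so everything in this course happens in the browser. But if you're curious what this "sources plus Gemini" pattern looks like in code, here is a minimal sketch using the google-generativeai Python package; the file name and the question are placeholders I made up for illustration, and this is not Notebook LM's own API.

```python
# Minimal sketch of the "ground the model in a source" idea that Notebook LM
# is built around. Requires `pip install google-generativeai` and an API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # replace with your own key
model = genai.GenerativeModel("gemini-1.5-pro")

# A "source" here is just text we pass along with the question.
# notes.md is a hypothetical file standing in for any pasted-text source.
with open("notes.md", "r", encoding="utf-8") as f:
    source_text = f.read()

prompt = (
    "Answer the question using only the source below, and point to the "
    "passage you relied on.\n\n"
    f"SOURCE:\n{source_text}\n\n"
    "QUESTION: What are the main topics covered in this source?"
)

response = model.generate_content(prompt)
print(response.text)
```

In Notebook LM you never write any of this yourself; the point is just that a notebook is, conceptually, a set of sources plus a Gemini model that is instructed to answer from them and cite them.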
For this very simple getting-started section, we're going to add two sources. Recently, Google DeepMind released this very interesting work called AlphaChip, and as an AI researcher, I'm very interested in it. Most of the time, I really don't have time to go through some of these announcements, so my plan is to use Notebook LM to give me a quick overview of what this launch was about. What is AlphaChip about? Why are people so excited about this particular announcement? So, I'm going to give it some sources. I have two web pages that I'm going to add as sources to Notebook LM. The first one is this article by Google DeepMind on AlphaChip and how it transforms computer chip design. This one was released on the 26th of September, 2024. Instead of copying and pasting it into Notebook LM, I can just provide the link, and that's what I'm about to do. So, copy the link to the web page and then paste it over to Notebook LM. Now I am in Notebook LM, I have that link copied, and I'm going to paste it here. There are two options here for pasting links: YouTube and website. We're going to touch on YouTube later on in the course, so I'm going to go and copy over my next URL. Again, I'm providing the URLs below this video. I'm going to add another source. I can go here to this plus sign, choose "website," and then I can paste my second source. This one is an addendum that was provided as part of this announcement; the addendum was published in Nature, so it provides more information about what the updates are in this particular announcement. This is something that was
previously released, and there are a lot of new things here, and that's what I want to find out: what are the new things? What is exciting about this particular announcement? Note the notices shown here: only the visible text on the website will be imported at this moment, and paid articles are not supported, so do pay attention to that. We're going to insert this, and now you'll see it's being added here. All right, so we have two sources, and we can actually add up to 50 sources per notebook. Notice I mentioned "notebook," so this is
a notebook that I've created, and the notebook has a name as well. Here is a name. So far it's untitled, but I'm going to rename this to "AlphaChip Research," so that's going to be the name of my notebook. Okay, and now it's going to be saved as that. Just to show you that it actually saved it, I'm going to go back to the homepage. I can just click here on the logo, and it takes me to the front page. This is my dashboard; I have the example notebooks, and I also have this AlphaChip Research notebook
that I've just created. I can continue to create new notebooks. I'm going to click on it again, and you will see that it has the two sources, so those were saved automatically. From here, we have a bunch of options, and this is where the fun actually starts. After we add our sources, we can do a few things. We can add a note. So the first thing I want to show you here is how to add a note. There is this option right here. As of the time of this recording, which is October the 1st, we
can add up to a thousand notes. These notes could be written by you, or they could be notes that correspond to responses that you get from the model. I'll show you that step later when we are using the chat option available in Notebook LM. Just to show you how this works, I'm going to type something here and save a note. It could be anything; I'm going to say, "Researching the new release of the AlphaChip model by Google DeepMind." Okay, very simple note, nothing too fancy here. I can give it a title; I'll just say
"Research Purpose." That's it. I just wanted to show you how to add a note, and this is referred to as a written note here. You can add styling if you want; that's your choice, but I'll leave it as is for now. So, that's saved automatically, and I could do a few things with these notes. I'll show you some interesting things that you can do, but because this is a very simple note, I'm going to skip that step and then show you later on in the course the things that you can do with notes and how
you can engage with them and interact with them and do all sorts of conversions and analyses on these written notes. So the fun part of Notebook LM is this notebook guide. I'm going to click on this notebook guide here at the bottom, and what you see here is a bunch of options. Now that I've uploaded these sources, it gives me the option to create an FAQ, study guide, table of contents, timeline, or briefing doc; those are referred to as pre-formatted guides. Those are a good starting point. It also gives you a summary of what these sources are
talking about, and then it has the feature that everyone is talking about, which is the audio overview: basically a deep-dive conversation involving two hosts, and it's English only. It's about the sources that you have passed in, and what I like about the audio overview is that it's super engaging. As you will see, we'll go through many examples of how to use it later in the course, so stay tuned for that. But just to show you that you can generate an audio overview here, I will click on this button, and you will see here that it's saying
"generating conversation." This might take a few minutes; no need to stick around, so you can do other things while you wait for your audio overview. We will have a specific section on audio overview, and we will do a deeper dive on this and some tips on how to use this feature. You will see that we have the audio overview generated now, and you can play it. So, I'll play a few seconds just to demonstrate what the audio overview of the Alpha chip announcement was. All right, buckle up because we're diving into some seriously cool tech
today: AI that designs the brains of computers. We're talking about AlphaChip, a system from Google DeepMind that's changing the game for chip design. You've sent us a couple of articles on this, and let me tell you, it's a wild ride. It really is fascinating stuff. To understand why this is such a big deal, we need to start with what a chip layout actually is. Imagine a city plan, but instead of buildings and streets, it's transistors and circuits, millions, even billions of them, all packed onto a tiny chip. Okay, so like a micro-metropolis with its own crazy traffic flow. And designing these layouts, that's where it gets really complicated, right? Exactly. It's a painstaking process that hasn't fundamentally changed in decades. Skilled engineers spend weeks, sometimes months, meticulously placing each component to optimize performance, and this reliance on human brain power has actually created a bottleneck in chip development. So, AlphaChip is the AI that designs AI chips; it's like AI making itself smarter, right? It does have a certain elegance to it. Essentially, they've trained AlphaChip to approach chip design as a game. Imagine a blank grid representing the chip. AlphaChip gets to place the components one by one, like pieces on a game board, and it receives rewards based on how efficient the final layout is. So, it's learning by playing, figuring out the best moves based on the feedback it gets. Precisely! But here's where it gets really interesting." So, I played a minute and a half there just to demonstrate what the feature is, but later we're going to go through a deeper dive on this feature and other things that you can do with Notebook LM. So, there are also these suggested questions, and these questions are based on the
sources as well, so everything in the notebook is centered around sources. You can ask a simple question, or you can select one of the suggestions; for instance, I can choose this one: "What are the key ways AlphaChip has revolutionized computer design?" I'll click on that, and notice that now it takes me into this chat mode, and it's generating a response. Okay, after a few seconds, it generated this response here, and I can continue the conversation with some of the suggested follow-up questions down below. There is a lot more
here that we can explore, but I want to keep it at that for now. Hopefully, you see how exciting this is and how powerful this can be as a research assistant. So, stay tuned for more as we go through deeper dives into all the features and functionalities, going through use cases and very compelling ways to use Notebook LM for personal and professional work. I do a lot of engineering work, but I also do a lot of AI research work, so I'm both an engineer and a researcher. I read a ton of papers, and one of the ways that I've been using Notebook LM is as a research assistant that helps me stay up to date with the AI research I'm interested in. I think this is a very interesting use case because Notebook LM can read sources and also understand PDF documents. At the end of the day, it's using Gemini 1.5 Pro, and I've already used that model to do a lot of really complex analysis on research papers I'm interested in. In
this section I'm going to show you how I use Notebook LM as an AI research assistant. So, the first thing I need to do is create a new notebook. I'm going to create that here, and then I'm going to add my source. This will be for every new notebook I create. It's how it's going to ask me to add sources; in fact, sources are a requirement for a notebook. You cannot really interact with the notebook or use the chat function or generate audio overviews if you don't have sources. The source I'm going to use in
this demo is a PDF; most of the papers come in PDF format, so that's the format that I'm going to be using this time. So, I'm going to choose file, and then I'm going to select my research paper. As someone that keeps track of prompt engineering techniques, I've been reading this paper called "Meta Prompting," or I'm interested in reading it, and I would love to use Notebook LM to help me get a better understanding of what this is. So, I'm going to click on it, and then I'm going to open it. I can see that
it's uploading, and it uploads really quickly. I can actually click on it, and you'll see that it has already uploaded this for me. What I get here is a source guide, so I get the summary for it, and I also get key topics that are discussed in this paper. So that's really interesting, right? Because I can click on those, and I can drill down into whatever topic I want. So let's do that as an example. I'm going to click on, let's say, something like "reasoning task," and notice that it says "discuss reasoning task." Those are
preformatted chats that you can use, because when you click on one, it is submitted here as a prompt as part of this chat. Now, I get a drill-down on the reasoning task. So, just in case you're lacking inspiration and you want to drill into what this work, this particular research paper, is about, you can literally just click on buttons and use those preformatted topics, questions, and even the guides that are available for you here in the notebook guide section. So, I'm going to close this one. I'm going to name the notebook; that's something that you have to get used to. Maybe this changes in the future, but for now, you have to do this manually. Ideally, this should be automatic, but let's just add a name for it. I want to call this "Meta Prompting Paper Analysis." I can do all the regular stuff; I can add notes, and I can even add more sources here. This is something that we are going to be experimenting with in future demonstrations in this course. Now I can do a bunch of things with this particular research paper that I've uploaded.
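As a side note, if you ever want to script this kind of PDF analysis outside of Notebook LM, the Gemini API can also take files directly. The sketch below is my own illustration, not anything Notebook LM exposes; the file name and the question are placeholders.

```python
# Rough sketch: asking Gemini 1.5 Pro about a local PDF via the File API.
# "meta-prompting.pdf" is a placeholder for whatever paper you have locally.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

paper = genai.upload_file(path="meta-prompting.pdf")

response = model.generate_content([
    paper,
    "Summarize this paper and list the key topics it discusses.",
])
print(response.text)
```

The source guide and key topics you see inside Notebook LM are essentially this kind of call done for you, with the citation bookkeeping added on top.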
I can do the audio overview, but I'm actually going to leave this part for a specific section that I have coming later down the road in the course, so I'm going to skip this for now. What I will do is actually click on one of the suggested questions. I really like the suggested questions because they are preconfigured for you to start this exploration of whatever this research paper is about. I think this is a really powerful feature. So, the one I'm going to select here is, let me see, what makes more sense? All right, so
it says, "How does meta prompting differ from future prompting in terms of structure, application, and effectiveness in solving complex reasoning tasks?" I think that's a very interesting one, and as a researcher, I want to know the difference. Maybe this gives me an idea of what meta prompting is all about and why so many people are excited about it and why people are talking about it. I'm going to click on that, and you can see that now it prompts the model here in the chat window, and I see the question here. Now, the model is generating
a response. Okay, so we have a response from the model. It's a very lengthy response, and obviously, what we can do here is ask the model to shorten it, but we're not going to do that; that's something that you can try on your own. It provides a title here, or a heading: "Meta Prompting vs. Few-Shot Prompting: Structure, Application, and Effectiveness." The response is quite comprehensive, and this is something that you have to get used to with these language models. They tend to be verbose. By default, they want to generate very long responses,
so it's your job to know how to prompt these models to give you shorter, more concise responses if that's what you want. Then you have to prompt the model to do that, but we haven't really asked it to do that yet. Okay, so here's the breakdown, and it basically tells me this is the structure. So, meta prompting is like this; it gives me a definition of it, and then it has all these citations as well. We'll go through that later. Then it has the application of it. The application of it will help me better understand
it, and notice that for each of these sections, it does have a bit about few-shot prompting, and that should help me differentiate between these two concepts or these two prompting techniques; at least that's the idea, I believe. It says the sources highlight that meta prompting emphasizes structure and syntax, making it well-suited for interacting with symbolic systems and code environments, which are crucial for complex reasoning tasks. So, let's go through one of the citations here. I'm going to pick this last one here since I've already read this part out. If you hover over it, it gives you an
explanation, but you really cannot tell where and in which part of the paper this is. To get a better sense of that, you can click on the citation itself, and then it will take you directly to where that is located in the paper. So, this is how the paper has been extracted from the PDF. You see the actual text where this particular statement was pulled from, so that's really nice to be able to know the source of truth because that's how you verify whether this is a correct statement or if the model is just making
it up. But one thing about Notebook LM that I've noticed from my own experimentation is that it usually doesn't hallucinate; because it's forced to use citations, it almost always gives you correct, truthful statements. I really like the citation feature because of that. You can always validate the claims, and that's how you would use the citation feature. Now, as I continue doing this research, I can actually save this response as a note, so I'm going to hit "Save to note" right here. You can see that now it's saved as a note. I can actually go here
and change the name of it again. Ideally, this should be done by the LLM or can be done by the LLM; it can be automated, but for now, we have to do this manually. So, I will say "meta prompting" versus "few-shot prompting." Here we go! So, that's our little note right there, which is a saved response note. Notice that it is different from our note that we write manually. One thing you'll note about these saved response notes is that they only show you 10 citations, right? So you only get 10, but in reality, this whole
note has up to 11 citations. Here, you will see that this one has 11 citations, so the note shows 10. That's just something to keep in mind; you don't see all the citations; you only see up to 10. So, 10 is the limit there for what you see in the note preview, but here, when you go into the chat, you see the full list of citations. Something that you can also do is go to the actual document here or the research paper. If you are reading through this, you can also save those as notes. So,
this is typically how you would save notes if you're reading a book or something like that. Let's say I wanted to actually save the abstract as a note as an example. So, what I'm going to do is select this entire abstract. I think it finishes right here. Notice that some suggestions are mentioned here, and one of those suggestions is to "add to note," so the selected text will be added as a note. That's another way to create a note. That's really cool because sometimes, maybe you are reading through this and you found something interesting, and
you want to make a note of it. This is a very cool feature for that. Instead of pasting things around and manually creating the note, you can just basically hit this button right here. So, let's do that: add to note. You will see that it was quickly added here as part of my notes. It says "code from meta prompting." In fact, I'm going to change that, and I'm just going to say "abstract," so now it's explicit that this is the abstract. This is how I continue to analyze the paper and try to understand it better.
One thing I also do with any paper that I'm reading is I would like to know what the contributions are. Apparently, there are some contributions here, but this is a very lengthy contribution. So, what I'm going to do is actually select it, and then I can summarize to notes. This is actually another very useful feature. Something that you can also do while you read the paper, if there's something that's very difficult to understand, is use this option here: "help me understand." So, I'm just going to summarize the contributions and see what the model gives me.
Here, it's generating a note automatically, so I can open the note, and you will see that now it has nicely formatted into bullet points. It says, "Here are the main takeaways of the passage condensed into bullet points." So, it introduces meta prompting first, then it has a theoretical foundation, and then it talks about recursive meta prompting. Then it has experimental validation, and then it discusses zero-shot performance, and you can see the performance there. So, it's not explicit here that these are contributions; it seems to me this is more like the takeaways from the research paper.
You must understand that if you're not really prompting the model explicitly to give you contributions, then it will take a guess on what you want just based on the selection. So, maybe it did not really do a good job at the contributions. One thing I can do now is be more specific with the model. I go to the chat part, and I can do that explicitly. This is where the chat actually comes into play. I can go to the chat window here. I know this is selected because, by default, it's going to be selected. Then,
I can ask it, "What are the contributions of the paper?" In fact, I'm going to say, "What are the main contributions of the paper?" There could be a lot of contributions, but what are the main ones? Then I'm just going to ask it. You can see here now it gets into all the details, right? It generates the main contributions of the paper based on information provided in the sources. The authors propose meta prompting, then a theoretical framework, followed by a distinction from few-shot prompting. There is a lot of comparison with few-shot prompting, apparently, and then
meta prompting for prompting tasks and recursive meta prompting as well. So, there is another version called recursive meta prompting. Then, there is empirical validation of the meta prompting effectiveness, so that was part of the contribution. Finally, expanding meta prompting to multimodal foundational models is also a part of the contribution. Again, you can also go back to the citations, and you will get the citations for where this particular thing was mentioned. So, I can go here, and here is where it was mentioned. I can go for deeper reading on why this was considered a contribution. So,
that's how you typically work on this. And again, what I can do here is save this to notes, so this will be "Main Contributions." Okay, so that's saved. Now, this other one, again, wasn't that useful, so what I'm going to do is select it, and I can also delete notes. So, I'm just going to delete this one. This is how you experiment with Notebook LM; maybe adding notes with the suggested prompts wasn't enough, and in that case, I used the chat functionality to generate something more precise. You have to experiment to see what works
best for the data that you are making available to Notebook LM. As a researcher, something that's very common practice if you are from a research lab is that you need to present papers; that's something you have to get familiar with. So, something you can do here in Notebook LM is ask it to summarize something into a slide. Why not? Let's try that. What I'm going to do is select all these notes, and then it gives me a bunch of suggestions here, but I don't want any of these suggestions. What I want is to actually generate
a slide from the content that is in these notes, so I can use the chat functionality for that. Notice that it's saying it has selected three notes, and it's going to use those three notes specifically to generate the slide that I want. So, I'm going to say, "Create one short slide for the concepts mentioned in the notes." Let's see if it follows exactly what I want here. So it says, "Create one short slide for the concepts mentioned in the notes." This is about those notes, so I just want to create a slideshow, and this is
going to be my first slide. Again, these models tend to always want to generate something very long, so you can see here that even though I'm saying "one short slide," it's basically trying to create a full slide. I can see it's formatting things really nicely, but this would be too heavy for a slide. Although I really like how it's summarizing things: it's distinguishing meta prompting from few-shot prompting, it's talking about applications overall, the structure is really nice, and it's also talking about how effective meta prompting is for complex reasoning. That's kind of where you
want to use this, but I think this is too long for a presentation. So, what I can do is save it to notes, then I can name this; I'll just call it "first slide." Then what I can do is deselect all the other notes and just select this one. Then, I can ask it to summarize just that. Maybe it generated a very long response because I gave it too many sources; that could also be the case. But in general, I would say models like Gemini tend to always want to produce very long text; that's something
you have to know, and something you have to kind of work around to be able to prompt the model better or guide the model better. And that’s what we’re trying to do here. So, I'm going to use this note. You can see that one note is selected, and then I'm going to prompt it again to create a shorter version of this: "Create a shorter version of the slide, maximum three paragraphs." If I'm more specific, obviously it will help them more, so I'm testing again if I can get a shorter version of that. So, I'm iterating,
right? And that’s kind of a process that you have to get comfortable with when you work with these LLMs. And here you go—this is a much shorter version. I can get shorter versions of this; I can tell it to just give me three sentences or something like that. And so, that’s how you kind of experiment with Notebook LM, the model, the chat window, and the notes. Once I’m happy with the results here and happy with my notes, I’m very excited about what I have found. I could also create my own notes and so on. I
could also share this with my lab mates; my lab mates might also be interested in this approach. So, if I’m in a research lab, I could also share this with my adviser; I could share this with my research lab mates, and they also get to see what meta prompting is about and learn very quickly what it's about because I've already done the work together with Notebook LM. So, there is a feature here called "Share," so you can click on share; this shares basically the notebook. You can select who you want to share with right here.
So, for instance, I have multiple accounts; I’m sharing it with myself, and you can assign different roles. You can assign "viewer" if you only want people to view the notebook and not change anything, or you can also revoke the access if that’s something that you want to do, or you can also set them as an editor. If you set them as an editor, they will be able to change, add notes, and things like that to the notebook. So, I’m just going to select "viewer," and I’m going to send. I can also just copy the link
as well, but "viewer" is the default role. I can then hit send here, and that will be sent to whoever you added to share this notebook with. So, that's it for this section; hopefully, you learned a few things about how to take advantage of the notes, how to use the chat feature, how to upload PDF files, and how to interact with them, taking advantage of the citations to verify the information the model is producing; that's really important when working with LLMs. I think that's one of the powerful features
that I really like about Notebook LM, because you can always verify answers, and the model tends to hallucinate less. Stay tuned for upcoming sections to see more interesting content. By now, you're seeing how powerful Notebook LM can be as a research assistant, and how useful and fun it is to experiment with Notebook LM using all the different functionalities. In this section, in particular, we are going to be focusing on one of the most exciting features that has everyone talking about Notebook LM: this is the audio overviews. Some people refer to it as deep dives. I
think this is a very exciting feature because it allows you to use your sources to generate audio overviews of those sources. For this particular demonstration, I want to continue with this idea of using Notebook LM as an AI research assistant. It's part of the important work that I do for the scientific community and also the AI community more broadly. I love using tools like this to help me accelerate thinking and understanding of these concepts, and also to communicate those to others in the community. As we know, AI is such a transformative technology and continues to
reach many different places and industries, so it's important to be able to communicate to more people and make all this content more accessible. I've been thinking about how to use some of these Notebook LM features to make content more accessible for folks in other industries, and audio overviews are such a great feature for that. So, I'm going to start a new notebook, and for this one, I actually have a newsletter that I write. I'm very passionate about communicating all the scientific developments and breakthroughs that happen in the AI field; that's something that I spend a
lot of time on. I write newsletters, I have my own community Discord, and I even talk about this stuff in some of the training and courses that I develop because I also do a lot of professional training for companies. It's something that I do on a daily basis. Now, I'm starting to use Notebook LM for this type of work, and that's why I'm really excited to talk about Notebook LM and why I have chosen to build a dedicated course for this particular tool. For some of you that know me, I have this very popular newsletter
called "Top ML Papers of the Week," in which I summarize all the great research papers that are trending. Some of these I also create myself because I think they are interesting for developers and researchers as well. However, this particular newsletter is only available in text format, so I've been thinking—and I've had this idea for some time—to be able to generate audio versions of this particular newsletter. That would be really powerful! This is something that a lot of my subscribers would love, and they continue to ask me for this particular format. I haven't been able to
do this, but now I have this tool, Notebook LM, that looks very promising and can help me solve this problem. So, I'm going to use the audio overviews feature to generate an audio overview of my newsletter. I'm back here in Notebook LM; I've started a new notebook, and I need to upload my sources. As you saw, I have this newsletter, and I have its latest issue. I'm going to paste a website here; that's the type of source I'm dealing with, so I'm going to paste it here. You can see this corresponds to the
last issue that I posted for my newsletter. I'm going to insert it, and it's just going to quickly extract information. We can always preview what it extracted, so you can see it extracted all of this. It has some information about the date that could be important too. You can see here from September 16 to September 22, and then it has all the different papers that I highlighted—there are ten different papers. So, I can see that all of the papers are here, right? So, it's ten, together with all the links and the tweet source for where
this paper was mentioned and discussed on the X platform. What I want to do with this now is directly feed this to Notebook LM and create an audio overview. I think that would be powerful, and this is something that I can directly use in future issues of my newsletter. I think a lot of people that are building newsletters will be excited about this particular use case. So, I'm going to generate the audio for this. I’m going to hit "generate," and now we need to wait. All right, so the audio overview is finally completed. Keeping in
mind that it takes a bit of time to generate the audio overview, so just have patience with that. Maybe you can try other features as you wait for the audio overview to complete, but it's finally completed here, so I'm going to play it just to give you a little sample of what was generated by Notebook LM. "Okay, strap yourselves in for this deep dive, because we're going to be diving into some pretty mind-blowing stuff about large language models. And let me tell you, yeah, the future is looking pretty wild. The rate of progress is
really something else! It's kind of like, remember a little while ago when this was all sci-fi? It's true; now it's all over the news, and it really makes you think: what's next? What can't AI do, right? Let's unpack this a little. We're not just talking about, you know, AI that can just spit out a canned response, right? We're talking about actual, real, back-and-forth conversation here. And that's exactly what this Moshi research is trying to do. They call it a full-duplex dialogue system, which basically means, yeah, what does that even mean? It means it's
supposed to be like a real conversation, right? Okay, so like natural. Yeah, natural. Imagine like an AI that can actually, you know, follow the thread, pick up on your cues, and respond in a way that makes sense, you know? Like "uh" and "ah." And all that's a lot harder than it sounds, though. Think about it: like when we're just talking to each other, we do so much without even thinking. We're reading each other's tone, like, are you being serious right now? Are we joking? We even finish each other's sentences. Yeah, exactly. And Moshi's goal is
to replicate all of that, which, I mean, could be huge! Can you imagine what that would do for virtual assistants or, like, customer service? No more arguing with robots on the phone. I'm so here for that." Okay, let me just pause the audio overview here. As you can see, the hosts are having a blast and having so much fun; it's super engaging. I could listen to the whole audio overview, and that's really where we are at with AI: it's engaging and I'm learning at the same time. They are talking about Moshi specifically; this particular announcement is essentially a speech-text foundation model you can interact with, and they're discussing how exciting it is, right? And they are quite excited about it. They are also explaining what this particular system is doing, what it is, and how it might be used in different use cases. So it is quite impressive what Notebook LM is doing here with this audio overview, and it has so much potential. What you can do with the audio overview is also share it. You can go here and share this publicly if you want. You can also play it at
different speeds, so you can change the playback speed to 2x if you don't have too much time. So as you maybe are doing some other task, you can always play this in the background and change speed if that's something that you like. I usually like to listen to audio at like 1.5x, so that's a pretty useful feature. You can also download it as well; it stores as MP3, I believe. And then you can also delete it and regenerate the audio overview. Something that you will notice with the audio overview is that it actually saves with
your notebook. So I will go back here and I will show you what I mean. I'm going to go back to my summarizing newsletter notebook, and then I'm going to open the notebook guide here, and then you will see that it’s generated here. After some time, this disappears; this option is not available to play. And so what you need to do then is you'll need to load it, so you get an option to load the generated audio, and it loads it for you. Because what I want to do in this section is I'm going to
show you what else I can do with this specific newsletter. So something I do with the newsletter every week that’s very time-consuming is I actually generate the newsletter for different places. So I cross-post this on LinkedIn; I also cross-post the newsletter on GitHub as well. We have a GitHub repo for this, and there are a lot of researchers that prefer the GitHub repo—it’s much cleaner and I like the formatting there and so on. So I’m going to show you first the GitHub repo, and then what I’m going to tell you is what is the problem
that we're trying to solve and how we will use Notebook LM for it. So we are in the ML Papers of the Week GitHub repo. This is under our organization, DAIR.AI, and here we again highlight the top ML papers every week. This is a very popular GitHub repo; it has 10k stars, and a lot of people actually prefer to consume the newsletter in this format. So as an additional service, we try to keep the repo updated as much as possible, but it's a very time-consuming process, I must say. And
right now, we haven’t really spent the time to automate this, although we probably can automate this process of just extracting the information from the original Substack newsletter and then just sending a pull request here. That’s something that we can probably do, but I actually wanted to use Notebook LM for this because while I can use an agent for that, I prefer to do this as a manual process because that way I'm sure that what I'm putting here is the actual thing that is in the newsletter—the actual content. That’s really important for me, especially because of
the kind of content that I’m posting here. So for every week, again, we are listing all the papers. So let’s say we go to September 23 to September 29. That was the issue that we did last week. You can see here that we have all the papers and it’s formatted nicely. So this is Markdown format. So if I click on the edit button, you will see that it’s written as Markdown format. You can see the formatting here. So the task that I'm interested in using Notebook LM for is to convert that Substack newsletter issue into
this format. And so the question is: can Notebook LM actually do this for me? If it can, then it basically saves me from doing this work manually. Because currently, how I do it is I copy over the explanation of each paper, I have to do a little editing and formatting here as well, and then I have to paste the links. It takes a bit of time; it takes almost 10 to 15 minutes. In fact, I think Notebook LM can do this 10 times faster than the time I actually spend doing it these days. So that's
the task here, and I’m going to show you how I can do that with Notebook LM now. So, something Notebook LM is really good at, I'm noticing from all my experimentation recently, is that it can be used to convert certain formats into other formats. I have this website, and I have my newsletter that's basically like listicles of these top papers, but it's formatted in a very specific way. What I want to do now is to convert this format here into the markdown format that I showed you in my GitHub repo. The whole mission here is
to take this one and convert it into markdown that I can copy and then paste back into my GitHub repo, and that process should be really fast—less than a minute. So, what I'm going to do is, I'm actually going to use the chat feature for this. I'm going to go here in chat; the sources are already selected, so this is the only source I'm using. Then here, I am going to prompt the system to generate something in markdown format. I'm going to say, "Generate the papers in markdown format using the following format," and I have
a format that I'm using. All this information I'm going to provide below this video just in case you want to reproduce what I'm doing here. In fact, I would encourage you to pause the video every now and then to reproduce the examples that I'm showing you throughout the course. Okay, so I pasted the format. This is the format that I want the system to follow.
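Since the template is easier to read than to describe, here is a sketch of the kind of prompt I mean. The table columns and row style below are an assumed illustration of a target layout, not necessarily the exact template of my repo.

```python
# Illustrative conversion prompt. The Markdown table layout is an assumed
# example of a target format; swap in whatever layout your repo or doc uses.
conversion_prompt = """Generate the papers in markdown format using the following format:

| **Paper** | **Links** |
| --- | --- |
| 1) **Paper Title** - one- or two-sentence summary of the paper. | [Paper](paper_url), [Tweet](tweet_url) |

Keep one row per paper, preserve the original links from the source,
and do not add papers that are not in the newsletter."""
```

You would paste the body of that prompt straight into the chat box. Again, I'm converting to different formats; that's pretty useful. For this one, I haven't been able to do this using other systems like ChatGPT or Claude, which are systems that I use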
a lot—like those are products that I use pretty much every day now. The reason is that they don't have the scraping functionality; they don't have the ability to scrape information like Notebook LM is doing. Because Notebook LM has this as context and preserves the links and all of that information, it should be able to convert that into this format, and that's basically the test that I'm giving it. So, I'm going to go here, and then I'm going to hit enter. Now it's generating something. I just gave it the structure that I want, and it should
be in markdown format—something that I can go and paste directly into my GitHub repo. While you may be thinking, "Okay, so this is just for a specific GitHub repo," no, in fact, you can use it for any kind of formatting that you want. You may have a table or any type of format that you may be interested in for your domain and use case. So here you go; it's directly converted into a table, and this is now something that I can go and directly paste into GitHub. I can copy this, and I can go and
directly paste it. I can actually save this as a note as well—save this as a note—and you will see that it's saved as a note now. I know that the links are working because you can see the links there, and now I'm just ready to go and paste this inside of my GitHub repo. It works! I've tested it already; it works brilliantly. I really like the fact that these LLMs are really good at converting data into different formats—that's really powerful. As you can see in this case, I do use it for my newsletter because I
cross-post to different places. This really reduces the amount of work that I do for my newsletter, and I'm excited to keep using it. I've tested it so many times, and it works. As I mentioned in the previous video, I do a lot of science communication. So when I see a paper or a deep dive discussion on a specific AI topic, I always think about how to make that very technical content more consumable or more accessible to more people, especially developers and other researchers that are trying to get into the space. That's something I think about,
and I work a lot on, putting a lot of effort into that. In fact, my whole company is doing a lot of work around that to make things more accessible—specifically research and different developments that are happening in the field. As an example, I really like this prompt engineering deep dive by Anthropic, and I watched the whole video. It's like a 1-hour-plus video where they go through a lot of prompt engineering tips. These are researchers, product folks, and also customer-facing people who were discussing this, and they had a lot of really interesting insights into how to
better prompt these models. The talk was titled "Prompt Engineering," and so what I was thinking was, how do I use language models to make that kind of content more accessible? I must say that the content is pretty technical, and I don't think it's consumable by a lot of people. I think it's mostly going to be folks that already have some insight into how to use LLMs, and maybe researchers or hardcore developers. So, the idea here is, how do I take that discussion and create a more consumable or accessible version of it? Also, what format am
I going to use for that? How do I use LLMs for this? So, what I did was I actually took the discussion. The discussion I'm talking about is this YouTube video; I'm going to link it below. This video is an excellent talk about prompt engineering and how Anthropic is using prompt engineering techniques to build a variety of powerful use cases with language models. What I want to show you is how to use Notebook LM to create something like this. The way I did this a couple of weeks ago, before I even knew about Notebook LM, was manual. I created a little project, a web application, where I fed in a YouTube link, and that application downloaded the YouTube video. Then, I used Gemini 1.5 Pro to transcribe the video, and from the transcription, I could convert it into something like this, basically summarizing the prompting techniques. To do this in this format, I actually used Claude, because Claude tends to perform better for things related to creative writing. Notice that I'm using a combination of models: I'm using Gemini 1.5 Pro for the transcription part, and I'm using Claude to do the
generation of the prompting tips from the transcription. So, this is how I came up with this particular list, and people really liked it. You can see here that this tweet had 385,000 views, 5.6K bookmarks, 2.7K likes, and 383 retweets; very viral content, and it's one of my most viral posts in the past couple of weeks. This was all possible through these LLMs.
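If you're curious what that older, manual pipeline roughly looked like in code, here is a simplified sketch. The file names and model IDs are placeholders and assumptions on my part; the real project had more steps, but the two-model idea is the same.

```python
# Simplified sketch of the manual pipeline: Gemini 1.5 Pro transcribes the
# talk's audio, then Claude turns the transcript into a list of tips.
# File names and model IDs below are placeholders for illustration.
import google.generativeai as genai
import anthropic

genai.configure(api_key="GEMINI_API_KEY")
gemini = genai.GenerativeModel("gemini-1.5-pro")

# Step 1: transcribe the downloaded audio with Gemini's File API.
audio = genai.upload_file(path="prompt_engineering_talk.mp3")
transcript = gemini.generate_content(
    [audio, "Transcribe this talk verbatim."]
).text

# Step 2: ask Claude to rewrite the transcript as a list of tips.
claude = anthropic.Anthropic(api_key="ANTHROPIC_API_KEY")
tips = claude.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            "From this transcript, extract the top 20 prompt engineering "
            "tips, each as a short heading plus a one-sentence explanation:"
            f"\n\n{transcript}"
        ),
    }],
)
print(tips.content[0].text)
```

Fast forward two or three weeks from when I posted that tweet, and we have Notebook LM exploding in popularity and going super viral. They recently introduced a new feature where you can upload a YouTube link, which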
reminded me exactly of that whole process I built an application around. They have essentially replicated that feature, and I couldn’t be more excited because it makes things so much easier for me. Now, I can just use that model, and I can use Notebook LM to generate everything I want in whatever format I desire. So, that's what I'm going to show in this section. I’m inside Notebook LM, and I’m going to start a new notebook. Down here, you see that there’s a YouTube feature. I can provide a YouTube link, so I’m going to paste the link
to the YouTube video that I showed previously. Here is the YouTube link. Some notes: only the text transcript will be imported at this moment, so this is just a text transcript; there’s no access to the actual clips or video or anything like that. Only public YouTube videos are supported, so it has to be public. Recently uploaded videos may not be available to import—that's something I saw a lot of folks struggling with. Just be aware that not every new video is going to be available here. If the upload fails, just check out Learn More to understand
the common reasons why this might happen. So, I’m going to insert the YouTube video here, and now it's uploading. Okay, so now I can click here, and what I get is basically the transcription of this YouTube video. You can see the transcription here; this was a conversation, so I had a lot of questions about how to convert this into something more accessible for a broader audience because I felt like it was super technical, and they really went deep. That’s what I love about the discussion. I'm going to show you the process of how to generate
that content that I showcased in that viral tweet, and it's super simple! You don't have to do all the stuff that I did, where I had to create an application for the transcription and then do the actual content generation using different models. I can just use one system for that. So, I'm going to go here, and I'm going to show you how I can generate something very similar to that tweet. I'm going to say, "Generate the top 20," and in fact, I'll be more specific and say "prompt engineering tips," because they might be talking about
a bunch of different things in the discussion, which they do. So, I want this to be very specific. I'm going to hit enter. (I got an error here at first, which I believe was due to my internet connection, but it should have no problem generating the response now.) All right, here we go! You can see that it did a great job because it generated 20 items, which is the first thing I wanted. Then, it actually used the format that I specified: a heading and explanation format. All right, so that's looking good! It used a maximum of two sentences,
so I have to check specifically whether it went over two sentences. It looks like most of them are okay from what I can see. Yeah, most of them are like one sentence; maybe some have two sentences, but that looks great. I like this format already. What could have been improved here is the formatting of the actual content. Maybe this could have been a list, and it could have used clearer Markdown to make the items easier to read, but that's not a problem because this is the first iteration. Again, I
could save this as a note, and you can see here now it's stored as a note, right? And then I can go back to my notebook guide, I can go back to view chat, and I have it here. So what I can do here now is also look at my citations. So let's look at one of these; let's see if there is an interesting one. It says, "Don't over-rely on role prompting." I thought that was a really important insight in this discussion. It says, "While assigning a persona can be helpful, it shouldn't replace clear task descriptions and context." That's a very important suggestion and recommendation they made in the discussion. So what we can do here now is drill down into the transcript of the YouTube video to learn more about what they were exactly talking about. So I'm going to click on that citation, and you will see that this line, in particular, is talking about that. It says, "You're in this product, so tell me if you are writing an assistant that is in a product. Tell me I am in the product. Tell me I'm writing on behalf
of this company. I'm embedded in this product. I am the support chat window of that product. You're a language model; you're not a human. That's fine, but just being really prescriptive about the exact context of where something is being used." I liked a lot of that, because I guess the concern most often with role prompting is that people use it as a shortcut for describing the actual task they want the model to do. So just be more clear, provide better descriptions for the task, and provide better context. Usually, that will give you better results, as opposed to just relying on this idea of role prompting. That was what the recommendation was about, and I fully agree with it, because this has been the case for the experiments and use cases that we have worked on as well. That's really neat, and you can go on and continue looking at the different citations and get the different contexts from the transcript to learn more about what they were discussing. This is how you learn really fast about certain things, and you can iterate on this. You can, for instance, ask, "How does the role of
prompt engineering change with the evolution of language models?" You can really have a lot of fun and learn a lot from just interacting here in Notebook LM with your sources, and YouTube videos as sources are really powerful because there's so much really good educational content on YouTube. So I'm going to call this notebook "YouTube Prompt Engineering." Being able to interact with YouTube videos using Notebook LM is a lot of fun, and you learn a ton! It's basically what a research assistant should be doing, in my opinion, and I'm having a blast because I do learn a lot from YouTube, and obviously, I also create a lot of YouTube content, so I see the importance of that. In this section, what I want to do is continue with that YouTube demonstration, and I want to show you some ideas on things that you can also try. And again, I encourage you to pause the video and try to reproduce all the demonstrations that we're showing you. We're going to provide you with all the links to all the sources we have been using, all the prompts, and all that good stuff. What I want to do in
this section here is to use this content in different ways. So I was able to use something like this for that great viral tweet, but I also do technical guides as well. So I have fun doing tweets and so on, but what I tend to do with most of the stuff that I learn is to produce technical guides and tutorials for developers and researchers. One popular guide that I've been working on for over a year now is this prompt engineering guide. It's very popular in the community; it gives you tips about how to prompt these
M's really well, and there are a lot of different techniques and so on. I do update this guide very frequently, and it has a lot of really technical content, but it also has a lot of introductory content as well. So I want to make it as accessible as possible to everyone. So what I want to do with the prompt engineering tips that were generated by Notbook LM on that prompt engineering discussion or YouTube discussion is to integrate them into my guide. In this page specifically, I had general tips for designing prompts. Now, this one has
tips like "start simple with your prompt," "focus a lot on the instruction," and even some examples here, as well as the importance of being very specific. We have discussed that in this course, but it's also something that we discuss in our other courses related to prompt engineering, which we also have in the academy. Avoid impreciseness—not to do and so on. There are nice little tips there, and in fact, all of those tips that we saw in that YouTube are more recent tips for the more recent models, and it would be nice to kind of integrate
them here. So the question is: how do I use NotebookLM to integrate those tips into this website, or into this particular section of the guide? What I'm going to do first is add this page as a source, so that NotebookLM has context on what I want it to do, what I want it to update, and how it should update this page. That's what I'm looking for. I'm going to copy over the URL for this page into NotebookLM, so I have the page link. I'm going to go here, go to the
website, and here's the link to that specific guide that I just showed you. So I'm going to insert it, and you can see I'm mixing different kinds of sources. I'm using the chat and all of these different functionalities, and that's what I encourage you to do: experiment and just be creative with all these features. Just to recap the goal here: I want to take these prompt engineering tips from this discussion, write a summary of them, and integrate
them into my prompt engineering guide. That's the goal here. So how do I do this in NotebookLM? I have my sources here, and I'm going to go back to the notebook guide. It has the two sources, but when I select this note and want to use it, it says "one note," so I'm assuming it's only using that note and not the sources. What I'm going to do is take this note here and copy it. Then I'm going to add a new source, copy
the text, and paste it in. So I'm going to insert it; now it's added as a source. I'm going to rename this source to "Prompt Engineering Tips," so now I have it here as a source. I have it as a note, but I also have it as a source explicitly, and now all three sources are selected. I can chat with those three sources. What I'm going to ask the model here is a bit more complex, so let's see how it does on the first iteration. This is what I'm going to prompt
it: "Help me to integrate the prompt engineering tips into my current prompting guide (General Tips)," so that the model knows which section I'm talking about and which one I want to integrate into. I'm just going to prompt it like that. Then one more thing I want to add here is: "Please use a format that's ideal for my prompting guide." I'll let the model figure that out; these models are very creative, so you can try prompting them this way to see how creative they are and whether they can actually achieve the task.
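If you wanted to reproduce this step outside of NotebookLM, the underlying prompt pattern is roughly: state the integration instruction, then paste in both sources as labeled context. Here is a rough sketch of that pattern in Python; the file names are hypothetical stand-ins for the two sources I selected above:

```python
from pathlib import Path

# Hypothetical stand-ins for the two sources selected in the notebook.
tips = Path("prompt_engineering_tips.txt").read_text()       # the saved note, re-added as a source
guide_section = Path("general_tips_section.md").read_text()  # the "General Tips" page of my guide

# Instruction first, then the labeled source material the model should work from.
prompt = f"""Help me integrate the prompt engineering tips below into my current prompting guide
(the "General Tips" section). Please use a format that's ideal for my prompting guide.

--- Prompt engineering tips ---
{tips}

--- Current "General Tips" section ---
{guide_section}
"""
```

You could send a prompt like that to any capable model; NotebookLM just handles the source management and the citations for you, which is what we're doing here.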
Don't be afraid to prompt the model this way: "a format that's ideal for my prompting guide." All right, I'm going to hit send here, and now we wait a bit. So this is what it's recommending. You can see it says here are the top 20 prompt engineering tips from our conversation history, formatted for inclusion in your prompting guide, and then it lists "clear communication, iterate and experiment, anticipate edge cases," and so on. And here is, I guess, a piece of the conversation they were having, something like "employ the model
itself to create diverse and realistic examples for a few-shot prompting scenario." So this is about few-shot prompting and how you might improve it, using this particular text. I can see that the model is trying to provide this additional context here, but that quote might not be needed when I'm actually trying to include this in my guide. So I can further improve this, but I like that it tries to format things really nicely and summarize things for me, because I think these ones are already pretty good, and I can already use them in
my prompting guide. I don't really need the quote; what I probably could use is some examples. So maybe I can spend a little more time coming up with examples, or actually ask the model to produce them. I'm going to leave that as an exercise for you to figure out. I'm going to provide links below the video for you to test this; this will be your exercise. I'm very pleased with this, especially because it formatted things really nicely this time, and now the only thing that's missing
is to actually come up with examples, as shown in my guide. In my guide, you will see that there are some prompt examples and some examples of the output, so that is what I want the model to produce. I might actually be able to take one of those examples and add it right into the prompt itself. Something you can also try is to get creative with the notes: convert the output into a note and then use only that note to generate the format that you want. That's a way to constrain the model even further, so that it doesn't look at all the sources and get confused about what you want to do; that can happen when it's looking at so much information and context. These models do still struggle as the context gets longer, so by trimming things down, putting the relevant content into notes, and being specific about the content you want to transform, I think you can get
better results. Those are things that you can try. Anyway, that will be it for this section. I think the YouTube feature is very powerful, and you should definitely be experimenting with it. Now let's move on to working with slides. A while ago, I did a webinar on building advanced applications with LLMs, and some of the slides are a little bit outdated. There are a few things I want to be able to do with these slides. For instance, I want to reuse them to prepare a special lesson for some students who have been asking me how to apply some of the concepts that I was
introducing or talking about in this presentation. For instance, prompting techniques: how do you apply them? How do you apply the concept of ReAct agents? What are the typical use cases, and so on? I would love to use NotebookLM to generate different artifacts from these slides: quizzes, additional notes that help students study these advanced concepts related to large language models, and suggested exercises that I can share with them. So that's what I'm going to be using NotebookLM for. Note
that I did this presentation using Google Slides; you can add Google Slides as a source in NotebookLM, so that's what we're going to use in this section. I'm going to create a new notebook here, open it, and then select Google Slides. Then I'm going to go to "Shared with Me," and this is the presentation that I shared between my accounts. Make sure you have access to the presentation before you upload it as a source. If I go to it, you will see that all the slides are here. From here,
I could do a couple of things. The first thing I want to do is generate an audio overview of this presentation. There might be a few things that I'm forgetting from it because I did this presentation a couple of months ago, so I want to recap, and that's an excellent use case for an audio overview. I can go and hit "Generate" here. Now the audio overview is generating. Here is the audio overview of the lecture: "So, you've been digging into Elvis Saravia's presentation on building advanced applications with LLMs. It seems like you're ready to
move past just chatting with chatbots. Yeah, this is different! It's more like using LLMs to build really complex systems, not just asking for a poem. Definitely next level! From the looks of these slides, we're going to cover it all: advanced prompting techniques, the tools you need, even reactive agents and retrieval systems. So whether you're prepping for a big meeting on LLMs, want to build your own application, or are just curious about this stuff, this deep dive is for you." That's what the presentation is about, as you can see in the slides right here. This is
a good way to get a recap of content that you may have worked on a long time ago and just need to review. Assuming I have reviewed this and I already know what I want to do here, what I can do now is try to add quizzes. The main goal of this use case is to use the slides to improve the lecture. I want to add quizzes, use NotebookLM to suggest exercises, and generate other artifacts like notes that I can share with my students. So
let's get started with that. The first step is to add a quiz. I can go here (the source is already selected) and prompt it: "Can you add a multiple-choice quiz for slide 10, along with the correct answers?" So I'm going to focus on slide 10. What is slide 10? Let's look at that. Slide 10 is about external tools and retrieval systems, so I want to add a multiple-choice question to help check whether students understand it. This is something I'm going to incorporate into the presentation itself. We can go ahead and generate that now and double-check whether things are working correctly here. Okay, that was fast! We can see a multiple-choice quiz based on slide 10, along with the correct answers. Question one: What is the purpose of external tools and retrieval systems when working with LLMs? A) To generate creative text formats like poems, code, scripts, and so on. B) To enhance the capabilities and reliability of LLMs. C) To provide a visual representation of data for better understanding. D) To store and manage large datasets for LLM training. The correct answer is B) To enhance the capabilities and reliability of LLMs. This is something
I mentioned here, so that's a nice quiz. That's something I could add as a follow-up to keep the class engaged when I'm delivering the presentation or lecture. We have a few more examples here: question two, question three, and we can keep generating more. What I'm going to do is save this; it's really useful already. I'm going to give it a name: "Quiz on External Tools and LLMs." It's good to name things explicitly because that way you can find them easily. It also gave me a citation; I just want to check that you can
see that it's citing the slide itself. It has a good understanding of the slides and of where exactly those slides are located. All right, the next thing I want to do here is generate some quick notes for slide 15. Slide 15 basically summarizes how you should approach prompt engineering. When do you use chain-of-thought? When do you use agentic workflows, and how does it all fit together in a pipeline? If you're working on a use case with LLMs, first you do zero-shot prompting, and then
you go through these different steps. Eventually, you may want to build a RAG pipeline if you are optimizing context, or apply fine-tuning if you want to refine the style and tone of the LLM's outputs. So I want to ask the model to generate more detailed notes for this, because the slide alone might not be enough for the students. Right? They look at it, and they're a little bit confused. I've added a few things here and there, but if I turn this into a more detailed note, it might be super useful for students.
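Just to make that flow concrete before we generate the notes, here is a rough sketch of the escalation path as I read it from the slide; the function name, parameters, and conditions are my own illustration, not something taken from the slide or generated by NotebookLM:

```python
def next_prompting_step(tried: set[str], quality_ok: bool,
                        missing_domain_context: bool, needs_style_control: bool) -> str:
    """Illustrative escalation path: zero-shot -> few-shot -> chain-of-thought -> RAG -> fine-tuning."""
    if quality_ok:
        return "the current approach is enough; ship it"
    if "few-shot" not in tried:
        return "add few-shot exemplars to steer the model"
    if "chain-of-thought" not in tried:
        return "incorporate chain-of-thought reasoning into the prompt"
    if missing_domain_context:
        return "build a RAG pipeline to optimize the context"
    if needs_style_control:
        return "fine-tune the model to refine the style and tone of the outputs"
    return "revisit the task definition and the evaluation before escalating further"

# Example: zero-shot alone isn't good enough yet, so the next step is few-shot exemplars.
print(next_prompting_step({"zero-shot"}, quality_ok=False,
                          missing_domain_context=True, needs_style_control=False))
```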
So, back in NotebookLM, I want to actually generate that artifact. I'm going to go here, as I've been doing, and say, "Can you write a detailed explanation of slide 15?" I can just point it at whichever slide I want; that's the cool thing about slide support in NotebookLM. So I'm going to go here and hit send. I'm sending that to Gemini, and Gemini is going to do its magic. All right, here we go. These are the notes explaining slide 15. It starts with the basics of prompt engineering. You
can see it's even categorizing things and adding headings here; this is nice. You can see that it mentions zero-shot prompting, so it has a good understanding of this figure. This is very impressive, because it understands the text, it understands what content I'm talking about, and it also understands the figures inside the slides. That's a very powerful feature that I think has a lot of potential, and you can see how I'm using it already. It says, "If a zero-shot prompt is not enough, then use few-shot prompts for even better performance. You
may use many-shot prompts," and so on. You can see it mentions exemplars, 5 to 100, and I mentioned that somewhere; you can see the citation there. Then it talks about incorporating reasoning into the text (right, that's chain-of-thought), and then advanced techniques like agentic workflows, fine-tuning, and retrieval-augmented generation. So these are nice notes, right? They're more comprehensive and more detailed than what I have here, and I think NotebookLM did a great job at summarizing all of this. Now, I'm going to save this as well, so now it's saved as
notes. I'm going to name it "Optimizing LLM Performance Notes." So I have the quizzes, I have the notes, and the last thing I want to do here is create exercises. In particular, I want to create an exercise for this table here. The table talks about prompting techniques and when to use them, so I want to create a nice exercise, because I want students to understand when they should apply each of these techniques. So I'm going to prompt it here. I want to create an exercise on the prompting techniques mentioned. I'm not going
to mention the slide this time, and I'm going to say, "Can you please help with some creative exercises for the students to work on?" See, here I'm testing not only NotebookLM's ability to look at the content and read the information in the table, which lists the prompting techniques, but also whether it can do creative tasks for me, which in this case means creating exercises for the students to work on. I did not give it the slide number this time; this one doesn't have
a slide number, so it should still be able to pick it up because it has the table in context. All right, here we go. It says, "Here are some creative exercises for students to work on based on the prompting techniques mentioned in the sources: Zero-shot prompting exercise: ask students to develop zero-shot prompts for a variety of tasks, such as translation and summarizing factual information." I really like that. "Few-shot prompting exercise: provide students with examples of few-shot prompts and then ask them to identify the examples used to steer the model." That's nice, because then students get to reason about the few-shot examples, which is really important when designing those prompts, and for chain-of-thought as well. There are some exercises there, and there's one for ReAct as well. This one says, "Present students with a task that requires them to interact with external tools. Ask them to design a ReAct agent that can effectively solve the task by reasoning about the information needed and deciding which actions to take, use appropriate tools such as search engines or APIs, and combine the retrieved information with its reasoning to generate the final answer."
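For students attempting that ReAct exercise, here is a minimal sketch of what such an agent loop could look like; the format conventions, the llm and tools callables, and the parsing logic are my own simplified illustration, not something produced by NotebookLM:

```python
# Minimal ReAct-style loop: the model alternates between reasoning about what it
# needs and calling a tool, then combines the observations into a final answer.
def react_agent(question: str, llm, tools: dict, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought:")  # the model reasons about what to do next
        transcript += f"Thought:{step}\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:
            # Expected format in the model output: "Action: <tool name> | <tool input>"
            action = step.split("Action:", 1)[1].strip()
            tool_name, _, tool_input = action.partition("|")
            observation = tools[tool_name.strip()](tool_input.strip())
            transcript += f"Observation: {observation}\n"  # feed retrieved info back in
    return "No answer found within the step budget."

# Usage sketch: tools = {"search": my_search_fn}; react_agent("...", llm=my_llm_fn, tools=tools)
```

The real exercise for students would be filling in the llm and tools pieces with an actual model and, for example, a search API.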
There's also one for RAG in the list, and you can see that it gives me some examples. Maybe they lack specificity, but if I provide more context and examples, it can probably do a better job. Still, this is already great, so I'm going to save it because I have some ideas for how I might want to use it. I'm going to name this one "Exercises to Understand Prompt Engineering Techniques." There we go. All right, that's done. Great! So that's basically what I wanted to show you: how to work with slides. NotebookLM has a good understanding of the images, figures, and tables across the entire slide deck, and you can use that to do things like creating exercises, writing notes for students, and adding quizzes as well. This is awesome, because now the students can use these artifacts to keep building on their knowledge, and I can use them to improve my lectures and my courses. So this is another excellent way you can use it. You have made it to the end of this course. We spoke about NotebookLM and how amazing it is for unlocking all sorts of use cases where you can leverage AI-powered research assistance. We use it for doing more
extensive research on topics that we are interested in, such as AlphaChip. We use it for analyzing papers, and we also use it for marketing-related tasks such as summarizing the newsletter using audio overviews, which is an incredible feature that has so much potential; I think everyone should be experimenting with it. We also went through a YouTube example where we provided NotebookLM with a YouTube video and engaged with that video, converting it into a nice, digestible, and accessible format, which NotebookLM is really good at. We also covered the idea of adding slides as
a source and the ability of NotebookLM to understand figures and tables, enabling it to create artifacts our students can use to keep learning about the wonderful world of large language models. So we generated quizzes, notes, and exercises, and we're just getting started. Hopefully, you are inspired to go and try out NotebookLM for personal use or in your professional life, and we hope you find it as useful and fun as we are finding it for our own company and our own use cases. NotebookLM, I think, is in its early phases; it's actually
labeled as an experimental product by Google. That means there are going to be a lot of features added in the future, and I think those features are coming really fast. As you saw during the course, there are certain limitations, and that's because this product is in its early phase. For instance, we wanted the ability to steer the audio overviews. I think that will be a powerful feature, and maybe that's something that will be coming in the future. We also want to be able to do better with the YouTube citations. For instance, if we can better
extract information from YouTube videos, such as pulling out specific clips, that would be amazing. So far, you can only engage with the transcript of a YouTube video, but if you could engage with the clips directly, I think that would be really powerful and unlock amazing use cases. Automatic workflows in NotebookLM could also be powerful. As you saw, we were able to scrape information; it already has that capability. But I think what would be even more interesting is the ability to use external tools, like a database, for instance, or a connection to a data warehouse, or
the ability to create an agent that can search the web for information. This whole process could be automated while ensuring that the integrity of the experience is not compromised. The fact that NotebookLM sometimes hallucinates makes the ability to verify information against the cited sources very powerful, and I love that for all the work that I do. There are other features that would be interesting too, such as prompt templates, or the ability to perform even more complex and creative reformatting actions. But overall, I'm very happy and have been using NotebookLM a lot over
the last few days, and I'm very excited about where this goes. Again, congrats on completing the course, and hopefully, we will see you all in one of our other courses!