What Is Google AI Studio? Gemini 2.5 AI Coding · Learn & Build Apps

Wanderloots
Hi Friends, my name is Callum aka wanderloots. In today's video, I introduce you to one of my favour...
Video Transcript:
As someone who is interested in learning how to use AI to do things I would have otherwise thought to be impossible, Google AI Studio is the best tool I've come across for turning my ideas into a reality. From video generation to app building to fine-tuning AI models, Google AI Studio has a huge range of tools available. As a non-coder, I've recognized that artificial intelligence tools are getting more powerful than ever before, and it's opening the door for me to be able to build the apps and ideas I've dreamed of.
I've already built three applications for myself in two weeks that would have taken me months or years to do without artificial intelligence. Students, content creators, developers, and business people, anyone can benefit from learning how to use this tool. And this studio is the perfect place to begin.
Hi, my name is Callum, also known as Wanderloots, and welcome to today's video on Google AI Studio, an overview of all of its features, including chat, live stream, video generation, and app building. I'll also include some practical examples on how you can use Google's AI Studio to learn, build, and code even if you're not a coder. Typically, I use Notebook LM for learning complicated topics, as it's one of my favorite artificial intelligence tools.
It has a new discover sources feature and a mind map feature that allows you to search the web, bring in a ton of information, and then distill it as simply as possible. Notebook LM is a truly great tool and I highly recommend checking it out, especially for its privacy policy, which tends to be better than many other AI tools. That said, Gemini 2.5 Pro, a model that you can use within Google AI Studio, does a much better job at digesting, understanding, and explaining technical knowledge bases. This makes it an especially powerful tool when it comes to learning how to code and how to build applications. As someone who hasn't really coded much and has been experimenting with coding with artificial intelligence, this has been an incredible way for me to actually turn my ideas into a reality.
I've used Google AI Studio to build two apps, create a dynamic image generator for my digital garden for more aesthetic social sharing, and customize my digital garden (my website) with new features. So, I'm familiar with the basics of what it can do for coding, learning, video generation, and app building. In today's video, I'll give an overview of all of the features of Google AI Studio, including the chat, live stream, video generation, and app builder.
I'll compare the benefits of using Google AI Studio to other AI tools like Claude 3.7 Sonnet or Notebook LM. Finally, I'll walk through some practical examples of the workflow I've used to build out a few apps, showing you what I've used this incredible tool for and why I think you would benefit from using it as well.
I'll also include some ideas on who would benefit from using this tool and how. If you find this video helpful, I would love if you would please consider liking and subscribing as your support enables me to continue making these videos. So, I appreciate it very much.
I'm going to talk about a few tools in this video, including Obsidian and Notebook LM. And if you're interested in doing a deep dive in these tools afterwards, I do have a couple of playlists available that go much more in depth on all of the different features. So, I recommend checking them out next.
Now, let's dive into Google AI Studio. Thought I would give you a brief overview on all of the different elements in the base studio because when you first take a look at it, it can feel overwhelming. So, I'll give you a walk through on what all the different options are and then give you some practical examples on how you can use the different features and how I've specifically been using this in my actual daily workflow.
So, the first one you can see here is the chat feature on the left-hand side. And this lets you choose between a bunch of different models. Lately, I've been using the Gemini 2.5 Pro preview. And honestly, this has been one of the most powerful and impactful LLMs I've ever used. I tried to do so many different things on other platforms like ChatGPT and Claude, and they didn't even come close to what I was able to do with Gemini. So, I've been extremely impressed with the chat functionality here. And part of that is because when you go to upload files or have a conversation with it, you can have up to a million tokens in your context window.
So that's many hundreds of thousands of words. You could drop in an entire textbook in here and then ask questions specifically about the textbook, keeping in mind, of course, copyright issues with that. You can add videos.
You can add entire code bases, which I'll show you in a few minutes. But honestly, I've been incredibly impressed with this chat prompt. You can see here I have a few chats in my history here.
I've used it to build a couple apps and debug those apps with honestly very little coding experience. Like I'm quite new to this and I was able to code entire working apps. Like I built this app right here for example where I'm able to go and take something like my latest YouTube video.
Copy the link, paste it, and it automatically pulls up the thumbnail and generates a share link for someone else to be able to go use this app and play that YouTube video directly embedded. Then I click share on Warpcast and it will actually create a new mini app that runs and plays that YouTube video. So someone could actually just go and watch this video directly within the Warpcast feed, which is part of Farcaster.
So I'll explain that more in a moment. Kind of wanted to just show you that I've been practically using this tool to actually build useful things for myself in very little time. Like I would never have been able to figure this out on my own if it weren't for using Google AI Studio.
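To give a flavor of the kind of logic Gemini wrote for that embed app, here's a minimal sketch, my own reconstruction rather than its actual output, of the link-parsing step:

```ts
// Minimal sketch, my own reconstruction rather than Gemini's actual output,
// of the embed app's link-parsing step: YouTube link in, thumbnail URL out.
function getYouTubeId(url: string): string | null {
  const match = url.match(
    /(?:youtube\.com\/watch\?v=|youtu\.be\/)([A-Za-z0-9_-]{11})/
  );
  return match ? match[1] : null;
}

function getThumbnailUrl(videoUrl: string): string | null {
  const id = getYouTubeId(videoUrl);
  // img.youtube.com serves static thumbnails for any public video ID.
  return id ? `https://img.youtube.com/vi/${id}/hqdefault.jpg` : null;
}

console.log(getThumbnailUrl("https://youtu.be/dQw4w9WgXcQ"));
// -> https://img.youtube.com/vi/dQw4w9WgXcQ/hqdefault.jpg
```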
So that's the chat function. And to me, this is similar to what I've been using Notebook LM for, where, like I talked about in my last video on Feynman's favorite problems, I'm able to go and drop in a whole bunch of sources. I can go and actually discover new sources based on what I'm looking for, and then I can drop them all in here and have a conversation with it.
So, Notebook LM already has a massive context window, in the millions of tokens as well. But I think at the moment, Notebook LM doesn't use Gemini 2.5; it uses Gemini 2.0. So, I've actually found for certain features, for certain functionality, for certain use cases, like coding, for example, or mocking up an app, I found Google AI Studio to do a much better job. That said, if you go to the settings here, there are different terms of service and different privacy policies in Google AI Studio compared to Notebook LM. Notebook LM is more safety conscious for privacy.
So, I recommend checking out the two different terms of use to make sure that you're comfortable with what you're putting into Gemini. But, I'll get more into that in a moment. The next feature that we can get into is called streaming.
This is where you can just talk to Gemini. You can click the talk button, start speaking into the microphone, and have a live conversation back and forth with Gemini 2.0 Flash. This one doesn't have Gemini 2.5 yet, but it's a cool way to start operating in what's called a multimodal way, where you're able to just have a conversation and get Gemini to answer questions. And more than just using your voice to talk, you can also share your screen. So, this is kind of like using your computer screen to FaceTime or have a video call with Gemini: Gemini can see everything that's on your screen and answer questions based on what it's seeing.
So, you could be, for example, working on a piece of art and asking for feedback based on the composition of photography rules and Gemini would be able to analyze the images that you've been putting in there and compare them back and forth. You could be on your phone as well if you're sharing your screen. You could pull up your camera and then you could perhaps take a picture or a video of everything that's inside your fridge and ask Gemini to come up with a recipe based on the ingredients that you have in real time.
If you're trying to learn a new piece of software, you could have this pulled up. Maybe you're using Adobe Premiere Pro and it's complicated software that's difficult to use, but you could just have a conversation with Gemini using it as a tutor based on it actually looking at your screen. You could use this to debug your code where you've got your entire codebase up on your screen and you have Gemini that's able to interpret all of the text, all of the code on your screen and help you debug it.
So, this is a really powerful way for you to be able to interact with AI in a multimodal way. You could also, for example, use this stream to learn a language. You could be talking to Gemini and asking it questions in different languages or asking it to tutor you based on that language and it would be able to go back and forth with you having that conversation.
So, there's a lot of really cool ways that you can use this. The next major feature is that you can generate videos. This is done with Google Veo.
And at the moment, there's only one model you can use and it's limited to 720p resolution, but they are upgrading that soon. Basically, what you can do with this is describe your video and generate it. So, let's give that a go for a moment.
Cool. So, now I can just click play. That's interesting.
I was trying to emulate my Obsidian notes because that has kind of a geometric knowledge graph view. So, it's interesting that they have this video here where the light bulb is actually generating this curve. So, that's pretty cool.
And then once that's done, I can download it. I can view it in large view. I can give it feedback or I can export it to drive.
I can switch to different aspect ratios; for example, I can make it 9:16. And I can increase the number of results if I wanted to have two appear side by side. With the negative prompt here, I'm able to type in words that I don't want it to include.
So, this is just a way to exclude specific things from the video as it's generated. That's a pretty cool feature. And you can choose the duration of the video here.
And then we get to the starter app section, which is the final major feature of using Google AI Studio. And this is where you can actually build apps with Gemini. So there's a bunch of example apps here.
You can use it to make GIFs. You can make a p5.js app, which is kind of a video game or generative art coding ecosystem; you can just click this and have a starter app with p5.js. You can create a new app that does spatial imaging, so it's able to recognize 2D and 3D systems and perhaps make some type of augmented reality app.
And all of these are just the basic ones. This lets you connect to the Google Maps API and then you can start exploring the world. You can make perhaps like a choose your own adventure game with this but based on actual places in the world.
And all of these are just starter apps. You can also create your own app from scratch. There's an FAQ here that explains how you can use starter apps as a surface for building, editing, and sharing apps that use the Gemini API.
That brings up an important point, too: all of these apps have the potential to connect directly to Google's AI suite, which is why this is called Google AI Studio. So when you run the app, you can get an API key and then you can actually query Gemini directly. That's a super powerful way to get going on your apps without having to think about how you're going to integrate AI at some point later.
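To make that concrete, here's a minimal sketch of what querying Gemini from your own code looks like with the official @google/generative-ai JavaScript SDK; the model ID and prompt are placeholders, not values from the video:

```ts
// Minimal sketch of querying Gemini from your own code with the official
// @google/generative-ai SDK (npm install @google/generative-ai). The model
// ID and prompt are placeholders, not values from the video.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro" });

async function ask(prompt: string): Promise<string> {
  const result = await model.generateContent(prompt);
  return result.response.text();
}

ask("Explain what a context window is in one sentence.").then(console.log);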
You can build in AI from the beginning if you want to. The apps that I've built so far, like my embed app here. This one doesn't involve any AI at the moment.
And I was able to get Gemini to build out everything from parsing the YouTube link to pulling the thumbnail and generating this button. So, that's just a cool way that it designed the UI. I gave it a lot of feedback on how I wanted the buttons to look.
And I just iterated. I recalibrated back and forth and eventually got this mini app that looked like how I wanted it to. So, again, I'll show you more on that in a moment.
But one thing to note here is that if you actually want to export your app, you can do this. You can download a zip file that has your app. But you have to be careful: you don't want to put your real API key in there when you're making it a public-facing, production-level application, because if you do, the key becomes visible to anyone, which means anyone would be able to use it to access Gemini on your account.
So if you are going to try to convert this into a production-level app, if you want to run the apps outside of AI Studio, then you shouldn't deploy it directly using the initial output that Gemini gave you as part of Google AI Studio. You want to move some of the logic server-side, which they have a tutorial for here. So that's just something to keep in mind if you're doing this.
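As a rough illustration of what "moving logic server-side" means (an assumption-level sketch, not the code from Google's tutorial), the browser calls your own endpoint and only the server ever holds the key:

```ts
// Assumption-level sketch (not Google's tutorial code) of moving the Gemini
// call server-side: the browser posts to /api/ask, and only this server
// process ever sees GEMINI_API_KEY.
import express from "express";
import { GoogleGenerativeAI } from "@google/generative-ai";

const app = express();
app.use(express.json());

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro" });

app.post("/api/ask", async (req, res) => {
  const result = await model.generateContent(req.body.prompt);
  res.json({ text: result.response.text() }); // the key never reaches the client
});

app.listen(3000);
```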
And then there's a few more elements here on what works and what doesn't work when you're building out these little apps. So I recommend exploring this a little bit more. Today is meant to be more of an overview with a couple specific examples and not a complete deep dive.
Stay tuned and I will be making a deep dive on all of these different features so you can understand how to use Google AI Studio more. But for now, I recommend just clicking around just exploring. When I first started using this, I felt pretty overwhelmed by all the different options, especially in chat with all of these settings on the side here.
But the more I used it, having these really long conversations with the chat feature about the apps I was trying to build, the easier it was to understand how it all fit together. And honestly, I have learned a ton about coding through my exploration, through playing inside Google AI Studio. So, that's pretty cool: not only is it able to do things that I wouldn't be able to do on my own, but it's also teaching me how to begin doing those things on my own, because I'm getting the AI to tutor me on coding as I go through and get it to write the code.
Now, I want to quickly talk about the settings on the side, because this can feel like a lot. Basically, you can choose the creativity of the response with the temperature. If I increase the temperature, it makes it more creative.
If I decrease it, it makes it less creative. So it depends what you're trying to do here. For example, if you're trying to get it to write a blog post based on your YouTube video that you made or create social media content from your long form blog post, maybe you want to give it more creativity.
It's totally up to you, and I recommend testing it out and exploring. You can also enable structured output. So if you're more technical and you want it to export data in a particular format like JSON, you can turn this on and generate structured output.
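These sidebar settings map onto the Gemini API's generation config when you move beyond the UI; here's a hedged sketch with illustrative values:

```ts
// Hedged sketch of how the sidebar settings map onto the API's generation
// config in the @google/generative-ai SDK; the values are illustrative.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-pro",
  generationConfig: {
    temperature: 0.9,                     // higher = more creative output
    maxOutputTokens: 1024,                // caps how long the response can be
    responseMimeType: "application/json", // structured (JSON) output
  },
});
```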
You can have it run code directly within the app itself. So you could, for example, get it to write its own Python script and then execute that script directly within the chat itself. You can start building in functions that can be called, and you can ground with Google Search.
So you can get it to actually go and find new sources to try to validate the information, because I believe right now Gemini 2.5's knowledge cutoff is January 2025. So grounding with Google Search would allow you to bring in more up-to-date information.
And then you can also choose how long you want the response to be. So that's just some of the basic settings here. And then one super important element: if you want to keep this chat, you have to click save prompt.
Clicking save prompt will add it to your history on the side here. But what you can also do is open the settings and turn on autosave. By turning on autosave, it'll automatically save the chat each time you make a change.
When you click save prompt, it saves the chat into Google Drive, where you can access it later. So you could then go in and, for example, open this from Notebook LM and start asking questions in Notebook LM based on the conversation that you had in Google AI Studio, with all of your code in there. So it starts to become a really seamless integration across different apps.
And what's also cool, as another feature: if we go over to documentation, you can start to get into fine-tuning. So, not only can you use this to build applications, you can also use this to connect to artificial intelligence like the Gemini API, and you can fine-tune those models using Google AI Studio. You can bring in a data set for fine-tuning, which basically lets you bring in your own custom style or personalization, and you can use that to tune the Gemini model.
And all of this is done inside of Google AI Studio. So you can build an app that you can have your prototype going and already connected to artificial intelligence from the beginning. And then you can tune it.
You can fine-tune it to make it much more how you would like it to actually operate by bringing in something for example like your Obsidian database. So you can bring in your custom knowledge and then use that to tune how you want the AI to operate. But again, just to be explicit, I recommend checking out the terms of service and the privacy policy to make sure that you're not violating something with Google or with your own data to make sure that you're comfortable with how you're uploading things.
So, you can think of this as like a multimodal learning system that helps you build what you want to build or learn what you want to learn in a way that is super seamless. Again, I've been incredibly impressed with this. And if you're interested in learning more about what can be done with this, this is just the super brief overview before I get into some specific examples, but you can go to the documentation section here.
And there is a lot of information. There's so much that you can do here. So I highly recommend taking a look and for example, if you want to get into text generation, go read through what you can do here.
And you can always copy any of the code that you're getting in the documentation, or export it; for example, save this as a PDF and then drop the whole page inside of Gemini so that you can ask specific questions about what it is you're trying to do. So, I think this is honestly a really powerful tool for anyone that's looking to learn or to build.
For students, you could use this as a study aid. You can use it to summarize your notes or textbooks. You could use this, for example, to build out a starter app that is literally just a quiz app for yourself where you drop in your textbook and your study notes that you've made and ask it to build a little application for you that creates flashcards and asks you questions.
That could be a really powerful learning tool. You could also, as a creator, use this for content generation. You could take in your blog posts and export social media content.
You could use it for brainstorming back and forth based on your ideas and then honing them so you end up with a better concept. You could use this for prototyping an app as a developer. You could test out different models and tune them.
You could bring in workflow automations for your business. You could build a custom chatbot or analytics tool or even an analytics application that takes in, for example, my YouTube analytics data and then gives me customized outputs that analyzes it and gives me insights and ideas. One of my favorite uses of Google AI Studio has been to build a dynamic image generator.
I use this image generator to automatically build what are called Open Graph images in my digital garden. Open Graph images are the preview images you see when you share links on social media or in text messages. By building out this custom image generator, I'm able to write in Obsidian, share my notes to my digital garden, and have those notes display custom previews when I share them with other people.
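For context on what's happening under the hood (a hedged sketch, not my actual generator): an Open Graph image is just a per-page meta tag pointing at an image URL, and a dynamic generator swaps that URL out for each note. The endpoint below is hypothetical:

```ts
// Hedged sketch of what a dynamic Open Graph image boils down to: a per-note
// <meta> tag pointing at a generated preview image. The endpoint URL is
// hypothetical; crawlers read this tag when a link is shared.
function ogImageTag(noteSlug: string): string {
  return `<meta property="og:image" content="https://example.com/og/${noteSlug}.png">`;
}

// A site generator would inject this into each published note's <head>:
console.log(ogImageTag("google-ai-studio-overview"));
```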
It even pulls in images from the Obsidian note itself to build that dynamic image preview. Now, I'll talk more about how I actually built this out so that you can build this dynamic image generator for your own website if you want to, but I'll get more into that in another video. I just wanted to share this extra use case with you before we dive deeper into Google AI Studio.
Reminder that if you're finding this helpful, please consider subscribing. Now, let's keep going. Just before I show you how I've been using the chat as a no-code building system, I want to quickly show you Claude, because Claude 3.7 Sonnet is known as one of the absolute best coding AI systems, and it is true. It works really, really well. I'm able to have a conversation with it and it can build out the code almost perfectly on the first go.
However, on the free plan, it becomes really difficult, because after only a couple prompts I've already hit my limit, and then I have to pay $20 a month. Whereas, for comparison, Google AI Studio has a million-token context window. So, quickly, on what context actually is: a context window is the basic way that you use an AI model, by passing information (context) to the model, which then generates a response.
So, you can kind of think of the context window as short-term memory. So there's a limited amount of information that can be stored in someone's short-term memory. And the same is true for generative models.
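For a rough sense of scale (the words-per-token ratio below is a common rule of thumb, not an exact figure), here's the arithmetic behind "many hundreds of thousands of words":

```ts
// Back-of-envelope conversion: English text averages roughly 0.75 words
// per token (an approximation, not a spec).
const contextTokens = 1_000_000;   // Gemini's context window in AI Studio
const wordsPerToken = 0.75;
console.log(contextTokens * wordsPerToken); // ~750,000 words of context
```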
So the way I kind of think of it is that when you ask a prompt, you ask a question here, it will run and grab a specific amount of context and answer questions based on that particular window of context. In Notebook LM, for example, I was able to drop in all of these files here. I was able to discover sources in my Feynman's favorite problems video to learn more about problem solving and intuition and flow states and how this all ties together with my favorite and most interesting topics.
And it's able to go through and discover sources that it can then add into this context window. And it can have up to 50 sources. Each source can have 500,000 words.
So that means that if I take 500,000 words and multiply by 50 sources, that's 25 million words. So, Notebook LM can answer questions with sources totaling up to 25 million words. That's a ridiculous amount.
However, each source is limited to 500,000 words. And like I mentioned, it's currently not using Gemini 2.5.
So, if I want to get into coding and add a huge amount of context here, I can't use Claude because Claude's context window is incredibly tiny for the free plan. So, that leaves me with using Google AI Studio. So I'll use Notebook LM for example to find different sources relating to the app that I want to build or the code I want to learn or whatever the thing is that I'm trying to do or build.
I'll find the best sources here using the discover sources feature. And then I'll ask questions and improve my understanding, using Notebook LM more as a learning system to learn the topics. And then when I'm ready to actually build, I'll get into Google AI Studio, because it's just that much more powerful for coding.
So, for example, to build this little embedder app that I was talking about, where I'm able to go to Warpcast and open this mini app: I can just paste a link in here, it automatically pulls in the thumbnail, and it creates this copy link and share on Warpcast button. And this is all an app that's running and able to run on my phone, too.
This is a specific feature to Farcaster, which is a decentralized social media platform that I've been using. And basically, I think it's super cool that you're able to share links and they celebrate links versus nerfing them or shutting them down like other platforms. So, it's cool for me to be able to go like this and then share specifically a YouTube video that someone can just click on and then watch from within their social media.
And I even added this button here where they can click subscribe and it takes them directly to my YouTube channel. This is a fully operating website that's hosted on Vercel, and I built this entire app just using Google AI Studio. So, I'm going to quickly show you the process, because it did take a while for me to figure it out.
It wasn't the easiest thing in the world. It definitely took some time for me to debug and figure out how to get it all working, but the fact that I could do it is just so cool, because I've never built an app like that before. Okay, so just to give you context: if I go here to the mini app website, and this could be the same for whatever type of software you're using, whatever software development kit you want to include in the app or website that you're building, whatever that functionality is, there's probably a documentation website.
So for example, if I go here, there's all of these different ways on how to load the app, share the app, interact with wallets, publishing it, authenticating users. There's a lot of information here. So what the Farcaster team did is they actually introduced a building with AI option where they created an LLM friendly doc so that you can use whatever model you want to to help build the application.
So they have this button here where you can just click ask in ChatGPT and it will upload this entire document as context for ChatGPT, or you can go here and click full LLM doc, and this creates a txt file that has their entire website all in one go, showing you all of the code. It brings in all of the software development kit. So, this has a ton of context that the AI would want to use.
So, I can just go like this. I can click save as and then I can just save this to my desktop. Then I can go over to Google AI Studio and I can go find that file that I have and I can just go and drop this in and it'll run through and analyze this.
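If you'd rather script that download step, here's a minimal sketch; the URL is a placeholder, since many documentation sites expose a single LLM-friendly text file like this:

```ts
// Sketch of scripting the download instead of using "save as". The URL is a
// placeholder; many doc sites expose one LLM-friendly text dump like this.
// Run as an ES module (top-level await) on Node 18+.
import { writeFile } from "node:fs/promises";

const res = await fetch("https://docs.example.com/llms-full.txt");
await writeFile("sdk-docs.txt", await res.text());
console.log("Saved docs dump for upload to AI Studio");
```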
You can see that it's extracting all of the context from that particular file, which only took a few seconds, and that's 22,000 tokens. Then I can type in what I want, and I could say: I'm going to give you context on the app SDK that I want to build with. Reply with 'Y' if you understand.
So now it's going through and yeah, you can see here this is the full context window. I've now used up 22,000 out of my 1 million tokens and it already analyzed that entire website in just a few seconds. Now let's see what happens if I try to do the same thing with Claude.
Oh, there we go. My message will exceed the length limit for this chat. 22,000 tokens is not that much.
Remember, in Notebook LM I can have 500,000 words per source. In Google AI Studio I can have a million tokens. This is only about 2% of that entire amount, and that already went past what Claude could do.
So that's just a comparative example on how Google AI Studio can be that much more powerful for building out your apps. And then I could add in more context. For example, here's another folder that someone else had built that has a lot more context on how to use this particular SDK.
This is another 13,000 tokens. So I could say again, reply with 'Y' if you understand. And I like doing this step by step, because then the AI can recognize where each file comes in and it stores this as a question-answer pair, so it knows which one to go back to and reference, and it just keeps things a little cleaner to do it one by one.
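If you were doing the same thing through the API instead of the AI Studio UI, that one-file-per-turn pattern might look like this sketch (file names and model ID are placeholders):

```ts
// Hedged sketch of the one-file-per-turn pattern done through the API:
// each document becomes its own chat turn, so the model keeps a clean
// question-answer pair per file. Run as an ES module on Node 18+.
import { readFile } from "node:fs/promises";
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const chat = genAI.getGenerativeModel({ model: "gemini-1.5-pro" }).startChat();

for (const file of ["sdk-docs.txt", "extra-context.txt"]) {
  const text = await readFile(file, "utf8");
  await chat.sendMessage(
    `Context on the SDK I'm building with:\n\n${text}\n\nReply with 'Y' if you understand.`
  );
}
```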
And then I could also bring in for example a PDF. And this is more context on how the frames the mini apps work in Farcaster. If you've never heard of Farcaster or you're not interested in it, that's totally fine.
The point is that every software tool you use, every software development kit you use, is going to have documentation. So if you can export the entire website, all of the docs, into a single file or into multiple files, you can then bring all of that in here and it takes up barely any context. Then you can start asking questions about it.
Cool. There we go. So that would be all of the context here.
And now I could ask it for example to summarize what a mini app is, why it is beneficial, and outline briefly the steps to implementing one. Cool. So here you go.
It's now going through. It's explaining what a Farcaster mini app is: a full-screen web application built with HTML, CSS, and JavaScript that runs inside of a Farcaster client.
So this would be like being able to run a full application inside of Instagram or inside of X (Twitter). So, you're able to actually build apps and then have people use those apps directly within the social media feed, which is super cool. It brings in your social media context when you run it.
It allows you to connect to wallets if you want to. It brings in authentication, which is an incredibly powerful tool. And it can facilitate a viral loop where people can discover a mini app in the feed in the social media feed and then share that app back to the feed.
So, you can actually kind of bootstrap people using your application very quickly by using the mini app system. And then, I'm not going to get into the specifics, but you can see here it's giving me an outline of how I could actually start to build this. So now I could go through and say: generate the code for a basic mini app website that takes in YouTube videos and allows people to watch them in the mini app directly through an embed.
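Before it answers, here's a minimal sketch of my own (not the code Gemini actually produced) of the core embed logic such a page needs:

```ts
// Minimal sketch (my own, not Gemini's actual output) of the core embed
// logic such a page needs: turn a pasted YouTube link into a player.
function embedVideo(videoUrl: string, container: HTMLElement): void {
  // Pull the 11-character video ID out of watch or short-link URLs.
  const id = videoUrl.match(/(?:v=|youtu\.be\/)([A-Za-z0-9_-]{11})/)?.[1];
  if (!id) throw new Error("Not a recognizable YouTube link");

  const iframe = document.createElement("iframe");
  iframe.src = `https://www.youtube.com/embed/${id}`; // YouTube's embed URL
  iframe.width = "560";
  iframe.height = "315";
  iframe.allowFullscreen = true;
  container.appendChild(iframe);
}

// Hypothetical page elements: an <input id="url"> and a <div id="player">.
embedVideo(
  (document.querySelector("#url") as HTMLInputElement).value,
  document.querySelector("#player") as HTMLElement
);
```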
So now it'll go through and in a moment it's just going to start pumping out a ton of code. So this is all happening in real time right now, too, by the way. It's pretty wild.
The more I've used this, the more my mind has been blown by how powerful this is and how good it is at actually getting it right on the first go. And then here, this is perhaps even the most important part: after it gives you all of this code, it has just built the outline for an entire website that could be converted into an application with the index.html file. It lists its assumptions and then it walks you through exactly what it did and why it did it. So I could say something, for example, like: I am not a coder.
Please give me an analogy for what you just did. And this is also something where you could go and you could bring up system instructions and you could bring in a global instruction. So I could include in here for example I am not a coder.
Explain everything assuming I am not technical. So by adding that in here now it would start to answer every single question with that context in mind. Cool.
So your mini app is like a kiosk booth you've just built inside the mall, and it plays your YouTube videos. It comes with the ability to connect to the mall's main intercom, so when someone opens your kiosk, it can recognize who's there.
So again, that's basically what this mini app is here, where I'm able to go over to Farcaster and actually just click play for this video directly within the app itself. And then I can share this with other people, which is again super cool. So that's just one example.
I could also, for example, go directly into Gemini and start loading files from Google Drive in here to have new chats. I could bring up Canvas, too. See, I can go here and just click Drive, and I'd be able to go grab the file that I had saved from here and bring it directly into Canvas in Gemini. And then, for example, if I switch over to 2.5, I could run the Canvas and get it to actually display the HTML that this just coded for me. So, let's go find that for a second just to show you.
So, let's copy this, go over here, and say, give me a preview for this HTML file. And then I just dropped in the entire HTML that Google AI Studio had given me. Cool.
There we go. So you can see this is taking the code that Google AI Studio generated for me, and it's able to interpret it and then preview it. So that's pretty wild.
So if I now go and grab one of my YouTube videos, let's grab today's Feynman video. I just published it. Copy link.
Go over to Gemini. Paste in YouTube video. Load video.
It looks like the button doesn't click. So I might have to tweak this back and forth a little bit. But that just gives you a bit of an idea on how this all works and how you can use Google AI Studio to build out your prototype.
And then you can test how it actually looks in something like Gemini Canvas. So there's a lot you can do with this. I really recommend just starting, if you've ever wanted to build something and try it but you don't know how to code and you're super confused.
You can use Google AI Studio as your coding tutor. As you're going through and coding, with this open in one window, you can stream the codebase that you're building in something like VS Code or Windsurf or Cursor, and ask questions to Gemini while you're coding the app. So, it ends up creating this really nice feedback loop where it's giving you code that you can test out, but it also explains to you, like a tutor, what you're doing and how you can improve it.
And the fact that you're able to do all this with Gemini 2.5 Pro is pretty wild, because it is one of the best AI systems I've ever used, one of the most powerful LLMs.
So, I'm excited to keep diving into this more. I'll show you how I actually go through with the flow and how I use this to build out a full application. Specifically, how you can convert your digital garden if you want to into something like a mini app.
Just so you can see the flow of how I'm using this tool, pairing it with things like Notebook LM and Gemini Canvas. The next thing: I have my digital garden at wanderloots.xyz.
I have a whole video that goes much more in depth on what a digital garden is and why you might be interested in building one yourself. But basically this just comes directly from Obsidian, which is my note-taking app, where I'm able to just click a button to designate whether a note is made public or not. And when it's made public, it pushes it to my website and then displays it here.
So, these are all of my notes inside of my digital garden from Obsidian, and I thought it would be cool to convert this website into a mini app. So what I did was work with Google AI Studio: I went to the docs for Digital Garden, grabbed the specific components on how to edit the default website that gets built with the Digital Garden plugin in Obsidian, dropped in all of the context from Farcaster, and had Google AI Studio explain how I could convert my existing website, this digital garden, into a mini app.
So now if I go over to Farcaster and paste in my wanderloots.xyz link, it pops up with the actual app itself. This was able to convert my existing HTML website, built automatically using the Digital Garden plugin in Obsidian, into a fully functioning application, which is pretty wild.
So, now someone can just click enter into my digital garden and they can explore my entire website here just on their phone within their social media feed. And if they want to, they would be able to add this mini app directly to their page so that they could go back and find my digital garden again. So, that's what I have on the side here.
I have my digital garden built in so I can explore it. I have a few other mini apps here like Paragraph. This is my newsletter platform.
So, there's just a lot that you can do here. The reason I'm bringing this up is that I wouldn't have known how to do this by myself. I would have struggled and taken a very long time going through all of this mini app documentation to figure out how to actually start coding something myself.
So, I could say something like this. I don't know how to get started. I know that I have a GitHub repository for my digital garden plugin that's built.
Please walk me through step by step how I can convert this into a mini app using VS Code and GitHub Desktop. So, it's going to go through, it's going to analyze this request and then compare it against all of the context that it has, the 53,000 tokens of context. We still have 950,000 more tokens that we can begin using here.
So, there's just so much that we can do here. Cool. So you can see it walks me through how to clone my repository, how to open the project in VS Code, how to set up Git, and how to back up my website, and I can just go through and say, "Okay, now please do this step by step so I can follow exactly what it is you're doing." So I hope that gives you some context. I'm going to work on building out a few more things and testing out Google AI Studio a bit more, especially how I can use it with Notebook LM: researching the topics ahead of time to bring in the best sources that help me understand what I'm trying to build, then exporting that context into Google AI Studio so I can give it a ton of context up front that helps me bootstrap the application or idea I'm trying to build out in practice. I'm honestly extremely impressed with this tool.
I'm very excited to dive into it more. Google AI Studio is by far the best way that I've found to actually integrate directly with and use Gemini 2.5 Pro, which is one of the most powerful LLMs I've ever used.
It's become one of my favorite tools for learning, for building, and generally just improving whatever it is I'm trying to do to augment my life and my personal knowledge management system with artificial intelligence. So, if you're interested in learning more about any of the tools that I've included in this video, including Obsidian and Notebook LM, I recommend checking out my videos on YouTube. I have an Obsidian playlist that walks through my note-taking system and how you can integrate it with artificial intelligence, how you can build out a system that works best for you, truly personalizing your knowledge management.
Because once you build out your knowledge system, you have access to it in something like Obsidian in a Markdown format, which makes it way easier to integrate directly with artificial intelligence tools. If you want to learn more about the AI tools I talked about here, I have a whole intro series on how to integrate them directly within Obsidian, plus a bunch of different ways that you can use Notebook LM to augment your note-taking, boost your research, and really just learn far more effectively.
So, if you're interested in learning more about Notebook LM, I highly recommend checking out this playlist here because there's a lot of videos that I personally have found very helpful and I know many other people have as well. I hope that this video gave you a solid overview on the potential of Google AI Studio, of all of the major categories that you can use it for, while giving you a glimpse at how deep you can actually go with using this as an AI tool builder. It truly is an incredible tool for anyone looking to learn, code, or build.
If you found this video helpful, I would love if you would please consider liking, subscribing, and sharing with a friend. Word of mouth is by far the best way for me to grow my channel, so I really appreciate any support you can give me. If you're interested in getting a copy of my Obsidian templates that I use as part of my note-taking system that integrates with artificial intelligence, then also consider joining my paid membership as I make this templated kit available for paying members.
My members enable me to continue making these videos. So, thank you very much. If you want to go deeper into Notebook LM or Obsidian or integrating AI into Obsidian, I recommend checking out my AI learning playlist where I walk through in much more depth how you can actually augment your personal knowledge management system with artificial intelligence.
Stay tuned for the next video in the series where I'm going to walk through how I actually use Google AI Studio to code to build out those applications that I was talking about, walking you through step by step how you can do the same. Thanks for watching and I will see you in the next video.