How to Master the Art of Vibe Coding

Mark Kashef
🚀 ALL Assets in the Video: https://bit.ly/3RYsWcb 🤖 Get EXCLUSIVE Content & Resources ➡ https://b...
Video Transcript:
Vibe coding is turning from an unheard-of technique into a must-have skill. Let me explain what vibe coding is. You don't need a team of 50 or 100 engineers. You can just have a team of 10. When they are fully vibe coders, when they are actually really, really good at using the cutting-edge tools for codegen today like Cursor or Windsurf, they will literally do the work of 10 or 100 engineers in the course of a single day. And if your goal is to become a better vibe coder than 99% of people out there,
then this video is exactly what you're looking for. Although it's a longer one, finishing this from start to end will give you an evergreen skill set you'll be able to use for years to come. You'll learn how to create apps using the best web app builders and how to integrate automations into those apps. And most importantly, you'll get a better understanding of what to look for when it comes to security, which is typically a topic that makes people's eyes glaze over. But when it comes to vibe coding, it's one of the most important things you'll
have to learn. Understanding all of this will make the difference between being able to build an app that's good for you versus building an app that's actually good for the world. Now, instead of just giving you a boring tutorial, I have structured this video as a five-level video game to keep things interesting. By the end of this game, you'll be vibe coding at a level that people don't even know exists. Before we jump into the game, you probably want to know who'll be playing with you. I'm Mark Kashef and I've been working in the
data science and AI space for more than a decade. And for the past two years, I've also been running my agency, Prompt Advisors. My team and I have delivered hundreds of projects in the generative AI space for companies in practically every industry you can imagine. While there's no official vibe coding certification out there, I have created tutorials and resources for the very platforms that let you vibe code, like Bolt, Lovable, and Replit. So, I feel pretty confident in saying that I'm a decent choice to have by your side as you master the skill set. All
right, enough. Game on. So, because we're vibe coding, I had to throw on the purple ambient lighting. And typically I would wear these blue light glasses as well, but I don't want to distract you too much. Typically this with some headphones is my vibe coding attire, but for now we'll just keep the lighting as it is. So from a 50,000-foot view, this is the universe of our game. And we'll be playing the five levels like I mentioned right here. So, if we zoom in, for level one, we have the Vibe Coding Launchpad
where in this level, we're going to learn about what vibe coding is, some general concepts and gotchas to look out for, as well as an overview of the platforms and all the different tools you have at your disposal to be a vibe coding expert. In level numero dos, we have the Prompting Playbook. Each one of these platforms has a slightly different way that you should prompt it. And vibe coding in general comes down to how good your prompting techniques are. So here we'll cover the nuances that will take you from a mediocre vibe
coder to a really good one. Now level three is one of the most exciting parts of the game because in this level we're going to build an actual demo using a bleeding-edge API from OpenAI, build it on Lovable, Bolt, and Replit, and compare the differences and nuances of interacting with each one of these platforms. And to make this super valuable, I'm also going to show you how to get off of these platforms and use that code elsewhere so you can deploy it to your own internal servers and own the code yourself. And in
level four, we're going to go over Feature Forge, which is where we're going to look at how to make your app look a lot prettier and a lot more visually appealing versus a lot of the cookie-cutter, out-of-the-box apps you'll see all over YouTube. And last, but certainly not least, we're going to go over Security Shield Up, which is a level where we're going to go through some common misconceptions around how secure these apps are and some tips on how to secure them to the best of your capabilities. There are three main
steps when you think about vibe coding. Step numero uno is you want to describe what you actually want to build. And typically someone will say, and a lot of demos will show, "Build me a to-do list app" or "Build me an app that tells me the weather." This first prompt you give it or the first description of the app you provide is one of the most important. It literally lays the foundation for what comes next. Imagine you're speaking to a software engineer and you say something like, "Build me an app to track my habits." Right?
If you just say that, it's unbelievably vague and there's a million directions that that engineer can actually go in to build that app. So, taking that into consideration, you typically want to have a very comprehensive description that you don't have to necessarily write yourself. You can use AI to help you craft the perfect prompt to set the right foundation so that the AI knows what's coming. In a way, you're kind of foreshadowing where you're starting and where you're heading. Once you dispatch that really well-crafted prompt and send it to the web app builder,
that's when the AI steps in and starts actually building out the code to create and render the app. On the face of it, this is super straightforward. But where we have problems is when the AI starts to hallucinate or make mistakes in building the app, or different components of the app are not appearing the way you'd like. Your descriptions might also become lazier over time. So, as you keep vibe coding, you might end up with a disconnect between what the AI thinks you're saying versus what you're actually saying. And as you get down
the rabbit hole of actually building an app, you'll notice that you'll eventually ask to change one thing, and it'll change that one thing but also change everything else. Why does it do that? Well, all of these apps are powered by large language models, which, contrary to a lot of mainstream news, are not that quote-unquote smart. They are predictive, meaning they try to predict the most likely way to accomplish your goal given the exact prompt that you gave. So, if you're not handholding the AI and telling it, hey, change this, but keep everything else as is,
and giving it those intricate, detailed instructions, that's where you run into these problems where you fix one thing and it becomes a game of whack-a-mole: you fix two things, three things break, and then it becomes this endless loop where you're not vibing anymore, you're crying. And the goal of this video is that you stop the crying and you have different tools and weapons at your disposal to know where to go, what to do, and when it's actually worth restarting the app to begin with. In terms of platforms, we'll be going over Lovable, Bolt, Cursor, and
Replit. And if you see here, I squeezed in my personal favorite on the development side, Windsurf. I'll show you how to use this to actually deploy the code of any one of these builders in a way that, again, you own the code yourself. Now, are there other apps that you could actually build with other than these? Absolutely. The industry of text-to-web-app is actually exploding right now, and you have platforms coming up every single day. A very common one I'm not going over is v0 by Vercel. You have Cline, you have Tempo Labs, you have Base44, you have Roo Code, you have Manus, which can actually build web apps now. And obviously you have GitHub Copilot, and then you have tens more apps popping up every single day. So the tools that we're using might change over the next 6, 12, or 15 months, but the core principles remain the same. All of these platforms use large language models as their foundation, so if you can generally become proficient in one or two of these platforms, those skills carry over. Kind of like learning
math. Once you learn enough math, you can learn any kind of math. The same thing applies here. Once you've vibe coded in different platforms, you start to get an understanding of what different platforms are good at and what all of them seem to struggle with. Now, the most overwhelming part of this industry is all the noise, where if we go to the left here, you've probably seen these labels or headlines on a YouTube video, a thumbnail, or an article where it says, "Game over. It's over. Who wins? We are cooked. Rest in peace, insert name of tool here." Again, de-noise and don't worry about the tool necessarily. Worry about the functionality. Worry about how long that tool has existed. Because if you look at the generative AI space in general, one tool that exists today could be gone tomorrow. Longevity is based on network effects, how big the tool becomes, how well integrated that tool becomes into other parts of a tech ecosystem, and lastly, how fast these platforms ship. If you look at something like Lovable, over the course of three months, they went from being relatively inconspicuous or not
really well-known to becoming one of the best web app builders out there because they kept focusing on shipping all these micro features to make it as easy as possible for anyone to start building. So, don't worry about the hype, just worry about the process. Now, like I promised, I want to equip you with as many tools and weapons at your disposal as possible while you vibe code. Because when things go wrong, which they inevitably will, no matter how expert you are at prompt engineering, no matter how good a developer you are, or
no matter how much you know exactly what you want to build, you will run into errors. So, if you use these tools, it'll help you get out of a lot of the messes that are inevitably going to come up. So, if we zoom in, we have at our disposal six different tools. One is screen recording. Another is using what I'm using right now, an actual arrow to point at certain parts of a screenshot to tell the AI, "Hey, this is exactly what I'm talking about." More on this later. We also have voice transcription, which
is something I use in pretty much all my videos and has changed my life in general: being able to use an AI voice assistant on my computer to help me write prompts with my voice. What's interesting about this one is that voice removes the friction of giving instructions, especially nuanced, detailed instructions. This will help you convey what you think or what needs to be fixed or changed very easily. Now, moving on to ChatGPT and Claude. These can not only help you write really good prompts for these platforms that are more detailed than
you could ever imagine yourself, but what they can also do is give you the ability to do vibe planning, where you can ask: you know what, make a checklist of every single thing that needs to happen for this app to work the way I want. And then you can give that checklist to some of these web app builders, and some of them, like Cursor or Windsurf, will physically use that checklist and check things off as they complete it, or others will at least use it as some form of inspiration to keep the AI on track
as much as possible. Now, the next one is one of my favorites, which is using deep research, either through Perplexity or Claude or something like ChatGPT Deep Research, to go and have it research these platforms like Replit and Lovable and help you create prompts and a plan that's tailored to the pros and the flaws of that particular platform. This is super helpful, especially if we live in a world where all of the tool names that I just mentioned change and now we have a brand new set of tools. And last but not
least, we have screenshots, which have always been helpful because, as the old adage says, a picture is worth a thousand words. If you can screenshot an example or an inspiration of what you're trying to build, or you can screenshot where you have a particular issue in what the AI has put together, this will make everything that much easier. Now, let's briefly dive into each one. So if we take a look at the first one, which is screen recording: this one is basically using something like Loom or Tella or any screen recording software
to go through a piece of software of your choice and then recording yourself talking through and walking through what that app looks like and what you really love about the user experience of that app or website. This is helpful because, and I'll show you a demo, you can put that video into a large language model that accepts videos as input, such as Gemini 2.5 Pro, and then you can have it create and craft a prompt to replicate all the nuanced things it saw on screen and the things it heard you say in the transcript of that video.
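If you want to see roughly what that looks like outside the chat UI, here is a minimal sketch using the google-generativeai Python SDK. The model id, file name, and prompt wording are my own assumptions for illustration, not something prescribed in the video.

```python
# Hypothetical sketch: turn a screen recording into a "replicate this UI" prompt.
# Assumes: `pip install google-generativeai`, a GOOGLE_API_KEY env var,
# and a local file demo_walkthrough.mp4 (names are illustrative).
import os
import time
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Upload the screen recording and wait until the file is processed.
video = genai.upload_file("demo_walkthrough.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-2.5-pro")  # assumed model id
response = model.generate_content([
    video,
    "Watch this walkthrough and listen to the narration. Write a detailed "
    "prompt for a web app builder (Lovable/Bolt/Replit) that recreates the "
    "layout, navigation, and the UX details I call out verbally.",
])
print(response.text)  # paste this into your web app builder as the first prompt
```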
So if a picture is worth a thousand words, a video is worth a million words. The next one's super straightforward: like you've been seeing so far, I've been using these arrow tools, using this tool here called Demo Pro. You can use that in a screenshot to tell the AI exactly what you're talking about. So let's say you use something like Lovable or Bolt; you'll see that you have the ability to add an image as context. So what you can do is, if you go to an actual image of your user
interface it has created so far, you can say something like: you know what, I think you're missing this part here on the foundations, you're missing this part, and we need some form of search bar, right? And I put three arrows around the area of the button that is not working properly. It's yet another way to grab the attention of the AI so it knows exactly what to focus on. Now for the next weapon, I have the voice transcription assistant. Myself, I use something called Whisper Flow. Not affiliated, but you have all
kinds of tools that are out there right now, like MacWhisper and Aqua Voice. What they allow you to do is not only have a transcription service on your laptop that you can trigger with a hotkey, but what's cool is it'll learn the way you speak over time. So, as you correct different things and different spellings of certain words, it will learn that and add it to its vocabulary. So, when you want to just yell at the AI to tell it why it's done something so horribly wrong, instead of saying this is
wrong, fix it, you can actually give it something a lot more verbose and a lot more detailed, so it has a lot more context to work with and use to iterate further. This alone is one of the best nuggets of this video. If you integrate this not just into your vibe coding workflow, it'll help you do anything from vibe emailing to vibe marketing to vibe Slacking. This is a super essential part of your vibe coding stack. When it comes to using ChatGPT and Claude, these can be your superpower partners, again, to write
prompts and basically tell it: hey, act as a prompt engineer. Here is an example of what I'm trying to build. Can you act like you're also a lead product designer or an expert at wireframing and come up with intricate details to describe exactly what I'm trying to build? And maybe you provide screenshots of what you're trying to go for, and they can help you as co-pilots. You know what's also helpful? Artifacts, which is a tool you can use in Anthropic's Claude, can also render mockups directly. So you could even create a mockup
using Anthropic and use that as an input into Lovable, Bolt, Replit, Cursor, or whatever, and then it has a bit of a cheat code or head start on what you're looking for. So if you can visualize exactly what you're trying to build yourself instead of just giving an inspiration image, it'll help you that much more. And the last thing I want to mention here is Google Colab. So, even if you're not a techie or tech-savvy yourself, but you do know you want to talk to some service, maybe randomly an Instacart API or an
API that has services that you haven't worked with before, one thing I really like to do is take the documentation of that app or service, dump it all into Claude or into ChatGPT, and say, "Hey, can you write me a Python script that will use XYZ service? It'll help me do XYZ, and output it in a way that I can run in Google Colab." If you don't know what Google Colab is, it's basically like a Google Doc except you can write code in that Google Doc. So even if you are not a code person yourself, if Claude is generating the code and you want to test whether it works, you can use Google Colab to paste that code, run it, and test whether or not it works to begin with.
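As a rough illustration, the kind of script you would paste into a Colab cell is usually just a thin wrapper around the service's HTTP API. Everything below (the endpoint URL, headers, and payload shape) is hypothetical placeholder structure you would replace with whatever the real documentation says.

```python
# Hypothetical Colab test cell: verify you can reach a third-party API at all
# before asking a web app builder to integrate it. Replace the URL, headers,
# and payload with the real values from the service's documentation.
import requests

API_KEY = "PASTE_YOUR_KEY_HERE"            # placeholder; never hard-code in a real app
BASE_URL = "https://api.example-service.com/v1/items"  # hypothetical endpoint

response = requests.get(
    BASE_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"limit": 5},
    timeout=30,
)

print(response.status_code)   # 200 means the key and endpoint are right
print(response.json())        # eyeball the shape of the data you get back
```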
Why care about doing this? What's really painful about vibe coding is when you want the AI to understand what you're trying to integrate with, but it doesn't know how to speak to it, because most of these models were trained as of two or three years ago. So it has this static knowledge of the internet and services, as well as documentation for different back ends. So if you want to be able to modernize the stack and tell it, you know what, I want to talk to the Instacart API, I want to talk to the Redfin API, you need to provide it some really good detail as to what it looks like, how to integrate it, and ideally how you should run it. So if you can run this code in Google Colab, feed that code back, and say, "Hey, this works. Forget about your training. Use this as the way you should connect to this API or service," it helps it understand that much quicker and saves you from going back and forth hundreds of times, especially if you don't know what it's doing; it'll just create more and more spaghetti code over time. On the deep research side, it's actually very easy to implement this, but it's also very potent, especially as all of these tools and their names change over and over again. So, if you use something like Perplexity, you can say something along the lines of: act as a prompt engineer that's an expert at writing prompts for platform X.
So this could be Lovable, this could be Bolt, it could be whatever you want. And what it'll do is go and watch YouTube videos just like this one and other YouTube videos, and read articles and documentation from that respective tool's website, to help you craft a prompt that speaks the right language for that particular platform. So instead of you having to relearn how to vibe code over and over again, you can just rely on the AI to do a lot of the dirty work and the grunt work for you. And the final tool
you can add to your arsenal is creating screenshots. Pretty much all of these platforms, including Windsurf and Cursor, allow you to attach some form of image as context to provide the AI with either an inspiration image or a series of images you want it to use to build the app. Or you can tell it: hey, here are examples where I don't like what you're doing, or what you should do is this and this and this. Letting you import that picture helps it better grasp what's happening, especially when you're having issues with, let's say, connections. If you want to connect to a database like Supabase, and for some reason you want to enable something like a Google sign-in, and that Google sign-in is not working even though you set things up in Google Cloud, the services are sometimes just not speaking to each other. A little screenshot of my Google Cloud console saying, hey, here are my credentials, here's my Supabase, what's happening, please troubleshoot, lets you rely on these tools not just to build the thing, but to help by going back and forth with you. Using something like chat mode, which I'll get into shortly, is very helpful to
make sure that you dial in your feedback exactly. So, with these tools at your disposal, you're now ready to dive in and explore what the main platforms I mentioned have to offer and the nuanced differences between them that'll help you delineate which platform you should use for which kind of app you're trying to build. All right, so for Lovable, here is the cheat sheet guide. Overall, just like other platforms, it's using Claude in combination with some other LLMs to actually power the creation of the code that's rendered in the UI. One particular thing
about Lovable is that it has an almost necessary need to integrate with Supabase, at least for now, to enable functionality. If you don't know what Supabase is, it's basically just a database, but it's a database that lets you create what are called edge functions. These edge functions act as a bridge to enable all kinds of functionality. So if you want to integrate an automation, if you want to call an LLM, or if you want to call some external service, Lovable will use Supabase as its bridge, creating a function between the database and the application so that you can actually execute that functionality.
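To make the "bridge" idea concrete, here is a minimal sketch of what calling a deployed Supabase edge function looks like from the outside. The project URL, function name, and payload are hypothetical placeholders; this just shows the general invocation pattern, not code Lovable generates verbatim.

```python
# Hypothetical sketch: invoke a deployed Supabase edge function over HTTPS.
# The function name ("summarize-note") and project ref are placeholders.
import os
import requests

SUPABASE_URL = "https://YOUR-PROJECT-REF.supabase.co"   # placeholder project URL
SUPABASE_ANON_KEY = os.environ["SUPABASE_ANON_KEY"]      # public anon key, not the service key

response = requests.post(
    f"{SUPABASE_URL}/functions/v1/summarize-note",       # edge functions live under /functions/v1/
    headers={
        "Authorization": f"Bearer {SUPABASE_ANON_KEY}",
        "Content-Type": "application/json",
    },
    json={"note": "Buy groceries, call the bank, book flights."},
    timeout=30,
)
print(response.json())  # whatever the edge function returns (e.g., an LLM summary)
```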
For other platforms, this is not as necessary, but for Lovable, the steps are typically: have some big prompt outlining your big vision, then in step two you connect to Supabase right away, and then you can start building functionality to make the app not just a mockup, but something that's actually practical. Here are some key features at a glance, and these are newer features: multiplayer collaboration, so multiple people can start working on the same app. On the security front, you have something called a security scan that gives you a bit of a TL;DR or summary of the different vulnerabilities your app might have. It's not comprehensive, but it's definitely a good first step in the right direction. You have chat mode, which is actually one of my most-used functionalities on most of these platforms, where you can basically take the AI aside and say: you know what, don't worry about building for a second, let's just talk. Why is this not going well? Why is this not working? What would I have to do in order for you to be able to
do XYZ? So chat mode lets you take the AI to the side, whisper, and have a practical, rational conversation before you just keep building over and over again and basically land yourself in an endless loop. Dev mode is something that's available on pretty much all these platforms now, where if you want to intervene in the underlying code of the platform, you can. For the last two: you have visual edits, where you can pretty much pick a component on your screen, click on that specific element, and say, "Hey, don't worry about anything else. This
prompt is dedicated to that specific component." And the last one is common to all platforms, but it's custom domain integration. So instead of having your app live at preview.lovable.app or whatever, you can switch that to a different URL of your choice that you can buy externally and then hook up to the underlying app. In terms of feature samples from the platform, you have the one that I mentioned, which is called chat mode, on the bottom right. You have visual edits here that allow you to click on a component, like I said, and then you can
switch things from the margin to the padding to the background color all from here without having to worry about prompting it yourself. And the last thing that a lot of people don't know actually exists on these platforms is a knowledge base. It's a bit hidden and not in plain view. But what you can say here is something like whenever you build an app, I want you to always use this brand color scheme or I want you to create it in this way or I always want you to remember that I want to use OpenAI or
Claude or Gemini in this particular way, and here's exactly how to call that API. You can add all this knowledge here so you don't have to keep repeating yourself every single time. Similar to how I'm giving you a cheat sheet, you're basically giving the AI a cheat sheet on how it should behave and how it should function, which is very similar to something like ChatGPT custom instructions, where you tell it: hey, this is my name. This is who I am. Here's everything you know about me and here's everything you should always assume, even when we're having a
brand new chat. So, examples of very basic prompts you could write to it: create a user authentication system with email verification and password reset functionality. This would be a good mid-conversation prompt to tell it exactly what we're going for. You can say in chat mode: find the bug in my login form that's preventing form submission and suggest a fix. So, this is a really good example of how you could take the AI aside, whisper this question to it, and have a back-and-forth before it writes any line of code.
In terms of strengths and weaknesses, overall, Lovable is really good for beginners. So, if you have no idea what you're doing whatsoever, you'll probably get the best outcomes, without focusing too much on functionality at the beginning, with Lovable, at least to create really cool mock-ups. The multiplayer feature is very helpful, and the security side, all of these are definitely strengths that I mentioned before. On the limitation side, the main thing to call out is that if you're on the free tier, you only have five messages per day. And you'll notice once you start trying to add functionality, this becomes
very expensive over time. So, typically, you're kind of prodded into paying for the service to build out what you're looking for. In terms of use cases, just like the other apps, you can use this for pretty much anything, but where I've seen the most success is MVPs and prototypes; on this platform they are awesome. In the middle of a client call or a paid consult, typically I'll be able to take a transcript of the call I'm having and then put it into ChatGPT and say, "Hey, build me a quick
UI to visualize what my client's trying to visualize or inspire me with." And then I'll paste that into Lovable while we're having the conversation, pull it up, and ask the question, "Is this kind of what you're thinking?" And now it gives a new level of depth to your conversation where you're not just making noise talking about words and where you want to go, but you have something tangible that you can start to really deconstruct and understand how to make better. You can also build websites and platforms. But when it comes to building the
more advanced stuff, you will have to have some more prowess and know-how in communicating the technical jargon. Now, moving over to Bolt. Bolt is very similar to Lovable in a lot of ways, but it does have some core differences. So when it comes to what I mentioned before, Supabase, this database you would connect to to build functionality, you can integrate it into Bolt, but you don't necessarily have to. It has what's called a browser-based IDE with web containers. And what that means in plain English is you can create and add quite a lot of functionality
without having to build that bridge I referenced before in Supabase. So you can pretty much, with one or two prompts, go from an idea to an app that does something, or calls an AI, or uses an API feature from some service, which makes it pretty unique. And one thing that's improved a lot over time: I used to have to wait what felt like two or three business days (I'm exaggerating) for a response, but now it's really, really quick. And this is very helpful for beginner vibe coding because you want to be able to get a feedback loop very
quickly on what's working and what's not working, versus waiting a bit longer, maybe with Lovable or Replit (which you'll see shortly), where you'll have to struggle and wait 10 minutes to know whether or not your prompt was good enough. One unique feature of this platform is these little sparkly buttons on the left-hand side that let you use their mini prompt engineer to take your prompt from mediocre to a lot more refined and verbose. And while it's still good, I would still recommend that weapon I gave you, which
is that voice command, because its prompts make your prompt better, but they are not necessarily knowledgeable about what you're trying to build. So, you'll notice when you use this feature, it makes for a much better prompt, but it might not be tailored to exactly what you're originally asking for. Still a very good feature. And if you're beginning and you're not very comfortable with prompt engineering, this one feature will make you a way better prompt engineer. Next we have chat mode, which is very similar to and works pretty much the
same as Lovable, where no code is built and you can just have a rational conversation. Then there are things like dynamic reasoning and showing differences. Bolt can do something where, instead of rewriting the same codebase over and over again, it will just look for the one component that needs to change and change that, which you might not care about if you're not technical. But if you're going back and forth tens of times and you're noticing that certain things are not working, sometimes the AI becomes lazy: it rewrites that same codebase, but it purposely removes some functionality because it wants to focus only on the one thing you asked for. That alludes back to our original image of that horse drawing, where the head appears in one version and not in the other. This increases the likelihood that the AI doesn't forget to recreate the code as it was before, so you don't randomly lose functionality that you didn't expect. And when it comes to making money, which is very popular amongst people building a SaaS, saying, "I built this million-dollar app that makes $5
per day." Um, you can integrate things like Stripe and then hook up a payment provider. And it used to be almost impossible to do this. I remember in November it took me a few weeks to actually create some form of backdoor to create a Stripe payment. But now with these platforms, it's as easy as creating a test account, hooking up a fake credit card to that test account, making sure it works, and then you can actually accept payment by building a product. So this is also in lovable as well. But in bolt, they have a
In Bolt, you also have a knowledge base, but what's cool is you don't just have a global prompt for how it should work; you can make a project-based prompt, where maybe for this particular project you want it to behave in a certain way, but in general, here are the main principles you want it to adhere to. So again, these are all nuanced differences, but depending on the app you're trying to build, you might need way more control or way less control in terms of accessing your code.
Just like Lovable, you do have a dev mode. But what's cool is you can also open it in a cloud-based editor. So, if you want to go a lot deeper and you have the technical prowess to do so, you can go into an editor in the cloud that you'll be able to sync back and forth with the underlying app. In terms of a prompt sample optimized for Bolt, here's an example. Your initial prompt could be "create a recipe tracking app." But the enhanced version would be "create a recipe tracking app with the
following features: user authentication; the ability to add, edit, and delete recipes; recipe categorization; search functionality; responsive design; and an option to mark favorites and create collections." So notice that this one prompt is unbelievably refined and concise, but there's so much underlying functionality that we're asking for in one shot. We'll go over this more in the level two section of this game, but for now, this is a good way to start thinking through how you communicate with the AI. In terms of use cases, it can do pretty much everything that Lovable can do. The one
difference here that I want to highlight is that it can now also create mobile applications. Now, mind you, you do still need a little bit of technical expertise to get to the finish line and deploy that mobile app to the app store, but it is technically possible now since they partnered with a company called Expo. If you've been following the channel for a while, you'll know that I love Replit as a product. I use it in all kinds of contexts to be able to spin up a very quick service that I can hook up
to something, whether it be a web application, a custom GPT, or an MCP server. So, Replit's definitely near and dear to my heart. And when it comes to Replit Agent, overall, it's actually very different in the way it processes and creates some of the elements of the app, which makes it very interesting and in some cases more secure just naturally. So beneath the hood, it's also using Claude, which most of these do. What's interesting about Replit is that they are a company very focused on development and cloud hosting. So they have all these
internal tools that Replit Agent can actually leverage, like building an internal database that you can use instead of something like Supabase. Everything can actually live within the Replit platform, and you can have the AI help you do that because all those tools are technically at its disposal, since they live in the Replit platform. What's really cool about their platform is that they actually give you some form of software development workflow, where as soon as you send your prompt, it comes back with a to-do list of what it thinks your
requirements are and it comes up and extrapolates additional features you haven't thought of. So, it comes back with this checklist and you can say, you know what, just worry about the one thing I asked you for, don't worry about the rest. Or, you know what, that's a great idea. Let's add that functionality and that functionality as well. It also has tons of embedded integrations similar to Bolt and you can leverage those in the app itself to make it easier and easier to communicate with services that typically would be a headache if you have no idea
what you're doing from a technical standpoint. Looking at the key features here on the right hand side, like I mentioned, this is what the to-do list looks like. When you ask for a build, it will come up with a prototype as well as a checklist of additional features. So, you can add any one of these. And then once you plan, it even tells you an ETA of how long you should basically go and grab a coffee and come back to see the first draft. But because of that feature, that's why I put this little meme
up here: still waiting. Replit can take a long time. So you give your prompt and you have to pray for the next 10 to 15 minutes that when you come back, it's what you want, because if it's not, you have to go back and forth. Now, overall, I'm sure this will get faster and faster over time, but if you want to know the status quo as of today, it's definitely not the fastest at getting you what you want. On the left-hand side, it also lets you choose from different app types. So, one will
be auto mode, which will pick which one of these is the best for the app you're trying to make. Otherwise, you can say, you know what, if you know you want to build a modern web app, it's going to use things like React and Node, which are frameworks that are very commonly used for, let's say, SAS applications. If you want something basic, you can say interactive data app where you don't care about the look and feel. You just care about the functionality, being able to demo it. A 3D game is where you start doing things
that are, you know, atypical of normal development, where it has to use different frameworks, and then you have just basic web apps, which resemble the interactive data apps. So it does allow you a finer grain of detail when trying to build your application. Another cool thing is that because it has all of these tools at its disposal that are internally built for Replit, you can give it something like an OpenAI API key or a Claude API key once, and it will store that in memory for every single app you build in the future. So unlike Lovable
and Bolt, you'll be able to say, you know what, let's build an app with OpenAI, and it will automatically know how to use your API key, which is securely stored in the cloud, and just integrate it into the environment file of the underlying app. And as of this recording, they just added a new feature that enables something like single sign-on with Google or X, which is typically an unbelievable headache to integrate with Bolt and Lovable, in one click using their Replit authentication infrastructure. So, this is a perfect example of ignoring a
lot of the noise that you have out there and focusing on the updates that really matter for the type of app you're trying to build. All right. And when it comes to Cursor, this one's a bit different from the others in the mix, in that it isn't primarily geared towards someone who's a non-developer. But what's good is that because it's using an LLM underneath the hood, it can be used by practically anyone; it just looks slightly more intimidating on the surface. What really separates this tool from the others is that you have full flexibility in the
type of model that you choose. Now, what's cool is you can actually use Grok, which typically is used for a lot of amusing, entertaining tasks online. But when it comes to project planning and hyper-creativity, it's a major cheat code. So the big thing that tools like Cursor and Windsurf offer is that very granular flexibility. The initial visualizations or designs you get might not be as pretty as Lovable, Bolt, or Replit, but theoretically you can not only build in Cursor and own the code from the get-go, you
can also deploy and keep going through that entire production process within one main workflow, which is super helpful. Now, if I were a complete beginner, I would probably still build on something like Lovable or Bolt, because you have a very quick feedback loop to see your designs and how things are working, but I would learn the skill, hopefully from this video, to export that code and then bring it into Cursor if I think I really want to put it into production. So, we saw this above in passing, but to get more granular with today's latest models,
GPT-4.1, Gemini 2.5, Grok 3, and all the Claude models, each one can be used for different parts of the process. So Claude 3.7, which, like you saw, is being used for Replit as well as Lovable, can be used for pretty much everything from documentation to building to providing detailed explanations of the codebase. Then you can use GPT-4.1, which has really good coding abilities, for maybe architecture design or creating the task list that will be executed on. What's helpful is you don't just have to use GPT-4.1 and you don't just have to use Claude: maybe you set up your project using Grok 3, then you use GPT-4.1 to create an architecture diagram of what your app will look like or how the screens will be wireframed, and then you execute the actual building of the code, following the process from here and here, with the Claude models. So this ability lets you hand off the right task to the right AI, which again is not a level of granularity that you have control over if you don't use something like Cursor or Windsurf. So, some more goodies with these tools
is that you can see a full breakdown of the structure of your project down to the most granular of files and actually manipulate those files really easily. And you also have the ability to quickly see any differences that are being changed. And most importantly, you can make it so that no change is actually made or committed without your consent. Whereas again, with Lovable, Bolt, and Replit, things happen and then you only figure out something's not working once you physically click something and it breaks, which is not the best place to be, because if you don't even know why it broke, let alone where that specific functionality lives, you're kind of on an island. That's why these tools give you more granular control; they're more sophisticated in the different layers you have to make sure your app doesn't suddenly break without you having any understanding as to why it happened. So when it comes to the functionality of Cursor, you'll usually use something called agent mode. This mode lets you have pretty much the same interaction with the AI that you'd have in Lovable, Bolt, and Replit, where you can
ask a question and go back and forth, and it will also execute code when it makes the most sense. Now, unlike the chat mode that we saw in Bolt and Lovable, you have an ask mode. This is where you can bring the AI to the side, ask a few questions, and ask it, in this case, to make a checklist that Cursor will then follow to execute each and every part of the process while documenting what it has and hasn't completed. Now, the reason I put this meme here again is not necessarily because Cursor is slow per
se, but because it asks you for permission for everything. So if you don't want it to ask permission to run every single command on your computer, you want to set it to auto-run. And on the right-hand side, you can see all the different models you have at your fingertips as of today. And you can even control whether or not you're using a reasoning model, which is helpful because if you have a much more complex task, then
using a reasoning model makes sense, but it doesn't make sense to ask a reasoning model to just center a div on a page. That's probably not the best use of your money or of the agent capabilities. And one question you might be asking is: if you use something like Cursor, how do you actually visualize and see what the program looks like? Now, Windsurf has a cool in-browser preview where you can visually see what's happening, which is in beta right now. And in Cursor, what you can ask is, "Hey, can you start some form of server where I can see this?" It will initiate something called a localhost server on your computer. It will use a different port depending on how many other apps or versions you have running. Then you can just put that address in your browser and see what the app looks like right away. So you can have that instant feedback loop; you just have to take an extra step to be able to see it.
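As a point of reference, "localhost plus a port" just means a tiny web server running on your own machine. Here is a minimal, framework-agnostic sketch using Python's standard library to serve a built front end from a dist folder at http://localhost:3000; in practice Cursor will usually start your framework's own dev server (for example via npm) instead, so treat the folder name and port as assumptions.

```python
# Minimal sketch: serve a static build folder locally so you can preview it in a browser.
# Assumes a ./dist folder exists; open http://localhost:3000 after running this.
import functools
import http.server
import socketserver

PORT = 3000  # pick any free port; dev servers often default to 3000, 5173, 8080, etc.
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory="dist")

with socketserver.TCPServer(("", PORT), handler) as httpd:
    print(f"Previewing ./dist at http://localhost:{PORT} (Ctrl+C to stop)")
    httpd.serve_forever()
```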
So as a summary, you have a few different modes in Cursor. You'll primarily be using agent mode, which lets you do everything from writing the actual code, to executing functions in your computer's terminal to install libraries, to committing to GitHub. If you don't know what those words mean, GitHub or commit, it will help you understand not just what to do; it'll do it for you. So if you give it permission for GitHub, it'll push your code into a repository that you can keep committing to and basically create a version history, so that if you ever need to roll back, just like you have a rollback button in Bolt and Lovable
as well as Replit, this will be your rollback, essentially. In terms of use cases, this one I won't gloss over: it's really good when it comes to advanced API integration and implementation, as well as documentation generation, creating artifacts of basically everything and every component in your application, especially if you want to hand this off to a team of developers or your developers to take into production. Very helpful. And the last thing here is something called code refactoring. And in plain English, code refactoring basically means that the AI wrote so much code, maybe thousands of
lines of code, that it thinks it can rewrite a lot more succinctly or concisely. Now, typically, this will happen a lot in Lovable, where it gives you some form of suggestion saying, "Hey, do you want me to refactor your code to make it more concise?" Your inclination, if you're less technical, is to say, "Sure, less is more. Why not?" But what's happening is you're giving the AI control to cut out functions that it might have forgotten about that are super important. So when things stop working, now you have a
real conundrum where you will spend 5, 10, 15 messages trying to get back to where you were, because it perhaps cut out very important parts of the code. Which is why I typically recommend, if you ever see the word refactoring and you are not a technical person by background, that you avoid doing it unless you understand exactly what you are refactoring. If you were to do it in Cursor or Windsurf, at least you have a full list of everything that's being refactored, so that if you see it's trying to over-summarize or oversimplify
parts of your app by cutting out essential parts, you can stop specific parts from being refactored or changed. And this is helpful because you have a full paper trail of what is going to change before it actually does change, instead of regretting it, rolling back, repeating this process over and over again, and then saying vibe coding sucks. So while most of you will gravitate towards the first three platforms, it's super helpful if you sit down for a whole weekend and just understand how to use these tools at least at
a basic level. So to finish off level one, let's summarize three different areas. In terms of development capabilities, Lovable is really nice on the front end. The back end can sometimes generate what's called spaghetti code, which is different code that's pasted together that works but isn't necessarily the best way to actually write that code. In terms of a database, you're kind of stuck using Supabase, at least for now. Can you connect it to other databases? Absolutely. But natively, out of the box, it's not there. Mobile, it's not really there either. It can maybe
create mock-ups for you that would look good on mobile, but creating a mobile app, still not there. Deployments: one click, similar with Bolt as well as Replit Agent, very easy to deploy. And the other thing I wanted to point to is Cursor or Windsurf, where the back-end capability of creating those files and structuring them the way a junior developer would is your best bet. But when it comes to deployment, which I will show in this video as well, it is more tricky. You do
have to take a few more intermediary steps to actually get to the point where you can do the equivalent of clicking one button here. But again, the value is that with Lovable and Bolt, when you click on that deploy button, you're technically renting cloud space to use that URL, and it's not always safe to provide that URL to the world because people can do all kinds of calamitous things to it. In this world, when you deploy it, it's deployed on your own terms. That's the key differentiator there. In terms of integration features, Lovable is decent. Again,
you're using Supabase primarily to be that bridge. Bolt is also very strong, especially on the payment processing side, where it has that native Stripe integration, which Replit recently added as well. And on authentication, I would now give this four stars with Replit Agent, because with Replit Auth, it does make it infinitely easier; I tried it out yesterday. And when it comes to overall integration, obviously Cursor and Windsurf will take the cake in most of these categories. Payment processing, though, because it's not well integrated natively, you're going to have to
figure that out. So this one would probably be the hardest; I put it as the hardest compared to the former three. And the last category, which is performance and optimization: obviously Cursor and Windsurf win most of these categories as well, because it all comes down to how well you can structure your code in a way that's more future-proof. Now, when it comes to resource usage: with Lovable and Replit Agent, you'll be burning a lot of credits/tokens on some fairly
basic requests if you don't make each and every prompt as valuable and concise as possible. So on the resource side, Cursor and Windsurf are a lot more economical if you were to do a token-for-token comparison, and especially a message-for-message comparison. So with that, we now culminate level one, where you now have a decent understanding of the lay of the land and the foundations of these different tools: how they work, what their underlying brains are, and ideally where their nuances, pros, and cons are. And with this knowledge, we can now level
up to level two, where we're going to dive into how to actually speak to them or prompt them. All right, so this is probably the most important level of the five levels because you're only as good as your last prompt. And when it comes to these apps, like I've said before and in a ton of my videos, the first prompt is typically the most important foundational prompt where you give some form of foreshadowing to the AI to tell it, hey, here's where we are and here's where we want to go. Let's focus on the core
shell, but know that these four components or these ten components are coming down the pipeline. If you think about it conceptually, imagine that you have a contractor building a house for you and all you said was, "Hey, build me a house that should have 10 bedrooms and five bathrooms." If you never said that you wanted a pool smack dab in the middle of the house with a courtyard, they're not going to design or blueprint the house in a way that will accommodate that. But if you make that work order at the beginning, the likelihood that you get what you're looking for is very high. The exact same concept applies here. If you pivot a very major component of what this app should or shouldn't do halfway through, a lot of the code that's been put together was optimized towards a certain goal. If you change that goal very drastically, in some cases you might need to restart, because you'll start introducing different kinds of code that are mismatched with the original code. And now you get that spaghetti code analogy that I keep referring to over and over again. So, similar to
an actual developer, and I say this from experience, I will spend 20 to 30% of my time just planning my project, not writing a line of code until I have all my user stories, until I have my wireframes, until I know why I'm building what I'm building. The more context you give, the better the outcome, and it's the exact same thing with an AI project. If this is your mini junior software engineer, then you want to give it as much context as possible without all the noise and the fat, focusing on the concise, need-to-know details. So before we get into the nitty-gritty, it's good to just review what we saw in the last level. I made these little playful cards to represent how each and every platform works at a glance, so we can keep that in memory while we're going through this. Replit Agent: decent on visualization, really good at being autonomous, thinking through the problem, and planning, and really good at iterative feedback. On the Cursor side, we saw that visually it's definitely not going to be the best out
of the box, but logic, autonomy, and token thrift, meaning being responsible with how many tokens are used to generate a result, it's really good at. With Bolt, we have a balance: really decent on visual, decent on logic and autonomy, and it does give you options to be thrifty if you so wish. And last, Lovable has the most lovable visualizations for now. The logic is mid-tier. On autonomy, you do need to handhold it quite a bit and give it a really good foundational prompt. On token thrift, it's quite a spender, especially at scale.
Will this change for all of them? 100%. But as it stands today, I would probably use Lovable out of the box to get things going and then make my way to something like Cursor or Windsurf. For prompting techniques, I generated some plaques for each one, again to make this as not-dry as possible. When it comes to Replit, the core things you want to focus on are the goals and the user actions. So, what are the goals of the app and what should it functionally do? Those are some of the most important
things. The agent's really good at adding additional features because, again, it has all of this internal tooling at its disposal. So if you want a database, it already has a database built in. If you need to host it on a server, Replit is literally a cloud hosting platform, so it can support that very easily. If you need that server to scale, you just move a little slider to the right and you can scale accordingly. So you have a lot more granular control here, which is why your prompt should primarily focus on the high
level as well as the intricate details of the user journey and the user flow. With Bolt, you want to be very direct and hyper-concise. You'll also notice it's super useful with Bolt to tell it, if you can, which files it should be targeting. So let's say you have an OpenAI key or an Anthropic key, and it lives in a .env file, which stands for environment file, where typically you'll put any credentials you want to refer to so you don't have to explicitly put them into the body of the app, which is unsafe. If you can tell it, "Hey, edit this thing in the .env file," instead of it looking at all the other files and scanning for where this change most likely has to occur, then it will also most likely not do that thing incorrectly. So being explicit is helpful here.
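To ground that, here is roughly what a .env file and the code that reads it look like. The variable names and the python-dotenv usage are illustrative assumptions; your builder may name things differently, but the principle of keeping keys out of the app code is the same.

```python
# Illustrative sketch: keep credentials in a .env file, never in the app code itself.
#
#   # .env  (listed in .gitignore so it never gets committed)
#   OPENAI_API_KEY=sk-your-key-here
#   ANTHROPIC_API_KEY=sk-ant-your-key-here
#
# Requires `pip install python-dotenv`.
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file into environment variables

openai_key = os.environ["OPENAI_API_KEY"]       # the code only ever sees the variable name
anthropic_key = os.getenv("ANTHROPIC_API_KEY")  # returns None instead of raising if missing

print("OpenAI key loaded:", bool(openai_key))
```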
In terms of trigger words, Bolt seems to like commands and trigger words just like normal LLMs do, which makes sense because it's using Claude primarily. You say something like implement this, create that, draft that; using those command words is super helpful. When it comes to specifying endpoints, what this means is: let's say you had a service you wanted to use, like Groq with a Q, which is a super fast way to run inference on an open-source model like Llama. Out of the box, Bolt is using Claude, and Claude doesn't know what Groq is, right? Because its last snapshot of the internet is from a few years ago. You have to tell it, hey, here's how you call Groq, and here's how you implement it; ask me for my API key so you can add it to an environment file. So what you could do is go to Perplexity and say, hey, can you write me a small Python snippet on how to call the Groq API with full functionality, and put a placeholder for my API key, and it will go and spit something out. I will take that, put it into Bolt, and say, "Hey, build a connection that will send the text in this box to this service when the user hits send," and I'll just give it that pasted code.
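For reference, the kind of snippet Perplexity (or ChatGPT) would hand back looks something like the sketch below, using Groq's official Python client. The specific model name is an assumption that may be outdated by the time you read this, and the API key is a placeholder you'd move into your .env file.

```python
# Hypothetical example of the "here's how you call Groq" snippet you would paste into Bolt.
# Requires `pip install groq`; the model name below is an assumption and may change.
import os
from groq import Groq

client = Groq(api_key=os.environ.get("GROQ_API_KEY", "PASTE_YOUR_GROQ_KEY_HERE"))

completion = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # an open-source Llama model hosted by Groq (assumed id)
    messages=[
        {"role": "system", "content": "You are a helpful assistant inside my web app."},
        {"role": "user", "content": "Summarize this note in one sentence: ..."},
    ],
)

print(completion.choices[0].message.content)
```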
If you want to add some more documentation, I also find it helps it be more directional in understanding where we're going, as well as how this endpoint actually works, so you can navigate it, especially if you want to make changes later on. With Lovable, because it's a very visually oriented platform, it benefits from being spoken to in terms of UI elements, which stands for user interface. So you can say: I want a header with this kind of minimalistic design. And this is literally where you can get AI to help you, to say: hey, act as a lead designer or a senior designer at insert company whose
designs you like. Let's say you are a senior designer at Apple. Build a minimalistic dashboard in the style of someone that would have worked at Apple or Amazon or insert your inspiration here. And then you generate some prompt that will go down into more intricate details on what the UI should look like because typically I'm not a designer by trade. So there are words or ways of describing things that I've not properly learned how to articulate. An AI will help compensate for that by focusing on those micro details that will help you get that much
closer to your end goal or end outcome in terms of what it should look like. And when it comes to using something like chat mode on Lovable, I find this to be essential for now: anytime you have a brand new, very technical requirement, you should go into chat mode, talk about it, and once it's clear that it's playing back or parroting back what you expect as functionality and how it's going to accomplish that functionality, then you approve it, and then you have a higher likelihood of success. A really good example of this is if
you ask it, hey, implement OpenAI. A lot of times it'll implement GBT4, which is almost a 2-year-old model. If you were to go in chat mode and say, "Implement GB4." Oh, and by the way, tell me which model you're going to use, it'll tell you straight up, hey, I'm going to use GB4. And that's your opportunity to say, you know what, you probably don't know this in your training, but there's this new model that's called GBT4.1. Here's the name of it. If you want to call by an API, use this. Swap it out for GBD4,
but play that back to me. And then in chat mode, if you keep it on, it'll say, "Okay, cool." So instead of GD4, I'm gonna call this new one. Sound good? And then you say, "Sounds good." You implement the plan and it goes and does what it has to do. A bit more work, I know, but it's a good quality of life hack, especially if you want your lovable experience to be indeed lovable. And finally, if you're dealing with cursor or wind surf, you want to be as particular and specific as possible because these are
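On the code side, the swap it plays back to you usually amounts to a one-line change to the model identifier — something like this sketch (the exact model strings are assumptions; use whatever OpenAI currently lists):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    # model="gpt-4",   # the older default the tool may reach for on its own
    model="gpt-4.1",   # the newer model you told it about in chat mode
    messages=[{"role": "user", "content": "Summarize this app's purpose in one line."}],
)
print(response.choices[0].message.content)
```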
And finally, if you're dealing with Cursor or Windsurf, you want to be as particular and specific as possible, because these are developer-grade tools. So when you talk about what kind of libraries you want to use for your components, or what kind of models you want to leverage, you want to be very explicit. And when it comes to specifying the model, again, it's helpful because whether you're using Gemini or Claude or OpenAI right now, they all suffer from the same thing: they're stuck at their snapshot of the old internet. They don't know about a lot of things. Now, you can leverage things like web search to supplement this, but out of the box, you want
to be as thoughtful as possible. And last thing here is to also generate either a checklist of all the tasks you should adhere to throughout building the code or any rules or constraints you want to respect before you start the project because again you're laying the foundation for how the house should be built. And taking that analogy further, if there are permits or rules and regulations you need to be aware of, then you want to say that upfront as well. If we look at this comprehensive table here, it puts pretty much everything we went over
into perspective. And if you're worried about digesting every single part of this all at once, don't worry: I'm going to have this table, along with specialized cheat sheets for each platform, available on the house in the description below. So you can just sit back, observe, and learn — you're not leaving this video without understanding all of these intricate details. One thing I like to focus on here is that, thematically, with Lovable you typically always want to focus on the visual side, like we said before. In terms of primary focus, talk about the visual design. In terms of level of abstraction — how you describe what the user journey should look like — describe it visually. If you want to give it feedback on error handling, describe the visual or functional error and ask the agent to fix it. If you talk about ideal prompting style, it's visually descriptive and collaborative. So, thematically, you want to speak with your eyes when it comes to Lovable. Whereas with Replit, like we saw before, you want to provide high-level objectives, be very goal-oriented, and always focus on the big picture. You can imagine Replit Agent as a project manager with a Jira board. If you're not familiar with Jira, it's a task-management, project-management platform where you have what are called epics — very large sprints of work done over time, with underlying subtasks. In an epic, you'll usually describe, "Hey, here's the overall goal of this project," and then within that you'll have a bunch of subtasks on how to accomplish that end goal. That's kind of the mentality you want to
have if you're using Replit Agent: it's very focused, very procedural, and very sequential. With Bolt, it's very receptive to not just telling it what the error is, but always supplementing that — not just "fix it," but "fix it so it does this specific thing; right now you're not doing it, or you're not doing it in this way." So be very intentional in the way you provide feedback. Whereas with Lovable, typically you'll get a "Try to fix" button that pops up, and you'll click that one to three times — and after three times you're probably dealing with spaghetti-code syndrome, where you have to maybe roll back, or actually take the code from dev mode, throw it into an AI like ChatGPT, and say, "Hey, what's going wrong here?" so you can give it more granular feedback. And finally, when it comes to Cursor, you have different tools at your disposal: terminal commands, rules you can generate, the ability to switch models on the go, like we said before. It's basically like flying a plane — you have all these different buttons and knobs and levers to pick from, which can make it the most powerful of all of them. But if you're intimidated, if you're just starting out, you probably don't want to overwhelm yourself yet, until you get more at ease with these other platforms. Now, if we go up a little bit, I have a bit of an Easter egg for you — actually, a bit of a naughty Easter egg. If we click on this GitHub link, this was leaked a few weeks ago, where someone was able to expose the underlying system
prompts — basically the prompts that determine how these different tools I mentioned behave to the world. So, if we take a peek, where it's helpful for us is if we go into, let's say, Lovable. Let's go here and click on the Lovable prompt. You're going to see how Lovable has been instructed to think. And if you understand how these models think, then you can use that to your advantage to understand, ahead of time, what blind spots it will probably have based on the type of app you want to build. If we take this as one small example, it says here: you are Lovable, a code editor that creates and modifies applications; you assist users by chatting with them and making changes to their code in real time. Then it provides some more details and these principles: create small, focused components (fewer than 50 lines). The next one: use TypeScript. Follow an established project structure. Implement responsive design. And when it comes to component creation: create a new file for each and every component, and use this specific set of components when possible. So if we find that a lot of Lovable apps look the same, it's probably because they're using the same set of components out of the box. Why should you care, and why is this helpful? If things are not working or looking the way you expect, and you have something like this giving you information on how it expects to behave, then when you write your initial prompt you'll be more mindful of telling it, "Hey, I know you're biased towards using these components, but I actually want you to use those kinds of components." And if we
navigate to something like Replit's underlying system prompt, you'll notice some really important things, which kind of confirm my initial beliefs — namely that it prioritizes Replit's internal tools first. So instead of saying, "Hey, go build on Supabase," or "let's hook up a Supabase instance," it'll focus on creating its own virtual environments and not using external tools. And if you scroll down, you'll notice a very interesting instruction that says, "If a user has the same problem three times, suggest using the rollback button or starting over." So, this kind of confirms a belief
I had by accident, which is a rule of three. If you're encountering the same thing three times, most likely you are in spaghetti code land or your instruction is not good enough or it's missing knowledge about what you're trying to implement to begin with. So I'll leave it to you if you want to go through the rest of them to better understand how each and every tool behaves because if you know that then you know exactly how to reverse engineer and speak to it in a way that's optimized for that particular platform. Now that we
understand the foundations of what vibe coding is, the nuances of the different platforms, as well as how to be a prompt whisperer for each type of platform, we're now ready to look at some MVP apps, how they were built, and different techniques you can use to build functional apps without necessarily having to do it the hard way. And by the way, before we keep going with the video, if you're finding this kind of content super helpful, I run a whole community where all I do is provide tips, tricks, exclusive content,
and specialized offers to get thousands of dollars off of these AI tools. So, if that sounds super interesting to you, check the second link in the description below. All right, back to the video. So, if we go to the bottom here, let's assume that we use this prompt, which is pretty straightforward: "Build me an app that will create a Mermaid diagram on demand using a natural language command sent to something like OpenAI." Okay, if you don't know what Mermaid is, it basically lets you create diagrams, or schematics of diagrams, on the fly. I use it all the time internally for client projects just to show the workflow of how a system works. So when it comes to thinking through anything like a system design, an operations flow, or anything that has a sequential set of steps, it's amazing. What I wanted to do was gauge how each app would actually render this prompt, how it would look, and how it would differ in terms of steps between each platform.
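Just so the moving parts are concrete: whichever platform builds it, the heart of this app is one call like the sketch below — natural language in, Mermaid source text out. The model name and prompt wording here are my own assumptions, not what any of the platforms actually generated:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def text_to_mermaid(description: str) -> str:
    """Turn a plain-English process description into Mermaid diagram source."""
    response = client.chat.completions.create(
        model="gpt-4.1",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "Return only valid Mermaid flowchart code, no prose, no code fences."},
            {"role": "user", "content": description},
        ],
    )
    return response.choices[0].message.content.strip()

# The app then hands this string to a Mermaid renderer to draw the actual diagram
print(text_to_mermaid("How a factory producing microchips works, step by step."))
```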
So if we zoom in to Lovable, this is what we got initially: a screen of death with some weird Vite error — Vite being one of the frameworks it uses under the hood. Once we got past that error, we connected to Supabase to build that bridge. Once connected to Supabase, we just said, "Hey, this is how to call the OpenAI API for the endpoint or the model that I want." This is an example of where I went to Perplexity and said, "Hey, can you output a snippet of how to call GPT-4o or GPT-4.1?" Then I took that snippet, went into chat mode, and said, "Hey, we want to make the Mermaid diagram, but here's the type of model I want to use." It took that, and after one or two tries we got something like this, where when we said something like "how will AGI happen," it starts to map out what that looks like — and that's kind of what Mermaid does: it maps out the entire process. On Bolt, surprisingly, it was actually one of the fastest at going from concept to actual production without me even specifying anything around OpenAI. It was using GPT-4o, I believe, by default, and in this case I asked it to create a diagram. It generated that diagram; it was functional within two or three prompts and very quick compared to Lovable. When it came to Replit — I'll show you how this actually looks in the demo app itself — it didn't just generate the diagram, it also suggested creating example prompts that you could click on to immediately execute and show the user how to use the app. So if I were to just exit for a second and go into Replit and move this over, I can say something like, "Can you build a process for how a factory producing microchips would look operationally?" All right, we'll use a little voice assistant there. Then we have that transcription, we click on generate diagram, it's going to send this to OpenAI to create a structure, take that structure to the Mermaid backend, and then come up with something like this, where we can go through the entire process and download the file if we so choose. And again, for just one to two steps, this is
pretty good. A year ago this used to take me 10 to 15 steps and a bunch of custom code. So if I click on another one, like "draw a sequence diagram showing how a payment processor works," and click generate diagram, it's fairly responsive and quick, and you can see it's going down to a very intricate level. So even if you don't know what Mermaid is, this alone is a hack for understanding how to use it. And one little plot twist: sometimes I create Mermaid diagrams to feed in as images to Lovable, Replit, Bolt, or whatever, to tell it, "Hey, here's how the system should work functionally in terms of the user journey." So you can even do agent-ception, where you use an agent to build a diagram to give back to an agent to build you something else. And last but not least, we did this in Cursor, and after three tries — most of which were honestly just installing libraries I needed on my computer — it created something very similar, where it generates a diagram. It didn't go the extra step on the visual side, which we kind of already expected, but it functionally worked pretty much right away. All right. Now, in terms of building these tools and making them functional: yes, you can rely in some cases on the AI figuring out how to make certain functionality work, especially if it's as simple as "hey, call this API that is widely known" — say Anthropic's API, Gemini's API, or OpenAI's. But if you want to do something more intricate, something with five, six, ten different steps, you'll find yourself spinning over and over again if you're not technical enough to make that functionality work effectively and consistently. Some of the cheat codes for getting around that: instead of going back and forth with Python and Supabase, you use workflow automation tools like n8n, Make.com, or Zapier. All of them can accomplish the same thing in different ways. And if this is the first time you're hearing of any of these platforms, just know that they're basically cheat codes to create sequential flows where you have some trigger — which in our case will be receiving data from the app
and then taking that data and running it through a pipeline of different steps — which usually just means you're putting it through an OpenAI step, putting it through a Perplexity step, maybe transforming the data in some way, storing the data somewhere, and then retrieving it in some way. So there's this entire intricate process, and I'll show you two examples of these workflows to get your head around what this could look like architecturally. Now, on this side here we have some weird diagrams, if you're not familiar with them. And if you know what a webhook is, by the way, you can skip ahead to the next part of the video. I want to make sure this is clear for everyone watching, because if you understand what this icon means — and, more importantly, how to use it — you can build all kinds of apps; understanding workflow automations, this one thing, is like 60% of the battle. So this icon, again, is called a webhook. If you have one webhook, you can think of it functionally as an ear: it's just listening in for data that's being sent from a server or an app. So in our case, if we wanted Lovable to be able to send a request and route it to one ChatGPT step to do something with some form of input, and then another step to check it, refine it, or summarize it in some way, then the webhook would be listening in for this data and then executing that workflow. And by the way, if this doesn't make any sense to you, I'm going to make a very basic app just to show you how this would work. Now, why
do I have a boomerang icon here? Well, when you have two of these in the same workflow, you have one that acts as an ear and one that acts as a way to kick the result back to the app — because if you send information to this ear and the workflow automation runs somewhere in cyberspace, that's great, but it's not showing up back in your app where you need it to be. If you add a second webhook, this can create a response, and this response is basically saying, "Hey, we ran this entire automation; here's the asset that you need to show on screen in the app." So this is typically distilled and delineated. Let's say you have a series of prompts — it'll just be the output prompt. But in the data you're receiving from Lovable or Bolt or Replit, you have all this additional what's called metadata: what device was it sent from, what was the IP address, what is the originating address of Lovable, and all these additional things that could be used but that, for most cases — at least your cases in vibe coding — you don't really need. It's mostly noise. So once you de-noise what was collected by the ear — what it heard — and focus on the distilled portion you want to send back, that's when you have that webhook act as a boomerang.
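If it helps to see the ear-and-boomerang idea stripped of any particular platform, here's a minimal sketch in plain Python. The field name `dream_description`, the route, and the port are assumptions just for illustration; in practice Make.com or n8n plays this role for you:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook/dream", methods=["POST"])
def dream_webhook():
    # The "ear": listen for the JSON the app sends when the button is clicked
    payload = request.get_json(force=True)
    dream = payload.get("dream_description", "")  # the distilled part we care about
    # (payload also carries metadata -- timestamp, source URL, etc. -- which we ignore)

    # ... the middle of the automation would run here (OpenAI step, transforms, storage) ...
    result = f"Received your dream: {dream}"

    # The "boomerang": return a 200 response so the app has something to show on screen
    return jsonify({"status": "ok", "result": result}), 200

if __name__ == "__main__":
    app.run(port=5000)
```

The app's "send my dream" button is, underneath, just doing an HTTP POST to that URL with the contents of the text box in the body.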
All right. So I just pulled this up in Lovable, and I made a very basic prompt saying: create a very basic app where someone could talk about or describe the dream application they're trying to build; then we'd have a big, responsive blue button with glitter that you could click, which would send that request to a webhook of some sort. Okay. So then we get something like this. Obviously, I wasn't very thoughtful about the way it would look. It has a placeholder for the webhook, which we don't really need, and we have this "send my dream" button. Once we click this button, assuming it's functional, it will send information to a webhook. So let's first make an example in Make.com — we're going to have two more examples later on using something called n8n, which works very similarly, just with some nuances. So, if we click on webhook, we'll click on the option that says custom webhook, and then we'll create a brand-new webhook. We'll call it "webhook demo" and click save. You'll see it's now just spinning endlessly: what it's waiting for is to understand what kind of data we're dealing with, so it knows what to look out for every single time it gets a trigger. So, if we go back into Lovable, I say, you know what, here's the address where you can send this request to. Let's go back and I'll give a response here: "I'm going to give you a webhook URL where I want you to route all of the requests to. We don't need a webhook URL within the view itself. All we need is the 'describe your dream application' text box, and to make sure that when we click on 'send my dream,' it sends the request there." So I'll just put that there for now. If you remember, we have to hook up Supabase and Lovable to make this functional. So before I even send this request, I'll keep this on my clipboard and we'll connect it to one of my databases to enable functionality. It's now connected — in like 5 to 10 seconds it should be functional. There we go: you're now connected to Supabase. So now I can send this request after I give it the actual URL. Let's go back to Make.com, take this, copy-paste right there, and send the chat. And because it's not that complicated a request, I don't really need to go into chat mode. If there was some back-and-forth, or
I'm trying to say, "Hey, this is what you send and this is how you should receive it back." I might go in chat mode to make sure it understands exactly what I'm going for. So now we have a UI that just has this text box and this button. When we click this button, assuming we have something here, it should send a request to the ear that's listening for that data. So, let's say I want to build an app that creates uh world peace. Sounds like I'm at a beauty pageant. So, oh, look at that glitter.
I'll do send my dream here. And now it says it's been sent. I'll know if it's been sent if it says here successfully determined, which it happens to right here. So, if I click save and I run this again. So, if I say something different here, like I want to create an app that solves world hunger. Okay? And then we send that over again. You'll see now we've received a bunch of data. So, what I'll do is I'll go to output bundles right here. So, we can take a better look at it. So, you'll see
if I zoom in, we have the dream description — that's what it labeled it as in Lovable — and we have the actual text. Then we have a bunch of other information, and this is what I referred to before as metadata: the timestamp of when it was requested or sent, the source (which is actually the Lovable app itself — this is the URL of that app), and so on. And this is essentially all we need to carry out the next parts of the automation. Because now that it understands what it's listening for from Lovable, we could take that and say, you know what, let's create a GPT step here — or rather, let's use OpenAI. So let's do something like that. Let's use a model like, say, o4-mini or o3-mini. All right. And then we can say something along the lines of: take the request we just received and do something with it. So in here we can add a message of some sort — in this case we just say "user" — and if you click on text, we can now reference what we got from that ear
which is all this data and we can say something like the following. extract the main request from the user here. All right. And we put this here. Um, we want to create a plan on how to execute this person's dream app step by step. Okay. So, if I send that over. All right. And I can run. Oh, let me just run this end to end. So, let me just execute this process by clicking run once. And let's run this again. send my dream. Now it's been sent over. The ear already knew what to listen out
for. Now it's giving it to the next step, which is sending this to OpenAI to get some form of response here. So if we open this up and we go to result, you can see it's now come up with a step-by-step request on how to accomplish that person's dream of solving world hunger with an app, if that's even possible. So that's one clear-cut example of how the ear can be used to listen in for a piece of data or pieces of data and you can leverage whatever piece of data that you need to execute it
externally from Lovable, Bolt, Replit, or what have you. And just to close the loop — pun intended — if you wanted to run it back to the app, all we'd have to do is create one more webhook module, the webhook response, where all we say is: send a 200 status telling it that it was successful, and here's what we should return back to the app. So then we could say we want to return the result from the GPT step. And then when we click
save, if we were to run this again, it will run and send the response — but Lovable doesn't know what to listen out for. It doesn't know that a boomerang is coming back to the app. So we now have to tell Lovable: we're going to send this request when we click on this button, but wait and listen for the response, which is what I want you to actually show on screen. And this is where you go back and forth with Lovable until it's actually doing what you say. With this understanding, you can now use these other platforms I mentioned — Zapier, Make, n8n, whatever makes you happy — to export functionality and make things that much easier. Because if all you have to worry about is sending a webhook and getting a response every single time, then everything that happens in between is something you need to worry about less, right? You don't have to go back and forth with the Lovables of the world 10, 15, 30 times. You can just worry about having it listen in for the right response and sending the right data.
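For reference, the middle of that Make.com scenario — the OpenAI module that takes the dream description and drafts a step-by-step plan — boils down to a single call like this sketch. The prompt wording is an assumption, and in Make you'd configure this through the module's UI rather than write code:

```python
from openai import OpenAI

client = OpenAI()  # OPENAI_API_KEY comes from the environment

def plan_dream_app(dream_description: str) -> str:
    """Roughly what the OpenAI step in the automation is doing."""
    response = client.chat.completions.create(
        model="o4-mini",  # the model picked in the video; any capable model works here
        messages=[{
            "role": "user",
            "content": (
                "Extract the main request from the user below, then create a "
                "step-by-step plan for how to execute this person's dream app.\n\n"
                f"User request: {dream_description}"
            ),
        }],
    )
    return response.choices[0].message.content

# The webhook response module then boomerangs this text back to the app
print(plan_dream_app("I want to create an app that solves world hunger."))
```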
And then everything in between is a lot more scalable, and you can make progress quicker if you know how to use these underlying platforms. The last thing I want to review before we go into a build I put together in Lovable, Bolt, and Replit is this vibe planning — or non-vibe planning. Like I said before, as a developer myself I spend 20 to 30% of my time planning a project before a line of code is written, and you should do the same thing — but you can expedite that planning with one of these tools. Now, I've already spoken about Perplexity, and you probably know how to use ChatGPT and the like, but the one I'm actually going to highlight is this one called Manus, because it's a superpower at planning, especially apps. So I'll give you a bit of a preview right here. I have these instructions I put together, and I'll blow them up to make them a lot more readable. If you read here, I said the following: system instructions, goal — create two three-prompt build kits. The reason I said three-prompt is because, typically, like I said, you have a foundational prompt, then a prompt — let's say for Lovable — that always says "connect to Supabase," and a third one that actually integrates some features or enhances the UI. So then I said, you know what, build me three prompts each for Lovable, Bolt, Replit, and Cursor: prompt number one, create a foundational shell; prompt number two, data plus automation integration (assuming there's some form of webhook we're sending to); and then three is UI polish. Now, does it always work out that I can copy-paste number 1,
2, and 3 in that order and have no errors whatsoever? No. But it does provide a really good foundation. Now, I tell it that I'm trying to build something on n8n for the first product and something on Make.com for the second product — I just wanted to show you that to see the differences between the different workflows. Then I tell it how I want it to return the files: I want three prompts for each product, one that's going to be an AI archetype finder and another that's an AI solution selector. And down here I give it some requirements as to how it should structure the prompts: structure this as some form of POST request — you have to actually specify how you're sending the request, and POST would be the method to do that. Then I tell it exactly what to expect as a response. And for three, I say talk about styling, and here's an example of what to go for. Then for kit number two, I tell it we're going to be using something like Mermaid over here as a part of the app. And I want you
to go through and imitate what this would look like for Make.com. Now, I did run this on high-effort mode, which is eye-wateringly expensive to do, but once it goes through the process, you'll see how many rounds it went through to create the underlying files. It put together a bunch of files, such as a task guide for building in Cursor, building in Replit, and building in Bolt, as well as a to-do list it made for itself and the underlying prompts — and what's super cool is it created the automation template for us. So if we go to the final output here, you'll see it's put together the prompt we should paste into the different platforms. Prompt number two is where it specifies the actual structure of the webhook it should expect and send the data to. And then number three is very detailed on explaining styling. And this is the difference between me coming up with a prompt for styling and AI coming up with one — just read this detail: "Ensure all UI components — buttons, progress bar, input fields, etc. — are styled consistently with the overall theme." And then: "Pay attention to hover states, focus states, transitions." See, these are words I don't have in my vocabulary, other than "hover" maybe, but being mindful enough to say them is the hard part. And at the bottom here is the underlying Easter egg: theoretically, I can copy this basic workflow — the one with a webhook that you just saw in Make.com — go into n8n, and, get ready for it, if I paste this, we now have a webhook that's set up to receive the data from Lovable and then send that
to OpenAI and then just adjust it and send it back as a boomerang to the actual service. Now, is this ready to use right away? No — you have to quickly select the model. But what's cool is it writes the prompt for you. So even for this step, based on my goal for that app, it already embedded the prompt. Same thing here as well: you have to specify the prompt, sometimes you just have to double-click it and double-check that everything looks as expected, and you obviously have to set up the webhook. When it comes to setting up these webhooks, compared to Make.com, you have to make sure they're set to POST. Let me zoom in a bit because it's a bit hard to read — there we go. Then we specify the path and the response code, and this is the URL you're sending to, assuming it's in test mode. If you want it for production — like it's ready to go — you can use the production URL right here. Then, once you're ready to listen with your ear, you click on "listen for event," and this is where you'd send a request from Lovable. Out here in the output section, you'd see all the metadata, like we saw in Make.com, that you receive from that Lovable app. Now, to make the boomerang work: initially — if I stop the listening here — you can set this to respond immediately as soon as it receives the webhook. Once you've determined the structure of the data you're receiving from Lovable and it looks like it's flowing fine, you want to switch it to this, which uses the Respond to Webhook node. Assuming you have that response webhook at the end, that will let you take those prompts Manus put together, paste them in, and get up and running really quickly — and the second prompt is ready for you. All you have to do is place the webhook URL into the template it put together, and you're off to the races. In terms of a productivity hack for vibe coding, this is one of the most golden secret tips you could ever get. For the second one, with Make.com, it does not
draft out a JSON for you. And the reason why is that Make.com is closed source, versus n8n, which is open source. So it was probably able to deep-research the n8n documentation, since a lot of that is actually open, whereas Make.com doesn't have all of its latest templates — and the structure of the JSON that powers its modules or automations — out in the open. So here it instead tells you what different modules you need to put in sequence, which is still helpful: you need some form of webhook trigger, which we saw; we need an OpenAI module, which we also saw; and then, if you want to respond back, you put a webhook response module at the end. So it provides you the step-by-step of what to do, which — if you don't know what you're doing as much, if you haven't worked with these things as much — is a really helpful co-pilot to have while you vibe code. With yet another hack under our belt, we're now ready to look at the demo app that I've already put together in different versions and sequences. So, if we scroll to the bottom here, the app
we're putting together uses, like I said at the beginning of the video, a bleeding-edge service from OpenAI that lets you upload an image and get back a very realistic, high-quality transformation of that image — a new version of it. Overall, the way the app works is: you have some original image, it gets sent to the image API from OpenAI, and we receive back an edited image that's very similar to the original. Theoretically it's going to look like this: you have a "before," where you upload an image — so Lovable, Replit, or Bolt has to know to expect images of different types, say PNG, HEIC, etc., and deal with that. Then I want it to have suggestion buttons — you know, "make it an anime," "oil painting," "cartoon," "photorealistic" — and at the bottom I want an actual text box so that, if I don't want any of those, I can give it some form of custom instruction. What makes this nuanced is that we have to tell the Lovables, Bolts, and Replits of the world to wait for this response — and the fact that you won't technically get a picture back as a response from the server. You'll get a bunch of code, what's called Base64 in this case, that represents the image, and they have to translate and render it into an actual image in the user interface.
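That Base64 point is worth making concrete, because it's the step that trips people up: the server hands back a long string, and something has to turn it back into pixels. Here's a minimal sketch of that translation (the field name `b64_image` is just an assumed example of what your automation might return):

```python
import base64

def save_base64_image(b64_image: str, path: str = "edited.png") -> str:
    """Decode a Base64 string from the API/automation back into an image file."""
    image_bytes = base64.b64decode(b64_image)
    with open(path, "wb") as f:
        f.write(image_bytes)
    return path

# In the web app itself, the equivalent trick is rendering it inline, e.g.
# <img src="data:image/png;base64,..."> -- which is what you have to nudge
# Lovable / Bolt / Replit to do instead of expecting an image URL.
```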
So there are quite a few mini-steps to make this work. What I'm going to do is show you how we implemented this in n8n, and also how we relied on Replit and Bolt to figure it out without n8n — to show that we can also access these services from third-party providers without going through the route of automations if that doesn't help us. So now we'll dive into the demos I prepared, to go through the whole process I went through to replicate this functionality in the different applications. All right. So whenever I'm trying to build out a more complicated type of application that I haven't done before, I try to build a mini version of it. So initially I just tried to create this Dream Creator, where all I would do is send a request to generate an image. Step one: if I can generate an image and make that work, then I could probably edit an image as well. But just because this is easier to conceptualize and easier to execute, I wanted to go down this route. The initial prompt I provided was: create a beautiful app where I can enter an image I want to make, and it will send it externally and retrieve the image back for me in the UI. Unbelievably basic, but I tried to be a bit lazy on purpose to see how far I could go. I then
connect it to Supabase — because it's Lovable in this case — to enable some more functionality. So this is all connected; we're good to go. I then tell it that I want to send a webhook, and I want to send whatever I describe here to that webhook. And in the next step, I showed that I was able to receive some data here in n8n. You can see I received the data about the image I'm trying to create, and then this was the output metadata. Now, it looks very small on the screen, but trust me, this is metadata with an ID of the actual data. What I didn't receive is the underlying image translation — this is metadata about the image, but it doesn't contain the image itself in that Base64 language I mentioned. So I needed that transmitted, and I went back and forth quite a few times until we got this bad boy here, where all of this random code actually represents an image. If you were to translate it into a PNG or a JPEG, it would actually show an actual image — but in workflow automations, and code in general, you won't see an image passed from node to node; it'll be represented this way. So, once we got it to send that data, things were a lot more straightforward. What ended up happening was that, for some reason, I would send the request but we weren't actually receiving an image, even though I could literally see in the automation up here that it worked perfectly. So I told it that it was probably not waiting long enough to actually render the image. And like I said
before, because this is more complicated, I used chat mode right here to make sure we could have a back-and-forth before it actually implemented the plan. Now, in terms of what this looks like in n8n itself: if we hop in, it's actually a three-node process, and two of those nodes you're now familiar with — one ear to listen for the data and one boomerang to send it back. What we have in the middle, if we open it up, is OpenAI — an actual command to generate the image — and all we're doing is referring to the prompt we receive from the webhook, which is the ear right here. So whatever this gets, we just pass on as a prompt to create the image, and all we do is respond with the first asset we receive and send that back to the UI.
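If you're curious what that middle node is effectively doing, it's roughly this one call — a sketch using the OpenAI Python SDK, where the model name and size are assumptions you'd check against the current image API docs:

```python
from openai import OpenAI

client = OpenAI()  # OPENAI_API_KEY from the environment

def generate_image(prompt: str) -> str:
    """The middle n8n node, more or less: prompt in, Base64 image string out."""
    result = client.images.generate(
        model="gpt-image-1",   # assumed name of the newer image model
        prompt=prompt,
        size="1024x1024",
    )
    # The image comes back Base64-encoded -- this is the string the boomerang
    # webhook sends back for the app to decode and render (as sketched earlier)
    return result.data[0].b64_json

b64 = generate_image("A person in a yellow highlighter hoodie creating a tutorial for the world")
```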
So given that, it's pretty straightforward. If I say something like "create an image of a person in a hoodie, preferably a yellow highlighter hoodie, creating a tutorial for the world" and we send that over, it's now going to send that request through, generate the image, and respond back here with the boomerang to our app, and we should see it on screen once it's done. It usually takes around 30 to 40 seconds to complete, and then it throws the result back in here, takes that code, and translates it in a way that can render, recreate, and piece together the image so that you and I can see it visually. And there we go — we got the highlighter hoodie down perfectly, but "how to
start a blog is definitely not the caption of this video. So, one really cool thing here is it actually threw this guy's hoodie and himself in the camera. And that's why this image API is so cool. It's way more robust than anything we've ever seen before. And the applications are typically endless. Before we move on to the more complicated version of this app, I wanted to show you this technique I always use, especially if I have to go back and forth with the AI quite a bit. What I'll do is typically ask the AI for
advice to create a prompt that would have prevented me from having to go back and forth to begin with. So this is called reverse meta prompting and I use it literally every time at the end of an app that took me a bit more time than expected to build. So now I'll say I need your advice. I want you to take on the persona of a prompt engineer because obviously you and I went back and forth quite a bit in this conversation and I feel like I could give you way better instructions to avoid all
the back and forth we went through. So, if you knew exactly what I was going for and understood how to steer me in the right direction, basically help me write a prompt to avoid this happening again. So, then it thinks and I used chat mode for this and it comes back with a response at the very bottom here that looks like this. So, then it puts together this prompt right here that says, "Create a React application called Dream Creator that generates AI images based on text prompt. Here's the URL it should connect to. Here are
the core requirements. Here are the components you need to make it actually work. Here's what the image generation process looks like, which is awesome. It even tells it, you know, send a POST request to the webhook URL. Here are the different components you need to worry about. This is the part we were missing, right? Translating that image into a language the computer can understand. Then it goes through error handling, UI design, how it should look, and again goes through technical details. So now I'll always put this in some form of Google Doc for reference,
so if I ever want to build an app like that, I can use it as a foundation. But wait, there's more. You can not only ask it for that, but you can say, you know what, I want to build a more complex app. Since you made this prompt, can you help me make another prompt? Again, our goal was to upload an image and edit it. So now I'm telling it, hey, I want to be able to give it a picture of my room and make it a pretty picture, right? So how do we make it
so we can have an app that edits these photos? Now, it doesn't know about the edit portion of the service, but what it does create at the bottom here is a prompt that I use in my next iteration or step in lovable to build that app we're going for, which is the editing app. So, it goes through all the functionality and the application overview and what it should look like as well as how it should display the images side by side. One lefthand side on the original and one on the newly generated or edited version.
So what I did is copy and paste this as the first prompt in our next Lovable app, which really helped speed things along. All right, so this is the finished product of the second demo, which is super cool, because I can upload an image here. Let's say we take this room and say "make it way cleaner but keep my bed and drawer." And if I click on transform image, this should send it to n8n — I'll show you that workflow as well. And you can see here it's going to take around 1
to 2 minutes to take this really ugly room and hopefully transform it to a pretty room. And you can imagine this specific application can be used for all kinds of things like real estate staging showings where you can have your messy room and keep it as it is. We'll take a picture and we'll just render it as if it's brand new. And voila, we get the transformed image here that we can actually download and blow up full screen. And you can see in the first image, might be harder to see, but we had a person
with their peace sign up, I think, taking a selfie of some sort. Not sure if we preserved the selfie. We definitely preserved some semblance of the peace sign. We have the same room with the drawer and the bed now well put together without the clutter, without the mess. And we even have some of the stuff from the original drawer, which is super cool. So, that is now complete. And the question is, how did we get here? So initially what we did here like I said before is just copy paste that prompt that lovable 1.0 put
together for us in its full detail, and then we initially got some errors, then connected to Supabase to hook it up, and then we basically created an n8n workflow. This n8n workflow had to be more complex because now we're dealing with editing, so I'll peek into n8n to show you an overview of that process. And if you want this template and the prior one, I'll make them available to you in the first link in the description below, so don't worry about having to keep up with how I built this step by step. We have, again, an ear that's listening for data — in this case, our data is an image. Then we extract and prepare the image to be edited, using some code that I didn't write myself; I just had Claude write it for me. And then we send that to the edit-image part of the API. So, if we double-click this, we have this section called images and edits. And what I did is just ask Perplexity, "Hey, can you go look up the latest version of the
OpenAI API documentation for image gen and tell me how to call this?" And one really useful hack I'll give you: there's this thing in n8n called "import cURL." If that means absolutely nothing to you, that's fine — cURL is basically a summary of this entire request in one big blob. To give you an actual demo of what this looks like, I say: "Can you go read the latest documentation from OpenAI on image generation — their latest GPT-Image-1 model — and come back with a cURL request on exactly how to call that API, with full detail?" And I send that over. It's going to come back with a blob. This blob lets me copy it, paste it, and auto-populate a lot of the hard work that would typically take you minutes to hours if you're not as familiar with, one, creating an HTTP request and, two, navigating these API documentations. And just like that, we get a cURL request here, where you can see little hints and clues: it's calling the model, it has a prompt already placed in, it has the size (it seems like that's a parameter), the quality of the image, the response format, and the background. We can take this, go into n8n, double-click, add a new node here — just to show you a demo of what it would look like — choose "HTTP Request," click "import cURL," paste the blob we got from Perplexity, and ta-da: you now have everything set up for you, all the little parameters that would otherwise be a headache. All you have to do is add your API key to this request. So it's as easy as that — and these are the tiny quality-of-life hacks that take you from a mediocre vibe coder to an expert vibe coder. So, we have the actual edited image here; we end up getting the new image here. I just added this node for myself so I can physically look at the image in n8n — I can click on view and see it. In this case, I actually made a boomerang: I asked it to make the image into an anime boomerang. And finally, we just send back the response here.
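For reference, the request that the imported cURL sets up in that HTTP Request node is roughly equivalent to this Python call — a sketch only; the model name and parameters are assumptions you'd confirm against the current OpenAI image docs:

```python
from openai import OpenAI

client = OpenAI()  # OPENAI_API_KEY from the environment

def edit_image(original_path: str, instruction: str) -> str:
    """Send the uploaded image plus an instruction to the images/edits endpoint."""
    with open(original_path, "rb") as image_file:
        result = client.images.edit(
            model="gpt-image-1",   # assumed model name
            image=image_file,      # the "before" picture uploaded in the app
            prompt=instruction,    # e.g. "make it way cleaner but keep my bed and drawer"
            size="1024x1024",
        )
    # Again, what comes back is Base64 -- the boomerang webhook returns this string
    return result.data[0].b64_json

b64_edited = edit_image("messy_room.png", "Make it way cleaner but keep my bed and drawer")
```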
As you can see if you go to show data and view, this is what we end up sending to the Lovable app — so there's no difference between here and here; this was just for me to verify what was happening and test that it was actually working as expected. Now, to avoid overwhelming you if all of this stuff is brand new, I won't go through the code that was here and here — I'll let you look through it and peruse it at your leisure by going through this template. Now, after I made this automation and it worked perfectly, was
it all sunshine and rainbows? Absolutely not. I actually spent the next hour — and I won't show all the messages, because there was some vibe raging happening with Lovable — on basically this one biggest issue: okay, I managed to actually edit the image in n8n, which is awesome; now I want to return it through the response webhook and render it in the actual UI. When we transform the image, the app should be listening for the webhook response itself. This probably takes around a minute and a half, depending on how high-quality the image is. When I click transform, it should wait for the response to come back, bring back the Base64 — which, again, is the code that represents the image — and render it within the view. And this one thing took a long time to get right — let's say 7 to 10 prompts. There's nothing special in my prompts other than the fact that I said the automation finished in like 30 seconds and returned the image perfectly, but you're not getting that FYI or that response. So I kept going back and forth until I was finally able to return the response here — and instead of some generic stock image it would just throw in, it rendered the right image. Now, one additional thing I did here in n8n is add a response code, which is typically optional, but I find that with Make.com this whole process is a lot easier because it naturally has the response code as part of the requirements: listen in and wait until you get this response code. And one piece of detail you can get when you go to
executions is you can see how long this automation runs. So on average 47, 53, 42. So I can tell it wait for a minute and listen in for a minute. It's called kind of polling for a minute until you receive some form of data and then grab that data and render it. And after going back and forth a couple times, we were able to get this part of the process here. So, while a lot of the love, pun intended, was given to this automation, a lot of the vibe coding or again vibe raging was in
making it actually return back the asset that we generated. And this is a golden example of when it makes the most sense to actually do the reverse metaprompting and going into chat mode because you'll understand very quickly what you could have communicated better or what you could have been more detailed to get to this specific outcome. And doing this over and over again will help you become a better vibe coder. And even if you don't understand any software development principles, it'll start giving you some exposure and intuition and instincts as to what to look for.
When it comes to Bolt, what I did is take the reverse meta-prompt I got from Lovable and use that as the input, so I could expedite things. And this process was way faster. Then, in the next 10 to 15 seconds, you should see an edited version of this robot pop up in anime format. What's cool with Bolt is that, unlike Lovable, we didn't have to connect to Supabase. All I did was feed it this prompt along with the API documentation snippets, and then we were able to actually get
this result fairly quickly. You can see here: original and edited. This actually looks pretty cool. And if you go to the bottom here: initially, all I did was give it that prompt. We got a weird unknown error. And then it seemed like it was sending it over — because again, we weren't using n8n, we were just using a raw request to the image API — it just wasn't able to render it and show it within the view. So you can see at this point we had something that looked like this. If I zoom out a bit — let me do that again, there we go — we had an original but no edited image, though I could tell it was doing something. So what I did is go to Perplexity and say, "Can you give me some documentation about this API that I can feed as context?" Like I said before, I fed this in as context — I didn't write any of it myself — and obviously, once it had the additional information, after literally one try we were able to get this, where it understood which URL we were sending the request to, what
model we were talking to, the different fields it was expecting, and how to handle the API response. And then we got this. So that was way easier and way faster — but we wouldn't have been able to do it that much faster if we hadn't learned from our mistakes, from the journey in Lovable. For Replit, we did the exact same thing: we loaded a reverse-meta-prompted version of the app, it went and thought for a while, came back and asked me for my API key and to confirm it, and then went
through the process of putting everything together and initially it still didn't set my API key. So, I put a little button here that said set your API key. I wanted it embedded in the app. And then if we go here, we had the same thing where we would have a before, but the edited wasn't working. And then with this one, I told it the same thing probably a few times. And here I pasted the API documentation I got from Perplexity to tell it, hey, here's how to implement the thing. Go read it. Go learn it.
And after that, we initially got this where we'd have an image returned that had nothing to do with the original. So it was very clear this was a stock image being created or that we were sending somehow the wrong request. And after a few errors like this where we got unknown parameter response format even with that API documentation I gave it the curl request I showed you before again that summarizes how this works and then tried a few more times and then eventually we got this response that gave us something like this. So that when
we upload an image here, we get the resulting image here, just like everything else. So out of the three different options — Lovable, Bolt, and Replit — we were probably able to get the best and fastest outcomes with Bolt. Now that we've seen some MVPs, I did promise I'd show you how you could "own" the code, remove it from Lovable or Replit or Bolt or what have you, and actually export it to GitHub. And again, if you don't know what GitHub is, it's a repository where you can shift that
code into a place where it becomes like a two-way communication: you can update that repository, and that repository can speak to Lovable as well as whatever servers you have connected to it. What's good about that is you can now tell Lovable, you know what, don't worry about hosting the code on your end — just refer to my copy of it. Now you have full control over the underlying codebase and can deploy it wherever you want. To make this more tactical, let's walk through a real example and bring the app we made to life. All we have to do is go to "sync your project to GitHub" and click on "connect GitHub," and you'll see at the bottom here we have "store your project with two-way sync." What two-way sync means is that when you transfer your project to GitHub, Lovable will always refer to GitHub as the source of truth; it will no longer be hosting the code itself, which at the end of the day is really good for you from an IP-ownership standpoint. So, if you click on transfer project, it'll
ask you to sign in. And if you don't have a GitHub account, you can just set one up for free pretty easily. Once you have that and you click on it, it will connect it. It'll tell you it is now connected. It'll give you some code here. Okay, this code is the part where I might lose you. So, I'll go as slow as possible. Before we go in there, let's actually see what this project looks like on GitHub. So, if I open this up, you should now see a repository. It's called Art Prompt Studio. And
these are all the files that make up my underlying app. You can see these are all the actual files from Lovable that make this code work — from the Supabase functions to the underlying components to the packages we used, etc. And at the bottom there's a README, which basically gives you a summary, a TL;DR, of how to interact with this repository if you've never actually used it before. Now, if you wanted to export this to something like Cursor or Windsurf, I'll show you how straightforward that can be. This comes with a caveat, though: what's not as straightforward is getting GitHub set up on your computer. You can go to Windsurf — and I recommend Windsurf over Cursor here, because Windsurf can interact with your terminal in a much easier way if you're a beginner than figuring it out with Cursor. So, in this case, I can say something like "take this URL we got from Lovable" — so let's take this. Now, to have this synced up, you need some keys, which are basically your access codes — the credentials you need so GitHub and your actual desktop computer can talk constantly. There's a bit of manual setup there, so to avoid boring you in this video, I'll make a mini Loom, in the first link in the description below, so you can review it and set it up. Once your Windsurf is actually speaking to your GitHub, what you can do is say "clone this project" and put this URL right here. It will start thinking — you can see I'm using Claude 3.5 Sonnet. If
I blow this up a bit bigger, it'll look better. Let's move this command right here. It's going to ask me permission to now clone this from my GitHub. I'll click on accept. And now it's going to do its process of basically copy pasting that entire set of files you saw there in here. Now, it's going to ask me if I want to run this locally. I'm going to say sure. Let's set it up and run locally. And then it'll take a bit of time to analyze all the files in there and set up all the
repository on the left-hand side. It's going to read the README that was laid out to understand how to interact with it, and then it'll make sure we have all of what are called dependencies — all the underlying packages, software, or frameworks we need to make the code run. It took around 30 seconds, and now it's actually ready to run locally. And what's cool about Windsurf is you can click on this open preview button, and within the actual UI you can see our handy-dandy image transformer that we put together, which
looks just like it did in Lovable, but now it lives in here. And what's cool is we can now technically deploy it elsewhere. So if we wanted to deploy it on our own cloud server, we could use something like Render, which is a pretty cheap cloud platform. If we sign in right here, we can create a new project — let's go "new web service" — and then we can select our GitHub repo, which is called Art Prompt Studio, the one we got from here; this was pushed to GitHub. At the very bottom, it's going to ask you for only one requirement to install this repository in this cloud. So now that we host it and own the code, I can literally screenshot this, go in, and say, "I want to actually run this on an external server — can you tell me what to enter here as an input?" And you can see how ubiquitous this voice-assistant thing is; it helps you pretty much everywhere. Then we send this request, and it should come back with whatever response we have to copy-paste, which is literally npm start — or, in this case, it says, "I noticed your package has no explicit start command defined; in this case, use this," so npm run preview. I'm just going to listen to the AI, go back here, click, and enter that run preview command. We'll click free, and I'll do "deploy web service." And now, technically, by the end of this I will "own" this URL that links to the app. It took around 4 to 5 minutes, but now the service is live. You can use this URL, or the ones they
provide you, to look at it. And now you essentially own that part of the server: your code is yours, and it's deployed in your environment. Now, if we go back to Windsurf, you can do something like "let's make the entire interface use more red colors." So now I can carry on where we left off with Lovable and apply these changes as it would normally. And then, if we want to "commit" these changes to GitHub, we can. And because Lovable is now synced to that GitHub as its source of truth, if we go
back to Lovable for whatever reason, we'll be able to see those exact same changes. There we go. We got now a red version of this. And what we're going to do to test it is just say, can you commit this back to GitHub? Okay. And we'll click on accept all here. And we'll click send. You'll see this is not too intimidating overall. Once you get past the initial step, I'll do run command. And if you don't know what GitHub is, right, it'll help you write those commands for you. You just have to tell it what
your end goal is. So I'll do run command. It's now doing this on my behalf in my terminal on my laptop. So I'll click on accept again. Now it should be pushing to GitHub. There we go. Done. And now it's done. So if we go back into our original GitHub here and we refresh, we should see there's 22 commits right here. Right. So you can see right here updated color scheme to use red tones. It's even diligent enough to label what it did properly. You can see here, this is everything that Lovable did. Here's what
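Under the hood, that "commit this back to GitHub" request is just the standard Git routine. A minimal sketch of what Windsurf is effectively running in the terminal; the commit message and branch name here are illustrative, and Lovable-synced repos typically use main:

```bash
# Stage every changed file, commit with a descriptive message,
# then push to the branch Lovable is synced to
git add .
git commit -m "Update color scheme to use red tones"
git push origin main
```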
And if we go back into Lovable, now that it's connected and there's two-way communication, this should show up as red as well, because Lovable is speaking to and relying on our GitHub repo as its core foundation. And what do you know, you have the red here too. So now everything is synced in a way where we own the code in our GitHub, which talks to Windsurf, which we can also host externally on Render, and you can see this whole pyramid scheme of love working together in tandem.
All right, so that was level three, the real climax of the video in terms of putting everything we talked about into practice. Level four is going to be a lot leaner and more to the point. Earlier we saw Manus's power: it can lay out not just different files and prompts, but prompts dialed in for specific platforms, and it's really good at articulating design principles and planning. On top of that, we can use another platform called 21st.dev, which I've covered before, that basically gives you cheat-code prompts for implementing really pretty designs without you having to figure out how to articulate them yourself. So, on top of our meta-prompting tip, go to 21st.dev and you'll see a series of featured designs. Say you look through and think, "I really like this website and the way it's rendered." What's super cool is each one has a copy-prompt button: you can copy the prompt in a general form or optimized for a specific platform, and some are compatible with Lovable. So if you click one of those and copy the prompt (this one is general-purpose and works with most platforms), we can paste it in. I'll go to Lovable and say, "I want to build a platform selling AI-powered chairs that send IoT information, using this kind of theme and vibe," just to be thematic here. Then I'll paste in that prompt, let me grab it again, copy the prompt, there we go, and put it in quotes. All right, and we'll send it over. What it should do is use that style of components and fonts, rather than its out-of-the-box, pre-programmed component library, and produce something a lot more creative. And would you look at that: we have the identical style and, quote unquote, vibe, but for our specific website. Let's blow it up and look at it: the same style, a very pretty combination of the font and the underlying animation, and this little fluttering gradient here.
As you scroll down, there's a picture of a chair that's definitely not AI-powered, but what's interesting is that it's consistent. I would have expected a few generic chairs here and there, but it's actually creating a custom product, "premium memory foam," which is awesome. So it's really taking the brief into account, and although we're obviously missing a photo here, I'm still impressed it came up with that at all. This is the kind of website I would typically pay for. Let me check: did it go the extra step and create the blog and guides pages? No. One thing with Lovable specifically is that it usually won't take that next step and build the sub-pages on its own, but you can start working on those sub-pages with your next command. You do get a nice "try for free," "sign in," and "get early access," all set up in one shot with one prompt because of that tool. It's free to use for the most part, so use it to your heart's content. You can also borrow specific components from it. For example, if I want to change these buttons, I can say, "make the get early access button the blue button style I have here," put that in quotes, and send it right away. It should hopefully change that specific button; I could also have used the point-and-select option with a screenshot, but it should get the idea. And just like that, we have an updated version of the button using the component we took from 21st.dev. Two prompts, and we have a website tailored exactly to what we're looking for. For the last trick on this level, I want to show you something I dedicated a whole video to (I'll put the thumbnail on screen in case you want to watch it): how to use a screen recording to inform and create a design prompt. Let's say you go to aistudio.google.com; Gemini is one of the only models that actually accepts a short video as context.
You can upload a file there, including an MP4. So imagine you went to a website with a very clear-cut design, let's say Strava. You sign in, make a new account, whatever, and hopefully there aren't a million steps. I won't go through the onboarding here, but in general you can see the whole dashboard: training, maps, challenges, the whole orange vibe. I could screen-record this with Loom or Tella or whatever screen-recording software you use, walk through the platform tab by tab, and narrate what I see and what I like. Then, as long as it's around three to five minutes, I'll download that file, upload it as context here, and say, "Hey, act as a prompt engineer and build a prompt that will create an app for XYZ use case using this style and vibe from Strava, as shown in this video." Then I'll turn on grounding with Google Search so that if I say "optimize it for Replit" or "optimize it for Cursor," it creates a prompt optimized for that platform. It's a major cheat code, I dedicated a whole video to it, it's definitely worth watching, and it's another really good trick to have in your bag. And that was level four, which was meant to be as succinct and to the point as possible. Now we get to the final level, level five. This is most likely the level where I'll lose most of you, but do your best to hold on.
Because if you do all this hard work and then deploy something without keeping it private or on your own server, and you just put it on X or LinkedIn and say, "Hey, try my new app," bad things might happen. The goal of this level is to make sure bad things don't happen, or at least not as badly. The reality is that there are tons of bad actors, trolls, or both, absolutely salivating at the idea of millions of people deploying code and URLs into the world with minimal protection and minimal knowledge of the underlying code to begin with. So user data, API keys, and all kinds of underlying code are at risk when you deploy these apps as is, at least for now. That might change in the future with really good built-in security. If you go to Lovable, there's something called a security scan that, before you publish, gives you a preview of any vulnerabilities it knows about, but it's typically still not comprehensive enough. And to keep this section from being boring, I added some memes to make it as entertaining as I possibly can. This is a real story off of Twitter, where someone by the name of Leo was bragging about their really cool Cursor app, and I think two days later posted, "Guys, I'm under attack. Ever since I started to share how I built my SaaS using Cursor, random things started happening. Maxed out usage on API keys, people bypassing the subscription," insert expletive here, and then all of a sudden bad things happen. If you go to an earlier tweet by Lovable, you'll see someone from Replit responded with a heads up that its Supabase API keys were exposed in every request, which could have disastrous consequences. While that's probably been patched in some way, more and more of what are called exploits will pop up as time goes on. And the reason I put this next screenshot here is that I'm not immune to my own advice. I deployed a private app to my paid community that had access to a bunch of our lead flow, so we could get some intelligence about what people are selling and help them figure out which AI solutions to productize. I left it unsecured because I just had this internal trust that it would only be shared within the community. Then I woke up to a series of these alerts, not one but multiple, and it amounted to around $1,000 in API key usage. That could have been avoided with some pretty easy-to-implement measures: adding actual authentication so that no one without a paid member's email could access it, and making sure no single IP address could hammer the endpoint with request after request in a short amount of time. And these are all things I could have just described in plain English and had Supabase take care of.
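If you want a concrete way to sanity-check those two protections once they're in place, here's a minimal smoke test from the terminal. The URL is a made-up placeholder and the exact status codes depend on how your backend is configured, so treat it as a sketch, not gospel:

```bash
# Hypothetical endpoint: replace with your own app's URL
URL="https://your-app.example.com/api/leads"

# 1) Auth check: a request with no token should be rejected
#    (typically a 401 or 403), not return data
curl -s -o /dev/null -w "no token -> HTTP %{http_code}\n" "$URL"

# 2) Rate-limit check: hammering the endpoint from one IP should start
#    returning 429s after a few requests instead of succeeding forever
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "request $i -> HTTP %{http_code}\n" "$URL"
done
```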
Now, those measures wouldn't necessarily have been, quote unquote, comprehensive, but they would have been far better, adding more speed bumps and friction to the process. Our first meme here makes a really important point: when you vibe code, you sometimes burn tons of credits and generate tons of code to make a change that would take a normal developer one shot. And in the process of creating that spaghetti code, you also introduce all kinds of weird gaps and vulnerabilities into your app. The next one is more visceral: it's 4:59 p.m. on a Friday, the perfect time to deploy straight to prod. Because of that lack of awareness, you cause all kinds of issues for your team, or for whatever team ends up managing the fires that result from not dotting your i's and crossing your t's. And the last one really summarizes the core vibe of this video: vibe prompting is awesome, but when it comes to building real production code, you can go from vibe prompting to vibe firefighting very quickly. Not only do LLMs hallucinate, they sometimes invent packages or frameworks in, say, Python or JavaScript that don't even exist, fake "ghost packages." Those ghost packages might not do anything on their own, but as you add more and more layers of code, you create all these back doors into your app that you never would have expected. Now, if you're using Supabase with, say, Lovable, there's a section called Advisor that lists errors, warnings, and general info about areas of vulnerability where you might have a blind spot. You can copy those findings as markdown and use them as a prompt in chat mode: "Looks like we have these errors, let's patch them up in the best way possible." While that again won't be 100% comprehensive, it will make your app that much more secure on top. There's also that security scan option in Lovable specifically that says "no security issues found," assuming there's nothing that's a huge red flag.
But the thing is, the devil is always in the details, so even if the scan says things are good, they aren't necessarily good. Now, this isn't prescriptive, but in general here's my hierarchy for which LLMs to use: Gemini is practically free to use, so it's probably one of the best APIs if you're not well-versed in security, because even if someone rips through a hundred or a thousand requests without your consent, which is never ideal, the cost damage would probably be smaller than with something like GPT-4.1, o3, or Claude 3.7 Sonnet. It's very much a patchwork solution, but if you're starting out, still testing, and unsure, my personal experience is that Gemini models are safer on the bill side. Now, similar to the other sections, I have a cheat sheet of what to worry about on each platform. With Lovable, we just saw that it runs a quick security check. With Bolt, what's really good is that it inherently stores secrets in an environment file.
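That habit is worth copying whenever you move the code off-platform, too: keep keys in a local environment file that Git never sees, instead of hard-coding them into the app. A minimal sketch, with a placeholder key name and value:

```bash
# Put secrets in a local .env file instead of hard-coding them in the source
cat > .env << 'EOF'
OPENAI_API_KEY=sk-your-key-here
EOF

# Make sure Git never picks the file up
echo ".env" >> .gitignore
```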
With Replit, because it owns its own tools and stores all that information inside them, it has a privacy mode that keeps your secrets in one place, so you can reuse them over and over without having to paste them into the chat. That makes Replit arguably the safest of the three, because so much of the infrastructure lives within Replit itself, and that's a key consideration. Even with something like Cursor, if you aren't well-versed in actually writing and reading code, you can set up rules that patch things for you which you, or even the AI, might not otherwise think about. At the end of the day, all of these tools will have flaws, so you typically still need a real developer or security expert to review your code and make sure it's battle-tested enough to handle real-world access. Which is why, for now, the majority of vibe coding in my book is for MVPs and prototypes: you still have to do the developer work to get developer outcomes, production-quality code that may not be 100% robust but is far more robust than a lot of this LLM-generated code. I also have a more detailed list of vulnerabilities, key security features, and mitigation strategies that I'll include in the first link in the description so you can read through it. And one thing we're doing in my paid community is paying a white-hat hacker to go into some of the apps we've put together and produce an even more battle-tested report. So security really matters, and it's worth diving deeper into all the nuances of these security frameworks.
And with that, we've reached the end of our five levels and the end of the game. Right now you are definitely ahead of 99% of people when it comes to vibe coding: from understanding the nuances of these different platforms, to prompting them with different hacks, to using deep research to cheat time and learn new platforms without starting from scratch, to security, design hacks, and the productivity tricks along the way that make you as productive as possible. This video alone took me three and a half weeks to plan, design the images for, and build the demos for, so if it was helpful, please let me know in the comments below; it helps the algorithm know to push this video up. Also give it a like and subscribe to the channel if this is the first time we're meeting. And lastly, if you're a business owner, business leader, or entrepreneur who loves this kind of content and my way of approaching it, think about joining our paid community alongside hundreds of other business owners who love building things like this every single day. I'll see you in the next one.