This is everything that I know about how to use Claude for writing. Claude is, in my professional opinion, the best large language model family out there for creative writing in particular, but for other forms of writing as well. So, let's get into it!
First of all, what is Claude? There are two things that I could be referring to when I say Claude. The first and most common is that Claude is just a family of LLMs (large language models) that come from a company called Anthropic, which is a competitor to OpenAI, the producer of ChatGPT and all of the GPT models.
There are multiple models in the Claude family, which I will get to, but I might also be referring to Claude as the chatbot, also known as Claude Pro if you have the paid version. While I will be showing off that chatbot today and using it for most of the demonstrations in this video, understand that you can actually use some of the Claude models inside other third-party platforms, such as Novel Crafter, for example, which is my favorite tool for writing a novel with AI. But to go ahead and talk about the chatbot first, let's talk about the pricing for Claude.
This is the equivalent of ChatGPT for OpenAI, and it looks very similar at the pricing level. The free version is pretty generous: you can talk to Claude on the web or through the iOS and Android apps. You can chat with it about images and documents, which we'll get into, and you get access to Claude 3.5 Sonnet.
As I'm recording this video, 3.5 Sonnet is the most advanced model that they've released. Now, by the time this video goes out, that may have changed, but for right now, this is a pretty cost-effective model that also happens to be way more advanced than what we had previously.
If you're in the pro version, you get everything that's in the free version, and you also get access to Claude 3 Opus and Haiku. Haiku is a very cheap model that's not quite as effective as the Sonnet and Opus models, but it does the job. Opus is a little bit more advanced, although, as of right now, 3.5 Sonnet is actually a little bit better than Opus. So, we're waiting for 3.5 Opus, which I assume is coming very soon and might actually be out by the time this video airs.
Most notably, in the pro version, you get higher usage limits compared to the free version, so you'll be able to keep using it for longer before it throttles you. Both plans throttle you, but I have rarely hit the limit in the pro version because the usage allowance is much better than what you get on the free plan. It also gives you access to projects to work with Claude around a set of docs, code, or files.
I've actually found this to be a really useful feature; recently, I've been using it a lot for some of the nonfiction projects that I've been working on, and so that's a really useful tool there as well, and you only get that in the pro version. Plus, you get priority bandwidth and availability, along with early access to new features. I have found it to be faster than the free version, so a lot of good things!
The pro version is $20 per person per month, and as of right now, there is no annual option or quarterly option; you just pay monthly. Last but not least is the team pricing, which is $30 per person per month, but you can have multiple people on a team. You get everything that's in pro, plus even higher usage limits than what you have with pro.
You can share it with teammates, so you can have things that go across your team, and you also get central billing and administration. So, that's it! That's a pretty basic pricing structure, and most of you are just going to be using the pro version, unless you're really pushing it all the time and want to upgrade to the $30 per person version, or if you have a team that all of you want to be using Claude, you can use that version.
Now, let's move on and talk about the specific models that Claude actually uses. As of right now, we are kind of in this waiting period. I even considered putting off the filming of this video to wait until these new models had come out, but right now we have Claude 3.5 Sonnet as the most advanced version. However, before 3.5 Sonnet, there was the Claude 3.0 generation: Claude 3.0 Opus, Claude 3.0 Sonnet, and Claude 3.0 Haiku. Now we have 3.5 Sonnet, which is better than all of those 3.0 models, and yet Sonnet is the middle-of-the-road model, so to speak. It has the best balance of creativity and reasoning and all of the things that make AI smarter while also being reasonably cost-effective to use. The Claude Haiku models are known for being even more inexpensive; they're by far the cheapest models you can use, which is especially useful if you're working outside of the Claude Pro chatbot.
However, Haiku is not quite as effective, not quite as good at reasoning and all of the things that the beefier models are good at, so there's an opportunity cost there. The Claude Opus models are supposed to be the highest tier, the beefiest and most advanced models they have. Back when we were comparing Claude 3 Opus to Claude 3 Sonnet, Opus was definitely the better one, although only by a marginal difference. So, if you were really worried about the cost of Claude 3 Opus, then Claude 3 Sonnet was usually good enough. However, recently they released Claude 3.5 Sonnet, and it turns out 3.5 Sonnet is actually better than Claude 3.0 Opus. Hope I'm not confusing you here.
Right now, as of today, Claude 3.5 Sonnet is definitely the best model of this entire family. I've done a whole bunch of tests on it. A lot of people have done tests on it, and I've definitely found it to be the best model out of all of these. However, as it says here on their webpage, Claude 3.5 Opus and Claude 3.5 Haiku are coming soon. So, I think it's only a matter of time before we get some really amazing models, especially with 3.5 Opus.
If we look at the jump from 3.0 Sonnet to 3.5 Sonnet, it was a huge jump, and it even went above what 3.0 Opus was. So, we can expect 3.5 Opus to be an even more capable and more robust model. But as I'm recording this video, it's not yet out; it may be out by the time you actually see this. If that's the case, then go to town with that model and see how it compares to 3.5 Sonnet.
To sum that up, we have this guide here on the Anthropic website that says Opus has strong performance on highly complex tasks such as math and coding, Sonnet balances intelligence and speed for high-throughput tasks, and Haiku offers near-instant responsiveness that can mimic human interactions. They also go in order of cost: Opus costs a lot more to run on the servers, Sonnet is a little bit less, and Haiku costs very little at all. I'm actually excited and hopeful that we might be able to get Haiku on some actual devices, so you could run Claude 3 Haiku on your iPhone or something like that, because that's probably something a phone could handle.
So, what can you actually do with these models? Now, I've already created a video about ChatGPT, a full course on ChatGPT, and much of what I talk about in that video is applicable to this video as well. So, if you're interested, I'll go ahead and put a link to that video up above.
But Anthropic also has a really nice summary of all of the things that you can do with Claude on their website. So, let's just go through those really quickly. You have text summarization, which is obviously something many models are good at.
Content generation, of course, allows you to create blog posts, emails, and, if you're following this channel, books. I use it quite frequently for books. In fact, the Claude models are the ones I use most frequently when it comes to writing books.
Data and entity extractions allow you to get structured insights from unstructured text, like reviews, news articles, and transcripts. I use this quite a lot to help with the analysis of a book or different articles that I’m trying to put together—that sort of thing. You can use it to ask questions, to act as a tutor or something like that.
You can use it to translate text, and you can use it to perform text analysis and recommendations. You can create dialogue and conversation; it gives a couple of examples here of interactions in games, virtual assistants, storytelling apps. But I also find that while it’s not perfect and usually an experienced author could probably outperform Claude on its dialogue and conversation, I’ve found that it actually does far better than most of the other LLMs out there at making natural-sounding prose and dialogue.
And then, of course, you have code explanation and generation, which is not something I cover so much on this channel, but it is something that you can also do with Claude, just like a lot of other LLMs out there. Additionally, the Claude models have vision capabilities. Unlike ChatGPT, Claude can't actually generate images, but you can still upload images, PDFs, and similar files and have it analyze them for you.
So, let’s take a brief tour of what the Claude chatbot actually looks like. This is my version, so we are on the Pro Plan for ChatGPT, and so it might look a little bit different if you’re using the free plan. But all in all, it’s using a similar layout.
You have your chat window here. You can choose which model you are using, but it's set by default to 3.5 Sonnet, and you can't even access the old 3.0 Sonnet anymore; it only has 3.0 Opus and 3.0 Haiku. But if you want the best of the best, 3.5 Sonnet is probably the one to go with. Then, you just type in your query here, and it will give you an answer. Over here on the side, you'll get a list of all of your recent chats. You can also star any of them if you want to access them later or if you want quick access to really important prompt conversations that you've had; all of those go here in the starred region. You also have access to projects if you're on the Pro plan, which I will get into in just a little bit.
So, if you want to use the vision capabilities or attach a document to Claude, you can do that here by adding content, and it'll just pull up a window for you. Then, from there, you can grab anything that you want to analyze.
So, for example, I'm just going to pull in the cover art for one of my book covers right here and just say, "What do you see in this image?" It started by analyzing the text on the page: it says it appears to be a book cover for a fantasy novel, and it's cool that it got the fantasy elements right. It says the book is titled "In Creation: Heart" by Jason Hamilton, that it's the eighth book in a series called "Roots of Creation," and that it is described as an epic fantasy adventure. All of that is true; it all comes from the text on the page. The cover art features a dramatic fantasy scene.
In the foreground is a woman with dark hair that has a streak of red. She has her eyes closed, and there are glowing symbols or tattoos on her face and arms. She's wearing dark leather-like clothing with straps and buckles, suggesting a warrior or adventurer character.
Behind her is a large dragon-like creature with glowing orange eyes and what appears to be scales or feathers. The background has a mystical green glow with swirling energy or magic effects. There are some floating dark objects that could be leaves or debris, adding to the dynamic feel of the image.
The overall color scheme is dominated by greens, oranges, and browns, giving an earthy yet magical atmosphere. The composition suggests action and adventure, fitting for a fantasy novel, and it pretty much got all of this spot on. The only thing that it didn't really get right was the feathers on the dragon; there aren't really any feathers here, but everything else it got really well.
So, it's really good at analyzing these images. So far, everything I've shown you has been done here inside of the Claude Pro interface, but there is actually another way to access Claude, and that is through Anthropic's console, which is very similar to OpenAI's Playground, where you can also play with these models in a way that is very similar to using it inside the chatbot like Claude Pro or ChatGPT Plus. But instead of paying for it in a monthly subscription, you pay as you go.
So, let's go and look at that. This is the Anthropic console, and if you want to actually work with Claude in the same way you could work with Claude Pro, you can go here to Workbench and you can put in your system prompt here, your user prompt, and then actually go through and just chat with it like you would in a chatbot. Now, there are some limitations to this.
I don't think you get quite the same level of interaction that you can get with Claude Pro, for example. Claude Pro has some very nice design features that you don't get here. The console is a little more geared toward tech-savvy users, but you can definitely do a lot of the things here that you would do inside Claude Pro.
There are things that you can do inside of the console that you cannot do inside of Claude Pro. For instance, if you select this model settings button, you can adjust what the actual settings of the model are. So, first of all, you can select which model you want.
You have access to past models that you don't have access to inside of Claude Pro, for instance, the old Claude 2.1 and 2.0 models and the even earlier Claude Instant 1.2 model. You can access those if you want, but let's just say you're going with 3.5 Sonnet.
You can also adjust the temperature, which affects the creativity and variability of the output; you can do that here. You can also set the maximum number of tokens to sample, so if you want to cap how much it generates in a single response, you can do that here.
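If you want to skip the Workbench entirely and hit the same models from code, here's a minimal sketch using Anthropic's Python SDK; the model ID, temperature, and prompt are just illustrative values, and with the API you pay per token rather than a monthly subscription.

```python
# pip install anthropic
import anthropic

# Assumes your API key from the console is set in the ANTHROPIC_API_KEY environment variable.
client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model ID; swap in whichever model you want
    max_tokens=1024,                     # caps how many tokens the response can use
    temperature=0.7,                     # higher values are more creative, lower are more predictable
    system="You are a helpful writing assistant.",  # system prompt, same idea as in the Workbench
    messages=[
        {"role": "user", "content": "Give me three title ideas for an epic fantasy novel."},
    ],
)

print(message.content[0].text)
```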
So, there are things like this that you can tweak that you can't do inside of Claude Pro. But to go back to the dashboard here, there's another thing that I think is really cool that, as far as I know, is only something you will find inside of this console area, and that is a prompt generator. We talk a lot about prompt engineering on this channel and in many other AI channels.
It seems like prompt engineering is such a big deal, but a lot of people aren't really good at prompt engineering or creating effective prompts. So, Anthropic has actually created a prompt generator to help with that. So, let's go here to generate a prompt.
This is also under the dashboard in the Anthropic console. Select "Generate a Prompt," and then you describe the task. This is basically where you just put in your prompt as you would normally put it into this space.
For example, "Write me an email headline for a new release about a book called '10,000 Words an Hour,' which is all about writing with AI. Say writing fiction with AI. " That's a simple prompt, you know?
And we could enhance this. Maybe say instead of "Write an email headline," we'll say, "Write 10 email headlines. " Now it's going to generate a prompt that'll be even better.
And here it goes: "You are tasked with creating email headlines for a new book release. Your goal is to write compelling attention-grabbing headlines that will entice recipients to open the email and learn more about the book. The book you are prompting is titled.
. . " and then you just select the book title.
Here, and so this is where I would put my actual book title. The book is about, and then you just put the book topic here. When writing the email headlines, follow these guidelines: - Keep each headline concise; ideally under 50 characters.
- Highlight the unique selling points of the book, etc. - Use action words that create a sense of urgency or curiosity. - Incorporate numbers or statistics when relevant.
- Appeal to the target audience. - Avoid clickbait and misleading information. - Vary the style and approach for each headline.
Generate 10 unique email headlines for this book release. Present your headlines in a numbered list, with each headline enclosed in a headline tag. For example, <headline>Your headline text here</headline>.
Now, this is a much better prompt than what I had. I would still probably tweak this to make sure it fits the objective I have for this particular prompt, but this is a great way to really take your prompts to the next level. I would encourage you, if you want some rabbit holes to fall down, to start comparing the prompts that it gives you with the original prompt that you provided.
Compare that on different AI models and just see how the quality of the output looks. That's one of the things you can do there. The last thing I want to go over in this basic overview section is just the projects and artifacts.
If we come here to projects, you can see I have a number of projects here already. But you can come over here and just say "Create Project. " I'm going to say something like "Article Generator" and just briefly describe what you want to achieve: "I want to write articles with this project.
" Then you say "Create Project," and now we have this project setting area. If we're going to create articles here, we would start by setting some custom instructions, and the custom instructions are project-specific, so they will only apply to chats that are generated inside of this project. But you can say anything you want here; you can say "Write in this type of tone," "Make sure you have bullet points and headlines," and all of these things that would be consistent with an article.
I'm not going to go ahead and put a full prompt in here right now, but that's something that you can start with. Then you want to add knowledge. There's no knowledge added here yet, but you can add PDFs, documents, or other text to the project's knowledge base that Claude will reference in every project conversation.
So let's go ahead. Let's say we were writing an article about, say, how to write a book with ChatGPT. If I want to reference some of the other articles out there, I can come here and look.
Number one on the list is written by Yours Truly back when I worked for Kindle Press, so we could just take all of this text and go here to our project, say "Add Content," and then say "Add Text Content. " I could say "Example One," and then paste in the text there. Once we've added that to the list, you can see it here under "Example One.
" It will show that it's reached 2% of its knowledge size used, and so that's really useful to know that it has a certain context length window. We don't want to use it all up, but this is going to go up as you go. I'll show you some of the other projects that I've been working on.
I have one here called "From Zero to Published," and that's because I am working on a revamp of a book I already wrote about how to write a book and publish it when you have zero audience and zero sales, and just go from that to being a published author making actual sales and building an email list and that sort of thing. That book, by the way, is available for free to those who are in my membership, so definitely go check out my membership if you haven't already to get in on that. You can get it for free with your 14-day free trial, but I'm working on a second edition essentially of it, and so I've started pulling in some of the chapters.
As you can see here, I've got several chapters that are listed here so that it knows how I like to write. In the custom instructions, I have added a very simple custom instruction; I didn't really want to put too much in here. I just said, "Write in a similar style to the provided chapters.
" So these chapters are all here; I wanted to write in a similar style to that. If I am stuck on a chapter or if there's a portion of the chapter that I want to get the AI to help out with, I will just come here into the chat and say, "Write a chapter about," and then I'll just say the topic of the chapter and then provide a rough outline of what I wanted to cover inside of the chapter. It'll spit out the chapter for me, and then I just go through and fix it up to ensure that it's up to my standard of writing.
It's actually been a pretty effective way for me to be writing non-fiction like this, especially now that I already have a bunch of chapters finished. Now, if your book were to get too big (notice I'm already at 9% of the knowledge size used, and I only have five chapters here), say over 100,000 words long, it's not going to work as well for you, and you will probably run into Claude's thresholds a little bit more. But for a simple non-fiction project like this one, I'm finding it to be quite useful for helping me write those projects.
This is just a writing example. If you have a lot of data that you want to parse through, like spreadsheets and things like that, you can pull those in and access all that data. You can look at it all from kind of a macro view and have Claude help you write things that make use of that data.
So, it's a very convenient tool to be able to analyze a lot of data that you want, just as background knowledge as you are writing your project, whatever that is. All right, next we're going to get into some prompting techniques so you can actually take advantage of Claude to its fullest capability. So let's do that next.
Now, before I jump into some of these prompting techniques, I want to refer you again to my full course on ChatGPT because, in that course, I go through a number of prompting techniques that are still relevant here. Things like the FITS formula for a really good prompt, the fractal formula for creating long-form content, and a number of other things—Chain of Thought prompting. All of that is relevant and available in my full course video about ChatGPT on this channel, but I am going to go through a couple of techniques that are really relevant for Claude in particular, although some of these techniques will also work in other language models as well, and some of them are just universal advice that I would give regardless of what LLM we're using.
So, let's dive into that. The first is, as I would say with any model, you need to be specific. Specificity is going to be the big thing that makes a difference between output that's just subpar and output that is amazing.
A lot of times I get people that come to me and say that the output from AI is just horrible and that if you use it verbatim, it's never going to be as good as a human. First of all, that may be true, but it's not going to continue to be true forever. Second of all, often I can look at the prompts that these people are using, and I can immediately identify some issues with the prompts.
A really bad prompt is going to result in a really bad output; that's just the way it works. You can probably get results that are way better than what you had before just by tweaking your prompt. So, a couple of ways to make sure your prompt is more specific: One of the things you might want to do is include all of the context that it needs, so make sure it has all of the information it needs to make an accurate decision as it writes the actual text.
You might also want to make use of things like bullet points, which actually do help the AI understand what you're trying to do better. So, I always list my outlines as bullet points and things like that, but on the whole, you just want to make sure it has everything that it needs in order to create the output you want, as well as very clear, specific guidelines on what you want to see. To kind of go along with that is my second tip, which is to use examples.
This is especially true if you have any kind of formulaic type of writing that you want to do, like you might do in copywriting. A great idea for copywriting or for any similarly related task is to give it an example. Say, "Here is an example of the kind of writing that I'm looking for.
Do something like that, but for this other thing," or you can give it a formula like, "Say I have a bunch of headline formulas, and I'd like you to take these formulas and make some headlines with them. " That's going to do much better for you than if you just said, "Write me some headlines. " Another one that I really like is using XML tags.
This is something that Anthropic themselves actually recommend you do, and so it's something I highly recommend, especially for longer-form projects. XML tags look something like this: you open a tag in angle brackets, say <instructions>, and then close it with a matching tag that has a slash, </instructions>. Those of you who are familiar with coding languages like HTML will recognize the idea.
This acts as kind of like a little box. If we take the instructions here, we can just say, "Write me some headlines based on the following templates and subject matter. " So now we have these instructions inside of these little brackets, and this is kind of just showing the AI that this is where the instructions begin and this is where the instructions end.
So, we might put this at the end of a prompt, and if we wanted to include the subject matter, we could open a <subject> tag and close it with </subject>, and inside that subject box we can put something like "the book is about..." or "the subject is the new release of a book called '10,000 Words an Hour,' which is all about how to write fiction with AI." And so now we have the subject, and then above this we can put a box for templates, with an opening <templates> tag and a closing </templates> tag. Again, these are just XML tags. Actually, I'm going to grab this part of the prompt from a super prompt that I've already developed, which has a ton of these templates. I'm just going to grab this whole thing and stick it in this section for the templates. It was such a big file that it ended up pasting here as its own attachment, which is fine; it still has those XML tags around it.
So now all I have to do is run this prompt, and it's going to give me a whole bunch of headlines that I could potentially use for this book. It's actually given me some pretty interesting ones here that might be good to use. These headlines are going to be way better, because I've used these advanced prompting techniques, than if I had just asked it to give me, you know, 20 headlines around a particular topic.
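If you'd rather assemble that kind of XML-tagged prompt in code instead of pasting everything into the chat window, here's a rough sketch of the idea; the tag names, templates, and wording are just placeholders, not anything Anthropic requires.

```python
# A sketch of assembling an XML-tagged prompt. The tag names and example
# templates are placeholders; use whatever fits your own project.
templates = [
    "How to [achieve result] without [common pain point]",
    "[Number] secrets to [desired outcome]",
]

subject = (
    "The new release of a book called '10,000 Words an Hour', "
    "which is all about how to write fiction with AI."
)

prompt = (
    "<templates>\n" + "\n".join(templates) + "\n</templates>\n\n"
    "<subject>\n" + subject + "\n</subject>\n\n"
    "<instructions>\n"
    "Write me 20 headlines based on the templates and subject matter above.\n"
    "</instructions>"
)

print(prompt)
```

From there you could paste that prompt into Claude, or send it through the API the same way as in the earlier sketch.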
All right, the third tip that we have here is for longer-form prompts, and I've actually already demonstrated this to a certain extent. But let's say you just have a ton of data that you want it to sort through. For instance, let's go back to this article that we had about how to write a book with ChatGPT.
Copy that and then come here to Claude. First of all, we want to make sure that all of the long data is on top. This is a recommendation from Claude and Anthropic themselves: put the long background information, everything you want it to know, at the top of your prompt, not at the bottom.
So I'm just going to go ahead and paste this in. Now it has that information in. We also want to make sure we're using XML tags, so we can add the instructions here.
If I had more examples, I would probably put them into other XML tags as well; but we can just say something like, "Extract the key points from this article. " Sometimes, when you're going through a lot of data, you actually want it to be able to cite its sources, and that's to help you understand that it's not just making this up; it's pulling from a specific spot in the data that you gave it. That'll help you actually go back into the data and find what it was referencing.
One of the things you want to do is ask it to quote the data that is relevant. So I can add that into the prompt here and say, "Pull quotes for the most relevant key points from the article. " Now, if I want to verify that the information it gave me is actually accurate, I can go to the quote that it gave me from the article and just do a normal text search for the quote that it provided.
There, I'll be able to look it up. Okay, this is an actual quote; this is accurate, and so forth. It's like having just a really good reference guide included in the output.
This will be especially important if you have a very large volume of data. Right now, we're just working with one article, but if you have, say, a whole book, or even large selections from multiple books, or multiple articles, or multiple papers—like if you're doing an academic paper, for example—and you want to make sure you're pulling key points from all of these other studies, but you don't have the time to go through and read all the studies, you can ask the AI to quote bits and pieces for you that are relevant to the argument you're trying to make. Then you can just search by text and quickly find those spots inside of the studies.
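Here's a rough sketch of how I'd lay that kind of prompt out in code, with the long source material on top inside an XML tag and the instructions, including the request for supporting quotes, at the bottom; the tag names and file name are just my own placeholders.

```python
# Sketch of a long-document prompt: the big block of source text goes at the
# top, wrapped in an XML tag, and the instructions go at the bottom.
# "article.txt" is a placeholder for whatever source you're analyzing.
with open("article.txt", encoding="utf-8") as f:
    article_text = f.read()

prompt = (
    "<article>\n" + article_text + "\n</article>\n\n"
    "<instructions>\n"
    "Extract the key points from the article above. For each key point,\n"
    "pull a short direct quote from the article that supports it, so I can\n"
    "verify it later with a simple text search.\n"
    "</instructions>"
)
```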
So let's just go ahead and see what that looks like for this particular article. It gave me the key points of the article here, and then down here it says, "Relevant quotes on ChatGPT's ability to follow directions," and then it quotes from the article. I can say, "Okay, it says, 'While many AI tools can process instructions…'" I'll just copy this, and then we'll go to that article and do a Ctrl+F to search for it, and yep, there it is!
So it does a good job of pulling those quotes. That's the bottom line of how to use Claude. Now, of course, there are many other things that I haven't covered here, such as my FITS formula, the fractal technique, Chain of Thought prompting, and all of those things.
All of that you can find inside my ChatGPT full tutorial, because it doesn't really matter if you're using ChatGPT or Claude; it will work for either one. So go and check out that video next, and I will see you in the next video.