As someone who's built a career around writing, I think that AI is going to shake the foundations of the writing world. I've got two big thoughts. Number one, if you're a writer and you're completely ignoring, completely dismissing AI, I think you're out of your mind. Okay? But also, there's going to be major room for writers to succeed. It's not all doom and gloom. Really good writers are going to be all right. But the question is, how do those two things square together? And if you're a writer, how should you be thinking
about AI? Well, that's what this episode's all about. What I'm going to do is show you how I write with AI, how I think with AI, and how to make your writing AI-proof. I've got my notes here and I'm just going to talk through them. These are all the things that I've learned over the past few months since AI really kicked off that existential crisis in me. And I'll just start off with a little bit of backstory. So, I ran a writing school called Write of Passage for like six
years. I taught thousands of students, some of whom became really successful. And the whole point and purpose of the school was that you'd come in and, if you wanted to build an online audience, you'd start writing consistently, publish an article every week or so, build an email newsletter, and then become a domain expert in whatever it was that you knew a lot about or were really passionate about. And a lot of people did that. Then other people got really excited about telling their personal stories. There was some idea, some story that they had
always wanted to tell in their life, and Write of Passage became a place where they could do that. So I ran that, taught thousands of students, and I was really focused on the curriculum. And what's funny is we ended Write of Passage in November. It's now been less than six months, and back in November AI was less than 10% of the curriculum. Now I can't even imagine it not being the linchpin of it, both because of how fast things have changed, but also because of how much people are talking about AI now in a way that
they weren't just six months ago. That's how fast things are changing. And I still remember being in my apartment. It was late at night. I had this old couch that I bought on Facebook Marketplace, and somebody had recommended an article to me by Steven Johnson in the New York Times, and it was about GPT-3. So this is a long time ago, eons ago, the dinosaur age of AI. And the article was about how LLMs worked and how they could basically predict sentences. It actually
has one of the best opening paragraphs of any article I've ever read. I remember reading it, sitting there, and that was when the light bulb first came on for me. I was like, you know what? Maybe there's something here. And it was the first time that I began to understand, at a high level, how LLMs work. What is going on here? How are they predicting words at a technological level? And then I had a mentor who runs a multi-billion dollar company, and his whole goal for
the company in 2023 was to get them to be AI-first. And I would sit there at dinner with him, and I really admire this guy from a business perspective. We'd be sitting at dinner and he'd say, "Oh yeah, AI this, AI that." And I was like, "Dude, I don't know. I'm kind of a skeptic. If you actually use these things, they're not that good. They hallucinate all the time. It's just no better than what a human can do. It's not even close." He's like, "You've got to
watch the rate of growth. I really believe that this technology is going to start growing really fast, so I'm going to have thousands of my employees start really focusing on AI." He put in a hiring freeze so that the company couldn't even grow. They had to get more efficient with AI. I was like, dude, you're crazy. And then, fast forward a bit, I began to see glimmers of it in 2024, but it was really at the end of that year. I did a trip to Argentina, and it was sort of my
first time taking serious time off in a while. And when I travel, I just love to learn about wherever I am. So what I would do is, as I spent the day in Buenos Aires, I would be in an art museum or I'd see a statue, and I'd basically have a question, and I'd put it into ChatGPT and it would get the answer in the background. Then at night I'd come home and I would read all the answers to the questions that I'd asked throughout the day. So because of that,
I could basically file ideas into ChatGPT and then at the end of the day have a whole summary of the things that I needed to know. And man, it was so cool. I feel like I learned so much from doing that. And I was like, whoa, I now have a tour guide with me at all times. The AI's answers were more useful to me, from a learning perspective, than basically all the actual tour guides I hired. I probably hired four or five tour guides down there for various things.
There was only one tour guide who was better than what the AI would have given me, which is pretty crazy. But it's not just that. The models themselves are getting better and cheaper at a really fast rate. So for starters, the amount that I'm engaging with AI is up like 10x in the past year. But there's also a lot of competition over who's going to make the best model. And you'll notice that because there's so much competition, prices are falling, and people are competing like crazy to basically try to have the best model that you can go
to. I remember in 2023, GPT-4 came out, and it was so much better than the other models. Lil Wayne has this line, "I'm resting in the lead, need a pillow and a cover." It was like that: they were so far ahead. And this was wild. Throughout 2024, 18 different companies came out with a model that was as good as or better than GPT-4. And actually, that's not obvious. I've got the Jordan Peterson in me: that's not obvious, you know, because here's the thing. It could have been one of those things
where one company gets to AGI and they just hold the keys to the kingdom. But actually, there are a lot of different models that are really good. All this is to say that if you're interested in writing and you're just outright ignoring these advancements, or you're just dismissing them, I think you're a fool. I think you're a fool. We are on the precipice of a new paradigm of writing. Once again, that isn't something to be terrified about. I really don't think so, at least for writing. There are societal issues; that's another conversation. But
GPT-4.5 is routinely making me laugh out loud. So, I want to show you this joke that I found. This is a very niche example, but it made me and some of my friends laugh so hard, and it's a description of Tyler Cowen's life. Tyler Cowen came on the show recently. He's an economist at George Mason, but he's also just a quirky, funny, very distinct guy. And there's this meme, I think it's downstream of 4chan or something, the "be me" format. And you'll see it's a funny kind of formatting. And
this guy put in, hey, do a "be me" joke about Tyler Cowen. It is hilarious. It is so funny. And here's the thing: if you're looking at it and you don't know Tyler Cowen, you're like, whatever. But if you know Tyler Cowen's stuff, this is freaking hilarious. And I looked at this, this was right after GPT-4.5 came out, and I was like, you could have given me a week and I could not have come up with something that funny. No way. And this is a crucial point. AI is going to be really good
at super niche humor. So, if you're interested in some strange and esoteric thing and you just want to get funny takes, AI is going to be really good at that, because the chances that some comedian is also interested in that thing are pretty low. Obviously, this is a one-off example, but William Gibson, the science fiction writer, has this line that I think about all the time: "The future is already here. It's just not evenly distributed." And I think this is how the future shows up. You see little
glimpses and little glimmers of the future, and then you just have to think, okay, if that becomes higher resolution, what is it going to look like? And I think this is a good example of something that is way funnier, way better than most of what I see AI doing, which still actually isn't that great. It's okay, but I can see the writing on the wall when I look at something like this. And the point is, I think it's not just coming for humor. I think that what it means
to be a journalist, a researcher, an academic, a full-time author is going to be rewritten. It actually already is being rewritten a bit. Ethan Mollick, who is a writer, I think he's at the University of Pennsylvania, said that the past 18 months have seen the most rapid change in human written communication ever. As of September 2024, 18% of financial consumer complaints, 24% of press releases, 15% of job postings, and 14% of UN press releases showed signs of LLM writing. And these are just the detectable cases, right? Because that's what showed signs of LLM writing. Like
the number of people who are using LLMs without it visibly showing up in their writing is probably even higher. We're seeing a major change. That's just in the past six months, and we're already at 24% of press releases. But here's the thing. Absolutely, humans are still better at writing than AIs in a lot of ways. But I think that's about to change. I'll tell you about my own life, okay? It's gotten to the point where probably a full half of what I read is basically generated by an AI. So what am
I talking about? What am I reading? Well, I have a lot of conversations with LLMs. I'll go back and forth all the time if I have a question about how something works in the world or something I'm interested in. For example, last night I went out for an Oaxacan Mexican dinner, and I love Oaxaca cheese. So something that I might do today is ask: teach me about Oaxacan cheese. What creates the texture? Why does it come from Oaxaca? What makes it so distinctive and unlike other cheeses? It used to be that I'd
probably Google something like that. Now, I just talk to an LLM. But the thing that I really use LLMs for is deep research reports. If you go on ChatGPT, at the bottom, if you're on one of the two paid versions, you'll see that there's a little deep research button that you can press. And those deep research reports are so good. I'm generating a few of them every single day. And here's an example of how I use it. What it does
is it uses a more advanced model under the hood called o3, and it'll scan the internet for information. And here's how I use it. I live on Lady Bird Lake in Austin, this lake in the center of the city, and it's March now. The weather's getting nicer, the leaves are beginning to come back, and you can feel the world changing as we move into springtime. And I have this 20-to-25-minute walk between my apartment and the office where I work, and I want to enjoy
that walk more. So what I did is I said, hey, deep research, I want you to make me a report on the flora and the fauna in this area, and I'm particularly interested in how the nature around me begins to change as we move into springtime in Austin, Texas. Super specific. And what it does is it goes off, gathers a bunch of really good pieces of information, and comes back and delivers me a full report, say two or three thousand words, that I can read and that's really tailored to my
interests, my curiosities at the moment, exactly where I live. And it took me, what, 20 seconds to produce the prompt and five minutes to wait for an answer. And now I feel like I know, you know what I mean? It's right there, and I don't have to scan through a bunch of Google results or read something generic. It's really, really tailored to my interests. And look, this is the first version of the software. Deep Research came out only a few months ago and it's already at that level.
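If you wanted to script something in this spirit yourself, here's a minimal sketch using the OpenAI Python SDK with an ordinary chat model. To be clear, this is not the actual Deep Research feature, which is a button inside ChatGPT; the model name and prompt here are just illustrative stand-ins for the kind of hyper-specific report I'm describing.

```python
# Rough sketch, not the real Deep Research pipeline: ask an ordinary chat model
# for a report tailored to one person, one place, one month.
# Model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

prompt = (
    "Write me a report, roughly 2,000 words, on the flora and fauna around "
    "Lady Bird Lake in Austin, Texas, focused on what changes as winter turns "
    "to spring in March. I walk 20 to 25 minutes between my apartment and my "
    "office near the lake; organize the report around what I'm actually likely "
    "to see and hear on that walk."
)

response = client.chat.completions.create(
    model="gpt-4o",  # stand-in; the real feature runs on a more advanced model like o3
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The code isn't the point; the point is that the request is specific to one walk, one month, one set of curiosities, which is the whole reason the output beats a generic article.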
And you'll notice, if you use it, that the writing itself actually has some voice and some style in a way that ChatGPT definitely didn't have a year and a half ago. Like I said, half of the things that I read are AI-generated. And because of that, I'm just reading fewer things that humans have made, because time is finite and more of my reading is going to things that computers have written. But I don't just want to talk about where things are now. I want to talk about where things are going. Once again, I don't believe that all writers are screwed.
I don't believe that we shouldn't teach writing anymore. Actually, as I was prepping for this, I obviously used AI a bit, but through the process of working on this outline, putting my fingers on the keyboard and really thinking through things, a lot of ideas came to my head and were crystallized and clarified in a way that they never would have been had I not sat down to do the writing. So it's still the case that writing improves your thinking, and I think that'll continue to be true. But here's what
I do believe: at the same time, the number of people who can gain an audience and make money from their non-fiction writing because they're able to simply outperform AI is going to fall. It's going to fall considerably, and it's going to continue to do so. If you want to make money as a writer, your writing is going to have to get better and better. So here's my heuristic for what kind of non-fiction writing will last. If you're focused on that, what should you do? The more that a piece of writing comes from personal experience,
the less likely it is to be overtaken by AI. So that's personal writing, that's things like biographies, that's things like memoirs. They ain't going to go away anytime soon. And there are a few reasons why. One major thing that we get from writing is connection. It's connection. Writing is a kind of antidote to loneliness, and connection, human-to-human connection, like love, is one of those things that just has this infinite ceiling. If you can really connect with somebody, and people feel like they're there with you, like they're in your mind as they're reading your
stuff, man, you've got such a bright future as a writer. That's not going to go away. David Foster Wallace said that a major reason why we read is to countenance loneliness. To have the thought: wow, there's somebody else who feels this way like I do. I thought I was the only one. We really want to hear personal narratives. So if an AI, and I'm going to talk about this more later, but if an AI can help you write your personal narrative, great. I'm totally a fan of that. That sounds cool. I
personally have no issues with that. I have no issues with that. But no one wants to hear a personal narrative from a computer. That is a completely hollow thing. And if you're thinking, a personal narrative from a computer, why would you ever want to read that? I don't care how good it is. Yo, I completely feel you. That sounds like a completely soulless enterprise. I hear where you're coming from there. And this really became clear to me when I read this testimony of how the co-founder of Wikipedia became a Christian
a few weeks ago. I was reading it and I was just riveted. I was connecting with this guy, reading this piece about the ways that he changed his mind, his life arc, his emotional journey. It is the most moving thing that I've read all year. And that is exactly the kind of writing that just isn't going to go away with AI. Not at all. But it's not just personal narratives and being able to tell your own story. You've also got to be asking: where do I
have data, facts, information about the world that the LLMs aren't going to have? Because if you have that, it means you can write something that LLMs can't possibly replicate. So obviously, you know a lot about your life, and that's why biographies and memoirs and personal stories will continue to work and not be copied by LLMs. But what about things that you know about the world? Because there's a lot of stuff in that dimension, too. So I'll give you some examples from my life. I've lived in Austin for almost
five years now. So there are things that I know about the vibe of Austin that an LLM isn't going to be able to replicate. I just know those things because I live here, I'm always talking to people, and I have this general sense of the culture of Austin, the people who live here, how the vibe is changing, and I can speak to that with a lot of specifics in a way that there's just no way an LLM can. Another thing is, like I was saying earlier, I ran Write of Passage for six years
and we ran almost 200 live sessions. I know how to run a Zoom live session with a few hundred people very well, and there's just no way an LLM is going to have the answers on that that I have. There's just no way. I have that experience. And then the other thing is things that are more cutting-edge and up-to-date. What happens with information is it gets shared in small, tightly connected social circles, through whispers and voice, and
then it ends up getting shared at conferences, in more formal settings, and maybe on podcasts, and then later on it ends up in books, and it's only later still that the LLMs would actually have that information. To get really concrete here: I'm really interested in the YouTube algorithm, because that's what we're focused on with How I Write. And there are things right now about how to grow on YouTube and how the YouTube algorithm works that I'm just talking to friends about at dinner. And I know those things
in ways that the LLM just is not going to know. And so all of those examples, Austin, the intricacies of a Zoom live session, talking to friends about the YouTube algorithm, they come down to the two E's, the two E's of being differentiated from LLMs in your non-fiction writing, which are experience and expertise. If you have a lot of experience in a particular domain, if you have a lot of expertise, you just know a lot about something, and you put that onto the page and you do it well, you're going to be all
right. You're going to be all right. So the question is, okay, David, you're talking about what I should do. How is your own writing going to change? And I'll tell you this: I'm definitely going to focus on experience and expertise, but also my writing is going to become more personal. It's going to become more opinionated. I want to do bolder work. And the piece that I'm writing right now, a long piece, probably at about 11,000 words, is the story of how I became a Christian. And one thing, actually, just to go back:
I read that testimony from the co-founder of Wikipedia and I said, if I'm feeling this about that, well then I can definitely do the same thing in my own writing, and I can trust that it's going to stand the test of time. And I encourage you to do the same thing. Just start thinking: as I'm reading a piece of writing, do I think this is going to continue to last, or will it become obsolete? Am I really moved by this style of writing? And if so, well, hey, maybe consider doing that kind of writing
yourself. And for me, what I like about the story of how I became a Christian is that it's that personal story. It's deeply emotional and it has a super spiky point of view, which is: how in the world did I go from being raised Jewish, basically being an atheist, living in New York City, which is like the mecca of materialism, to then believing that Jesus Christ is the literal Son of God? Believing in the historicity and truth of that story and then thinking that he's my Lord and Savior? That
to me was probably the biggest change of mind I've had in my entire life. How in the world did I get to a place where I thought this guy was God become flesh, died on a cross, and was resurrected three days later? That idea seemed absolutely crazy to me 10 years ago, and now I actually believe it. That is a story worth telling, and it's the kind of thing AI is not going to make obsolete if I do a good job. There's just no way. There's no way. There's
no way. This piece, right, it's deeply personal. It's deeply opinionated. It's a story about what I've been through, how I've changed my mind, extreme sorrow and pain, how I think the world works, what I think the story of reality is. And it's just about the greatest and most intimate story that I have to tell. And people will often say, if they're critiquing AI, they'll say, "Oh, you know, the best human writing is still so much better than the best AI writing. The best writers
are so much better. So why in the world would you even read AI? Like, David, why are you reading all this AI-generated writing? You could go read a bunch of other things." And here's the thing: they say "the best," and the best is really talking about quality. And I've realized that there are really two kinds of quality when you sit down to read something. Okay? There is the objective quality of a book. That's the first one, and this is what everybody thinks about. So we'll take a book like Gibbon's The Decline
and Fall of the Roman Empire. Okay, people tell me it's an amazing book. They'll say the writing is really strong, it's been super influential. Sure, maybe he got some facts wrong, but David, you've got to read this book. Or they'll talk about The Power Broker. That's one of my favorites, The Power Broker by Robert Caro. It is 1,344 pages. But David, I know it's long, but I'll tell you, Robert Caro is such a good writer. He does insane research, all that sort of stuff. And look, I get it. The writing quality is great.
But actually, that kind of objective quality is only half of the equation. The other half of the equation is: how tailored is a piece of writing to your interests? I read The Power Broker when I was living in New York, because living there, it kind of helped me understand my environment. But if I had been living in Austin, Texas, there's just no way that I would have read that book. It's not tailored enough to what I'm focused on right now. And with the Gibbon example, I haven't read The Decline and Fall of the Roman
Empire, because I'm just not that interested in the Roman Empire. I get that the writing is good, but how much the writing aligns with what I'm curious about right now, that is the other half of quality. So one half of quality is the objective quality of a piece of writing, but the other half is how tailored it is to your interests. And this is the thing about ChatGPT. I agree with the critics that the writing quality, the objective writing quality, isn't as high. You're absolutely right. You're absolutely right.
Completely agree. But what I love about reading with ChatGPT and having AI generate writing for me is that it's perfectly tailored to my interests at all times. Remember the Lady Bird Lake thing, right? I could go read the best book or article that has ever been written on the flora and fauna of the city of Austin. I bet there's a good book. Maybe it's even won a Pulitzer Prize. But actually, it's going to be overkill. What I really want is: tell me what I need to know about the flora and
fauna between where I work and where I live in March, this exact moment right now, so that my walk can be more enjoyable, right? It's so much more specific. It's so much more tailored to my interests. It's actually shorter, so I don't have to sort through a bunch of information. So it's on the tailored-to-your-interests dimension of quality that AI has massively improved. And here's the other thing. I'll give you some more hope in terms of why writers will continue to be just fine. Whether you're writing with AI
or you're writing without AI, the core skills that you need to succeed as a writer are exactly the same. They're exactly the same. What are those core skills? Taste. The ability to discern what's worth keeping and what's not worth keeping. Look, whether you're writing on your own and you're generating a bunch of words with your own fingertips or you're working with AI and the AI is generating sentences and paragraphs, whatever. The vast majority of what ends up being generated, you actually end up removing. It's just that AI is going to end up generating more stuff
and you're just going to end up cutting way, way, way more. It's true that you're going to cut the majority of what you write, whether you're writing yourself or whether the AI is doing it. A lot of what writing is is just putting a bunch of words on the page so you have it there. You have the marble, and then you begin to sculpt. And Michelangelo said that what you need to do to make a great statue is just remove everything that isn't what that final statue should be.
And that's true whether AI is writing or you're writing. So much of it is just taste and discernment. That's the first thing, and that'll continue to be true. So much of what AI produces will be not quite right. It'll be junk. And guess what? It's the same thing with me. So much of what I produce is just absolute junk and nonsense and clutter, and I just get rid of that stuff. So that's the first thing: taste is the first core skill, and it will continue to be important. And the second thing is a
spiky point of view. That is, a unique insight, a unique belief about what you think is true about the world. And man, if you can have that, something that is like, this is my bold take, you know, like my take on Austin is that Austin is a mediocre city but a good place to live. Or there's the famous Peter Thiel interview question, right? What do you believe about the world that's true that very few people would agree with you on? Those sorts of secrets, as he calls them, will continue to be valuable in
an AI future, because AI isn't going to be able to produce those nearly as well as human beings, at least for a long time. So just like in the past, the past 20, 30 years, just like in the present, just like in the future, if you have a distinct, idiosyncratic take about how the world works, where the world is going, you're going to be completely fine. That was true, it is true, and it will be true. You know what I mean? Past, present, future. And I'll give you a concrete example. So, I have
a friend who thinks that the future of education is going to be schools where there are no teachers. Instead of teachers, you're going to work with coaches. And instead of teachers who would basically lecture you with information, those coaches are basically going to motivate you. And since you're not going to be sitting in lectures, what you're going to do is learn through AI and learn through apps. And he believes that because of that, because it's so much more effective, because it's so much more efficient,
the kids are going to be able to learn everything they need to know, everything from K through 8, in two hours of learning per day. And in his model of schools, you can then spend the rest of the day learning life skills: public speaking, making friends, riding bicycles, whatever it is. Kids can learn to ride a bike in second grade, they can learn to swim, and they can do that with the rest of their day. And this is a crazy idea. I've seen him pitch it, and people
are like, that is the most insane thing I've ever heard, there's no way I'm going to send my kids there. But for him, it's not some pie-in-the-sky theory that he just made up one day. He's been thinking about this for 25 years. He runs a school. He already has data to show that it works. They've got the school, it's called Alpha School, in Austin. And most people think it's insane. But I think he makes a lot of good points, and he thinks that he's right. He's like, "This is the conviction that I
have about the world." And look, if you have that kind of conviction about anything, there's nothing to worry about. That is what great writing has always had. And AI is just not going to be able to do that, because AI, at least right now and for the time being, is very much trained on the consensus. So this is a major white pill for writers: if you have good taste, if you have that spiky point of view, you're going to be just fine. You're going to be just fine. And look, I would even say that those are
the skills that mattered before, and those are the skills that will matter. It hasn't really changed. And as I've been thinking about quality, as I've been thinking about the kind of writing that is going to work in this age of AI, I've been asking: is AI going to be more like chess, or is it going to be like music? And what I mean by that is, with chess, what has happened is the AI, the computers, are already better than the very best humans. They're really good. But you have people like Magnus Carlsen
who are huge celebrities. Huge celebrities. And people don't really watch the computers play chess. They watch the human beings play chess. They want to see the human drama. They want to see the rivalry. They want to see two people duking it out, trying to checkmate each other, right? So when it comes to chess, people care less about perfection. They care more about humanity. So that's one path for AI. The other path could be music. And what I mean is, when you walk into a club, right, you go to the bar,
you step out on the dance floor and there's a song playing. You're like, damn, this song slaps. You just know it's good. You turn on the radio, you hear a banger. You don't stop in that moment and say, "Huh, I wonder how this song was made. Did they use Ableton? Was this an electric guitar or an acoustic guitar? Is there sampling? What's going on there?" You don't do that. You're just like, "Yo, this song is sick. This song is absolutely filthy. Let's get down." And
like, if it's a vibe, it's a vibe. And you just keep listening, right? And maybe later on you'll be like, "Okay, how was that song made?" And I was thinking about this, and I learned that sampling in the past used to be kind of taboo. People were like, "Oh, this is theft. This isn't real music." And then what happened was you had the Beastie Boys and you had Dr. Dre, and then I remember Kanye West was the guy who used a bunch of samples, and now sampling is completely normal. You're taking songs that
were made in the past, taking that beat or a section from that song, bringing it into your song, and it's completely fine, right? There are actually a bunch of old songs that I found through samples in new ones. Like, there's a song called "Runaround Sue" by G-Eazy that I used to listen to all the time when I was in college. It's like the most college-dude song to listen to ever. And I didn't realize that it was an old song, you know, the kind you'd play on a record and whatnot. It
was from when my mom was growing up or whatever. And the point is that sampling used to be the kind of thing where people were like, "Nah, you can't do that," and now it's just completely normal. Completely normal. And I think AI is going to be like music. I don't think it's going to be like chess. Right now, we're in the equivalent of those early days of sampling, where if you're using AI to help you with your writing, people are like, "That's not cool." And a lot of people are like, "Whoa, that's
really not cool. I don't like this." And look, I get it. I completely get it. This is sort of a strange moment right now. But you know what? I'm pretty confident that in 15 years it'll be, of course you used AI for your writing. It's just a piece of technology. Of course you used it. And like sampling, you know, sometimes you sample a song, sometimes you don't. But people don't have a moral aversion to sampling now like they used to. And I think AI is the same way. Right now
we have that moral aversion, but I don't think that will last. And this is the key point that I want to make: the only thing that'll matter will be the objective quality of a piece of writing. Let me repeat that. The only thing that's going to matter is the objective quality of a piece of writing. Because look, just as when I walk into a club or some bar and I hear a song, I don't care how it was made. I don't care who made it. All I care about is that the
song is a vibe. And I think it's going to be the same thing with writing. I don't care if AI wrote it for you. I don't care if AI wrote it with you. I don't care if you wrote it by yourself. I think in 10 to 15 years, quality is the only thing that'll matter. See, I'm not interested in the best writing that only humans can do. I'm not interested in that. I'm interested in the best writing, period. Period. Whatever produces the highest-quality writing, yo, sign me up for that. Sign me up for that.
Now, once again, this is for non-fiction. But you know what's funny? It kind of reminds me of special effects in movies. One thing that really, really bothers me, maybe more than anything, is if I'm watching a movie and there's a special effect and that special effect doesn't look real. It looks fake. It looks like a special effect. And now I'm taken out of the fantasy, the world of the movie that I'm watching, and I'm like, uh, okay, they just did special effects there. It completely kills the vibe. And it's the same
thing with AI writing. I already feel an aversion to it. If I read a piece of writing and I'm like, AI definitely wrote that, it just has that sterility and all those clichés that AI writing has, it just infuriates me. It gets me so mad, because I'm like, you don't have the taste to just do a good piece of writing, and now you're outsourcing it to AI and you're just like, oh, AI can do it better. And it kind of bothers me more than anything. But for me, and this is my opinion, if
I read an amazing piece of writing and it's captivating, it's compelling, the story is good, the takes are strong, and someone's like, "Yeah, you know, I used AI to help me refine my ideas," I do not have a problem with that. I do not have a problem with that. I think I'm in the minority now. I think I will be in the majority in 10 to 15 years. Okay? But I do hate when writing is just so clearly outsourced to AI. And here's the other thing. Right now, everyone's talking about AI slop.
Everyone's talking about it. And slop is a great word, by the way. It's just such a good descriptive word. Everyone says that right now AI is the beginning of slop, but in some ways it's actually the end of slop. Let me explain what I mean. Let's zoom out and look at the past decade. We'll start with SEO, then we'll talk about online writing, the kind I used to teach. So for the past decade in the world of SEO, what you would do, say that you wanted to publish a
recipe for baking cookies, okay? What you would do is try to get to the top of the SEO rankings, and you can try it yourself. What's really annoying about looking for a cookie-baking recipe is that there's all this story. I'm like, I don't need the backstory. I don't need all of these photos. I don't need to hear that it was your grandma's recipe or whatever. Just tell me how much sugar I need, how much cookie dough, how many chocolate chips, what the ratio is. Just give it to me straight.
But the problem is that the incentives of the SEO industry are to increase time spent on the page, because that's what Google Search rewards. And because of that, you just need to add all of this freaking nonsense. The internet is just polluted with it. If you go search "tell me interesting things about London," the results are not going to be interesting. They're just all of these travel sites with "here are the 10 sights that you need to see." It is, to me, the very definition of slop, and it is all over
the freaking internet. It's all over the place. I would way rather have the ChatGPT output that already exists, without any of the bells and whistles. And here's the point: the incentives of writing were completely misaligned. What was best for Google was to make money from ads. What was best for the creators of the recipes or the travel sites was: hey, we need to rank high on Google, so we need to serve Google, and because of that we need to add a lot of slop so that time spent on the page, which is
a crucial metric for Google, increases. And then the people, people like me, people like you, we're just like, guys, can you just give us the dang answer? You know what I mean? So you had this total misalignment, and I think that is the epitome of slop. That was the SEO world, but how about the personal writing world? I used this strategy completely. And the strategy was: you're going to pick a niche. You're going to write consistently about that niche, hopefully publish something every single week. You're then going to constantly tweet about it. And
then under your tweets, you're going to link to your email newsletter. And then you're going to send that email newsletter consistently, publishing it every single week. No matter what, you're not going to miss a week. And even if something isn't the best thing that you can possibly produce, well, hey, you know, being consistent is super important, so go publish every single week. That's what has worked on the internet for the past 10 years. And hey, that's what the internet rewarded. I saw that strategy. I took advantage of it. It
really served me and a lot of my students well. You just repeat that cycle constantly, and it worked. But here's the thing: distribution was king. If you just had distribution, if you had enough email subscribers, you would be fine. Publishing consistently mattered more than publishing the best-quality piece of content, and how good your distribution was was often more important than how good your content was. I published a newsletter every single week for five years, and I wasn't always entirely happy with it, but I just did it because that's
what worked on the internet. You know, you could say, "Dude, David, you were kind of contributing to slop." And here's the thing: that age is gone. My chances of succeeding with that strategy five years from now are so much lower than my chances of succeeding with it five years ago, because your writing just has to be really good now in the age of AI. And I get that a lot of slop is going to be produced, but hopefully, and this is my hope, we have good algorithms, and that's basically what algorithms
do, right? They ignore 99.9999 percent of content, insert however many nines you want, to only give you the very best stuff that's tailored to your exact interests. Now quality is king, and I think it's going to stay that way, because if you publish something really good, that's the only way to rise above the AI. Here's how I define slop: slop is when simply publishing or getting something done is more important than the quality of what you publish. And I would say that so much of the online writing going back over the last
10 years was slop under the definition I just gave. And you ain't succeeding with that now. And the reason is that the bar for the kind of writing that people are going to read is already rising, and it's going to continue to rise, because now you're not just competing with other human beings at the scale of the internet, you're competing with computers, and computers can produce information so fast. And now what I want to do is move into how I actually write with LLMs. Like, what do
I do? Okay, I want to distinguish between two kinds of writing: there's AI that writes for you, and there's AI that writes with you. And I've yet to meet a good writer, a good writer that I admire and respect, who thinks that AI can do the writing for you. Not one. Okay? But I know a lot of serious writers that I admire who write with AI all the time. And that's what I do, too. So here is an example of what I mean when I say writing with, not for. I grew up in
San Francisco. I now live in Austin, Texas. And I love that city. It's so beautiful. It has so much potential. The City by the Bay. I mean, Tony Bennett has a song where he says, "I left my heart in San Francisco." That's how I feel. I love that song because I feel this deep emotional connection to that city. But the city has just been destroyed and tarnished by bad politics. And I could say, "Hey, ChatGPT, I want you to write me a piece about how San Francisco has been destroyed by bad politics." Yo, what it's going
to produce is going to be absolute nonsense. It's just not going to be that good. Now, it could do a deep research report that could be fairly interesting, and maybe I would do that: "Hey, walk me through exactly how that happened. Let me understand the history." But there is an opportunity for me as a writer, and this is what I mean when I talk about writing with AI. Maybe I would team up with deep research and say, "Hey, can you give me some of the historical background?" But really what I would do is
I would tell you stories. I would tell you stories about how, in seventh grade, my mom came home covered in blood, because she had been walking from the train to her car and a guy came up to her, threw her to the ground, took her purse, and then drove away. I could tell you stories about how, when I was in middle school, this super innocent kid at the time, I'd come out in the mornings as we were heading to school, and
the windows would just be shattered because people had broken into our cars. I could tell you a story about how I went to the Orpheum Theatre on Market Street one year, and I picked up a syringe as a kid and was like, "What is this thing?" And my dad was like, "Put that down right now." And I didn't learn why there were syringes in the middle of San Francisco until I was in high school. I could tell you stories about how a few of my friends from high school in San Francisco had to
go to drug rehab because the drug situation moving in and through San Francisco is just so bad. And my point is this: AI is not going to give you that. Those are stories that I had to learn from my experience in San Francisco, and I think they give that piece so much more life and help you to see the heartbreak that I feel about how San Francisco's been destroyed. AI can't give you heartbreak. Now, when it comes to writing with ChatGPT, here's what I do. A lot of the way that I start: like, I
love just speaking out ideas. I love doing it. I don't really like typing. It hurts my fingers, and, you know, I feel like I'm going to get early-onset arthritis or something like that. So what I'll do is go on walks and just speak out my ideas. And I have a prompt, I can share it in the description, where I'll say, "Hey, I've just spoken something out. Can you turn that into an
outline?" Or I'll say, "Hey, can you turn that into prose?" So I'll create the ideas, and then I'll have AI kind of help me out. But it's not just that. I'll say, "Hey, which ideas that I just shared were particularly interesting? Which stories do you need more information on? Which transitions were unclear?" And I'm instantly getting feedback right as I've finished my version one. I can see the things that are unclear, the things that I need to share more about. And then I can just do
a V2. And the AI transcription is just so good, a thousand times better than Siri. And then what I'll do is ask the AI, based on what I've shared: what are the weakest points of my argument? What are the most boring parts? What are the best parts that I should double down on? Which transitions weren't clear? And what more would you need for a story to really work? So it might say, "Hey, you know, you told that story about your mom. Tell me about your mom. What does she look like?
What was she wearing? How old was she at the time? What was the night like?" Right? It was really foggy. Okay, interesting. The fog is a really good motif for that, and maybe I wouldn't have thought of that. AI is really good for that. Not because it's the best editor in the world, but because it's instant, it's fast, it's free to work with, you know, and I can just start being in dialogue.
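To make that loop concrete, here's a minimal sketch of it with the OpenAI Python SDK. The file name, prompts, and model are assumptions for illustration, not my actual prompt; what matters is the shape of the workflow: transcript in, outline out, then feedback questions on that outline.

```python
# Minimal sketch of the dictation workflow: spoken-out transcript -> outline ->
# feedback questions. File name, prompts, and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# A transcript of ideas spoken out on a walk, saved to a text file (hypothetical).
transcript = open("walk_memo_transcript.txt").read()

# Step 1: turn the rambling transcript into a clean outline.
outline = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You turn rambling spoken ideas into clean outlines."},
        {"role": "user", "content": f"Turn this into an outline:\n\n{transcript}"},
    ],
).choices[0].message.content

# Step 2: ask for the same kind of feedback described above.
feedback = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": (
            "Here is a draft outline:\n\n" + outline + "\n\n"
            "Which ideas are most interesting? Which stories need more detail? "
            "Which transitions are unclear? Where is the argument weakest?"
        )},
    ],
).choices[0].message.content

print(outline)
print("---")
print(feedback)
```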
And then what I'll do is I'll start to actually sit down and begin to write the piece with AI. So I've been doing this for the piece I've been writing about Christianity, really thinking through character development. That's something I want to get better at. And I'll say, okay, what do I need to know about what makes a good character? Tell me, from the theory of writing in Hollywood and movies and literature, what makes for good characters? And I might get back, okay, these are the four things. And then I'll say, "Okay, based on these four things, I want you to interview me about these
characters from the piece I'm writing, Brent and Brian." And through that interview, it'll kind of get my brain going and get words on the page. And I find that the back and forth is so much more generative than me trying to do it all on my own. So now I have AI as a thinking partner. Now, this is a prediction and I'm not quite sure what's going to come from it, but I do think that AI is going to breed new kinds of writing that we've never had before. And I credit my friend Justin Murphy
with this idea. We were talking about the Renaissance and how new technologies that came out at the time actually led to a change in art. You know how you go to a museum and you're in the medieval painting section and everything looks super flat, like there's no depth and perspective, and those are like 13th-century paintings? And then you look at a 15th-century Renaissance painting that was made in Florence, and all of a sudden you see that perspective shift, and clearly something happened to change how people viewed the world,
or at least how they were able to paint. Well, what happened is there were a few technological innovations that actually led to that. The first was the camera obscura, which allowed artists to project images and trace them, and that tracing fed into what they created. So that was the first one, and the second was perspective grids. There was a guy named Leon Battista Alberti, an architect, sort of a polymathic guy, and he figured out that you could use
these perspective grids to draw, and then you could show perspective in the painting. And now you could ask, man, is this guy cheating? Like, we used to paint all by ourselves. Dude, you are completely cheating. But now we look at a painting and we're like, I'm just happy to have perspective. I'm really glad we had technology that allowed us to make that development. And I think something similar is going to happen with AI, where we're going to end up having these technologies that change how
we think about writing, how we see writing, and it might even lead to changes in the kinds of writing that we produce. I don't know what those are going to be, but I think there are going to be some interesting things. Actually, one just came to mind: stories you can tell that are almost like Mad Libs, where you can change different things in the stories and personalize them. It might be kids' stories, where you personalize the story for the interests of the kid. So say that one kid is
really interested in the Denver Broncos football team, and another kid is really interested in ballet, she loves ballet, she's from St. Petersburg, Russia, and she wants the setting to take place there. The little boy wants the setting to take place in Denver. You could have the same general story that all the kids in the class read, but that story is tailored to each kid's interests. So the idea that you produce one piece of writing that everybody reads might change, where you can change different things inside the piece of
writing to really tailor it to what people are interested in. That's what I mean. I think we're going to get some changes in the kinds of writing that become popular. And so what I'm working on, when it comes to writing with ChatGPT, is a custom project folder for my writing style. Here's how it's structured. The first section, sort of at the top, is: what do I want my writing to be like, and what don't I want my writing to be like? Let's just really describe
it super clearly. That's the first section: lay out, say, 10 bullet points for each, right? And I haven't just thought about this on my own. What I've done is I've taken my best writing and said, "Hey, AI, I want you to actually analyze this for me and tell me what's going on in the writing. I want you to describe it." And then I'll take the best descriptions that it gives me and feed those back into it. And I'll use AI and work with it to really compress
it so that I have the clearest, most succinct descriptions of what I am and am not going for. So that's the first section. And then the second section is training data on all the things that I do and don't want. Say that I want my writing to be interesting, right? To use a very cliché and bland example. Well, I'm going to put in a lot of paragraphs that I think are particularly interesting, and then I'm going to actually describe why I think each paragraph is interesting and say, "Hey, I want your help
getting me to write like that as you give me feedback." That's exactly what I'm going for. And training data is going to be more and more important.
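Here's a rough sketch of how that style project might be wired up if you scripted it yourself with the OpenAI Python SDK. The file names, structure, and model name are assumptions; inside ChatGPT this would just live as the project's instructions and attached documents, but the idea is the same: a style description plus annotated examples, packed in front of every draft.

```python
# Sketch of the "style project" idea: a do/don't style guide plus annotated
# example paragraphs, packed into a system prompt before asking for feedback.
# File names, structure, and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

style_guide = open("style_do_and_dont.md").read()           # ~10 bullets each way
examples = open("favorite_paragraphs_annotated.md").read()   # paragraphs + why they work

system_prompt = (
    "You are my writing editor.\n\n"
    "What I do and don't want my writing to be:\n" + style_guide + "\n\n"
    "Paragraphs I love, with notes on why they work:\n" + examples + "\n\n"
    "Give feedback that pushes my drafts toward these examples."
)

draft = open("current_draft.md").read()  # the draft to get feedback on

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Here's my draft. What would you change?\n\n" + draft},
    ],
)

print(response.choices[0].message.content)
```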
The way that I'm taking notes is also beginning to change because of AI. It used to be that I would take notes for me to read. Now, I'm increasingly taking notes for AI to read. And what that means is that when I just had my own notes, I wanted a bunch of different small notes so that I could search and didn't have to spend a bunch of time scrolling. With AI, it's very different. Rather than having a bunch of pages with a little bit of information, I want a few pages with a lot of information on them. And here's why: AI can do the searching. It can do the scrolling, no problem. And the context windows for AI are getting bigger and bigger. Here's what I mean by that. If I say, "Hey, I want your help with my writing," I might now be able to put in 30 or 40,000 words of training data. In two years, I think I'll be
able to put in 2 million words' worth of training data, and then 10 million words' worth, and it'll easily be able to parse that. So I'm just going to have these giant documents for different parts of my life that I can plug into the AI, and it'll be able to read the whole thing for me. Whereas if I have a bunch of different small documents, I always have to copy and paste each one. So the way that I'm taking notes is beginning to change, and I'm starting to think about
how I write for LLMs as much as for my future self when it comes to taking notes, so that I can really begin to work with these LLMs and give them context on how I want to write, what's going on in my life, meeting notes. Another thing: maybe I put all the emails that I thought were really good into a single document and then say, "Hey, I want your help writing emails. Do it in the style that I've shared," and it's, you know, 100,000 words' worth of emails that I've written. And that
leads me into how I think with LLMs. When I'm discovering new ideas, I've found that jamming with LLMs is more useful than talking to basically any person in my life, save for a few people I'm close to who give really good feedback. And look, I'm not the only one. Microsoft CEO Satya Nadella, he is no schmuck. He is a smart guy. Here's what he said: "The new workflow for me is that I think with AI and work with my colleagues." That gets to the heart of it. You think with AI, you work
with your colleagues. You're going back and forth. You're working through problems and strategy with the different AIs. And on the same wavelength, I have a friend in town, super successful guy, probably has 2,000 employees working for him, the guy I was talking about earlier, right? And you know what he said to his executive team? He said, "It's gotten to the point where talking to an LLM for an hour is more useful than about 70% of the conversations I have with you." And that's his executive team. These are talented freaking people. This isn't to say that his executives
aren't competent. It is to say that the LLMs are already at the point where, if you're really good at prompting and you give them a lot of context, and he's really good at bringing strategic context into the chat window, these LLMs are really good. And you know what I've been having a blast with? Oh my goodness, it's so fun. So Grok is the Twitter AI. And if you go into Grok, it has all of these modes at the top. It has assistant, and it has storyteller, meditation, unhinged, sexy, whatever. But my favorite one for thinking
is argumentative mode. I've started playing with Grok to really think through ideas. What I'll do is take something I have very high conviction in and say, "Hey, I want you to challenge this. I want you to argue with me, and I want you to point out my core thesis and tell me why it's wrong." And we'll get on voice mode and we will just start arguing. Like, the other day we were arguing about something here, and she was like, "What? So you're just going to get an LLM to do all
your thinking for you because you're talking to an LLM? You are so freaking lazy. What the heck is wrong with you?" And I'm like, "How dare you call me lazy? It's not that the LLM's giving me the answers. It's that the LLM is helping me ask good questions, and then I can find the answers in my mind." And the core thing is that the answers are in these little chambers in my brain, and the LLM is shining a spotlight into different corners of my brain and helping me find these little treasure boxes of insight that I
would have never found on my own. And she's back at me with, "What the heck is wrong with you?" And so we go back and forth. It's so fun. And that is what I'm using LLMs for. A lot of the people you end up talking to will be too agreeable or too disagreeable, and they'll sort of stay in that lane. What's really fun about arguing with the LLM is that you can get it to act exactly how you want it to act. Like, the other day I was getting really annoyed with it because it was just being too argumentative.
So I was like, "Hey, can you tone it down a bit and actually be more supportive here? I'm giving you these ideas. Can you just help me find and shape the best ideas that I have? Let's work together, and then in a few minutes I'm going to ask you to argue with me." And so that's what we did. And then what I do is I get to the very end and I say, "Okay, based on this entire conversation, I want you to do this. I want you to summarize the key points
that I made. What are the key points of pushback that you gave me? And what questions should I think about for next time?" And I'll just scroll to the bottom of the chat window and I'll have a full summary of what we spoke about.
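My arguing sessions happen in Grok's voice mode, so there's no code involved, but the closing move generalizes to any chat API: append one last summary request to the running conversation. Here's a generic sketch with the OpenAI Python SDK standing in; the history contents and model name are placeholders.

```python
# Generic sketch of the closing move: append a summary request to the running
# conversation history. History contents and model name are placeholders.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "user", "content": "Here's my thesis... I want you to argue against it."},
    {"role": "assistant", "content": "Pushback point one... pushback point two..."},
    # ...the rest of the back-and-forth...
]

history.append({
    "role": "user",
    "content": (
        "Based on this entire conversation, summarize the key points I made, "
        "the key pushback you gave me, and the questions I should think about next time."
    ),
})

summary = client.chat.completions.create(model="gpt-4o", messages=history)
print(summary.choices[0].message.content)
```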
So where is all this going? We've been talking about the Satya Nadella idea of thinking with AIs and working with human beings. I was talking about those long context windows. Now we're talking about arguing with the AIs. And one killer app of LLMs is going to be memory. Humans are really bad at memorizing things. We're really bad at it. Our memories are not good.
So, funny story from last weekend. I've been seeing this girl, and it's one of our fourth or fifth dates, and she'd had a tough week. So I was like, "Hey, what would make your week really good?" She's like, "I just want to get dressed up and go to dinner." I said, "Great. We'll go to a steakhouse in town." So I'm thinking, all right, this is going to be a great night. I pick her up in the Uber, we're going, I'm dressed to the nines, she's dressed to the nines, everything's in a great spot. And we're walking up to the steakhouse and I finally tell her, "Hey, you know where we're going? This is my favorite steakhouse in town." She goes, "Wait, you took me to a steakhouse? I told you twice. I don't eat red meat." And it was just an instant change in vibe. She got so stern. I was super embarrassed, felt so much shame. I was like, "Dude, you've got to be kidding me." And all this is to say, humans don't
have good memories. The rest of the dinner ended up going fine, and it took us like 20 minutes to get over it, but humans don't have good memories. We forget things all the time. And AI is going to be really good at remembering conversations from weeks ago, from months ago, from years ago, and it'll just have that context at all times. If you're comfortable doing this, I recommend using Granola AI to basically record your work meetings. It doesn't record the audio; it just gets a transcript, but what it'll do is take that transcript and give you a summary at the end, which allows you, right afterwards, to ask questions about what you spoke about. So, for example, we were in a meeting on Tuesday of this week, and I talked through the intro for this video in that meeting, but then the guy I work with and I sort of forgot exactly what the intro was. So I went right into Granola and said, "Hey, can you tell me what was the intro that I said we should do during the meeting?" And it just zip, zip, zip, spelled it out for me. But what's going to end up happening is you're going to be able to say, "Hey, I had lunch with my friend Sarah two years ago. I haven't seen her in some time. Can you remind me what we spoke about at lunch?" And look, certain people are going to be really comfortable with this and certain people aren't. That's a personal preference thing. But the point is, AI is going to be really good
at helping you to remember things. And it's not just memory, it's context. AI is going to be able to look across a wide swath of people and ideas and give you context on what's going on. So it's not just going to look at your own goals and your own notes; it's going to be able to look at, hey, what are all the memos that have been written inside a company? What are all the emails that have been sent? Say you run a company of thousands of people. I think that, for better and for worse, AI is going to consolidate power for people at the top. And here's why. AI is going to be able to basically scan every single email, see exactly what's going on, and give instant feedback on what people are thinking and writing, from the perspective of the CEO. So I have a friend named Dwarkesh Patel. He's got a great podcast about AI, and he was writing about Google and Sundar Pichai, their CEO. This was his prediction. Okay, I'm going to read you this quote. "Human Sundar simply doesn't have the bandwidth to directly oversee 200,000 Google employees, hundreds of products, and millions of customers. But AI Sundar's bandwidth is capped only by the number of TPUs you give him to run on." And I'm adding this, but he basically means computer processing power. "All of Google's 30,000 middle managers can be replaced with AI Sundar copies. Copies of AI Sundar can craft every product strategy, review every pull request, answer every customer service message, and handle all negotiations, everything flowing from a single coherent vision. A company of Google scale can run much more as the product of a single mind, the articulation of one thesis, than is possible now." Now, that's a crazy prediction of the future, but I can sort of see it happening, right? Where your job as the leader is to write and write and write and make your thinking and strategy legible, and then you have basically an army of computers reviewing everything and making sure that the entire company is aligned. I texted that to one guy who runs a big company. He said, "Man, that would be my dream." And other people are like, "Wait, what?" Like,
that's crazy. And I think it speaks to something bigger: AI is a unique technology in that managers have said yes to it much more than the rank-and-file people who just do the work. And I think part of the reason is that working with AI, if you're a manager, is actually super similar to the way you've always done the work. But working with AI, if you're a rank-and-file person, is completely different. Here's what I mean. If you're a manager, what is your job? Your job is to basically set a vision, describe that vision, delegate it, get a response, have that response not be what you want, give feedback, go through cycles of iteration, eventually get something that's pretty good, and then pass it on. That's exactly what you do with LLMs. It is the exact same motion. And that's why I think managers are like, "Oh yeah, I've been doing this for years." The other thing is, when you're actually working with people, there's going to be drama, and you've got to manage a large team. You've got to do one-on-ones. For people who don't like doing those things, they're like, "Oh my goodness, this is amazing. I get to delegate a lot of my work in the same way I've always done it, but I don't have to deal with the one-on-ones." And I think that's why so many managers are like, "Yeah, I'm fired up about this technology. I see it. I get it right away." Now, people think of the LLMs as sort of one big behemoth, like Grok, ChatGPT, Claude, whatever it is. They sort of assume that
they're all the same. And six months ago, that was more true than it is today. I think the models are going to begin to diverge. Here's why. There's crazy competition at the model layer, and when you get crazy competition, one way people respond is differentiation. So take Anthropic, the company behind Claude. When they released their 3.7 model, which came out in February 2025, they said, hey, we're really going to focus on coding for this model, and that's really where the improvements are going to be. GPT-4.5 was really focused on the more qualitative parts of writing. Grok is really focused on free speech and even on up-to-date stuff: the ChatGPT cutoff for knowledge is something like September 2023, and then it uses search for more recent stuff, whereas Grok is much more up to date. So if I need something current, I go to Grok, not ChatGPT. And I think the models are going to get more and more distinct over time. And when it comes to tracking what's going on with
AI, talk to your friends, especially ones who work at big companies, about how they're using it. A lot of the coolest things are happening inside major companies, and they're staying kind of private. Like, I heard through the grapevine that Google lets anyone who works there basically do searches with an unlimited-size context window, and I'm sure there are some really cool things that come from that. And I'm talking to different friends who are building internal AI tooling that I'm sworn not to talk about. A lot of the AI progress that's currently happening is happening in back channels. So just ask your friends what's going on and be like, "Yo, shh, I won't tell. If you want to email me, that's fine. You can tell me, but don't tell anyone else." And look, a lot of this future is already here. Actually, this morning a friend sent me an article about a scientific study from Nature. It says AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably. Now, here's a crazy prediction of my own. I've got no scientific study to back
this up. It's just something I'm kind of feeling right now. But I think that AI is actually going to allow us to communicate with animals. I think there are all these deeper forms of communication happening in the world, and we're going to understand them at another level. We might be able to chirp with the birds, the ravens outside, the little cardinal that's by your window every single morning. I think we might be able to start talking to them, and it's one of those weirder second- or third-order effects that might come from AI. And so it's not just that these models are going to have ever-rising IQ; they're also going to have different personalities, which is going to be great. I want completely different models. I want the models to be as diverse as people are. You know what I mean? And when it comes to thinking with AI, I see it as already useful in a bunch of different ways. At this point, I think it'd be insane, if you have a medical issue, to only talk to
a doctor. I think talking to ChatGPT or some LLM is really a no-brainer thing to do, at least to get a second opinion. And once again, that's a great place where you'd want to talk to a bunch of different models to see, hey, are you getting different perspectives or the same ones, and how does that compare to what your doctor is saying? At the very least, you can ask your doctor way better questions when you show up, which will help you get a better diagnosis. I think that's a no-brainer. When I was in Buenos Aires, I was trying to understand the immigration patterns of the city. And this was one of those moments for me when I was like, whoa, you can do this. What I did is I asked it to make me a table covering 1890 to 1920 in Buenos Aires, during the early boom: how much immigration came in each of those decades. But then I also wanted a table of all the different places people came from, so Italy and Spain, and then where within Italy, Southern Italy or Northern Italy. And I couldn't believe it. I couldn't believe how much easier it was for me to just look at that data. Google wasn't giving me good information, and if I wanted a book on that, I don't know how I would have gotten it in Buenos Aires. It was so easy with ChatGPT.
Now, it could have hallucinated there. I will completely acknowledge that. But here's the thing: I wasn't doing this as a scholar, right? I wasn't taking what ChatGPT gave me and instantly writing it up and sharing it like, "Hey, this is the gospel truth." No, I was just trying to get a general sense of what was going on. And I had enough context that I was like, okay, if it told me a bunch of people came to Argentina from China, I'd know that's obviously not true. I was just trying to get a general picture, and AI is really good at that. But look, hallucinations are definitely a thing. I don't think they're nearly as big of a problem as they used to be. They were a big problem two years ago; now they're less of a problem, but they're still lodged in the cultural awareness, so people think they're a way bigger problem than they actually are. But do not just take something an LLM gives you and pass it along as fact. You really need to be careful there. Okay? Benedict Evans, he's a technology analyst, speaks about this really well. I love his synopsis of LLMs. He says that LLMs are really good at things that don't have
wrong answers, but really bad at precise information retrieval. And I think that's right. So if you're stuck on a sentence in your writing and you're like, "This sentence just isn't quite right. Can you give me 10 ideas for how to improve it?" there's no right answer there, right? That's a taste thing, a felt-sense thing. Or, "Hey, next weekend I'm planning a birthday party for a friend in upstate New York. Can you give me some ideas for what to do? These are the sorts of things he's interested in. Can you help me with an agenda and an itinerary?" It'll be really good at that because there's no single right answer, just a range of answers it could give you. That's what it's good at. But for something like, "Yo, I need some really good quotes from John Steinbeck," it's just not going to be helpful. And you can make some mistakes here. I learned this one the hard way, okay, paid the cost for it. So, I've got this project called Writing Examples, and early
on I was like, hey, we'll write an article about John Steinbeck. So I said, all right, has John Steinbeck ever written something about food? I'm in GPT-4 and we're going back and forth, and it gives me this quote, and it's just the perfect quote. So I spent like two days working on it, taking the quote and analyzing it: how does John Steinbeck write about food? Then we were like, man, this quote is so good. This information is great. Let's go make a video about this. So we spent a bunch of time recording, we get our editors on it, and we're so happy with it. We're like, yes, we're going to publish the article, we're going to publish the video. We send it out to like 30,000 people, and we get some emails like, "Yo, John Steinbeck never wrote this quote in East of Eden." It had literally hallucinated the dang quote. And ah, you know, I learned that one the hard way. And that's just my point: when it comes to quotes, it'll just make stuff up and completely BS an answer. You've got to be careful there. Okay, so LLMs tend to be good at the things that computers are bad at, and they tend to be bad at the things that computers are good at. Computers are really good at precise information retrieval; if you need this exact thing, LLMs aren't precise in that way. They'll make stuff up
all the time. There are good things about that and there are bad things about it. So be careful with hallucinations, and don't make the same mistake I did. Real embarrassing. And then, when it comes to how I use AI, you know what it's really good for? Meetings. If you're meeting somebody, especially somebody who has some information about them on the internet, you can run a deep research report and say, "I'm meeting this person. I want your help getting background on them. This is what I'm working on, and these are my goals for the meeting." Deep research will give you a really good answer, and the advice it gives you will be solid. I've been meeting with some publishers for How I Write, to help me get guests, and before the meeting I'll have deep research create a whole report. I'll tell it about the show and what I'm trying to do, and it'll do a really good job of saying, okay, if you want to pitch How I Write, here's my advice. And the advice is pretty darn good. It's as good as any advice I'd get from a human being.
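I haven't formalized this into a template, but the prompt has roughly the same shape every time. Here's a sketch of it as a reusable Python string; the field names and example values are just illustrative, and the quality of the report depends on whichever deep research tool you paste it into.

```python
# A rough template for a meeting-prep deep research prompt.
# The structure mirrors what I describe above; the exact wording is illustrative.
MEETING_PREP_PROMPT = """\
I'm meeting {person} from {company} next week.
I want your help getting background on them: their career, what they've
published or said publicly, and what they seem to care about.

Context on me: {what_im_working_on}
My goals for the meeting: {goals}

End with concrete advice on how I should pitch {project} to them.
"""

prompt = MEETING_PREP_PROMPT.format(
    person="Jane Doe",                      # hypothetical example values
    company="Example Publishing",
    what_im_working_on="a writing-focused interview show",
    goals="convince them to send me authors as guests",
    project="the show",
)
# Paste `prompt` into a deep research tool (for example, ChatGPT's Deep Research).
print(prompt)
```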
Okay. So, that's how I think with LLMs. I just want to begin to wrap here and talk about how to follow what's going on in AI. And then I'm going to get super concrete and specific about exactly how I use every single model, because here's by far the craziest thing about all this: if these companies are using AI to name their models, it ain't working, because the names are so darn confusing.
So, I'm going to talk through exactly what I do. But here's the first thing. I got this from Tyler Cowen on our recent How I Write interview, and he made a great point. He said, "If you want to understand what's going on in AI, you've got to be using the latest models, and that means paying for them." The free models are currently something like six months behind the cutting-edge models, and the cutting-edge models are just so much better. Okay, deep research was the iPhone moment for me with AI. And I was actually there for the original one. I was at Macworld in 2007, I think it was, when Steve Jobs announced the first iPhone. I'd had some flip phones before, and I remember seeing that thing. It was in this glass box, rotating slowly. I was in seventh grade, and I knew that the world had changed, that something was going to be different. That first iPhone didn't even have the App Store, and I think it ended up changing the world, but you kind of just knew it when the V1 came out. That's how I felt with deep research. And if you're on a plan that doesn't let you use deep research and you're like, "Ah, AI isn't there yet," get on the bigger plan. I think you have a lot more credibility in your critique if you're using the latest models, and it's also going to help you see how things are changing. So be on the latest models, and use different models so that you can see what's going on. And by that I mean Claude 3.7, Grok 3, GPT-4.5, and OpenAI's deep research tool, which runs on the unreleased o3 model. I know it's super confusing, but those are the kinds of models you should be using if you want to be on the cutting edge. You're going to have to pay for them, but they are so much better than the free models. And here's what always happens to me with AI skeptics; this is the number one thing that annoys me about them. They'll be like, AI is not going to do this, AI isn't doing that. And then you'll talk to them
and you'll be like, "Okay, tell me about how you use GPT." And they're like, "Well, I used it for this one thing and it didn't give me a good output at the beginning, so it's not that good." And I'm like, "Okay, if you worked with someone and the first thing they produced wasn't very good, you wouldn't just stop working with them. You'd keep working with them." But that's the first thing skeptics do: they'll try using it one time, decide it's not that good, and they won't actually give it a genuine try. That's the first thing. And the second thing is they'll be like, "Oh yeah, I'm on the free plan of ChatGPT because it's not good, so why would I pay for it?" And I'm like, "No, it's not good because you don't pay for it." You know what I mean? So what ends up happening is they reinforce their own skepticism. Okay? And I think the mark of good thinking is that whatever you believe, you're trying to challenge it. So if you're super bearish and skeptical of AI,
you're going to have a lot more credibility with me if you're really trying to use it well and you're using the latest models. But who cares about me? Do that favor for yourself so you can see what's going on. It's just going to give you a much better sense. Now, I want to talk about which models I use for what, and I'm going to get super concrete and specific. I'll start off with OpenAI and ChatGPT. Their 4.5 model is the core model I make things with. Okay? So, it's pretty funny with niche humor; it'll make me laugh every now and then. It's also pretty good at writing with voice. It's like a six-and-a-half, seven out of ten, but it's way better than ChatGPT-4 was about a year and a half ago. Way better. But it's also kind of annoyingly corporate and sterile in its output, and that's my least favorite thing about writing with AI: the way it sort of sucks up to you, and that annoying voice it uses. Tactically, what I'll do is I'll talk into my phone, I'll ask GPT-4.5 to clean it up as prose, I'll do a review of what it's written, and then I'll send it. And now get this. Recently I did this for a team writeup. I had something I wanted to share, some feedback to give. So I went on a walk and did a voice transcription. I said, this is what I'm thinking about. I'd repeat myself, I'd stutter, do whatever things all humans do, and then I said, "Hey, just clean this up and turn it into a piece of writing for me, and be really clear about the thesis and the main point," and it turned it right into prose. I copied and pasted that into Slack. And then at the bottom, in italics, I had a parenthetical, and I asked: at any point as you were reading this, did you think to yourself, "Wow, an AI wrote this and did the final pass, not David"? And everyone said no. Everyone said no. The output is pretty darn good. Sometimes I need to do some editing, but it's pretty darn good.
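In the app this is just talking and one prompt, but if you want to see the shape of that cleanup step as code, here's a minimal sketch using the OpenAI Python SDK. The model name and prompt wording are assumptions for illustration; it's the same idea, not literally what my phone is doing.

```python
# A minimal sketch of the "dictation -> clean prose" step.
# Assumes you already have the raw, rambling transcript as a string.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

raw_dictation = "so here's what I'm thinking about, um, basically the thing is..."  # placeholder

cleanup_prompt = (
    "Clean up this voice transcription and turn it into a piece of writing. "
    "Remove the stutters and repetition, keep my voice, and make the thesis "
    "and main point really clear:\n\n" + raw_dictation
)

response = client.chat.completions.create(
    model="gpt-4.5-preview",  # assumption: use whichever writing-focused model you have access to
    messages=[{"role": "user", "content": cleanup_prompt}],
)
print(response.choices[0].message.content)
```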
And the same thing happened to me last November. This was another whoa moment for me. A friend of mine was in San Francisco, and I went out to dinner with him. He took a few bullet-point notes at dinner, then he fed those bullet points into the AI and shared a Slack message with me, and I said, "Dude, what's going on? This is the best voice I've ever seen in your writing." And he goes, "AI wrote it for me." And I was like, "No way." And he's a good writer. He probably has 5,000 email subscribers. He's not Robert Caro, but he's a good writer. And I said to him, "This is the best thing I've ever seen you write." Okay. So, to get back to this: I use ChatGPT 4.5 whenever I need to create something. And whenever I need to consume something, I'll use o1 Pro or Deep Research. That's because ChatGPT 4.5 is pretty fast, and it's relatively good at writing and organizing information when I'm creating something. But whenever I need to consume something, I'm fine with
it being slower. o1 Pro will take two or three minutes to give me an output. Deep Research will sometimes take 20 minutes. But I'm happy to wait a while for something that's really worth reading. When I'm making something, I want the feedback cycles to be fast. But if I'm reading something, I'm happy to put in a prompt, go make some dinner, come back, and have something to read while I eat. And then there's Claude 3.5 and 3.7. These are really good for writing with voice; they sound the most human. People tell me Claude is really good for coding, but I don't really use it for that. What's funny is that I really like the tables that ChatGPT produces, and I like the charts that Claude produces. So what I want to figure out is how to get really good data into Claude and then have it produce charts for me. If you're trying to make an argument in a piece you're working on, a lot of times a chart will be super useful. I guess you could kind of do it in Excel, but you can't really use natural language in the same way. You can do that in Claude and have Claude make the chart for you. It's really worth playing around with, because a chart can, in the snap of a finger, make your argument for you and show something really clearly. So that's something to think about as you're writing and working on pieces.
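I do this inside the Claude app, where it renders the chart for you. If you'd rather script the same idea, here's a rough sketch using Anthropic's Python SDK: you hand Claude the data and ask it to write the charting code. The model name, the CSV, and the prompt are all illustrative assumptions, not my actual workflow.

```python
# A rough sketch: give Claude some data and ask it for charting code.
# Everything here (model name, CSV, prompt) is an illustrative assumption.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

csv_data = """year,subscribers
2021,5000
2022,11000
2023,24000
2024,41000
"""

message = client.messages.create(
    model="claude-3-7-sonnet-latest",
    max_tokens=1500,
    messages=[{
        "role": "user",
        "content": (
            "Here's a small CSV of newsletter growth:\n\n" + csv_data +
            "\nWrite a short matplotlib script that plots this as a clean line "
            "chart I could drop into an essay, with a title and labeled axes."
        ),
    }],
)
print(message.content[0].text)  # Claude returns the plotting code as text
```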
And then there's deep research. Like I said, that was the iPhone moment for me with AI. I use it whenever I want an in-depth explanation of something. So I was asking it about the flora and fauna in Austin at the beginning of spring, between my office and where I live. And then the other day I was driving on I-35, the highway that goes through Austin. Here's exactly what I prompted it with. I said, "It seems like there's traffic every time I drive I-35 through downtown Austin. Why is there so much traffic? I want a comprehensive social and technological history of this road. And I want to know why the traffic jams are so uniquely bad according to data and theories of engineering and road design." Now, I had to wait 12 minutes and 32 seconds for an answer. But look, remember I was talking about quality: the absolute writing quality was probably like a 7 out of 10, but it was 10 out of 10 personalized. The answer it gave me was super clear and super specific, and in a situation like that, I'm a lot happier reading pretty good writing about a topic that's tailored to my exact interest in that moment, the exact question I had at that time. I bet there's a legit incredible book about Texas highway design, but I don't need that. It's complete overkill, you know? I just want the exact answer, and otherwise I'd have to do a bunch of searching for information. So I'm happy to wait 12 minutes, because it's a 10 out of 10 on being tailored to exactly what I want. And then there's Grok. Grok has the most personality. Grok is your crazy friend. I mean, look, it's like Elon Musk in a box, right? It's not as bland as the other
LLMs. That's what I use to explain things to me. So if I'm reading something and there's a technical thing I don't quite understand, I'll ask it to give me a funny analogy or a simple explanation: just, hey, help me understand this. And then, like I was talking about earlier, I love to argue with it in voice mode. It's so fun to do. You have high conviction on something? Hey, let's just start arguing with each other. And it'll give me a transcript of the entire conversation. And then right at the end, this is the thing you've got to remember: at the end, ask for that summary of the best ideas from the conversation. Now, I haven't tried this yet, but I want to. Patrick Collison, the CEO of Stripe, uses Grok while he reads. He'll turn on voice mode, and say he's reading about 18th-century England and there's context he doesn't really understand. Rather than opening your phone and getting distracted, you can just ask Grok the question. It'll give you the answer, and then you sort of have a tutor by your side whenever you read. I really want to give that a try. The trick is to keep it open while you're reading; Grok is really good in the background. Now, I was talking about hallucinations with LLMs earlier, and that's a problem. But there's a solution, and the solution is Perplexity. If you want facts, a quick answer, and really clear, concrete sources for what you're looking for, use Perplexity. It's good. But their deep research tool isn't really that good. ChatGPT's
is the best by far. Grok has a deep-think mode, but it's only okay. Use ChatGPT for deep research. Then there's ElevenLabs. We've done two things with ElevenLabs. The first is that their speech-to-text model is great. I'll take an MP3 file, like this podcast, put it in, and instantly get a transcript of the entire thing, and it's really good. That's how I'm getting my transcripts now. And they're way better than the kinds of computer-generated transcripts I'd get from something like Descript or Rev a year and a half ago. Something like this used to cost me 150 bucks for a transcript with a 24-hour turnaround. Now it's free and it's a 10-minute turnaround. Serious advancement there. And then the other thing is we cloned my voice. What we did is we put in a bunch of training data, so if we're working on a video or something and we ever need to change a few things in it, we now have
cloned my voice, so we can type what we want to say and ElevenLabs will just make it sound like me. That's going to end up changing audiobooks. If you got a fact wrong in an audiobook on page 320, you're not going to have to go back to the studio; you're just going to be able to make a small insertion. And look, it's not perfect. Over a longer stretch it's definitely not perfect: if you were to listen to me speak for 10 minutes, you'd be like, "Okay, this sounds a little bit robotic." But at 10 seconds, I don't think you can tell. Actually, we just did it, and I bet you didn't notice. That's how good it is. You were just listening to something that was cloned, and you probably couldn't tell. Go back and see if you can tell. And then there's Granola. I was talking about Granola for meeting notes. It's the first AI note-taker to just absolutely nail it. It's super unobtrusive in meetings, it auto-generates meeting notes based on the conversation you had, and it has a transcript
of the entire conversation. My favorite thing is that you can search that transcript. So it's not just telling you what you said; it's almost giving you quick summaries of what you said. If you ask, "Hey, we were talking about some plans we wanted to do in Austin," it'll basically summarize those plans and turn them into an output for you. And then there's Wispr Flow and Superwhisper. I don't like typing. I love being able to go for walks, or just walk around my office as I'm writing, and speak things out. Siri is really annoying to use because it gets so many things wrong. Wispr Flow and Superwhisper are pretty darn accurate when you're just trying to get things down. And what's really nice is I can walk around for five minutes as I think through something, say 800 words' worth of stuff, and it'll instantly put that on the screen so I can start writing from there. I'm not exactly sure how the technology works, but it's using pretty advanced speech-to-text models to make smart assumptions about punctuation and capitalization, and it gets to know you over time. For example, whenever I say "How I Write," it capitalizes How and Write, whereas Siri would make that lowercase. And it's like 10,000 little things like that that make it really good.
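Those apps handle all of this for you, but if you want to see the underlying idea in a few lines, here's a minimal sketch of the dictation step using OpenAI's Whisper API. As far as I know this isn't what Wispr Flow or Superwhisper actually run under the hood; it's just the same speech-to-text idea, with a hypothetical file name.

```python
# A minimal sketch of dictation-style speech-to-text using OpenAI's Whisper API.
# Generic illustration only, not what the dictation apps themselves use.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Record a voice memo on a walk, then transcribe it.
with open("walk_ideas.m4a", "rb") as audio_file:  # hypothetical file name
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)  # raw text you can paste into a draft, or clean up with an LLM
```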
you," I would actually be really, really bummed. I'd be really bummed at this point. Um, AGI isn't here. We haven't reached the singularity. No, none of that. Like, just very pragmatically, I just be really bummed. Like, this stuff is fun to use. It's really helpful. It's integrated into my life all over the place. I'm probably doing 10 to 20 AI interactions per day. Um, you know, it's pretty crazy. I'm talking to AI more than any single person in my life. Uh, it's pretty wild to think about. And, you know, here are the things I'd miss.
I'd miss going back and forth on an LLM whenever I get stuck on a piece of writing. I'd miss getting instant 80th percentile feedback on my editing. You know, I have a pretty darn good editor right now who's better than the LLM still are, but you I always got to wait for his response and I get instant feedback with the LLMs. It can interview me, which is fun. It has context on what I'm writing about. I miss totally miss being able to speak out all my ideas into the chat GPT app and just ask it
to write me an outline and structure the ideas and tell me, hey, you're weaker in these places, you're stronger in these places. when I hear more, wanting you to clarify this, give me more about that. And man, the thing I'd miss the most is instant deep research reports on whatever I'm interested on. It's so cool being able to do that. It's so cool being able to do this. And so, if you've you've listened to all this, you're still an AI skeptic. That's my number one recommendation is just try deep research and give it an earnest
try. Okay? Give it an earnest try. Go in and try to write a really good prompt about something specific that you're curious about, where you know enough about the topic to ask a good question, and then see how good the answer is. I'm not saying it's going to be the greatest thing you've ever read, but I am saying it's going to be super tailored to your interests, and I think it's going to be pretty darn good. All right, I know that was a lot. And if you just want all of that information in one place that you can easily read, I put it in a PDF for you. It's not just the stuff I just shared; it also has all of these prompts that I use throughout my day and give to ChatGPT. Everything is in that one PDF. So if you want me to email it to you, you can go to pll.com/ai or just pop down to the description. I need a way to send it to you, so go to that page,
enter your email, and I'll flip it on over to you. All right. So if you just watched or listened to this and you're like, "Okay, I want more of this. I want to keep going down this rabbit hole," my number one recommendation is the episode I just recorded with Tyler Cowen, which is all about writing with AI. There are a few things we talked about in that episode that I wasn't able to get to here: how to read with AI (he speaks really well about that), how AI is influencing academia, and why secrets are going to be more valuable in an AI-driven world. The title of the episode explains it pretty well; it's something like "How to Write with AI" in 68 minutes. So if you liked this and want to go check that out, I recommend watching that episode. But also, there are probably going to be a lot of questions, and I'm usually somewhat active in the YouTube comments. I'm going to be really active for this episode. So if you have questions about how to write with AI, or things I said that maybe weren't clear, just ask a question, share your reactions in the YouTube comments, and let's all have a conversation. I'll pop in and we can figure this stuff out together. All right, thanks for hanging out.