Suddenly all the AI companies are going into deep research, and now even Perplexity is joining the battle. But is it perfect compared to the premium OpenAI Deep Research? No. Is it useful enough? Definitely yes. So in this video, I'll share its strengths, its limitations, and a better research approach to make the best use of this feature. The biggest strength of Perplexity's deep research is perhaps not only the affordable price tag, since existing free and Pro users can enjoy this feature, but also the speed of getting a response.
From my experience, deep research usually takes about a minute to return a response, and you can then ask follow-up research questions using deep research, while OpenAI's deep research is not designed for this kind of back-and-forth interaction. Another strong plus of Perplexity is being able to switch AI models.
You can access reasoning models, which is huge flexibility for your project. But although they're all called deep research, Perplexity's is quite a different type of deep research compared with Gemini's or OpenAI's.
One of the biggest limitations is the reasoning process for evaluating information. First, I found it uses lots of Reddit sources whenever social mode is turned on. In some cases, nearly 70 percent of the sources are Reddit, which might hurt the trustworthiness of the outcome.
Second, the output context window is extremely small for this type of deep research task. So even though it can find lots of sources, the output is not comprehensive enough due to the context window limit. I also found it hallucinates more than the standard Pro Search.
I suspect that might be related to the reasoning layer: it generates insights that are not actually mentioned in the sources. So you might be using it wrong if you just dump in a search query like you would with a standard Pro Search, wasting the reasoning power without giving context. There is a better research approach whenever we use deep research, to overcome its limitations and maximize its output quality.
First, we'll start with the research planning process. Unlike OpenAI or Gemini deep research, which will always propose a research plan first, Perplexity doesn't have this feature built in.
That's why we'll do proper planning with a well-defined scope and the sources to include, so we can maximize the final output quality. We'll primarily use a reasoning model for this step, so make sure you give as much context as you can here to best utilize the reasoning power.
Then comes the actual deep research process, which will follow the research plan and criteria to avoid bias. There are extra tips for this step to overcome the output context window limit, which I'll show you in a minute. And finally, the synthesis stage combines all the findings and cross-checks them by searching for specific evidence.
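The three stages above can be sketched as reusable prompt builders. This is purely illustrative and not part of the video's workflow: the stage names come from the transcript, but the function names and prompt wording are my own assumptions, not any official Perplexity API.

```python
# A minimal sketch of the plan -> research -> synthesize workflow as
# prompt builders. All names and wording here are illustrative.

def planning_prompt(topic: str, scope: str, audience: str) -> str:
    """Stage 1: run with a reasoning model, all sources turned OFF."""
    return (
        f"Act as a research planner. Topic: {topic}.\n"
        f"Scope: {scope}. Audience: {audience}.\n"
        "Propose a step-by-step research plan with a well-defined scope, "
        "the sources to include, and criteria to avoid bias. "
        "Do not search the web; rely only on reasoning."
    )

def research_prompt(plan_step: str) -> str:
    """Stage 2: run with Deep Research, social mode OFF, plan attached."""
    return (
        "Following the attached research plan, research this step only: "
        f"{plan_step}. Follow the source guidelines in the custom instructions."
    )

def synthesis_prompt(num_reports: int) -> str:
    """Stage 3: run with a reasoning model on the uploaded per-step reports."""
    return (
        f"Synthesize the {num_reports} attached research reports into one "
        "set of findings. Cross-check claims that appear in multiple reports "
        "and flag any figure that appears in only one source."
    )

print(planning_prompt("podcast content strategy", "parenting niche", "parents"))
```

In the video, each of these prompts is pasted into Perplexity by hand; the functions just make the structure of the three stages explicit.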
To give you a basic idea of what this process looks like, let's start with the most common use case: comparative analysis. Let's say I'm a product manager at an e-commerce SaaS company and I'm doing competitive analysis for the product roadmap.
First, we'll create a Perplexity Space with some custom instructions. The reason to create a Space is that we can maintain context, set up custom instructions, and change the date range when researching. Our first prompt gives it some context, states the aim of this research project, and asks it to generate a research plan. Note that we'll use the reasoning model o3-mini; if you want, you can use DeepSeek R1. Then turn off all sources to make sure it doesn't reference any external source at this stage.
Now we immediately have a research plan: the project goal, the prompt structure, and each research step, like competitor identification, feature analysis, pricing, and business model. You can export it as a PDF file, but the exported PDF includes the Perplexity logo and the original prompt, which we can't remove. So I recommend just copying it into a Word document and exporting that to PDF. Then we'll ask it to take this research plan and propose a structured, ready-to-use template for the competitive analysis. The reason for this is to make sure the template output aligns with your research plan.
So upload the research plan we just generated, make sure you turn off all the sources, and select the o3 reasoning model. Again, we want to rely purely on the reasoning model's thinking ability to generate the template. You can see it works from the research plan and outputs a report template that is really detailed and well structured, with clear sections like executive summary and competitive landscape.
Again, copy it into the Word doc and export it as a PDF file that we can use later. We're not done yet: we're going to ask it to suggest high-quality, reputable sources that could provide objective information about the research subject. Very importantly, also ask it to indicate which sources should be excluded to avoid biased information, and to output them in a prompt format that an AI can follow. This time we're going to turn on all the search modes and use Perplexity Pro Search, because for tasks like this, Pro Search already does the job.
Then you'll have a recommended source guideline that is ready to use, like using industry analysis reports and independent review sites, and sources to avoid, like marketing content. Copy this response and paste it back into the custom instruction settings so Perplexity can follow it when doing the actual deep research. Then upload both the research plan and the report template that we exported earlier.
The reason I don't upload them to the Space's knowledge base is that I found this feature a bit buggy. Sometimes it will only reference the Space file without searching, even when search mode is turned on. And when you delete a knowledge file, sometimes the file is still used in the search. That's why I recommend just uploading in the chat. Now, very important: I suggest turning off social mode whenever you use deep research.
It tends to use too many Reddit sources, which makes the output very biased, unless your research topic relies heavily on Reddit. Then pick deep research and start.
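If you reuse this source-guideline step across projects, it can help to keep the include/exclude lists in one place and generate the custom-instruction text from them. A hypothetical helper, with the categories mirroring the video's guideline (industry reports in, marketing content and excess Reddit out) but the exact wording being my own:

```python
# Illustrative helper: turn include/exclude source lists into a
# custom-instruction block to paste into a Perplexity Space.
# The 10% Reddit cap is an example threshold, not from the video.

def build_source_instruction(include: list[str], exclude: list[str]) -> str:
    lines = ["SOURCE GUIDELINES:", "Prefer these source types:"]
    lines += [f"- {s}" for s in include]
    lines.append("Avoid these source types:")
    lines += [f"- {s}" for s in exclude]
    lines.append("Limit Reddit and other social sources to at most 10% of citations.")
    return "\n".join(lines)

instruction = build_source_instruction(
    include=["industry analysis reports (e.g. Statista)", "independent review sites"],
    exclude=["vendor marketing content", "affiliate listicles"],
)
print(instruction)
```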
It will take some time, and in the end it used 64 sources, which is not too bad. It follows the report template with clear sections like market leaders and emerging players, and I like how it used a table format to present pricing and other information. For sources, it also tries to follow the instructions and uses sources like Statista, although I see it may also use sources like Shopify, so it's not perfect yet.
But using this approach, you can make sure it follows your research plan, your output format, and your source guidelines, so the output will be more balanced, and you can export it as a ready-to-share PDF report. Now you have the basic idea, but one of the limitations of Perplexity's deep research compared to OpenAI Deep Research is the output context limit. It can also hallucinate more than OpenAI. There are ways to overcome this, and I'll show you how with this content strategy research use case.
Let's say I'm a marketer doing content strategy research for a podcast channel. Again, create a new Perplexity Space, and this time add this to the custom instructions: whenever it generates statistics in a response, include Verified and Unverified markers to indicate whether the data is really a direct quote or just an estimate. Now use this prompt to generate the research plan, with context, so it can identify the pain points of my target audience, who are parents, and potential content opportunities.
And make sure you use a reasoning model. Now we have the plan ready; again, export it to a PDF file. Then ask it to suggest sources from reputable parenting websites, including organizations and authoritative institutions, using the standard Pro Search. It will quickly come back with a list of recommended sources. This is a step I recommend you add whenever you use Perplexity deep research.
Of course, you can also add any custom sources you want to include. Then add them to the custom instructions, and this time also add an instruction to limit the Reddit sources used. Now we can start the deep research process.
This time, instead of asking it to generate a complete report, we'll deep dive into each topic for deeper insights. Attach the research plan. Our first query asks it to research specifically the most common audience pain points. You can also change the date range if you want. Again, turn off social mode when selecting deep research. In total it used 76 sources; not too bad.
And you can see in the response that it follows the instruction to add the Verified marker indicating a direct quote from the source. Like this one: 28 percent of parents rank infant feeding adequacy as a top anxiety. You can see the source mentions it exactly.
This may help minimize the manual checking process, but I still recommend you double-check important figures, as sometimes Perplexity might not follow the instruction correctly. Then export this response to a PDF file. Do a few more deep research queries, like asking what information parents are actively searching for and how different parent segments consume content differently to inform strategy planning, and export all of them to PDF files.
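If you collect several of these marked-up responses, the manual review can be narrowed down programmatically. A sketch of a post-processing check for the Verified/Unverified markers the custom instruction asks for; note the exact marker format Perplexity emits can vary, and here I assume bracketed tags like "[Verified]" at the end of a sentence:

```python
import re

# Match a sentence followed by a bracketed [Verified] or [Unverified] tag.
MARKER = re.compile(r"([^.\n]*\.)\s*\[(Verified|Unverified)\]")

def flag_claims(report: str) -> dict[str, list[str]]:
    """Group statistic-bearing sentences by their marker for manual review."""
    flagged = {"Verified": [], "Unverified": []}
    for sentence, marker in MARKER.findall(report):
        flagged[marker].append(sentence.strip())
    return flagged

report = (
    "28 percent of parents rank infant feeding adequacy as a top anxiety. [Verified] "
    "Roughly half of new parents report sleep as their main struggle. [Unverified]"
)
claims = flag_claims(report)
print(claims["Unverified"])  # the claims to double-check by hand
```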
Now, in a new chat, upload every deep research response we just generated and attach them here. The reason we do this is that Perplexity unlocks an extended context window when you upload files, so you can take full advantage of that. Now turn off all the search modes and pick the o3 model, because we want the most intelligent model to synthesize the insights. I like how specific the suggested content clusters are, like parental identity, sleep solutions, and evidence-based parenting, complete with episode ideas.
I've compared the same research topic using this approach versus the standard Pro Search, and this approach always generates more tailored, specific responses, because of the research plan we provide and because every single deep research query has enough depth and maintains context. So make sure you use this approach and fine-tune the step details to suit your own needs. Since Perplexity won't ask follow-up research questions like other deep research products, a bonus tip is to ask it to generate follow-up research questions yourself before even starting the deep research process. Upload the research plan, give it more context, like the existing podcast growth status, and ask it to propose 5 to 7 critical research questions. It will then give you more research directions, like specific unmet needs, and I really like this one: "how parents currently discover and choose parenting podcasts".
This will help you refine your research much better before starting the actual deep research process. Alright, besides breaking a broad research topic into smaller deep research queries, we can take it a step further and compile a comprehensive report, just like the one from OpenAI Deep Research, with less hallucination. This time, let's use another use case: business case creation.
Let's say I work at a SaaS company as the head of customer service, and I need to build an internal business case for implementing AI customer service tools. Give it the context: we're building this because we want to optimize response times for customer support cases. Then ask it to build an initial outline with clear, presentable sections. Again, turn off all the source modes and pick the reasoning model o3. Now we have the initial business case skeleton with clear headings and sections.
They all make sense to me, like problem statement, objectives, proposed AI solution, etc. Export it to a PDF file, and we can start the deep research process. Upload the business case outline and ask it to research the first section as specified, without in-text citations. We do this because we want to generate the report section by section. Then turn off social search mode and pick deep research. For better results, you should always upload the research plan and source instructions, as in the previous example,
or any specific instructions, like word limits for each section. But for the sake of the demo, I'll simplify things here. For the first section, it used 54 sources, with a list of references at the end.
I can see quite a few are obviously AI tool vendors. It mentions some current challenges in customer support, like limited scalability and efficiency. Of course, I also recommend uploading your own specific data, since this is a business case, so the output will be more tailored to your business situation.
Copy it and paste it into a Word doc. Now we're going to research the next sections. The beauty of using the same chat window is that you always maintain the context and don't need to upload the outline again. This time it found over 20 sources. This section is about the objectives, like improving satisfaction metrics and scaling operations, with an implementation framework, which I like. Again, copy the response, paste it back into the same Word document, and number the section heading. You get the idea.
We'll continue asking it to generate all the remaining sections, like section 3, proposed AI solutions. We're breaking each section into a single deep research query and building the final report section by section. This way we can overcome the output context limit.
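The section-by-section loop can be sketched like this. Note this is a sketch, not a real integration: in the video each section is a separate Deep Research query run by hand in the same Perplexity chat, so `run_deep_research` below is a placeholder, and the outline titles beyond those named in the video are illustrative.

```python
# Sketch of building the report one Deep Research query per section.
OUTLINE = [
    "Problem statement",
    "Objectives",
    "Proposed AI solution",
    "Implementation framework",
]

def run_deep_research(query: str) -> str:
    # Placeholder: in the real workflow you run this query in Perplexity
    # and paste the response back manually.
    return f"[findings for: {query}]"

def build_report(outline: list[str]) -> str:
    sections = []
    for i, title in enumerate(outline, start=1):
        query = (
            f"Research section {i} of the attached business case outline "
            f'("{title}") as specified, without in-text citations.'
        )
        body = run_deep_research(query)
        sections.append(f"{i}. {title}\n{body}")
    # One query per section keeps each response inside the output
    # context limit; the full report is assembled locally.
    return "\n\n".join(sections)

print(build_report(OUTLINE)[:60])
```

The design point is the same as in the video: no single response has to fit the whole report, only one section at a time.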
Now, assuming we're done with all the sections and don't care about the final format yet, we're going to export it to PDF for the final verification check. The best approach is always manual fact-checking, but we can ask Perplexity to do an initial check first. Upload the final business case report and ask it to verify the accuracy of the key statistics and claims and flag them using different markers. This time we'll just use the standard Pro Search.
Then you'll see it go through all the key data claims in each section and flag them. For example, in the existing challenges section, this claim was quoted from Salesforce data, so it is verified. But in the implementation section, some are flagged as unverified. What we want to do then is highlight them in the actual report for further double-checking, or replace them with new data. This way you can streamline the manual fact-checking process.
I also suggest using other AI models, like ChatGPT, for cross-verification to see if there are any discrepancies. Then, finally, we can attach the verified report and ask o3 to generate an executive summary. This way, the generated executive summary isn't random; it maintains coherence and stays in context.
Then you can paste it back into the final report and do some more formatting and polishing to make it more complete, like adding a cover or table formatting. Using this method, you can produce a comprehensive report, just like OpenAI Deep Research. Of course, this is not a perfect solution, but you can overcome the token limit, keep the greatest control over the output quality, and generate a comprehensive report just like OpenAI's.
But the key is always to do fact-checking and have a good research plan in the first place. The best way to use these AI research agents is to understand your goal and their limitations so you can use them more strategically. After the deep research process, we can take it a step further and combine it with NotebookLM to get more meaningful insights. I share those in my community; you can find the link in the description to join. And before you go, also watch this video for more inspiration on how to prompt better with Perplexity or ChatGPT search.
I will see you next time.