If you're using Perplexity day to day like me, you know it's already pretty good at doing research. But Google just launched an AI research assistant, Deep Research, that really caught my attention. It claims to have an agentic capability that helps you create a multi-step research plan you can review and modify.
I know most of you might be thinking, "Should I switch?" So in this video, let's find out how they compare. At the end, I'll also share some of my thoughts about both tools.
Let's go. Google Deep Research is currently available exclusively through the Gemini Advanced subscription plan, so it's like an add-on feature to the existing Gemini app. Google is also planning to integrate more Gemini features, including Deep Research, for existing Google Workspace users.
As for Perplexity, the Pro plan is a standalone plan, just like Gemini Advanced. You can always upgrade from the free plan if you want. Both charge similar pricing, around $20 per month.
However, with Deep Research you can only access the Gemini model, and I expect it will be upgraded to the latest 2.0 model once that stabilizes. Meanwhile, on Perplexity, you can always switch between different advanced AI models, including trending models like DeepSeek R1 or OpenAI's o1.
This flexibility can be a strong plus if you like the option to switch between models. But note that both have usage limits. For Perplexity Pro, you have around 300 Pro searches for now.
The research limit for Deep Research is not clearly specified; Google just mentions that you'll get notified when you're close to the limit. One more thing to note: Deep Research currently doesn't support any file uploads, unlike Perplexity, which lets you build your own knowledge base by feeding it your own data relevant to the research. Alright, the first thing we want to evaluate is research efficiency, because when we do research, speed and flexibility are huge factors.
How fast can it provide answers without endless back and forth, and how flexible is it to refine our research direction? Let's try using a very simple research topic around responsible AI. I will ask both to give me the key trends in responsible AI.
To use Deep Research, make sure you select the Gemini with Deep Research mode. Then you can type your research questions. It will first generate a research plan for you.
I would say they are detailed, well-thought-out, and comprehensive research plans with valid questions, some of which weren't even in my original prompt. If you don't like the plan, you can edit it.
Though I would say the experience is less intuitive than I expected: when I click 'Edit Research Plan,' I wish I could just edit the plan directly instead of having to write another prompt. Let's try using search operators to pull only PDF files.
As you can see, Deep Research can't interpret search operators; you need to state the requirement explicitly in the prompt. On Perplexity, by contrast, there's no research plan; it pulls the resources right away based on what you mention in the prompt, no more, no less. But you can simply edit the query, using search operators if you want to adjust the scope, and it will immediately update the response for you.
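To make this concrete, here's roughly what the two prompt styles look like (the exact wording is my own illustration, not copied from either tool):
```
# Operator style: Perplexity applies this; Deep Research ignores it
key trends in responsible AI filetype:pdf

# Explicit style: what Deep Research needs instead
Give me the key trends in responsible AI, using only PDF sources
such as white papers and industry reports.
```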
On Deep Research, after you click 'Start Research,' the process is much slower. It usually takes around eight to ten minutes to fully load a response, depending on your research topic. In this case, it searched 24 sources.
The good thing is you can export the report directly to Google Docs, where it's ready for more detailed edits. The document is also well formatted, with detailed citations of the sources, which you can import into other Google products like NotebookLM if you want. On the other hand, Perplexity's response time is much faster, with 11 sources, which is not too bad.
If you don't like a particular source, you can uncheck it, flexibility you don't have in Deep Research. Additionally, I like how Perplexity numbers the sources in each key point, unlike Deep Research, making them easier to trace when you have many sources. There are also suggested follow-up questions to inspire your research direction, which Deep Research doesn't provide, so there you need to ask for follow-up question ideas yourself.
However, most of those suggestions are generic AI questions rather than specific to responsible AI. Both tools let you share the report with anyone via a link. Overall, I would say both have good research efficiency, depending on the use case and whether you value interactive searching or formal research documentation.
However, I would say Perplexity is slightly better than Deep Research in terms of efficiency, offering more flexibility to adjust the search direction and refine the scope. Another important area is source reliability, information depth, and output quality, because oftentimes we want to ensure the sources are reliable and up to date, and that the output is actually useful for our projects. Let's say this time our research topic is AI agents, definitely a trending topic right now.
Here is the research plan generated by Deep Research: we'll ask it to research the current state of AI agents, focusing on key players, capabilities, implementation approaches, and real-world applications. I will use the same plan to prompt Perplexity, something like the brief below.
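Reconstructed from that plan, the shared brief was along these lines (my paraphrase, not the exact text):
```
Research the current state of AI agents. Cover: (1) key players,
(2) current capabilities, (3) implementation approaches, and
(4) real-world applications.
```
Now, on Deep Research, a total of 64 websites are used.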
You can scroll down to the bottom for the full list, and looking at it, the sources are of good quality and from reputable brands. I've heard of most of them, like Salesforce, IBM, and Zapier, but at the same time, most seem to be service providers rather than other media or source types like discussion threads or YouTube videos. There are lots of discussions happening around AI agents on YouTube and Reddit, and even academic sources like arXiv are not being used.
Clicking on the source, you can see it will highlight the exact part it used to generate a response, just like on NotebookLM, which I really like. Most of them are up to date; some are even recently published, so source reliability is good. Also, every key point has a source as a citation, even if it is the same source, which can be a good thing if you prefer that.
But for me, I found it a bit unnecessary. On Perplexity, a total of 57 sources are used, and I like how it always details the reasoning steps, so you know its thought process. When you expand the source list, you can see it covers a variety of sources: not just big brands like Oracle, but also tech media like Tech Informed, TechCrunch, and Yahoo Finance.
Additionally, there are other social media sources, like YouTube videos discussing AI agents, LinkedIn posts, and even IBM community forums, which definitely helps in expanding the diversity of responses. They're also very timely, so I would say source reliability is also good. As for information depth, Deep Research responses are obviously more comprehensive, covering each question in the plan and well-structured with good flow and different section headings.
It also uses specific examples with data, like the real-world application section showing how customer service agents improved results by 40%. Perplexity follows a similar structure and flow, starting with the definition, then key players, capabilities, and implementation. However, as you can see, its response is more condensed and higher-level.
So I would say Deep Research has better information depth in this case. Regarding output quality, let's take a closer look. On Deep Research, I noticed that a few sections use just one source to generate the whole output, like the AI agent limitations section.
Every point there is based on a single source. And when you click through to that source, it doesn't actually discuss AI agents; it covers AI more generally, and Deep Research is just pulling the keywords. That makes me doubtful about trusting it, especially given how comprehensive the report appears.
On Perplexity, the response is more useful. Most sections draw on at least two to three sources, which reduces bias. I also found the output more meaningful, for example in the capabilities of AI agents.
Although it has just five bullet points, they're much more specific, such as Automation, Decision Making, and Collaboration, instead of pulling in every buzzword like Deep Research does. The same applies to the implementation approach: Perplexity is more useful, mentioning incremental integration using a multi-agent system, whereas Deep Research throws out generic keywords like defining objectives, data preparation, and platform selection, which read more like step-by-step checklist items than actual approaches. In terms of output quality for this particular research topic, I found Perplexity does a better job.
But to be fair, I must say that in my experience, Google Deep Research can also provide useful output and thorough analysis; it really depends on your research topic and the sources it gathers in the first place. Now, let's also evaluate context retention and cross-referencing. When we do research, it's important that the tool remembers what we've discussed and builds meaningful connections across multiple sources.
We'll use the same AI agents research topic. Our first follow-up question asks how to measure AI agent performance based on the use cases mentioned, and how to connect those metrics to the business outcomes discussed; the prompt was something like the one below.
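A rough reconstruction of that follow-up (my wording, not the exact prompt):
```
Based on the use cases mentioned above, how can we measure
AI agent performance? Connect each metric to the business
outcomes discussed earlier in the report.
```
On Deep Research, I noticed the response time is slow even for a follow-up prompt like this, taking at least a few seconds to fully load.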
Also, for the follow-up prompt, just three to five sources are used, unlike the initial research run with many more. It's similar to the normal search experience in Gemini Advanced. As for the response, Deep Research does a good job retaining context.
The metrics refer back to the specific use cases mentioned in the original report, and they are specific, like First Call Resolution and Average Speed to Answer, with explanations of how each metric links back to business impact, which I appreciate. Perplexity, however, doesn't tie its answer back to the use cases: it lists metrics similar to what Deep Research presented, but without closely connecting them to all the earlier use cases. Its metrics are also less specific; for instance, the finance use case mentions task completion rate and revenue growth, which feel somewhat generic to me.
Therefore, I would say Deep Research does a better job at context retention. Let's try another question to test their cross-referencing ability. This time, I will ask them to compare the claimed capabilities of AI agents against initial implementation results and early real-user feedback, and to identify gaps and contradictions between marketed features and real-world performance.
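Again, roughly reconstructed (my paraphrase):
```
Compare the claimed capabilities of AI agents with initial
implementation results and early real-user feedback. Identify
any gaps or contradictions between marketed features and
real-world performance.
```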
For Deep Research, I like how it first summarizes the overstated capabilities and organizes the information into clear sections. However, I noted that the analysis tends to be general, without specific examples to back up the claims. And regarding early user feedback, the sources are over two years old, so most of them offer only high-level analysis.
For Perplexity, I found the response more specific. It identifies the gaps between marketing claims and real-world performance one by one, drawing on case studies and highlighting contradictions with specific examples. Although it still lacks the real user feedback I would have expected, I would say the cross-referencing ability is better in Perplexity for this use case.
It used more sources than Deep Research for every follow-up prompt, 19 in total, giving it multiple views and letting it cross-validate claims across different sources better. Alright, rating time. Both tools can provide reliable sources and answer your research questions.
Of course, you always need to fact-check yourself, no matter which one you use. For Deep Research, the initial research plan is always detailed and comprehensive, and its ability to maintain context through the chat is better. But for now, it lacks the flexibility to refine the research plan and direction unless you spend time tweaking your prompt and giving it really specific instructions.
And every follow-up prompt uses the standard Gemini search experience with only a few sources, unless you initiate a new research run. Also, even though it can search a lot of sources, sometimes over 100 websites, does that mean the quality is always better? In fact, I didn't find the output quality consistently improved.
Furthermore, in some cases it skews toward big-brand websites, which limits source diversity. I also found it not so efficient if you just want quick, high-level analysis, as every search takes more time compared with Perplexity. So, does it really save you lots of time?
At the planning stage, maybe. But across the whole project, from start to finish, I doubt it. As for Perplexity, it's more efficient, as the response time is always faster.
It's also easier to tweak the search plan and narrow the scope with its different focus modes. What I like is the diversity of the sources and the ability to cross-reference. So even though it may not pull as many sources as Deep Research does initially, I found the output quality somehow more insightful, thanks to its search flexibility combined with detailed follow-up searches.
Another big plus for me is that you can switch models; you can even use reasoning models like OpenAI's o1 or DeepSeek R1 to maximize output quality. Ultimately, I still find Perplexity doing a slightly better job than Google Deep Research overall.
I'm not saying Deep Research is bad; I can see it has high potential, given Google's huge website database and the improving Gemini model. But for now, I just don't find it super impressive. It may be more suitable for academic research that requires lots of comprehensive citations, or any research that needs formal, detailed documentation and formatting.
Another thing: I know some people think of Deep Research as an AI search engine. But if I were Google, why would I position Deep Research as an AI search engine, especially since Google already surfaces AI Overviews on its search results page? Instead, it might make more sense if Google integrated Deep Research as an extra function in regular search, similar to Pro Search on Perplexity.
Both tools have different purposes and positioning, although they have similar functions. Of course, you can always combine them with other tools like NotebookLM into your research workflow. I'd imagine both of them would work well.
If you want more inspiration, there are some powerful research techniques that you can use on both tools to get higher quality insights faster. I share them on my community; you can find the link in the description to join. If you prefer Perplexity for now, also watch this video about how to use it with NotebookLM to speed up your research process.
I'll see you next time.