Discovered but Not Indexed: What It Means and How to Fix It
You check Google Search Console and see pages stuck as 'Discovered, currently not indexed.' Google knows these pages exist but has not crawled or stored them. This guide explains exactly why it happens and how to fix it.
What 'Discovered, currently not indexed' means
When Google Search Console shows a page as "Discovered, currently not indexed," it means Google found the URL, usually through your sitemap or internal links, but has not crawled it yet. The page is in Google's queue, but Google has decided it is not worth visiting right now.
This is different from "Crawled, currently not indexed," where Google actually visited the page but chose not to store it. With "Discovered, not indexed," Google has not even looked at the content yet. It is a priority problem, not a quality problem.
Understanding this distinction is critical because the fixes are completely different. Our technical SEO guide covers the broader indexing landscape, and our article on why pages are not indexed walks through every common cause. This guide focuses specifically on the "Discovered" status and how to resolve it.
Discovered vs Crawled: why the difference matters
Discovered, not indexed
Google found the URL but has not visited it yet. This is a priority problem. Fix: improve internal linking and site authority.
Crawled, not indexed
Google visited the page but chose not to store it. This is a quality problem. Fix: improve content depth and uniqueness.
Google Search Console shows several indexing statuses. The two that confuse people most are closely related but have different root causes.
- Discovered, currently not indexed: Google found the URL but has not crawled it. Google is deprioritizing it in the crawl queue. The problem is getting Google to visit.
- Crawled, currently not indexed: Google visited the page, read the content, and decided not to store it. The problem is content quality, uniqueness, or value.
If your page is stuck in the "Discovered" state, you do not need to rewrite the content. You need to make Google think the page is worth crawling. That comes down to signals like internal links, site authority, crawl budget, and how Google perceives the overall quality of your domain.
Check your URL in Search Console's URL Inspection tool. If it says "Discovered, currently not indexed," the page has never been crawled. If it says "Crawled, currently not indexed," the problem is different and requires content-level fixes.
Why Google discovers but does not index your pages
There are several common reasons Google leaves pages in the "Discovered" queue without crawling them.
Crawl budget limitations
Every site gets a finite amount of crawl budget. Google allocates crawl resources based on how important it thinks your site is. If your site has thousands of pages but low authority, Google simply cannot crawl everything. Lower-priority pages get stuck in the queue.
Low site authority
New or small websites with few backlinks get less frequent crawling. Google has limited resources and prioritizes sites that have proven their value. If your domain is new, Google is cautious about how much time it spends on your content.
Weak internal linking
Pages buried deep in your site structure with few internal links appear less important to Google. If a page can only be reached through a sitemap but has no contextual links from other pages, Google deprioritizes it.
Too many URLs submitted at once
Publishing hundreds of pages quickly or submitting a massive sitemap can overwhelm your crawl budget. Google cannot process everything at once, so it queues many of those URLs and works through them gradually.
Server performance issues
If your server responds slowly or returns errors during crawling, Google throttles the crawl rate. This means fewer pages get crawled per session, and lower-priority pages stay in the queue longer.
Duplicate or near-duplicate content signals
If Google suspects a page might be similar to content it has already indexed, it may deprioritize crawling it. This is common with product variations, filtered listing pages, or auto-generated content with little variation.
How to diagnose the problem
Before fixing anything, you need to understand the scope and pattern of the problem on your specific site.
Check the Pages report in Search Console
Go to Indexing > Pages and filter for 'Discovered, currently not indexed.' Note how many pages are affected and whether the number is growing, shrinking, or stable over time.
Identify patterns in affected URLs
Look at which URLs are stuck. Are they all from one section of the site? Are they newly published? Are they parameter URLs or paginated pages? Patterns reveal the root cause.
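A quick way to surface these patterns is to group the stuck URLs by their first path segment. The sketch below assumes you have exported the affected URLs from the Pages report; the URLs shown are illustrative.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical list of URLs exported from the
# "Discovered, currently not indexed" report in Search Console.
stuck_urls = [
    "https://example.com/blog/post-1",
    "https://example.com/blog/post-2",
    "https://example.com/products/widget?color=red",
    "https://example.com/products/widget?color=blue",
    "https://example.com/tags/misc",
]

def first_segment(url: str) -> str:
    """Return the first path segment, e.g. 'blog' for /blog/post-1."""
    path = urlparse(url).path.strip("/")
    return path.split("/")[0] if path else "(root)"

# Count stuck URLs per site section and flag parameter URLs.
sections = Counter(first_segment(u) for u in stuck_urls)
params = sum(1 for u in stuck_urls if urlparse(u).query)

for section, count in sections.most_common():
    print(f"{section}: {count} stuck URLs")
print(f"URLs with query parameters: {params}")
```

If one section dominates the output, or most stuck URLs carry query parameters, you have found where to focus your internal linking and pruning work.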
Check internal links to affected pages
Use the Links report in Search Console or a crawl tool to see how many internal links point to the stuck pages. Pages with zero or very few internal links are the most likely to be deprioritized.
Review your sitemap
Make sure the affected pages are in your XML sitemap. Also check that your sitemap does not include thousands of low-value URLs that dilute the signal for your important pages.
Test server response times
If your server is slow, Google will crawl fewer pages per visit. Use tools like PageSpeed Insights or your server logs to check response times during Google's crawling windows.
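If you have access to server logs, you can check response times specifically for Googlebot's requests. This sketch assumes a log format where the response time in milliseconds is the last field; adjust the parsing to match your server's actual format.

```python
from statistics import mean

# Hypothetical access-log lines; the last field is assumed to be
# the response time in milliseconds.
log_lines = [
    '66.249.66.1 "GET /blog/post-1" 200 "Googlebot" 180',
    '66.249.66.1 "GET /blog/post-2" 200 "Googlebot" 950',
    '203.0.113.5 "GET /about" 200 "Mozilla/5.0" 120',
    '66.249.66.1 "GET /products/widget" 200 "Googlebot" 700',
]

# Keep only Googlebot requests and extract the timing field.
googlebot_times = [
    int(line.rsplit(" ", 1)[1])
    for line in log_lines
    if "Googlebot" in line
]

print(f"Googlebot requests: {len(googlebot_times)}")
print(f"Average response time: {mean(googlebot_times):.0f} ms")
```

Consistently slow responses to Googlebot, even when responses to regular visitors look fine, are a sign that Google is throttling its crawl of your site.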
Look at the date in the URL Inspection tool. If Google discovered the page weeks or months ago and still has not crawled it, the deprioritization is significant. Recently discovered pages may just need more time.
How to fix 'Discovered, currently not indexed'
The goal is to send stronger signals that your pages are worth crawling. Here are the most effective fixes, ordered by impact.
Strengthen internal linking
Add contextual internal links from your most visited, already-indexed pages to the stuck pages. This is the single most effective fix. Google follows links from important pages, and a strong internal link tells Google that the destination page matters.
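A contextual link means a descriptive anchor placed inside the body of a related, already-indexed page, not a link buried in a footer or sidebar. The URLs and anchor text below are illustrative:

```html
<!-- Inside the body of a high-traffic, indexed article.
     URL and anchor text are hypothetical examples. -->
<p>
  If new posts are taking weeks to appear in search results, start with our
  guide to <a href="/blog/fix-discovered-not-indexed">fixing pages stuck as
  "Discovered, currently not indexed"</a>.
</p>
```

Descriptive anchor text also tells Google what the destination page is about before it crawls it.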
Improve your site's overall quality
Google allocates more crawl budget to sites it trusts. Remove or consolidate thin, duplicate, or outdated pages. Every low-quality page on your site reduces the crawl budget available for your important content.
Request indexing in Search Console
Use the URL Inspection tool to request indexing for your most important stuck pages. This is not a permanent fix, but it can bump individual pages up in the queue. Google limits how many requests you can make per day.
Reduce sitemap bloat
Only include pages in your sitemap that you genuinely want indexed. If your sitemap has thousands of parameter URLs, tag pages, or thin pages, Google has to evaluate all of them. Clean your sitemap to focus on your valuable content.
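A clean sitemap lists only canonical, index-worthy URLs. The fragment below is a minimal example following the sitemaps.org protocol; the URLs are placeholders.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Include only canonical pages you genuinely want indexed. -->
  <url>
    <loc>https://example.com/blog/post-1</loc>
    <lastmod>2024-05-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/products/widget</loc>
    <lastmod>2024-04-18</lastmod>
  </url>
  <!-- Omit parameter URLs, tag pages, and thin pages entirely. -->
</urlset>
```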
Improve server performance
Faster server response times mean Google can crawl more pages per session. Optimize your hosting, enable caching, and reduce server-side processing time. Even small improvements compound over thousands of crawl requests.
Publish content consistently
Sites that publish regularly get crawled more frequently. If Google sees that your site is actively updated, it comes back more often and works through the crawl queue faster.
Our internal linking guide covers how to build a linking structure that helps Google discover and prioritize your content. Strong internal links are the most reliable way to move pages out of the "Discovered" queue.
Understanding crawl budget and why it matters
Crawl budget is the number of pages Google will crawl on your site within a given time period. It is determined by two factors: crawl rate limit, which is how fast Google can crawl without overloading your server, and crawl demand, which is how much Google wants to crawl based on the perceived value of your content.
For small sites with under a few hundred pages, crawl budget is rarely an issue. But for larger sites, or sites with many auto-generated or low-value URLs, crawl budget becomes the bottleneck that keeps pages stuck as "Discovered." To protect your crawl budget:
- Remove or noindex pages that add no value (empty categories, duplicate filters, old tag pages)
- Fix redirect chains that waste crawl resources
- Ensure your robots.txt is not blocking important resources that Google needs to render pages
- Use canonical tags correctly to consolidate duplicate content
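The noindex and canonical directives from the list above are set in the page's head. The URLs here are illustrative:

```html
<!-- On a low-value page you want kept out of the index entirely: -->
<meta name="robots" content="noindex">

<!-- On a filtered or duplicate variant, point Google at the
     canonical version instead: -->
<link rel="canonical" href="https://example.com/products/widget">
```

Note that a page must be crawlable for Google to see a noindex tag, so do not combine noindex with a robots.txt block on the same URL.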
Our robots.txt guide explains how to configure crawl directives properly so you are not accidentally wasting your crawl budget on pages that should not be crawled.
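As a sketch, a robots.txt that keeps crawlers out of parameter-driven filter pages might look like this; the paths and parameter names are hypothetical and should match your own URL structure:

```txt
User-agent: *
# Block faceted filter URLs that waste crawl budget
Disallow: /*?color=
Disallow: /*?sort=
# Block internal search result pages
Disallow: /search

Sitemap: https://example.com/sitemap.xml
```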
Common mistakes that make it worse
Many site owners try to fix this status and end up making things worse. Avoid these common mistakes.
- Spamming the Request Indexing button: Google limits daily requests and repeated submissions do not help. Requesting indexing once is fine. Doing it every day for the same URL is wasted effort.
- Adding more pages hoping for more traffic: Publishing more thin content when your existing pages are not getting crawled only makes the problem worse. Fix the crawl budget issue before scaling content.
- Ignoring low-quality pages: Old, outdated, or thin pages that you have forgotten about still consume crawl budget. Audit your site regularly and remove or consolidate content that is not serving any purpose.
- Relying only on sitemaps: A sitemap tells Google about your URLs, but it does not guarantee crawling. Sitemaps are suggestions, not commands. Internal links are far more effective at driving crawl priority.
How long does it take to fix?
There is no single answer. The timeline depends on your site's authority, the number of affected pages, and how significant the underlying issue is.
- Small sites with a few stuck pages: Days to a few weeks after adding internal links and requesting indexing
- Medium sites with a crawl budget issue: 2 to 6 weeks after cleaning up low-quality pages and improving internal linking
- Large sites with thousands of stuck URLs: 1 to 3 months of consistent improvement, including content pruning, server optimization, and link restructuring
If you are dealing with a new website with no traffic, expect the timeline to be on the longer end. New sites need time to build the trust signals that earn faster crawling. Understanding how long SEO takes helps set realistic expectations.
How to monitor progress
Once you have made changes, track whether pages are moving out of the "Discovered" status.
Check the Pages report weekly
Go to Indexing > Pages in Search Console and watch the 'Discovered, currently not indexed' count. A declining number means your fixes are working.
Use URL Inspection for specific pages
Check your most important stuck pages individually. Look for changes in the last crawl date. If Google has crawled a page that was previously only 'Discovered,' your internal linking changes are taking effect.
Watch your indexed page count
The total number of indexed pages in Search Console should gradually increase as pages move from Discovered to Indexed. If the number is flat despite publishing new content, the underlying issue is not resolved.
Be patient. Changes to crawl behavior take time. Google does not re-evaluate your entire site overnight. Give your fixes at least 2 to 4 weeks before evaluating whether they are working.
How RankSEO helps with discovery and indexing
Tracking which pages are stuck as "Discovered" and figuring out why is tedious when done manually. RankSEO automates the diagnostic process.
- Identifies pages with weak internal linking, crawl budget waste, and indexing issues across your entire site through its site audit features
- Flags pages stuck in the 'Discovered' state and surfaces the likely cause
- Recommends internal linking opportunities to boost crawl priority for important pages
- Monitors your indexing health over time and alerts you when pages drop out or get stuck
Instead of manually checking every URL in Search Console, RankSEO gives you a prioritized list of what to fix first. Explore RankSEO's features or check out our pricing plans to get started.
Frequently Asked Questions
What does "Discovered, currently not indexed" mean?
It means Google found the URL, usually through your sitemap or internal links, but has not crawled it yet. Google knows the page exists but has decided it is not a priority to visit right now.
Is it the same as "Crawled, currently not indexed"?
No. "Discovered, not indexed" means Google has not even visited the page yet. "Crawled, not indexed" means Google visited it and decided not to store it. The first is a priority problem. The second is a quality problem.
How long does it take for stuck pages to get indexed?
It depends on your site's authority and crawl budget. Some pages move from Discovered to Indexed within days. Others stay stuck for weeks or months. Adding strong internal links and requesting indexing can speed up the process.
Does requesting indexing in Search Console fix it?
It can help for individual pages, but it is not a scalable solution. Google limits daily requests. The real fix is improving internal linking, cleaning up low-value pages, and building site authority so Google naturally prioritizes your content.
Can having too many pages cause this status?
Yes. If your site has more URLs than your crawl budget can handle, Google queues the lower-priority ones. Publishing hundreds of pages quickly or having a bloated sitemap with low-value URLs makes this worse. Focus on quality over quantity.
Should I delete the stuck pages?
Only if the pages are genuinely low-value. If the content is good and serves a purpose, keep it and work on improving internal links and site authority. If the pages are thin, duplicated, or no longer relevant, removing or consolidating them frees up crawl budget for your important content.
Continue reading
Technical SEO Guide
Make your site crawlable and fast
Read guide
Why Pages Are Not Indexed by Google
Learn why your pages are not indexed by Google and how to fix common indexing issues step by step.
Read guide
Robots.txt Guide
Learn how to use robots.txt for SEO with simple examples. Control how search engines crawl your site.
Read guide