SEO Guide
Why My Pages Are Not Indexed by Google (And How to Fix It)
You published a page, waited, and searched for it on Google. Nothing. If your pages are not showing up in search results, the problem is almost always diagnosable and fixable. This guide walks through every common cause and exactly what to do about each one.
Why indexing matters more than you think
If Google has not indexed your page, it does not exist in search results. No amount of keyword optimization, link building, or content quality matters if Google never adds the page to its database.
Indexing is the foundation of everything in SEO. Before your page can rank for anything, Google needs to discover it, crawl it, evaluate it, and decide it is worth storing. When that process breaks down, your content stays invisible.
The good news is that most indexing problems trace back to a small set of common causes. This guide covers each one, shows you how to diagnose it, and gives you a clear fix.
How Google indexes a page

1. Discover: Google finds the URL via links or your sitemap.
2. Crawl: Googlebot visits and reads the page content.
3. Evaluate: Google assesses the page's quality and relevance.
4. Index: Google stores the page in its database.
What does 'not indexed' actually mean?
There is an important difference between crawling and indexing that most people overlook.
- Crawling is when Googlebot visits your page, downloads the HTML, and processes the content. It is the discovery phase.
- Indexing is when Google decides the page is valuable enough to store in its database and potentially show in search results.
A page can be crawled but not indexed. This happens when Google visits the page but decides it is not worth including. In Google Search Console, you will see statuses like "Discovered, currently not indexed" or "Crawled, currently not indexed." Both mean Google knows about your page but has chosen not to store it.
These two statuses have different root causes. "Discovered" means Google found the URL but has not even bothered to crawl it yet, often a priority or crawl budget issue — our dedicated Discovered but Not Indexed guide covers this in detail. "Crawled, not indexed" means Google looked at the page and decided to skip it, usually a quality or duplication problem.
A page showing impressions in Search Console is indexed. If you see zero impressions and the URL Inspection tool says "not on Google," that page has not been indexed.
Your page is new and has not been discovered yet
Google does not index pages instantly. If your page was published recently, Google may simply not have found it. This is especially common on newer websites with few backlinks and limited crawl history.
How to check: Open Google Search Console, go to URL Inspection, and paste the page URL. If it says "URL is not on Google" and has never been crawled, the page has not been discovered.
How to fix it:
- Submit the URL through Search Console's URL Inspection tool using Request Indexing
- Make sure the page is in your XML sitemap
- Add internal links from existing indexed pages to the new page

The more connected your new page is to the rest of your site, the faster Google will find it.
A noindex tag is blocking indexing
A noindex meta tag tells Google explicitly not to index the page. This is sometimes added intentionally for staging pages or thank-you pages, but it can also be set accidentally by CMS settings, SEO plugins, or theme configurations.
How to check: View the page source and search for "noindex." Look for a meta robots tag like <meta name="robots" content="noindex">. The URL Inspection tool in Search Console will also flag noindex directives.
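If you want to script this check, here is a minimal sketch using only Python's standard library. It scans a page's HTML for a robots meta tag containing a noindex directive; the HTML string is a stand-in for a page you would fetch from your own site.

```python
from html.parser import HTMLParser

class NoindexChecker(HTMLParser):
    """Scans HTML for a <meta name="robots"> (or googlebot) tag
    whose content attribute includes 'noindex'."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() in ("robots", "googlebot"):
            if "noindex" in a.get("content", "").lower():
                self.noindex = True

checker = NoindexChecker()
checker.feed('<html><head><meta name="robots" content="noindex, follow"></head></html>')
print(checker.noindex)  # True
```

In practice you would feed the checker the HTML returned for your URL; remember to check the rendered source, since some plugins inject the tag with JavaScript.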
How to fix it:
- Remove the noindex tag from the page's HTML or CMS settings
- If using WordPress, check your SEO plugin's settings for the specific page and the global defaults
- After removing the tag, request re-indexing through Search Console
- Double-check that your staging environment is not pushing noindex tags to production
Some CMS platforms add noindex to new pages by default. Always verify the robots meta tag after publishing, especially if you recently changed themes or plugins.
Robots.txt is blocking the crawler
Your robots.txt file can prevent Googlebot from crawling specific pages or entire directories. If a page is blocked by robots.txt, Google cannot access it and therefore cannot index it.
How to check: Visit your site's robots.txt file and look for Disallow rules that match your page's URL path. The robots.txt report in Search Console (under Settings) also shows which robots.txt files Google has fetched and any errors it found.
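You can also test Disallow rules locally with Python's built-in robots.txt parser. This sketch uses hypothetical rules and URLs; in practice you would load your live robots.txt file.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; in practice, fetch yoursite.com/robots.txt
robots_txt = """\
User-agent: *
Disallow: /private/
Disallow: /tmp/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Check whether Googlebot may crawl specific URLs
print(parser.can_fetch("Googlebot", "https://example.com/blog/post"))     # True
print(parser.can_fetch("Googlebot", "https://example.com/private/page"))  # False
```

This mirrors what the Search Console report tells you, but lets you test many URLs at once before deploying a robots.txt change.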
How to fix it:
- Remove or modify the Disallow rule blocking the page
- Make sure you are not accidentally blocking important directories
- Remember that robots.txt blocks crawling, not indexing. If other sites link to a blocked page, Google might still index the URL without content
For a complete walkthrough of robots.txt syntax and common mistakes, see our robots.txt guide.
Thin or low-quality content
Google does not index every page it crawls. If the content is too thin, too generic, or does not provide enough value compared to what already exists, Google may skip it entirely.
Pages with only a few sentences, auto-generated content with no real substance, or articles that restate what dozens of other pages already cover are common candidates for this.
How to check: Compare your page to the top-ranking results for your target keyword. If your page is noticeably shorter, less detailed, or less helpful, Google may be filtering it out. In Search Console, the status "Crawled, currently not indexed" often points to a quality issue.
How to fix it:
- Expand the content to fully cover the topic. This does not mean adding filler. It means answering every question a reader would have.
- Add original insights, examples, data, or perspectives that competing pages do not offer.
- Improve readability with clear headings, short paragraphs, and a logical structure.
Our content optimization guide walks through a step-by-step process for improving existing content that Google is not indexing.
Duplicate or very similar content
If Google finds multiple pages on your site with the same or very similar content, it may choose to index only one version and ignore the rest. This is common on e-commerce sites with filtered pages, blogs with tag and category archives, or sites that publish slight variations of the same article.
How to check: Search for a unique sentence from your page in Google using quotes. If a different URL from your site appears instead, Google is treating that version as the canonical. In Search Console, look for "Duplicate without user-selected canonical" or "Duplicate, Google chose different canonical."
How to fix it:
- Set a canonical tag pointing to the preferred version of the page
- Consolidate similar pages into one comprehensive page where possible
- Use 301 redirects for pages that should no longer exist separately
- Avoid publishing multiple pages targeting the same keyword with the same angle
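For reference, a canonical tag is a single line in the page's head section. The URL here is a placeholder; every duplicate or variant page should carry a tag pointing at the one version you want indexed.

```html
<!-- In the <head> of each duplicate or variant page -->
<link rel="canonical" href="https://example.com/preferred-page/" />
```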
Poor internal linking
If a page has no internal links pointing to it, Google may not discover it or may consider it unimportant. Pages buried deep in your site structure with no connections to other content are often the last to be crawled and the first to be dropped from the index.
Internal links are one of the strongest signals you can send to Google about which pages matter. A page with zero internal links is effectively invisible to both crawlers and users.
How to check: Use Google Search Console's Links report to see how many internal links point to the page. If the number is zero or very low, that is likely contributing to the problem.
How to fix it:
- Add internal links from your most important, already-indexed pages to the unindexed page
- Use descriptive anchor text that includes relevant keywords
- Aim for at least 3 to 5 internal links per page
- Make sure every important page is reachable within 3 clicks from your homepage
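The 3-click rule is easy to audit with a breadth-first search over your internal link graph. This is a minimal sketch; the link map is a hypothetical miniature site, and in practice you would build it from a crawl of your own pages.

```python
from collections import deque

def click_depths(links, start="/"):
    """BFS from the homepage over the internal link graph.
    `links` maps each page URL to the pages it links to.
    Returns the minimum click depth of every reachable page."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

# Hypothetical internal link map
site = {
    "/": ["/blog/", "/pricing/"],
    "/blog/": ["/blog/post-a/", "/blog/post-b/"],
    "/blog/post-a/": ["/blog/deep-post/"],
}

depths = click_depths(site)
orphans = [p for p in site if p not in depths]          # pages never reached from "/"
too_deep = [p for p, d in depths.items() if d > 3]      # pages beyond 3 clicks
```

Any URL missing from `depths` is an orphan page that Googlebot cannot reach by following links, and anything in `too_deep` is a candidate for more internal links.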
For a full strategy on how to build an effective internal linking structure, see our internal linking guide.
Crawl budget issues
Google allocates a limited number of pages it will crawl on your site during each visit. This is your crawl budget. For small sites with under a few hundred pages, this is rarely a problem. But for larger sites with thousands of pages, Google may not get to every page during each crawl cycle.
How to check: In Search Console, go to Settings and then Crawl Stats. Look at how many pages are crawled per day. If Google is crawling fewer pages than you have, crawl budget may be a factor.
How to fix it:
- Remove or noindex low-value pages so Google spends its budget on pages that matter
- Fix crawl errors that waste Googlebot's time, like broken links, redirect chains, and server errors
- Improve site speed so Google can crawl more pages in less time
- Submit an updated sitemap that only includes pages you actually want indexed
Technical errors blocking indexing
Several technical issues can prevent Google from indexing your pages, even when everything else looks fine.
- 404 errors. The page returns a "not found" status. Google will not index pages that do not exist.
- Server errors (5xx). If your server returns errors when Googlebot visits, it cannot access the content. Repeated server errors cause Google to crawl your site less frequently.
- Redirect chains or loops. Multiple redirects in a row or circular redirects confuse crawlers and waste crawl budget. Google may give up before reaching the final page.
- Incorrect canonical tags. If the canonical tag points to a different URL, Google will try to index that URL instead and ignore yours.
- Slow server response. If your server takes too long to respond, Googlebot may abandon the crawl entirely.
How to check: Use the URL Inspection tool in Search Console to see the exact status Google received when it last visited the page. Check for errors, redirects, or canonical issues.
How to fix it:
- Fix 404 errors by restoring the page or setting up a 301 redirect to a relevant alternative
- Resolve server errors with your hosting provider
- Simplify redirect chains to a single 301 redirect
- Audit canonical tags to make sure they point to the correct URL
- Optimize server response time to under 200ms where possible
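As a sketch of the redirect-chain rule above, here is a small helper (the function name and messages are illustrative) that classifies the path of hops a crawler follows from a requested URL to its final destination.

```python
def audit_redirect_chain(hops):
    """Classify a crawl path. `hops` is the ordered list of URLs the
    crawler followed, from the requested URL to the final destination."""
    if len(set(hops)) < len(hops):
        return "redirect loop: a URL repeats in the chain"
    redirects = len(hops) - 1
    if redirects <= 1:
        return "ok"  # direct hit, or a single clean 301
    return f"chain of {redirects} redirects: collapse to a single 301"

print(audit_redirect_chain(["https://a.example/old", "https://a.example/new"]))
# ok
print(audit_redirect_chain([
    "https://a.example/old",
    "https://a.example/interim",
    "https://a.example/new",
]))
```

You could feed this the redirect trail reported by any crawler or by following Location headers yourself; the point is that anything beyond one hop deserves cleanup.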
How to check if your page is indexed

- Quick check: a site: search. Search site:yoursite.com/url on Google.
- Reliable check: URL Inspection. Use Google Search Console for a definitive answer.
- Bulk check: the Pages report, under Indexing > Pages in Search Console.
Before fixing anything, confirm whether the page is actually indexed. There are two reliable methods.
The site: search method
Type site:yoursite.com/page-url into Google. If the page appears in results, it is indexed. If nothing shows up, it is not. This is quick but not always perfectly accurate for recently indexed pages.
Google Search Console URL Inspection
Paste the full URL into the URL Inspection tool. It will tell you whether the page is indexed, when it was last crawled, whether there are any issues, and what the rendered page looks like to Google. This is the most reliable method.
Always use the URL Inspection tool for a definitive answer. The site: search method can sometimes lag behind by hours or even days.
How to get your pages indexed faster
Once you have fixed any blocking issues, here are the most effective ways to speed up indexing.
Submit your sitemap
Make sure your XML sitemap is submitted in Search Console and includes all pages you want indexed. Update it whenever you publish new content.
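A minimal XML sitemap looks like this (the URLs and date are placeholders). Every URL you list should return a 200 status and be a page you actually want indexed.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog/new-post/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://example.com/pricing/</loc>
  </url>
</urlset>
```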
Build strong internal links
Link to new pages from your most visited, already-indexed pages. This is the single most effective way to get Google to discover and prioritize new content.
Update and republish content
Google re-crawls pages that change. Updating a page with fresh content signals that it is active and worth revisiting.
Build topical authority
Sites that consistently publish high-quality content on a specific topic get crawled more frequently. Google learns to trust your site and indexes new pages faster over time.
Avoid publishing thin pages
Every low-quality page dilutes your crawl budget and your site's overall quality signal. Fewer, stronger pages always beat a flood of content Google will ignore.
Targeting low-competition keywords helps new pages gain traction faster. When competition is weak, Google is more likely to index and rank your content quickly because fewer high-quality alternatives exist.
Understanding search intent also plays a role. Pages that precisely match what searchers are looking for get better engagement signals, which reinforces Google's decision to keep them indexed.
How RankSEO helps with indexing issues
Many indexing problems come down to content quality, internal linking gaps, and technical oversights that are hard to spot manually, especially as your site grows.
RankSEO's site audit features automatically identify pages with thin content, missing internal links, and technical errors that prevent indexing. It also:

- Flags pages that Google is crawling but not indexing so you know exactly what to fix
- Surfaces internal linking opportunities you may have missed, helping Google discover pages faster
- Monitors your indexing status over time and alerts you when pages drop out of the index
Instead of manually checking every URL in Search Console, RankSEO gives you a prioritized list of indexing issues with the biggest impact. If you are ready to fix your indexing problems and get your content in front of searchers, explore RankSEO's features or check out our pricing plans to get started.
Frequently Asked Questions
How long does it take for Google to index a new page?

It varies. Some pages get indexed within hours, while others take days or weeks. New sites with little authority typically wait longer. Submitting the URL through Search Console and adding internal links from indexed pages can speed up the process significantly.

Why is my entire site not being indexed?

If Google is not indexing your site at all, check for a sitewide noindex tag, a robots.txt file blocking Googlebot, or server errors preventing crawling. Make sure your site is verified in Google Search Console and that a sitemap has been submitted.

Can I force Google to index my pages?

You can use the Request Indexing button in Search Console for important pages, but it is not scalable. Focus on fixing the root cause, whether that is poor internal linking, missing sitemaps, or content quality. Google should discover and index your pages naturally if your site is well-structured.

Can Google crawl a page but choose not to index it?

Yes. Google evaluates whether a page adds enough value to be included in the index. Pages with very little content, content that closely matches other pages, or content that does not match any real search intent are often crawled but not indexed.

What percentage of my pages should be indexed?

There is no universal ratio. The goal is to have all of your valuable, unique pages indexed and to keep low-value pages like empty tags, duplicates, and admin pages out of the index. Compare the number of indexed pages in Search Console to the number of pages in your sitemap to spot gaps.

What is the difference between "Discovered, currently not indexed" and "Crawled, currently not indexed"?

"Discovered, currently not indexed" means Google knows the URL exists but has not crawled it yet, often a crawl budget or priority issue. "Crawled, currently not indexed" means Google visited the page but decided not to index it, usually a content quality or duplication problem. The fixes are different for each.
Continue reading
Technical SEO Guide
Make your site crawlable and fast
Robots.txt Guide
Learn how to use robots.txt for SEO with simple examples. Control how search engines crawl your site.
Internal Linking Guide
Learn how internal linking improves SEO, rankings, and site structure with practical strategies.