
Updated on Mar 17, 2026
"Discovered – currently not indexed" in Google Search Console means Google has found your URL through internal links or a sitemap but has not crawled it yet — so it has no last crawl date and cannot be indexed. In 2026, fixing this status carries higher stakes than ever: Google's May 2025 quality review actively removed significant page volumes from its index, AI Overviews now appear on up to 48% of tracked queries, and unindexed pages cannot appear in AI Overviews because Google's AI system draws exclusively from indexed content. The diagnosis and fix process flows through five stages: confirm severity, identify the root cause pattern, execute the highest-impact fix first, validate with URL Inspection, and monitor trend data over weeks. This guide covers the full methodology — and explains why keeping your indexed content AI-visible requires a second layer of monitoring that GSC cannot provide: Dageno AI.
Every day a page remains in the "Discovered – currently not indexed" state is a day it is invisible to two audiences simultaneously: users finding information through traditional organic search, and users receiving AI-generated answers.
Google's May 2025 quality review actively removed a significant volume of pages from its index, according to Marie Haynes' analysis of the deindexation pattern. Pages that survived the quality review are competing for AI Overview citations against a smaller, higher-quality indexed population — making each indexed page more valuable than before and each unindexed page more costly.
AI Overviews now appear on approximately 21% of all Google searches according to Safaridigital, and on up to 48% of tracked informational queries as of early 2026 per ALM Corp's industry analysis, with informational queries triggering them at approximately 57.9%. Google's AI system draws exclusively from indexed content — a page in "Discovered – currently not indexed" status cannot appear in an AI Overview for any query, regardless of how relevant its content is.
Fixing "Discovered – currently not indexed" is no longer only a traditional SEO task. It is a prerequisite for AI search visibility.
Google describes this status clearly in its Page Indexing documentation:
"Typically, Google wanted to crawl the URL but this was expected to overload the site; therefore Google rescheduled the crawl."
This is why the last crawl date is usually empty for affected URLs. Google found the URL — through internal links, your XML sitemap, or other discovery mechanisms — but has not fetched its content yet. The page exists in Google's discovery queue but has not entered the crawl or indexing pipeline.
This status is not automatically an error. For small numbers of low-priority or non-essential URLs, remaining in the discovery queue temporarily is normal. The status becomes a problem when:

- important commercial, informational, or AI-target pages are affected
- the count of affected URLs is growing over time
- affected URLs share a clear pattern that suggests a systematic root cause
These two statuses point to different stages in Google's pipeline and require different diagnostic approaches:
| | Discovered – Currently Not Indexed | Crawled – Currently Not Indexed |
|---|---|---|
| What it means | Google found the URL but hasn't crawled it | Google crawled but did not index the content |
| Last crawl date | Usually empty | Shows a crawl date |
| Primary issue | Crawl priority, crawl budget, discovery | Content quality, duplication, thin value |
| First check | Internal links, sitemap quality, URL sprawl | Uniqueness, usefulness, deduplication |
| Requests indexing? | Only after fixing crawl priority issues | Usually no — fix content quality first |
A page can move from "Discovered" to "Crawled – currently not indexed" as Google eventually fetches it but decides not to index it. If this happens, stop treating it as a discovery problem and switch to a content quality and deduplication diagnosis.
Before any fixes, determine whether you have a minor queue state or a genuine indexing problem.
Step 1 — Confirm the URL is still unindexed. GSC reports are not always fully up to date. Open URL Inspection, paste the affected URL, and check the current indexing status and last crawl date. If the page is already indexed or recently crawled, it may be resolving without intervention.
Step 2 — Check the scale. Fewer than 10 affected URLs on a large site can be noise. Hundreds of affected URLs on a mid-size site typically signals a systematic issue — crawl budget waste, sitemap bloat, duplicate URL patterns, or server load constraints.
Step 3 — Sample affected URLs. Manually review 20–30 affected URLs. Do they share characteristics — URL parameters, faceted navigation patterns, thin content, seasonal pages, near-duplicate variations? The pattern reveals the root cause.
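The pattern review in Step 3 can be partially automated. A minimal sketch, assuming you have exported the affected URLs from the GSC report (the sample URLs below are hypothetical): count first path segments and query-parameter names so shared patterns stand out.

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

def url_patterns(urls):
    """Count first path segments and query-parameter names across a
    URL sample, so shared patterns (facets, parameters) stand out."""
    segments, params = Counter(), Counter()
    for u in urls:
        parsed = urlparse(u)
        parts = [p for p in parsed.path.split("/") if p]
        segments[parts[0] if parts else "/"] += 1
        for name in parse_qs(parsed.query):
            params[name] += 1
    return segments, params

# Hypothetical sample exported from the GSC "Discovered" report
sample = [
    "https://example.com/shop/shoes?color=red&size=9",
    "https://example.com/shop/shoes?color=blue",
    "https://example.com/blog/a-post",
]
segs, qs = url_patterns(sample)
print(segs.most_common(1))   # dominant path section
print(qs.most_common(2))     # most frequent query parameters
```

If one section or one parameter dominates the counts, the root cause is usually structural (faceted navigation, parameter sprawl) rather than page-by-page.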
Step 4 — Prioritize by business importance. Not all unindexed pages warrant the same urgency. Fix important commercial, informational, and AI-target pages first. Low-priority utility pages can wait.
Crawl budget waste is the most common root cause for "Discovered – currently not indexed" on large sites. If Googlebot is spending crawl budget on low-value URL variations, parameterized duplicates, or faceted navigation combinations, your important pages may be perpetually deprioritized.
Actions:

- Block or consolidate parameterized and faceted URL variations (robots.txt rules, canonical tags, consistent internal linking to canonical URLs)
- Remove low-value and duplicate URLs from the XML sitemap so it lists only canonical, index-worthy pages
- Identify the templates generating URL sprawl in the affected-URL sample and eliminate them at the source
For large sites (100,000+ pages), crawl budget optimization can unblock hundreds of genuinely valuable pages that were stuck in the discovery queue behind crawl waste.
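One way to measure crawl waste is from server logs: what share of Googlebot requests goes to parameterized URL variations? A rough sketch, assuming combined-log-format access logs (the log lines below are hypothetical):

```python
import re

# Hypothetical combined-log-format lines; only Googlebot requests matter here
LOG_LINES = [
    '66.249.66.1 - - [01/Mar/2026] "GET /shop/shoes?color=red HTTP/1.1" 200 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [01/Mar/2026] "GET /shop/shoes?color=blue HTTP/1.1" 200 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [01/Mar/2026] "GET /guides/indexing HTTP/1.1" 200 "-" "Googlebot/2.1"',
    '203.0.113.5 - - [01/Mar/2026] "GET /shop/shoes HTTP/1.1" 200 "-" "Mozilla/5.0"',
]

def crawl_waste_ratio(lines):
    """Share of Googlebot requests spent on parameterized URLs -
    a rough proxy for crawl budget going to URL variations."""
    path_re = re.compile(r'"GET (\S+) HTTP')
    bot_paths = [m.group(1) for line in lines
                 if "Googlebot" in line and (m := path_re.search(line))]
    if not bot_paths:
        return 0.0
    wasted = sum("?" in p for p in bot_paths)
    return wasted / len(bot_paths)

print(round(crawl_waste_ratio(LOG_LINES), 2))
```

In production you would also verify Googlebot IPs via reverse DNS, since the user-agent string alone can be spoofed.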
Weak internal linking is the second most common root cause. If a page is only discoverable from a sitemap but has no internal links pointing to it, Google deprioritizes it — sitemaps tell Google a URL exists, but internal links from high-authority pages tell Google it matters.
Actions:

- Add internal links to affected pages from high-authority, frequently crawled pages (homepage, hub pages, popular posts)
- Ensure every sitemap URL is reachable through at least one internal link, not only through the sitemap
- Link important pages from contextually relevant content rather than burying them deep in the site architecture
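Sitemap-only pages can be surfaced with a simple set comparison between sitemap URLs and internal-link targets from a site crawl. A minimal sketch; the data structures are hypothetical (the edge list could come from any crawler export):

```python
def sitemap_only_urls(sitemap_urls, internal_link_edges):
    """URLs present in the XML sitemap that receive no internal links.

    internal_link_edges is a list of (source, target) pairs from a
    site crawl, e.g. exported from a crawling tool.
    """
    linked_targets = {target for _, target in internal_link_edges}
    return sorted(set(sitemap_urls) - linked_targets)

# Hypothetical data
sitemap = ["/a", "/b", "/c"]
edges = [("/", "/a"), ("/a", "/b")]
print(sitemap_only_urls(sitemap, edges))   # pages Google sees only via the sitemap
```

Every URL this returns is one Google knows exists but has no internal-link evidence that it matters.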
If Googlebot repeatedly encounters slow server response times or 5xx errors during crawl attempts, it backs off and reschedules — meaning your pages stay in the discovery queue longer than they should.
Actions:

- Monitor server response times and 5xx error rates in the GSC Crawl Stats report and in server logs
- Fix slow endpoints and intermittent 5xx errors before expecting crawl frequency to recover
- Increase server capacity if Googlebot's crawl rate is constrained by host load
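A quick health signal is the 5xx rate across Googlebot fetches. A sketch, assuming you have already extracted the HTTP status codes of Googlebot requests from your logs (the sample codes are hypothetical):

```python
from collections import Counter

def googlebot_error_rate(status_codes):
    """Share of fetches returning 5xx; sustained elevated values make
    Googlebot back off and reschedule crawls."""
    counts = Counter(code // 100 for code in status_codes)
    total = sum(counts.values())
    return counts[5] / total if total else 0.0

# Hypothetical status codes pulled from Googlebot log entries
codes = [200, 200, 503, 200, 500, 200, 200, 200, 200, 200]
print(googlebot_error_rate(codes))
```

There is no published threshold, but a rate that trends upward over weeks is the pattern to act on, not a single spike.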
If affected pages have thin content, high duplication relative to other indexed pages, or low expected utility signals (sparse word count, minimal entity coverage, no external citations), Google may be actively deprioritizing them in the crawl queue based on predicted value.
Actions:

- Expand thin pages with substantive, unique content, or consolidate them into stronger pages
- Deduplicate near-identical pages with canonical tags, merges, or removal
- Strengthen utility signals: entity coverage, external citations, and links from authoritative pages
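Near-duplication across a page set can be estimated with word-shingle overlap. A minimal sketch of one common approach (Jaccard similarity over k-word shingles); the sample texts are hypothetical:

```python
def shingle_similarity(text_a, text_b, k=3):
    """Jaccard similarity over k-word shingles - a rough near-duplicate
    signal between two pages' body text."""
    def shingles(text):
        words = text.lower().split()
        return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}
    a, b = shingles(text_a), shingles(text_b)
    return len(a & b) / len(a | b) if a | b else 0.0

page_1 = "red running shoes for trail and road use"
page_2 = "red running shoes for trail use only"
print(shingle_similarity(page_1, page_2))
```

Pairs scoring high on this metric are consolidation candidates; there is no official cutoff, so calibrate against pages you know are genuine duplicates.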
JavaScript-rendered pages that require significant processing can be deprioritized for crawl, since Googlebot has limits on the rendering resources it allocates per site.
Actions:

- Serve critical content in the initial HTML via server-side rendering or static generation
- Reduce reliance on client-side JavaScript for primary content and internal links
- Verify what Googlebot sees with URL Inspection's "View crawled page" output
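A quick way to check whether key content needs a render pass is to look for it in the raw HTML, ignoring script bodies. A sketch with hypothetical page strings (in practice you would fetch the raw HTML with a plain HTTP request, not a headless browser):

```python
import re

def server_rendered(raw_html, key_phrase):
    """True if key_phrase is present in the raw HTML before any
    JavaScript runs; content injected only client-side will be absent
    and costs Googlebot a render pass to discover."""
    # Strip script bodies so a phrase hidden inside JS doesn't count
    without_scripts = re.sub(r"<script.*?</script>", "", raw_html,
                             flags=re.S | re.I)
    return key_phrase in without_scripts

csr_page = '<div id="app"></div><script>app.render("Buy red shoes")</script>'
ssr_page = '<div id="app"><h1>Buy red shoes</h1></div>'
print(server_rendered(csr_page, "Buy red shoes"))   # False
print(server_rendered(ssr_page, "Buy red shoes"))   # True
```

If the phrase only appears after rendering, the page depends on Googlebot's render budget, which is exactly the deprioritization risk described above.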
For pages that pass all the above checks but remain stuck:
- Verify the page is not blocked from crawling or indexing (robots.txt Disallow rules or X-Robots-Tag HTTP headers)

After implementing fixes, use URL Inspection to request crawling for priority pages. This signals to Google that you believe the page is now ready for indexing, but it does not guarantee immediate crawling and should only be done after fixing underlying crawl priority issues.
Important: Do not request indexing before fixes are in place. Requesting indexing for a page that still has weak internal linking or duplicates elsewhere does not override Google's crawl prioritization logic.
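For checking status at scale, the Search Console URL Inspection API reports the same data as the UI (note: the API only inspects status; requesting indexing remains a manual action in GSC). A minimal sketch of the request body; authentication via OAuth and the network call itself are omitted, and the example URLs are hypothetical:

```python
import json

# Real endpoint of the Search Console URL Inspection API (v1)
INSPECT_ENDPOINT = "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect"

def inspection_request(site_url, page_url):
    """Request body for the URL Inspection API. Sending it requires an
    OAuth token with a Search Console scope; that part is omitted here."""
    return {"inspectionUrl": page_url, "siteUrl": site_url}

body = inspection_request("https://example.com/",
                          "https://example.com/guides/indexing")
print(json.dumps(body, sort_keys=True))
```

The response's `indexStatusResult` includes the coverage state and last crawl time, which lets you re-run the Step 1 verification across hundreds of priority URLs instead of pasting them one by one.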
For sites with thousands of affected URLs:

- Segment XML sitemaps by section or template so indexation can be monitored per segment in GSC
- Prioritize crawl budget cleanup (parameter handling, faceted navigation, duplicate consolidation) before individual URL fixes
- Track the affected-URL count weekly and expect gradual movement over weeks rather than immediate resolution
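Segmenting sitemaps is straightforward to script. A sketch that splits a URL list into protocol-compliant chunks (the sitemaps protocol caps each file at 50,000 URLs); the example URLs are hypothetical:

```python
def segment_sitemaps(urls, chunk_size=50000):
    """Split a URL list into sitemap-sized chunks; submitting each
    chunk as its own sitemap lets GSC report indexation per segment."""
    return [urls[i:i + chunk_size] for i in range(0, len(urls), chunk_size)]

def sitemap_xml(urls):
    """Render one chunk as a minimal sitemap file."""
    entries = "".join(f"<url><loc>{u}</loc></url>" for u in urls)
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
            f"{entries}</urlset>")

# Small chunk_size just to demonstrate the split
chunks = segment_sitemaps(
    [f"https://example.com/p/{i}" for i in range(120)], chunk_size=50)
print(len(chunks))        # number of sitemap files produced
print(chunks[0][0])
```

Submitting segments separately turns GSC's per-sitemap indexing report into a dashboard: the segment whose indexed ratio lags is where the systematic problem lives.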
Fixing "Discovered – currently not indexed" gets your pages into Google's index — and makes them eligible to appear in AI Overviews, which draw exclusively from indexed content. But indexing is a prerequisite for AI visibility, not a guarantee of it.
According to recent research, only 38% of AI Overview citations currently come from top-10 organic results — and AI Mode and AI Overviews share only 13.7% citation overlap. A page can be indexed, ranked in the top 5, and still be invisible in AI-generated responses. Traditional GSC provides no visibility into whether your indexed, ranking pages are being cited in AI answers.
This is where Dageno AI completes the picture. While GSC shows indexing status and organic performance, Dageno AI tracks whether your indexed pages are actually appearing as citations in ChatGPT, Perplexity, Google AI Overviews, Google AI Mode, Gemini, Claude, Grok, and other AI platforms.
The platform's AI knowledge graph integration unifies technical SEO signals with AI citation performance — enabling you to correlate content and technical improvements with changes in AI citation rates, not just organic rankings. When indexed pages are cited accurately, Dageno's knowledge graph structured data layer ensures the characterization is correct. When AI models misrepresent indexed content, one-click hallucination fixes enable rapid correction.
Getting pages indexed is the foundation. Getting them cited — and cited accurately — is the outcome.
Pricing: Free plan available. Paid plans scale with prompt volume and monitoring frequency.
How long does it take to fix "Discovered – currently not indexed"?
After implementing fixes — primarily crawl budget optimization and internal linking improvements — expect 2–6 weeks for the status count to begin declining in GSC. Requesting indexing via URL Inspection for priority pages can accelerate individual pages, but the underlying crawl priority improvement takes time to propagate.
Is "Discovered – currently not indexed" always a problem?
No. A small number of affected URLs on a large site is normal queue behavior. The status warrants investigation when important pages are affected, when the count is growing, or when a clear pattern of affected URLs suggests a systematic root cause.
Does fixing indexation help with AI Overview citations?
Yes — indexation is a prerequisite for AI Overview citation eligibility. However, it is not sufficient. Use Dageno AI to track whether your newly indexed pages are earning AI citations, and to understand which content characteristics are driving citation versus non-citation decisions.

Updated by
Ye Faye
Ye Faye is an SEO and AI growth executive with extensive experience spanning leading SEO service providers and high-growth AI companies, bringing a rare blend of search intelligence and AI product expertise. As a former Marketing Operations Director, he has led cross-functional, data-driven initiatives that improve go-to-market execution, accelerate scalable growth, and elevate marketing effectiveness. He focuses on Generative Engine Optimization (GEO), helping organizations adapt their content and visibility strategies for generative search and AI-driven discovery, and strengthening authoritative presence across platforms such as ChatGPT and Perplexity.