How to Fix 'Page Discovered — Currently Not Indexed' in Google Search Console

Google found your page but chose not to index it. Here's a systematic checklist to diagnose why and get your content into the index.

Marcus Webb · 6 min read · April 1, 2026

SEO consultant, 9 years experience, formerly Head of SEO at two Series B startups

'Page discovered — currently not indexed' is one of the most frustrating statuses in Google Search Console. Google knows your URL exists but has decided not to crawl and index it yet. This is different from 'Excluded by noindex' (a deliberate block) — it means Google made a judgment call.

Why Google skips discovered pages

Google has a finite crawl budget for every site. When it finds more URLs than it can afford to crawl, it queues them and may deprioritise or skip pages it considers low-value. The most common causes:

  • Low perceived page quality — thin content, duplicate or near-duplicate pages, no incoming internal links
  • Crawl budget exhaustion — the site has too many URLs (pagination, filters, parameters) relative to its authority
  • Slow server response times — Googlebot times out and moves on
  • The page was only recently created and hasn't been crawled yet (may resolve on its own within days)
  • The page has very few or no internal links pointing to it (orphan page)

Step 1: Check for internal linking gaps

Open a crawl tool (Screaming Frog free tier or Ahrefs Site Audit) and filter for pages with zero internal inlinks. An orphan page — one no other page links to — is almost always 'discovered' via a sitemap but never prioritised by Google. Fix this first: add contextual internal links from at least two high-traffic pages on the same topic.
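The orphan check can also be scripted once you have two exports: the URL list from your sitemap and the internal link graph from a crawl. A minimal sketch (the URLs and link graph below are placeholder data, not a real site):

```python
# Find orphan pages: URLs listed in the sitemap that no crawled page links to.
# In practice, load sitemap_urls from sitemap.xml and internal_links from a
# crawler export (e.g. Screaming Frog's "All Inlinks" report).

sitemap_urls = {
    "/guides/seo-basics",
    "/guides/crawl-budget",
    "/guides/orphaned-page",  # hypothetical orphan: in the sitemap, never linked
}

# Map of page -> set of internal URLs it links out to.
internal_links = {
    "/": {"/guides/seo-basics"},
    "/guides/seo-basics": {"/guides/crawl-budget"},
}

linked_to = set().union(*internal_links.values())
orphans = sorted(sitemap_urls - linked_to)
print(orphans)  # pages Google can discover via the sitemap but has no link path to
```

Every URL this prints is a candidate for the "discovered — currently not indexed" bucket: Google knows it exists but has no on-site signal that it matters.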

Step 2: Audit the page's quality signals

Ask yourself: if a human editor reviewed this page, would they consider it genuinely useful? Google's quality evaluators use E-E-A-T criteria (Experience, Expertise, Authoritativeness, Trustworthiness). Pages that are thin, repetitive, auto-generated, or duplicates of other pages on the site are systematically deprioritised.

✦ Insight

A useful test: search Google for the first two sentences of your page content in quotes. If an identical or very similar passage appears elsewhere on the web (or on your own site), Google may be treating your version as a duplicate and withholding indexation.
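For near-duplicates within your own site, the same idea can be approximated programmatically. A sketch using word shingles and Jaccard similarity (the sample texts and the idea of a "high overlap" threshold are illustrative; this is not Google's algorithm):

```python
def shingles(text: str, n: int = 5) -> set:
    """Return the set of n-word shingles (overlapping word windows) in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of two texts' shingle sets (0 = disjoint, 1 = identical)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Hypothetical page intros that differ by a single word.
page_a = "google found your page but chose not to index it here is a checklist"
page_b = "google found your page but chose not to index it here is a guide"
print(f"{jaccard(page_a, page_b):.2f}")  # high overlap suggests near-duplication
```

Run this pairwise over your page bodies; pairs scoring well above, say, 0.5 are worth consolidating or canonicalising before expecting Google to index both.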

Step 3: Check for crawl budget waste

In Google Search Console, go to Settings → Crawl Stats. Look at the total URLs crawled per day. Then check your Page indexing report (formerly Coverage) for the total indexed + excluded URL count. If your site has tens of thousands of URLs in the 'Excluded' bucket (mostly parameterised URLs, infinite scroll variants, or auto-generated filter pages), Google is wasting its budget on noise instead of your real content.

# Add to robots.txt to block common crawl-budget wasters.
# Directives must sit inside a User-agent group to be valid:
User-agent: *
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*?ref=
Disallow: /search

# OR use canonical tags on parameterised pages to consolidate signals:
<link rel="canonical" href="https://example.com/category/shoes" />
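To see where Googlebot's budget actually goes, tally its requests from your server access logs. A rough sketch against combined-log-format lines (the sample lines are made up; in production you'd also verify Googlebot by reverse DNS, since the user-agent string can be spoofed):

```python
import re
from collections import Counter

# Hypothetical access-log lines in combined log format.
log_lines = [
    '66.249.66.1 - - [01/Apr/2026:10:00:00 +0000] "GET /category/shoes?sort=price HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [01/Apr/2026:10:00:05 +0000] "GET /category/shoes?filter=red HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [01/Apr/2026:10:00:09 +0000] "GET /guides/seo-basics HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
]

request_path = re.compile(r'"GET (\S+) HTTP')
hits = Counter()
for line in log_lines:
    if "Googlebot" not in line:
        continue  # only count Googlebot's crawl activity
    m = request_path.search(line)
    if m:
        # Bucket parameterised URLs separately to expose crawl-budget waste.
        bucket = "parameterised" if "?" in m.group(1) else "clean"
        hits[bucket] += 1

print(hits)
```

If the parameterised bucket dominates, the robots.txt and canonical fixes above are where your crawl budget is leaking.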

Step 4: Request indexing (sparingly)

Once you've improved internal linking and content quality, use the URL Inspection tool in GSC and click 'Request Indexing'. Google throttles this — you get roughly 10–12 requests per day across your property. Use it on your highest-priority pages only, not as a mass action. Submitting an updated XML sitemap is a better lever for bulk re-crawl signalling.
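Bulk re-crawl signalling works through the sitemap itself: an accurate lastmod date tells Google which URLs changed and are worth revisiting. A minimal fragment (the URL and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/guides/seo-basics</loc>
    <!-- Only bump lastmod when the content genuinely changes;
         blanket-updating every URL teaches Google to ignore it. -->
    <lastmod>2026-03-28</lastmod>
  </url>
</urlset>
```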

⚠️ Warning

Spamming the 'Request Indexing' button does not speed things up. Google processes the requests asynchronously. Repeated requests for the same URL within 48 hours are ignored.

Step 5: Build external signals

A page that nobody on the internet links to or shares has no reason to be prioritised. Even a single high-quality external link, a social share that gets clicks, or being cited in a relevant community can trigger Googlebot to re-evaluate a stalled page. This is why content marketing and link building are inseparable from indexation.


💡 Tip

Practice this in the game: Chapter 1 (The Indexing Inferno) simulates a real discovery-vs-indexing crisis where you must triage which pages deserve crawl budget and fix the internal architecture before a product launch.

Learn this by doing — not just reading.

SEOdisaster.com teaches SEO through interactive disaster scenarios. Put these concepts into practice in the game.

Play Free →