
Updated on Mar 18, 2026
INP (Interaction to Next Paint) replaced FID as a Core Web Vital on March 12, 2024. It measures the time from every user interaction — click, tap, or key press — to the first visual response on screen, then reports the worst single interaction as the page's INP score. A good INP is 200ms or less. INP is harder to optimize than FID because it requires every interaction on the page to be fast, not just the first one. In 2026, poor INP also has a second consequence that most guides do not address: heavy JavaScript execution that causes bad INP scores is the same JavaScript that makes your pages invisible to AI crawlers like GPTBot and PerplexityBot, which do not render JavaScript at all. Dageno AI monitors whether your optimized, indexed pages are earning AI citations — the visibility layer that INP optimization enables but does not guarantee.
INP (Interaction to Next Paint) is a user experience metric that measures the time from a user action to the first visual change on screen.
When a user clicks a button, taps a menu, or presses a key on your page, INP starts a timer. That timer stops when the browser has painted the first new frame — the first visible evidence that the page responded. The time between action and visual response is the interaction duration.
INP collects this measurement for every interaction during a user's session. At the end of the session, it selects the worst interaction — the longest duration — as the page's INP score.
The one exception: for pages with 50 or more interactions in a single session, Google ignores the single highest outlier before selecting the worst. This prevents a single catastrophic edge-case interaction from defining the score for an otherwise responsive page. However, since most pages record fewer than 50 interactions per session, this exception rarely applies in practice.
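The selection rule just described can be sketched as a small function. This is an illustrative simplification, not Chrome's actual implementation:

```javascript
// Pick a page's INP from a session's interaction durations (ms):
// the worst interaction, except that sessions with 50+ interactions
// drop the single highest outlier first.
function selectINP(durations) {
  if (durations.length === 0) return null;
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  const skip = durations.length >= 50 ? 1 : 0;         // outlier exclusion
  return sorted[Math.min(skip, sorted.length - 1)];
}
```

For a typical short session the function simply returns the longest duration; only long sessions trigger the outlier exclusion.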
INP tracks three categories of user input: mouse clicks, touchscreen taps, and keyboard key presses.
Interactions can be driven by JavaScript event handlers, CSS transitions, built-in browser controls (form elements, scrollbars), or combinations of these. The source of the interaction does not change how INP measures it — only the total time from input to visual response matters.
What does not count: Hover events (mouseover) and scroll events are excluded from INP measurement, as these do not produce a discrete, user-initiated action expecting a visual response.
Unlike its predecessor FID — which measured only the browser's input delay before beginning to process an event — INP measures the complete interaction timeline:
Input delay — time from user action to when the browser begins processing the event. If the main thread is busy executing JavaScript tasks, the browser cannot immediately handle the new event — this waiting time is the input delay.
Processing time — time spent executing the event handler code. Long event handlers that do complex DOM calculations, trigger synchronous operations, or run heavy JavaScript increase this component.
Presentation delay — time from when JavaScript finishes executing to when the browser paints the visual update on screen. Layout recalculation, style computation, and frame rendering all contribute here.
INP measures the sum of all three. A page that responds quickly to user input at the event level but takes 600ms to visually update because of expensive rendering operations will show a poor INP score — even if FID was excellent.
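The three components map directly onto the fields of the browser's `PerformanceEventTiming` entries (`startTime`, `processingStart`, `processingEnd`, and `duration` are the interface's real properties); a minimal sketch of the arithmetic:

```javascript
// Split one interaction into INP's three components from a
// PerformanceEventTiming-shaped entry.
function interactionBreakdown(entry) {
  return {
    inputDelay: entry.processingStart - entry.startTime,
    processingTime: entry.processingEnd - entry.processingStart,
    // duration runs from startTime to the next paint, so the remainder
    // after processing ends is the presentation delay.
    presentationDelay: entry.startTime + entry.duration - entry.processingEnd,
  };
}

// In a browser that supports event timing, entries can be observed like this:
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes?.includes('event')) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.name, interactionBreakdown(entry));
    }
  }).observe({ type: 'event', durationThreshold: 16 });
}
```

The three parts always sum to the entry's `duration`, which is the value INP is built from.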
| Score | Classification |
|---|---|
| 200ms or less | Good ✅ |
| 201ms–500ms | Needs improvement ⚠️ |
| Over 500ms | Poor ❌ |
Achieving a good INP requires that the worst interaction during any representative session completes within 200ms from user input to visible screen update. This is substantially more demanding than FID, which only required fast input delay on a single interaction.
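Expressed as a helper, with the threshold values taken straight from the table above:

```javascript
// Classify an INP value (ms) using the Core Web Vitals thresholds.
function rateINP(value) {
  if (value <= 200) return 'good';
  if (value <= 500) return 'needs-improvement';
  return 'poor';
}
```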
One of the most important usability patterns INP captures is rage-clicking — when a page fails to respond to a user's action, causing them to click the same element repeatedly in frustration.
The classic sequence: a user taps a mobile menu → nothing visible happens (the response is taking 600ms) → the user taps again → the menu finally opens → it immediately reacts to the second tap and closes → the user is left confused and frustrated.
This interaction pattern appears in INP data as multiple consecutive interactions with extended durations. Google's decision to make INP a Core Web Vital explicitly reflects this user experience reality: responsiveness throughout a session — not just at first interaction — determines whether users feel the site is working or broken.
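In analytics code, a rage-click pattern can be approximated by counting repeated interactions on the same target within a short window. The class name and thresholds below are hypothetical illustrations, not a standard API:

```javascript
// Hypothetical rage-click detector: flags a target that receives
// `threshold` or more interactions within `windowMs`.
class RageClickDetector {
  constructor(windowMs = 1000, threshold = 3) {
    this.windowMs = windowMs;
    this.threshold = threshold;
    this.events = []; // { target, time }
  }
  // Returns true when the latest interaction completes a rage-click pattern.
  record(target, time) {
    this.events.push({ target, time });
    const recent = this.events.filter(
      (e) => e.target === target && time - e.time <= this.windowMs
    );
    return recent.length >= this.threshold;
  }
}
```

In a real page you would feed it the event target and timestamp from click handlers; correlating its hits with long interaction durations confirms the frustration pattern described above.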
Google Search Console — Core Web Vitals report shows INP field data from real Chrome users. This is the authoritative source for ranking-relevant INP assessment.
Google PageSpeed Insights — combines real-user CrUX data with lab measurements for individual URL analysis. Best for understanding the gap between what lab tools show and what real users experience.
Chrome DevTools — Performance panel enables detailed interaction timeline recording. Use "Start profiling and reload page" then interact manually to record specific interactions and identify long tasks.
Web Vitals Chrome Extension — shows live INP score while browsing any page. Useful for real-time testing across different page types and interaction paths.
INP is fundamentally a Real User Monitoring (RUM) metric. Lab tools can approximate it by simulating interactions, but the Google Search Console CrUX data — collected from real Chrome users visiting your pages — is the metric that influences ranking signals and provides the most accurate picture of actual user experience.
In 2026, poor INP scores have a consequence that extends beyond organic rankings into AI search visibility.
The root causes of poor INP — heavy JavaScript execution on the main thread, large JavaScript bundles that delay interaction handler registration, expensive rendering triggered by user events — are the same factors that make content invisible to AI crawlers.
According to Cloudflare's 2025 AI crawler analysis, GPTBot, ClaudeBot, and PerplexityBot all consume static HTML without executing JavaScript. A page that requires JavaScript execution to display its main content — product descriptions, article text, comparison tables — delivers that content to real users (slowly, if INP is poor) but delivers nothing to AI crawlers. The optimization that improves INP by reducing JavaScript blocking on the main thread also resolves the static-content rendering gap that makes pages invisible to AI indexers.
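The standard mitigation for that main-thread blocking is to split long tasks into chunks and yield between them, so the browser can handle pending user input mid-task. A sketch using `scheduler.yield()` where available (a newer Chrome API), with a `setTimeout` fallback:

```javascript
// Process a large list of items without blocking the main thread:
// handle a chunk, then yield so pending input events can run.
async function processInChunks(items, handleItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) handleItem(item);
    if (globalThis.scheduler?.yield) {
      await scheduler.yield();            // prioritized continuation (Chrome)
    } else {
      await new Promise((resolve) => setTimeout(resolve, 0)); // fallback yield
    }
  }
}
```

Each chunk stays well under the 50ms long-task threshold, which directly shrinks the input-delay component of INP.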
Server-side rendering (SSR) or static site generation (SSG) — the technical solutions that ensure JavaScript-heavy content is available in the initial HTML response — simultaneously address poor INP caused by rendering complexity and content invisibility to AI crawlers. According to CrUX Run's 2025 INP by Framework analysis, server-rendered frameworks show measurably better INP distributions than client-side-only SPAs, with a 34% better median INP score for SSR implementations versus comparable client-side-rendered (CSR) implementations.

Optimizing INP resolves the technical barriers that prevent AI crawlers from accessing your content. But once those barriers are removed and pages are accessible and indexed, a separate question emerges: are those indexed pages actually being cited in AI-generated responses?
Dageno AI answers this question — monitoring brand citation rates across ChatGPT, Perplexity, Google AI Overviews, Google AI Mode, Gemini, Claude, Grok, Microsoft Copilot, DeepSeek, and Qwen from a single dashboard with a free plan to start.
The relationship between INP optimization and Dageno AI monitoring is sequential. INP optimization → AI crawler can read your page content → your pages become eligible for AI indexation → Dageno AI measures whether that eligibility translates into actual citation appearances → Brand Kit knowledge graph alignment ensures those citations are accurate rather than hallucinated.
Brands that fix their JavaScript performance and then systematically monitor AI citation performance with Dageno AI close the full loop between technical SEO and AI search visibility.
Pricing: Free plan available. Paid plans scale with prompt volume and monitoring frequency.

Updated by
Ye Faye
Ye Faye is an SEO and AI growth executive with experience spanning leading SEO service providers and high-growth AI companies, bringing a rare blend of search intelligence and AI product expertise. As a former Marketing Operations Director, he has led cross-functional, data-driven initiatives that improve go-to-market execution, accelerate scalable growth, and elevate marketing effectiveness. He focuses on Generative Engine Optimization (GEO), helping organizations adapt their content and visibility strategies for generative search and AI-driven discovery, and strengthening authoritative presence across platforms such as ChatGPT and Perplexity.
