1. How Google AI identifies the parts of a page that matter
For some time now, Google has been able to link to specific passages within a webpage. These so-called Text Fragments (visible through the URL extension #:~:text=) highlight precise sections that Google considers particularly relevant to a search query.
What was once mainly a user experience feature — used in Featured Snippets or “jump-to” links — is now becoming a strategic foundation for answer selection and visibility in Google AI Overviews and AI Mode.

What are Text Fragments – and how does Google use them?
Google has used Text Fragments (also known as Text Fragment Links or Scroll to Text Fragment) in search results since 2020. The feature was introduced with Chrome 80 and allows direct linking to specific passages on a page. A similar function had existed for AMP pages since 2018, but from June 2020 onward it was extended to all websites, provided the browser supports it (e.g., Google Chrome).
A typical use case: when a user clicks on a Featured Snippet, the full page loads, but the relevant passage is automatically scrolled to and highlighted in the browser — indicated by the #:~:text= parameter in the URL.
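To make the mechanics concrete, here is a minimal Python sketch that builds such a deep link the way a Featured Snippet does. The page URL and passage are made-up examples; only the #:~:text= syntax itself comes from the Scroll to Text Fragment spec.

```python
from urllib.parse import quote

def text_fragment_url(page_url: str, passage: str) -> str:
    """Append a Scroll-to-Text-Fragment directive to a URL.

    quote() percent-encodes spaces, '&' and ',' for us; a fully
    spec-compliant encoder would also escape '-', which only matters
    for the prefix-/suffix- form this sketch does not use.
    """
    return f"{page_url}#:~:text={quote(passage, safe='')}"

# Hypothetical example: deep-link straight to one passage on a page
print(text_fragment_url(
    "https://example.com/guide",
    "relevant passage the snippet highlights",
))
# -> https://example.com/guide#:~:text=relevant%20passage%20the%20snippet%20highlights
```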
What’s new: Google now uses this technology internally — in AI Overviews and AI Mode — not just for linking, but to identify and extract meaningful content in real time. These fragments act as semantic pointers within the index, enabling Google to scan dozens of search results on the fly and select the best candidates for answer generation. This is what makes Query Fan-Out possible — often evaluating 50 to 100+ candidates without deep crawling, but with remarkable speed and semantic focus.
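None of this internal machinery is public, so the following Python fragment is only an illustrative sketch of the idea of fragments as semantic pointers: given a result URL that already carries a #:~:text= directive, the relevant passage can be recovered without fetching or parsing the page. The function name and example URL are hypothetical.

```python
from urllib.parse import unquote

def extract_fragment(url: str) -> str | None:
    """Recover the pre-highlighted passage from a #:~:text= directive, if present."""
    marker = "#:~:text="
    if marker not in url:
        return None
    # A full parser would also handle the prefix-,start,end,-suffix forms;
    # this sketch covers only the plain single-passage case.
    return unquote(url.split(marker, 1)[1])

print(extract_fragment("https://example.com/guide#:~:text=semantic%20pointers"))
# -> semantic pointers
```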
2. The shortcut: From crawling to fragmenting
Unlike more complex retrieval architectures used by models like ChatGPT or Claude, Google uses a low-cost method for generating AI answers:
- A query fan-out generates synthetic search variations
- AI Mode scans dozens of SERP results
- Extracts the highlighted Text Fragments
- And builds a coherent answer — linguistically polished, but semantically compressed

Each extracted passage is anchored by the #:~:text= parameter. This eliminates much of the “deep research” overhead typical of LLM pipelines — such as vector search, caching, or document chaining. Instead, Google reuses its existing infrastructure as a semantic shortcut.
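How these four steps fit together is, again, not publicly documented. The sketch below merely wires them into runnable pseudologic: fan_out() and fetch_serp() are stand-in stubs for internal systems, and the data and scoring are invented so the example runs on its own.

```python
def fan_out(query: str) -> list[str]:
    """Step 1: generate synthetic query variations (stubbed)."""
    return [query, f"{query} explained", f"how does {query} work"]

def fetch_serp(variant: str) -> list[tuple[str, str, float]]:
    """Step 2: return (url, fragment_text, score) candidates (stubbed)."""
    return [
        ("https://example.com/a", f"passage about {variant}", 0.9),
        ("https://example.com/b", f"another note on {variant}", 0.6),
    ]

def answer(query: str, top_k: int = 3) -> str:
    # Steps 2-3: scan the SERP results for each variant, keep the
    # pre-scored fragments; no page is crawled or parsed here.
    candidates = [c for v in fan_out(query) for c in fetch_serp(v)]
    candidates.sort(key=lambda c: c[2], reverse=True)
    # Step 4: stitch the best fragments into a compressed answer.
    return " ".join(text for _, text, _ in candidates[:top_k])

print(answer("text fragments"))
```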
3. The effect: Turbo, not depth
This method doesn’t replace real research — but it simulates it with impressive efficiency. Google can:
- Rapidly scan dozens of sources at surface level during the query fan-out
- Signal relevance through pre-highlighted fragments
- And generate a “smart” answer in seconds that’s often good enough for users
And the best part (for Google): The heavy lifting was already done by the Search system. Text Fragment highlights are the product of years of snippet optimization — and now they’re simply being reused.
4. Conclusion: Recycled relevance as a strategic edge
What looks like high-tech is really a deliberate mechanism for answer construction — and a new visibility lever for SEOs. Google uses Text Fragments as an internal semantic asset to speed up AI responses and minimize compute costs.
While other LLMs dive deep (and burn resources), Google bets on precise fragments with maximum ROI.
“Google doesn’t do deep research. Google does deep reuse.”