Search Off the Record podcast

Analysing Robots.txt at scale with HTTP Archive and BigQuery


In this episode of Search Off the Record, Martin and Gary turn a simple robots.txt question into a data‑driven deep dive using HTTP Archive, WebPageTest, custom JavaScript metrics, and BigQuery. They explore how millions of real robots.txt files are actually written in 2025–2026, which directives and user‑agents are most common, and what that means for modern crawling and AI bots.

Perfect for beginner to mid‑level developers and SEOs, this episode explains how large‑scale web measurement works (HTTP Archive, Chrome UX Report, Web Almanac) and how to turn raw crawl data into actionable SEO insights. Subscribe for more candid conversations about crawling, indexing, and the data behind how Google Search and the web really work.

Resources:

Web Almanac → https://almanac.httparchive.org/en/2025/
Robots.txt custom metric for the HTTP Archive → https://github.com/HTTPArchive/custom-metrics/pull/191
robots.txt parser change → https://github.com/google/robotstxt/commit/4af32e54b715442bb04cd0470e99192f0ffb9792#commitcomment-178586774

Episode transcript → https://goo.gle/sotr108-transcript


Listen to more Search Off the Record → https://goo.gle/sotr-yt
Subscribe to Google Search Channel → https://goo.gle/SearchCentral

Search Off the Record is a podcast series that takes you behind the scenes of Google Search with the Search Relations team.

 #SOTRpodcast #SEO #GoogleSearch

Speakers: Martin Splitt, Gary Illyes
