Robots.txt Checker
Free robots.txt tester — check if search engines can crawl your pages. Validate syntax, test specific URLs against your rules, and fix crawl blocks.
Frequently Asked Questions
What is robots.txt?
Robots.txt is a plain text file at your site's root that tells search engine crawlers which URLs they may and may not crawl. It helps manage crawl budget and keep crawlers out of areas of your site you don't want fetched.
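For example, a minimal robots.txt might allow crawling everywhere except a few private paths (the paths and domain below are illustrative):

```
# Rules for all crawlers
User-agent: *
Disallow: /admin/
Disallow: /cart/

# Optional: point crawlers at your sitemap
Sitemap: https://example.com/sitemap.xml
```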
Can robots.txt block a page from Google?
Robots.txt prevents crawling but not indexing. Google may still index a blocked URL if other pages link to it. To fully prevent indexing, use a noindex meta tag instead, and make sure the page is not also blocked in robots.txt, since Google must be able to crawl the page to see the tag.
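For instance, a page you want crawled but kept out of the index would carry a noindex directive instead of a robots.txt block:

```html
<!-- In the page's <head>: crawlable, but excluded from the index -->
<meta name="robots" content="noindex">
```

For non-HTML resources such as PDFs, the equivalent is an X-Robots-Tag: noindex HTTP response header.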
Where should robots.txt be located?
Robots.txt must be at your domain root: example.com/robots.txt. It won't be recognized in subdirectories. Each subdomain needs its own robots.txt file.
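As a quick sanity check, you can load your root robots.txt with Python's standard-library urllib.robotparser and test URLs against it; example.com and the path below are placeholders:

```python
from urllib.robotparser import RobotFileParser

# robots.txt is only honored at the domain root, so fetch it from there.
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder domain
rp.read()  # download and parse the file

# Ask whether a given crawler may fetch a given URL under these rules.
print(rp.can_fetch("Googlebot", "https://example.com/admin/settings"))
```

Note that a missing robots.txt is treated as allow-all, both by this parser and by most major crawlers.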
Related Tools
Need Continuous Monitoring?
These tools provide one-time analysis. For continuous monitoring of your website's performance, uptime, and SEO health, try OpsKitty.
Get Started with OpsKitty