Robots.txt Generator
Build a valid robots.txt with allow/disallow rules, crawl delay, and sitemap links — copy or download in one click.
User-agent: *
Disallow: /admin
Disallow: /private
Disallow: /cart
Sitemap: https://example.com/sitemap.xml
How robots.txt works
robots.txt sits at the root of your domain (e.g. yoursite.com/robots.txt) and tells search-engine crawlers which paths they may or may not visit. It's the first file most crawlers fetch from a new site.
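To see how a compliant crawler interprets those rules, here is a minimal sketch using Python's standard-library robots.txt parser; the rules and URLs mirror the example above, and example.com is a placeholder.

from urllib.robotparser import RobotFileParser

# Parse the example rules directly, no network fetch needed
rules = """
User-agent: *
Disallow: /admin
Disallow: /private
Disallow: /cart
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# A compliant crawler skips disallowed paths...
print(rp.can_fetch("*", "https://example.com/admin"))      # False
# ...and may fetch anything not covered by a rule
print(rp.can_fetch("*", "https://example.com/blog/post"))  # True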
The four directives you actually need
- User-agent — the crawler the rule applies to. Use * for “all crawlers”.
- Disallow — paths the crawler must skip.
- Allow — explicit exceptions inside a disallowed path.
- Sitemap — the full URL of your XML sitemap. Helps crawlers discover URLs faster.
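Put together, a complete file using all four directives might look like this (the /private folder and press-kit path are hypothetical). Crawl-delay, the number of seconds to wait between requests, is non-standard: Bing and Yandex honor it, while Google ignores it.

User-agent: *
Crawl-delay: 10
Disallow: /private
Allow: /private/press-kit.pdf
Sitemap: https://example.com/sitemap.xml

When Allow and Disallow conflict, Google applies the most specific (longest) matching rule, so the press kit stays crawlable inside the blocked folder.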
Common mistakes
- Disallowing your entire site with Disallow: / — this blocks crawlers from every page.
- Blocking your CSS/JS folders. Google needs them to render the page.
- Treating robots.txt as a security tool. It's a request, not a guarantee — anyone can read disallowed paths anyway.
Frequently asked questions
Where do I put my robots.txt file?
At the root of your domain, served from the URL yoursite.com/robots.txt. It must be a plain text file at the exact root path — subfolders or other filenames will be ignored by crawlers.
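A quick sanity check from the command line (swap in your own domain):

curl -s https://yoursite.com/robots.txt

Most crawlers treat a 404 at this URL as “no restrictions at all”.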
What's the difference between robots.txt and noindex?
robots.txt controls whether a crawler is allowed to fetch a URL. The noindex meta tag tells the crawler not to add the URL to its index even if it does fetch it. If you want a page hidden from search results, use noindex — robots.txt-blocked URLs can still appear in search if other sites link to them.
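In practice, that means putting this tag in the page's <head> and leaving the page crawlable in robots.txt; a crawler that can't fetch the page never sees the tag:

<meta name="robots" content="noindex">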
Can robots.txt block Google from indexing my page?
Not reliably. Disallow only prevents crawling. Google can still index a URL it has never crawled if external sites link to it — and those listings will look bad because Google has no title or description to show. Use noindex for actual de-indexing.
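For non-HTML files such as PDFs, where there is no <head> for a meta tag, the same noindex signal can be sent as an HTTP response header:

X-Robots-Tag: noindex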
Should I add a sitemap to robots.txt?
Yes. Adding a Sitemap: line helps every crawler — not just Google — discover your URLs faster. It's especially helpful for new sites and large sites with many deep pages.
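The Sitemap line takes a full absolute URL, may appear anywhere in the file, and may be repeated if you have more than one sitemap (placeholder URLs shown):

Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/blog/sitemap.xml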
Does robots.txt affect SEO rankings?
Indirectly. Misconfigured robots.txt is one of the most common ways sites accidentally tank their SEO — blocking CSS or JS, or worse, disallowing the entire site. Get it right once and it shouldn't need to change often.
More free SEO tools
Practical tools for the rest of your SEO workflow — all free, no signup.
URL Slug Generator
Turn any title into a clean, lowercase, dashed URL slug — accent-free, no junk.
SEO Checklist
An interactive checklist covering technical, on-page, content, and link building — progress saves automatically.
SERP Preview
Preview how your title and meta description appear in Google — desktop and mobile, with pixel-accurate truncation.
Meta Tag Generator
Generate clean HTML meta tags with Open Graph and Twitter Card data — copy a complete head snippet.
Character Count Checker
Count characters, words, sentences, and paragraphs with live limits for titles, meta descriptions, tweets, and more.
Schema Markup Generator
Build valid JSON-LD for Article, Product, LocalBusiness, FAQ, and Organization — copy and paste into your <head>.
Word Counter
Count words, characters, sentences, paragraphs, reading time, and average word length in real time.
SEO ROI Calculator
Forecast SEO return on investment — leads, revenue, and payback period from organic search.
LLMs.txt Generator
Build a clean llms.txt file so AI assistants can find and quote your most important pages accurately.
