Robots.txt Generator

Generate a robots.txt file visually. Set User-agent rules, Allow/Disallow paths, crawl delay, and sitemap URL. Copy or download instantly.

When to Use

Use when launching a new website, migrating content, blocking unwanted crawlers, or managing how search engines index your site.

How to Use Robots.txt Generator

  1. Select a User-agent from the dropdown (e.g. * for all bots, Googlebot for Google only).
  2. Add Allow and Disallow rules for the paths you want to permit or block.
  3. Use the presets (Allow all, Block all, Block /admin) for a quick start.
  4. Optionally add your Sitemap URL and a Crawl-delay value.
  5. The robots.txt preview updates in real time on the right panel.
  6. Click 'Download robots.txt' and upload it to your website's root directory.
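The assembly the generator performs can be sketched in a few lines. This is a minimal illustration, not the tool's actual code; the function name and parameters are hypothetical:

```python
def build_robots_txt(user_agent="*", allow=None, disallow=None,
                     crawl_delay=None, sitemap=None):
    """Assemble a robots.txt string from the options described above."""
    lines = [f"User-agent: {user_agent}"]
    for path in (allow or []):
        lines.append(f"Allow: {path}")
    for path in (disallow or []):
        lines.append(f"Disallow: {path}")
    if crawl_delay is not None:
        lines.append(f"Crawl-delay: {crawl_delay}")
    if sitemap:
        lines.append(f"Sitemap: {sitemap}")
    return "\n".join(lines) + "\n"

# Example: block /admin/ for all bots and advertise a sitemap.
print(build_robots_txt(disallow=["/admin/"],
                       sitemap="https://example.com/sitemap.xml"))
```

Each User-agent group in a robots.txt is just a block of `Directive: value` lines, so generating one is plain string assembly.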

Examples

Allow Googlebot only

Input: User-agent: Googlebot, Allow: /

Output:

User-agent: Googlebot
Allow: /
Sitemap: https://example.com/sitemap.xml

Block admin pages

Input: User-agent: *, Disallow: /admin/

Output:

User-agent: *
Allow: /
Disallow: /admin/

Frequently Asked Questions

What is robots.txt?

robots.txt is a plain text file placed at the root of your website (e.g. https://example.com/robots.txt) that tells search engine crawlers which pages or sections they are allowed or not allowed to crawl.

Does robots.txt prevent my page from appearing in search results?

Disallowing a page in robots.txt prevents crawling, but does not guarantee the page won't appear in search results if other sites link to it. To keep a page out of the index, use the noindex meta tag (or the X-Robots-Tag HTTP header) and leave the page crawlable, so crawlers can actually see the directive.
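For illustration, the noindex directive is a single tag in the page's head:

```html
<!-- Placed in the <head> of the page you want excluded from search results -->
<meta name="robots" content="noindex">
```

Note that if the same page is also blocked in robots.txt, crawlers never fetch it and never see this tag, which is why noindex and Disallow should not be combined for the same URL.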

Do all bots respect robots.txt?

Legitimate bots from Google, Bing, and other search engines respect robots.txt. However, malicious scrapers or spam bots often ignore it. Robots.txt is advisory only.
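A well-behaved crawler checks robots.txt before fetching a URL. Python's standard library includes a parser, so you can verify how your generated file will be interpreted; the example rules below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt the way a compliant crawler does.
parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Disallow: /admin/",
])

# A path under /admin/ is disallowed; anything else is allowed by default.
print(parser.can_fetch("*", "https://example.com/admin/login"))
print(parser.can_fetch("*", "https://example.com/blog/post"))
```

This is also a handy way to sanity-check a generated file before uploading it.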

Should I block AI crawlers like GPTBot?

You can block AI training crawlers (GPTBot, CCBot, anthropic-ai) by adding Disallow: / under each bot's User-agent. Compliant crawlers will then skip your content when gathering training data, though, as with all robots.txt rules, compliance is voluntary.
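In the generated file, each blocked crawler gets its own User-agent group:

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: anthropic-ai
Disallow: /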

Related Tools