Peregrine Kit

Robots.txt Generator — Create Robots.txt Free

Create a properly formatted robots.txt file for your website. Configure crawling rules for search engine bots. No sign-up required.


How to Use the Robots.txt Generator

  1. Set the User-agent (use * for all crawlers or specify a bot name)
  2. Enter paths to allow and disallow, one per line
  3. Optionally set a crawl delay and sitemap URL
  4. Add more rule groups if you need different rules for different bots
  5. Copy or download the generated robots.txt file
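The steps above amount to assembling rule groups into a plain-text file. A minimal sketch of that assembly in Python — the RuleGroup class and build_robots_txt function are illustrative, not this tool's actual code:

```python
# Illustrative sketch of a robots.txt generator. The RuleGroup class and
# build_robots_txt function are hypothetical, not this tool's actual code.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RuleGroup:
    user_agent: str = "*"                              # step 1: * = all crawlers
    allow: List[str] = field(default_factory=list)     # step 2: allowed paths
    disallow: List[str] = field(default_factory=list)  # step 2: disallowed paths
    crawl_delay: Optional[int] = None                  # step 3: optional delay

def build_robots_txt(groups, sitemap=None):
    lines = []
    for group in groups:                     # step 4: one block per rule group
        lines.append(f"User-agent: {group.user_agent}")
        for path in group.allow:
            lines.append(f"Allow: {path}")
        for path in group.disallow:
            lines.append(f"Disallow: {path}")
        if group.crawl_delay is not None:
            lines.append(f"Crawl-delay: {group.crawl_delay}")
        lines.append("")                     # blank line separates groups
    if sitemap:                              # step 3: optional sitemap URL
        lines.append(f"Sitemap: {sitemap}")
    return "\n".join(lines).strip() + "\n"   # step 5: the finished file

print(build_robots_txt(
    [RuleGroup(disallow=["/admin/"])],
    sitemap="https://example.com/sitemap.xml",
))
```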

About This Tool

The robots.txt file is a plain-text file placed in the root directory of your website that tells search engine crawlers which pages or sections they are allowed or not allowed to access. A well-configured robots.txt file helps control how search engines index your site, prevents them from wasting crawl budget on unimportant pages, and keeps private directories out of search results.
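For example, a simple robots.txt that blocks one directory, re-opens a subdirectory, and points crawlers at a sitemap looks like this (the paths and domain are placeholders):

```
User-agent: *
Disallow: /admin/
Allow: /admin/public/

Sitemap: https://example.com/sitemap.xml
```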

This robots.txt generator lets you define multiple rule groups, each targeting a specific user agent or all crawlers. You can specify allowed and disallowed paths, set a crawl delay, and include a sitemap URL. The generated file follows the Robots Exclusion Standard and is ready to be uploaded to your server.

All processing runs locally in your browser. Your configuration is never sent to any external server, making this tool safe to use for client projects and private websites.

Frequently Asked Questions

Where should I put my robots.txt file?

The robots.txt file must be placed in the root directory of your website so it is accessible at https://yourdomain.com/robots.txt. Search engines look for it at this exact location.

Does robots.txt prevent pages from being indexed?

Not exactly. Robots.txt tells crawlers not to access certain pages, but if other pages link to a blocked URL, search engines may still index it with a minimal listing. To prevent indexing entirely, use a noindex meta tag on the page itself.
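The noindex tag goes in the page's own markup, and the page must not be blocked in robots.txt, or the crawler never sees the tag:

```html
<!-- Place inside the <head> of the page you want kept out of search results -->
<meta name="robots" content="noindex">
```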

What does User-agent: * mean?

The asterisk (*) is a wildcard that matches all search engine crawlers. Rules under User-agent: * apply to every bot unless you have a more specific rule group for a particular crawler like Googlebot.
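This precedence can be checked with Python's standard-library robots.txt parser: a bot that matches a specific group uses only that group, while every other bot falls back to the wildcard group. The paths and bot names below are placeholders:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
    "",
    "User-agent: Googlebot",
    "Disallow: /tmp/",
])

# Googlebot matches its own group, so the wildcard rules do not apply to it
print(rp.can_fetch("Googlebot", "https://example.com/private/"))  # True
print(rp.can_fetch("Googlebot", "https://example.com/tmp/"))      # False
# Any other bot falls back to the User-agent: * group
print(rp.can_fetch("OtherBot", "https://example.com/private/"))   # False
```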

What does Crawl-delay do?

Crawl-delay is a directive that asks crawlers to wait a specified number of seconds between requests. This can reduce server load. Note that Googlebot does not honour the crawl-delay directive — use Google Search Console to manage Googlebot's crawl rate instead.
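Python's urllib.robotparser exposes the directive, which is a convenient way to verify what delay a generated file requests for a given bot. The bot name and delay below are illustrative:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: SlowBot",
    "Crawl-delay: 10",
    "Disallow: /search/",
])

print(rp.crawl_delay("SlowBot"))   # 10
print(rp.crawl_delay("OtherBot"))  # None: no delay requested for other bots
```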

Can I set different rules for different bots?

Yes. You can define different rules for different user agents. For example, you might allow Googlebot to access everything but block certain paths for other crawlers.
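For example, a file that lets Googlebot crawl everything while keeping other bots out of a drafts directory (the path is a placeholder; an empty Disallow value means nothing is blocked):

```
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /drafts/
```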
