robots.txt Generator
Create and test robots.txt files visually. Add rules for different user-agents, set crawl delays and sitemaps.
User-Agent Block
Additional Settings
Note: Google ignores Crawl-delay. Bing and other crawlers may respect it.
Test a URL
Generated robots.txt
User-agent: *
Allow: /
Where to place it
The robots.txt file must be placed in the root directory of your website, accessible at https://example.com/robots.txt. It applies to the entire domain.
Crawling vs Indexing
robots.txt controls crawling, not indexing. A page blocked by robots.txt can still appear in search results if other pages link to it. Use noindex to prevent indexing.
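To keep a page out of search results, the page itself must carry a noindex signal, and the crawler must be allowed to fetch the page in order to see it. For an HTML page that signal is a meta tag:

```
<meta name="robots" content="noindex">
```

For non-HTML resources, the same signal can be sent as an `X-Robots-Tag: noindex` HTTP response header.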
Sitemap Directive
Including a Sitemap directive helps search engines discover your XML sitemap. This directive is not user-agent-specific and applies globally regardless of where it appears in the file.
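A minimal file combining a permissive rule set with a sitemap reference might look like this (the domain and sitemap path are placeholders):

```
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```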
How to Use robots.txt Generator
The robots.txt Generator helps you create properly formatted robots.txt files that control how search engine crawlers access your website. Everything runs locally in your browser. The tool ensures your crawl directives follow the Robots Exclusion Protocol and correctly guide search engine bots through your site.
Open the robots.txt Generator
Navigate to the robots.txt Generator from the developer tools menu. The tool provides an intuitive form-based interface for creating robots.txt rules without needing to memorize the syntax.
Add User-Agent Rules
Specify which search engine crawlers your rules apply to by selecting user agents such as Googlebot, Bingbot, or use the wildcard (*) to apply rules to all crawlers universally.
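Each user-agent line starts a new rule group, and a crawler follows the most specific group that matches its name. For example, this sketch (with a hypothetical `/drafts/` path) gives Googlebot its own rule while blocking all other crawlers:

```
User-agent: Googlebot
Disallow: /drafts/

User-agent: *
Disallow: /
```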
Define Allow and Disallow Paths
Add directory paths and URL patterns that should be allowed or disallowed for each user agent. Common disallowed paths include admin panels, private directories, and duplicate content sections.
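A typical rule group mixing Allow and Disallow might look like the following (all paths are hypothetical). Under RFC 9309, when rules conflict, the most specific (longest) matching rule wins, so the Allow line below re-opens one subdirectory of a blocked area:

```
User-agent: *
Disallow: /admin/
Disallow: /private/
Disallow: /search
Allow: /admin/public-docs/
```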
Add Sitemap Reference
Include your XML sitemap URL in the robots.txt file so search engines can efficiently discover and crawl all important pages on your website.
Generate and Download
Review the generated robots.txt content for correctness, then copy it or download the file. Upload it to the root directory of your web server for search engines to find.
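One way to sanity-check a generated file before uploading it is Python's standard-library `urllib.robotparser`, which evaluates rules the same way a well-behaved crawler would. The rules and URLs below are placeholders for illustration:

```python
from urllib import robotparser

# Hypothetical generated rules; the blocked path is a placeholder.
rules = """\
User-agent: *
Disallow: /admin/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# can_fetch(user_agent, url) reports whether the rules permit crawling a URL.
print(rp.can_fetch("*", "https://example.com/blog/post"))    # True
print(rp.can_fetch("*", "https://example.com/admin/login"))  # False
```

Note that `urllib.robotparser` implements the classic exclusion standard, so results for wildcard-heavy patterns may differ slightly from Google's matcher.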
Common Use Cases
New Website SEO Setup
Create a comprehensive robots.txt file as part of your initial SEO configuration, ensuring crawlers can access important content while avoiding non-public areas.
Staging Environment Protection
Generate a robots.txt that blocks all crawlers for staging and development environments to prevent search engines from indexing test content.
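The block-everything file for a staging site is just two lines:

```
User-agent: *
Disallow: /
```

Keep in mind this only discourages crawling; for staging environments, HTTP authentication or IP restrictions are the reliable way to keep content out of search engines entirely.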
Crawl Budget Optimization
Control which sections of large websites get crawled to optimize your crawl budget, directing search engines to prioritize your most valuable pages.
E-Commerce Store Management
Block crawlers from accessing shopping cart pages, checkout flows, internal search results, and filtered product listings that create duplicate content.
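A store's rule set along these lines might look like the following sketch (the paths and the `filter` query parameter are hypothetical; `*` wildcards in paths are supported by major crawlers per RFC 9309):

```
User-agent: *
Disallow: /cart/
Disallow: /checkout/
Disallow: /search
Disallow: /*?filter=
```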
Pro Tips
- Always place your robots.txt file in the root directory of your domain (e.g., example.com/robots.txt), as crawlers only check this specific location.
- Use Disallow sparingly and test with Google Search Console's robots.txt tester to ensure you are not accidentally blocking important pages from being crawled.
- Remember that robots.txt is publicly accessible, so never use it to hide sensitive URLs; listing them actually draws attention to those paths.
- Include a Crawl-delay directive for well-behaved bots if your server has limited resources, but note that Googlebot does not honor this directive.
You might also like
JWT Decoder
Decode and inspect JSON Web Tokens without sending them to a server.
Developer
Color Converter
Convert colors between HEX, RGB, HSL and preview them live.
Developer
JPG ↔ PNG
Convert images between JPG and PNG instantly, without uploading to any server.
Image
GCD & LCM Calculator
Find the Greatest Common Divisor and Least Common Multiple.
Math