Saturday, September 29, 2007

robots.txt Examples

Examples

This example allows all robots to visit all files because the wildcard "*" specifies all robots:

User-agent: *
Disallow:

This example keeps all robots out:

User-agent: *
Disallow: /

The next example tells all crawlers not to enter four directories of a website:

User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /tmp/
Disallow: /private/

This example tells a specific crawler not to enter one specific directory:

User-agent: BadBot
Disallow: /private/

This example tells all crawlers not to enter one specific file:

User-agent: *
Disallow: /directory/file.html

Note that all other files in the specified directory will be processed.
Example demonstrating how comments can be used:

# Comments appear after the "#" symbol at the start of a line, or after a directive
User-agent: * # match all bots
Disallow: / # keep them out
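
For readers who want to see how a crawler applies rules like these, here is a minimal sketch using Python's standard urllib.robotparser module; the bot names, domain, and paths are illustrative only and are not taken from any real site.

from urllib.robotparser import RobotFileParser

# Rules similar to the examples above (illustrative only)
rules = """
User-agent: BadBot
Disallow: /private/

User-agent: *
Disallow: /tmp/
Disallow: /directory/file.html
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("GoodBot", "http://www.example.com/index.html"))           # True
print(rp.can_fetch("GoodBot", "http://www.example.com/tmp/scratch.html"))     # False
print(rp.can_fetch("GoodBot", "http://www.example.com/directory/file.html"))  # False
print(rp.can_fetch("BadBot", "http://www.example.com/private/data.html"))     # False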

Compatibility
To prevent robots from accessing any page, do not use

Disallow: *

as this is not a stable standard extension.
Instead:

Disallow: /

should be used.

Sitemaps auto-discovery
The Sitemap parameter is supported by the major crawlers (including Google, Yahoo!, MSN, and Ask). It specifies the location of the site's Sitemap, a file listing the URLs of the site. This parameter is independent of the User-agent parameter, so it can be placed anywhere in the file.

Sitemap: http://www.kateep.com/sitemap.xml

An explanation of how to author Sitemap files can be found at sitemaps.org.
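
Recent versions of Python's urllib.robotparser (3.8 and later) can also report any Sitemap lines found in a robots.txt file. A minimal sketch, using an illustrative in-memory robots.txt body rather than a real site:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /tmp/

Sitemap: http://www.example.com/sitemap.xml
""".splitlines())

# Returns the list of Sitemap URLs, or None if the file declares none
print(rp.site_maps())  # ['http://www.example.com/sitemap.xml']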

Nonstandard extensions
Several crawlers support a Crawl-delay parameter, set to the number of seconds to wait between successive requests to the same server:

User-agent: *
Crawl-delay: 10
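
A crawler written in Python can read this value with the standard urllib.robotparser module (crawl_delay() is available in Python 3.6 and later); the rule set below is illustrative:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("""
User-agent: *
Crawl-delay: 10
""".splitlines())

# Seconds to wait between requests for this user agent, or None if not set
print(rp.crawl_delay("AnyBot"))  # 10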

Extended Standard
An Extended Standard for Robot Exclusion has been proposed, which adds several new directives, such as Visit-time and Request-rate. For example:

User-agent: *
Disallow: /downloads/
Request-rate: 1/5 # maximum rate is one page every 5 seconds
Visit-time: 0600-0845 # only visit between 6:00 AM and 8:45 AM UT (GMT)
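
Of these proposed directives, Request-rate is one that Python's standard urllib.robotparser understands (via request_rate(), available in 3.6 and later); Visit-time is not parsed by it. A minimal sketch with an illustrative rule set:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("""
User-agent: *
Request-rate: 1/5
""".splitlines())

rate = rp.request_rate("AnyBot")    # a named tuple, or None if the directive is absent
print(rate.requests, rate.seconds)  # 1 5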

The first version of the Robot Exclusion standard does not mention the "*" character in the Disallow: statement. Modern crawlers such as Googlebot and Slurp recognize strings containing "*", while MSNbot and Teoma interpret it in different ways.

Alternatives
While robots.txt is the older and more widely accepted method, there are other methods (which can be used together with robots.txt) that allow greater control, such as disabling indexing of images only or disabling archiving of page contents.

HTML meta tags for robots
HTML meta tags can be used to exclude robots on a per-page basis. Again, this is purely advisory and relies on the cooperation of the robot programs. For example,

<meta content="noindex,nofollow" name="robots">

within the HEAD section of an HTML document tells search engines such as Google, Yahoo!, or MSN to exclude the page from their indexes and not to follow any links on the page for further indexing.
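
To see how a crawler might detect such a tag, here is a minimal sketch using Python's standard html.parser module; the class name and sample markup are illustrative only:

from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    # Collects the comma-separated values of any meta name="robots" tag
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            content = attrs.get("content") or ""
            self.directives.update(v.strip().lower() for v in content.split(","))

parser = RobotsMetaParser()
parser.feed('<html><head><meta content="noindex,nofollow" name="robots"></head></html>')
print("noindex" in parser.directives, "nofollow" in parser.directives)  # True True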

Edit by Kateep.com
