
WWW::RobotRules parses /robots.txt files as specified in "A Standard for Robot Exclusion". Webmasters can use the /robots.txt file to forbid conforming robots from accessing parts of their web site.
The parsed files are kept in a WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited. The same WWW::RobotRules object can be used for one or more parsed /robots.txt files on any number of hosts.
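
As a minimal sketch of that workflow: the robot name and the example host below are placeholders, and fetching the file with LWP::Simple is only one way to obtain the /robots.txt content before handing it to the object.

    use WWW::RobotRules;
    use LWP::Simple qw(get);

    # The constructor takes the robot's User-Agent name
    # ("ExampleBot/1.0" is a hypothetical name).
    my $rules = WWW::RobotRules->new('ExampleBot/1.0');

    # Fetch and parse a site's /robots.txt
    # (example.com is a placeholder host).
    my $robots_url = 'http://example.com/robots.txt';
    my $robots_txt = get($robots_url);
    $rules->parse($robots_url, $robots_txt) if defined $robots_txt;

    # The same object can now answer queries for any URL on the
    # hosts whose /robots.txt files it has parsed.
    my $page = 'http://example.com/private/page.html';
    if ($rules->allowed($page)) {
        my $content = get($page);
        # ... process $content ...
    }

Parsing additional /robots.txt files from other hosts into the same object works the same way; allowed() dispatches on the host and port of the URL being checked.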