+summary : WWW::RobotRules - database of robots.txt-derived permissions
+description: |
+ This module parses /robots.txt files as specified in "A Standard for Robot Exclusion", at <http://www.robotstxt.org/wc/norobots.html>. Webmasters can use the /robots.txt file to forbid conforming robots from accessing parts of their web site. The parsed files are kept in a WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited. The same WWW::RobotRules object can be used for one or more parsed /robots.txt files on any number of hosts.
+rundeps :
+ - perl-uri
+setup : |
+ %perl_setup
+build : |
+ %perl_build
+install : |
+ %perl_install
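
For context on what this package ships: a minimal usage sketch of WWW::RobotRules, following the new/parse/allowed interface its documentation describes. The agent name and URLs below are placeholders, and LWP::Simple stands in for whatever HTTP client a real robot would use.

    use strict;
    use warnings;
    use WWW::RobotRules;
    use LWP::Simple qw(get);

    # One WWW::RobotRules object can hold parsed robots.txt data for many hosts.
    my $rules = WWW::RobotRules->new('MyBot/1.0');    # agent name the rules are matched against

    for my $robots_url ('http://example.com/robots.txt',
                        'http://example.org/robots.txt') {
        my $robots_txt = get($robots_url);
        # Each fetched robots.txt is parsed into the same rules database.
        $rules->parse($robots_url, $robots_txt) if defined $robots_txt;
    }

    # Ask the rules object whether a URL may be fetched before requesting it.
    my $page = 'http://example.com/private/index.html';
    if ($rules->allowed($page)) {
        my $content = get($page);
        # ... process $content
    }
    else {
        warn "robots.txt disallows $page\n";
    }
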
diff --git a/pspec_x86_64.xml b/pspec_x86_64.xml
new file mode 100644
--- /dev/null
+++ b/pspec_x86_64.xml
@@ -0,0 +1,37 @@
+<PISI>
+ <Source>
+ <Name>perl-www-robotrules</Name>
+ <Packager>
+ <Name>Joey Riches</Name>
+ <Email>josephriches@gmail.com</Email>
+ </Packager>
+ <License>Artistic-1.0-Perl</License>
+ <PartOf>programming.perl</PartOf>
+ <Summary xml:lang="en">WWW::RobotRules - database of robots.txt-derived permissions</Summary>
+ <Description xml:lang="en">This module parses /robots.txt files as specified in "A Standard for Robot Exclusion", at &lt;http://www.robotstxt.org/wc/norobots.html&gt;. Webmasters can use the /robots.txt file to forbid conforming robots from accessing parts of their web site. The parsed files are kept in a WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited. The same WWW::RobotRules object can be used for one or more parsed /robots.txt files on any number of hosts.</Description>
+ </Source>
+ <Package>
+ <Name>perl-www-robotrules</Name>
+ <Summary xml:lang="en">WWW::RobotRules - database of robots.txt-derived permissions</Summary>
+ <Description xml:lang="en">This module parses /robots.txt files as specified in "A Standard for Robot Exclusion", at &lt;http://www.robotstxt.org/wc/norobots.html&gt;. Webmasters can use the /robots.txt file to forbid conforming robots from accessing parts of their web site. The parsed files are kept in a WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited. The same WWW::RobotRules object can be used for one or more parsed /robots.txt files on any number of hosts.</Description>