python-docs-fr/library/urllib.robotparser.po

# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2001-2016, Python Software Foundation
# This file is distributed under the same license as the Python package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: Python 3.6\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-04-02 22:11+0200\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
#: ../Doc/library/urllib.robotparser.rst:2
msgid ":mod:`urllib.robotparser` --- Parser for robots.txt"
msgstr ""
#: ../Doc/library/urllib.robotparser.rst:10
msgid "**Source code:** :source:`Lib/urllib/robotparser.py`"
msgstr ""
#: ../Doc/library/urllib.robotparser.rst:20
msgid ""
"This module provides a single class, :class:`RobotFileParser`, which answers "
"questions about whether or not a particular user agent can fetch a URL on "
"the Web site that published the :file:`robots.txt` file. For more details "
"on the structure of :file:`robots.txt` files, see http://www.robotstxt.org/"
"orig.html."
msgstr ""
#: ../Doc/library/urllib.robotparser.rst:28
msgid ""
"This class provides methods to read, parse and answer questions about the :"
"file:`robots.txt` file at *url*."
msgstr ""
#: ../Doc/library/urllib.robotparser.rst:33
msgid "Sets the URL referring to a :file:`robots.txt` file."
msgstr ""
#: ../Doc/library/urllib.robotparser.rst:37
msgid "Reads the :file:`robots.txt` URL and feeds it to the parser."
msgstr ""
#: ../Doc/library/urllib.robotparser.rst:41
msgid "Parses the lines argument."
msgstr ""
#: ../Doc/library/urllib.robotparser.rst:45
msgid ""
"Returns ``True`` if the *useragent* is allowed to fetch the *url* according "
"to the rules contained in the parsed :file:`robots.txt` file."
msgstr ""
#: ../Doc/library/urllib.robotparser.rst:51
msgid ""
"Returns the time the ``robots.txt`` file was last fetched. This is useful "
"for long-running web spiders that need to check for new ``robots.txt`` files "
"periodically."
msgstr ""
#: ../Doc/library/urllib.robotparser.rst:57
msgid ""
"Sets the time the ``robots.txt`` file was last fetched to the current time."
msgstr ""
#: ../Doc/library/urllib.robotparser.rst:62
msgid ""
"Returns the value of the ``Crawl-delay`` parameter from ``robots.txt`` for "
"the *useragent* in question. If there is no such parameter or it doesn't "
"apply to the *useragent* specified or the ``robots.txt`` entry for this "
"parameter has invalid syntax, return ``None``."
msgstr ""
#: ../Doc/library/urllib.robotparser.rst:71
msgid ""
"Returns the contents of the ``Request-rate`` parameter from ``robots.txt`` "
"in the form of a :func:`~collections.namedtuple` ``(requests, seconds)``. "
"If there is no such parameter or it doesn't apply to the *useragent* "
"specified or the ``robots.txt`` entry for this parameter has invalid syntax, "
"return ``None``."
msgstr ""
#: ../Doc/library/urllib.robotparser.rst:80
msgid ""
"The following example demonstrates basic use of the :class:`RobotFileParser` "
"class::"
msgstr ""