# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2001-2016, Python Software Foundation
# This file is distributed under the same license as the Python package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: Python 3.6\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-09-21 09:15+0200\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"Language: fr\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
#: ../Doc/library/tokenize.rst:2
msgid ":mod:`tokenize` --- Tokenizer for Python source"
msgstr ""
#: ../Doc/library/tokenize.rst:10
msgid "**Source code:** :source:`Lib/tokenize.py`"
msgstr "**Code source :** :source:`Lib/tokenize.py`"
#: ../Doc/library/tokenize.rst:14
msgid ""
"The :mod:`tokenize` module provides a lexical scanner for Python source "
"code, implemented in Python. The scanner in this module returns comments as "
"tokens as well, making it useful for implementing \"pretty-printers,\" "
"including colorizers for on-screen displays."
msgstr ""
#: ../Doc/library/tokenize.rst:19
msgid ""
"To simplify token stream handling, all :ref:`operator <operators>` and :ref:"
"`delimiter <delimiters>` tokens and :data:`Ellipsis` are returned using the "
"generic :data:`~token.OP` token type. The exact type can be determined by "
"checking the ``exact_type`` property on the :term:`named tuple` returned "
"from :func:`tokenize.tokenize`."
msgstr ""
#: ../Doc/library/tokenize.rst:26
msgid "Tokenizing Input"
msgstr ""
#: ../Doc/library/tokenize.rst:28
msgid "The primary entry point is a :term:`generator`:"
msgstr ""
#: ../Doc/library/tokenize.rst:32
msgid ""
"The :func:`.tokenize` generator requires one argument, *readline*, which "
"must be a callable object which provides the same interface as the :meth:`io."
"IOBase.readline` method of file objects. Each call to the function should "
"return one line of input as bytes."
msgstr ""
#: ../Doc/library/tokenize.rst:37
msgid ""
"The generator produces 5-tuples with these members: the token type; the "
"token string; a 2-tuple ``(srow, scol)`` of ints specifying the row and "
"column where the token begins in the source; a 2-tuple ``(erow, ecol)`` of "
"ints specifying the row and column where the token ends in the source; and "
"the line on which the token was found. The line passed (the last tuple item) "
"is the *logical* line; continuation lines are included. The 5 tuple is "
"returned as a :term:`named tuple` with the field names: ``type string start "
"end line``."
msgstr ""
#: ../Doc/library/tokenize.rst:46
msgid ""
"The returned :term:`named tuple` has an additional property named "
"``exact_type`` that contains the exact operator type for :data:`token.OP` "
"tokens. For all other token types ``exact_type`` equals the named tuple "
"``type`` field."
msgstr ""
#: ../Doc/library/tokenize.rst:51
msgid "Added support for named tuples."
msgstr ""
#: ../Doc/library/tokenize.rst:54
msgid "Added support for ``exact_type``."
msgstr ""
#: ../Doc/library/tokenize.rst:57
msgid ""
":func:`.tokenize` determines the source encoding of the file by looking for "
"a UTF-8 BOM or encoding cookie, according to :pep:`263`."
msgstr ""
#: ../Doc/library/tokenize.rst:61
msgid ""
"All constants from the :mod:`token` module are also exported from :mod:"
"`tokenize`, as are three additional token type values:"
msgstr ""
#: ../Doc/library/tokenize.rst:66
msgid "Token value used to indicate a comment."
msgstr ""
#: ../Doc/library/tokenize.rst:71
msgid ""
"Token value used to indicate a non-terminating newline. The NEWLINE token "
"indicates the end of a logical line of Python code; NL tokens are generated "
"when a logical line of code is continued over multiple physical lines."
msgstr ""
#: ../Doc/library/tokenize.rst:78
msgid ""
"Token value that indicates the encoding used to decode the source bytes into "
"text. The first token returned by :func:`.tokenize` will always be an "
"ENCODING token."
msgstr ""
#: ../Doc/library/tokenize.rst:83
msgid ""
"Another function is provided to reverse the tokenization process. This is "
"useful for creating tools that tokenize a script, modify the token stream, "
"and write back the modified script."
msgstr ""
#: ../Doc/library/tokenize.rst:90
msgid ""
"Converts tokens back into Python source code. The *iterable* must return "
"sequences with at least two elements, the token type and the token string. "
"Any additional sequence elements are ignored."
msgstr ""
#: ../Doc/library/tokenize.rst:94
msgid ""
"The reconstructed script is returned as a single string. The result is "
"guaranteed to tokenize back to match the input so that the conversion is "
"lossless and round-trips are assured. The guarantee applies only to the "
"token type and token string as the spacing between tokens (column positions) "
"may change."
msgstr ""
#: ../Doc/library/tokenize.rst:100
msgid ""
"It returns bytes, encoded using the ENCODING token, which is the first token "
"sequence output by :func:`.tokenize`."
msgstr ""
#: ../Doc/library/tokenize.rst:104
msgid ""
":func:`.tokenize` needs to detect the encoding of source files it tokenizes. "
"The function it uses to do this is available:"
msgstr ""
#: ../Doc/library/tokenize.rst:109
msgid ""
"The :func:`detect_encoding` function is used to detect the encoding that "
"should be used to decode a Python source file. It requires one argument, "
"readline, in the same way as the :func:`.tokenize` generator."
msgstr ""
#: ../Doc/library/tokenize.rst:113
msgid ""
"It will call readline a maximum of twice, and return the encoding used (as a "
"string) and a list of any lines (not decoded from bytes) it has read in."
msgstr ""
#: ../Doc/library/tokenize.rst:117
msgid ""
"It detects the encoding from the presence of a UTF-8 BOM or an encoding "
"cookie as specified in :pep:`263`. If both a BOM and a cookie are present, "
"but disagree, a SyntaxError will be raised. Note that if the BOM is found, "
"``'utf-8-sig'`` will be returned as an encoding."
msgstr ""
#: ../Doc/library/tokenize.rst:122
msgid ""
"If no encoding is specified, then the default of ``'utf-8'`` will be "
"returned."
msgstr ""
#: ../Doc/library/tokenize.rst:125
msgid ""
"Use :func:`.open` to open Python source files: it uses :func:"
"`detect_encoding` to detect the file encoding."
msgstr ""
#: ../Doc/library/tokenize.rst:131
msgid ""
"Open a file in read only mode using the encoding detected by :func:"
"`detect_encoding`."
msgstr ""
#: ../Doc/library/tokenize.rst:138
msgid ""
"Raised when either a docstring or expression that may be split over several "
"lines is not completed anywhere in the file, for example::"
msgstr ""
#: ../Doc/library/tokenize.rst:144
msgid "or::"
msgstr "ou : ::"
#: ../Doc/library/tokenize.rst:150
msgid ""
"Note that unclosed single-quoted strings do not cause an error to be raised. "
"They are tokenized as ``ERRORTOKEN``, followed by the tokenization of their "
"contents."
msgstr ""
#: ../Doc/library/tokenize.rst:158
msgid "Command-Line Usage"
msgstr ""
#: ../Doc/library/tokenize.rst:162
msgid ""
"The :mod:`tokenize` module can be executed as a script from the command "
"line. It is as simple as:"
msgstr ""
#: ../Doc/library/tokenize.rst:169
msgid "The following options are accepted:"
msgstr ""
#: ../Doc/library/tokenize.rst:175
msgid "show this help message and exit"
msgstr ""
#: ../Doc/library/tokenize.rst:179
msgid "display token names using the exact type"
msgstr ""
#: ../Doc/library/tokenize.rst:181
msgid ""
"If :file:`filename.py` is specified its contents are tokenized to stdout. "
"Otherwise, tokenization is performed on stdin."
msgstr ""
#: ../Doc/library/tokenize.rst:185
msgid "Examples"
msgstr "Exemples"
#: ../Doc/library/tokenize.rst:187
msgid ""
"Example of a script rewriter that transforms float literals into Decimal "
"objects::"
msgstr ""
#: ../Doc/library/tokenize.rst:229
msgid "Example of tokenizing from the command line. The script::"
msgstr ""
#: ../Doc/library/tokenize.rst:236
msgid ""
"will be tokenized to the following output where the first column is the "
"range of the line/column coordinates where the token is found, the second "
"column is the name of the token, and the final column is the value of the "
"token (if any)"
msgstr ""
#: ../Doc/library/tokenize.rst:264
msgid "The exact token type names can be displayed using the ``-e`` option:"
msgstr ""