python-docs-fr/library/tokenize.po

# SOME DESCRIPTIVE TITLE.
# Copyright (C) 1990-2016, Python Software Foundation
# This file is distributed under the same license as the Python package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: Python 2.7\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2016-10-30 10:44+0100\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
#: ../Doc/library/tokenize.rst:2
msgid ":mod:`tokenize` --- Tokenizer for Python source"
msgstr ":mod:`tokenize` --- Analyseur lexical de Python"
#: ../Doc/library/tokenize.rst:9
msgid "**Source code:** :source:`Lib/tokenize.py`"
msgstr "**Code source :** :source:`Lib/tokenize.py`"
#: ../Doc/library/tokenize.rst:13
msgid ""
"The :mod:`tokenize` module provides a lexical scanner for Python source "
"code, implemented in Python. The scanner in this module returns comments as "
"tokens as well, making it useful for implementing \"pretty-printers,\" "
"including colorizers for on-screen displays."
msgstr ""
"Le module :mod:`tokenize` fournit un analyseur lexical pour du code source "
"Python, implémenté en Python. L'analyseur de ce module renvoie aussi les "
"commentaires sous forme de *tokens*, ce qui le rend intéressant pour "
"implémenter des *pretty-printers*, typiquement pour faire de la coloration "
"syntaxique."
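# NDT : esquisse minimale (hors documentation d'origine) illustrant que les
# commentaires sont bien renvoyés comme des jetons ; on suppose une entrée en
# mémoire via StringIO (Python 2.7) :
#
#     import tokenize, StringIO
#
#     src = StringIO.StringIO("x = 1  # un commentaire\n")
#     for tok_type, tok_str, debut, fin, ligne in tokenize.generate_tokens(src.readline):
#         if tok_type == tokenize.COMMENT:
#             print(tok_str)  # affiche "# un commentaire"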
#: ../Doc/library/tokenize.rst:18
msgid ""
"To simplify token stream handling, all :ref:`operators` and :ref:"
"`delimiters` tokens are returned using the generic :data:`token.OP` token "
"type. The exact type can be determined by checking the second field "
"(containing the actual token string matched) of the tuple returned from :"
"func:`tokenize.generate_tokens` for the character sequence that identifies a "
"specific operator token."
msgstr ""
#: ../Doc/library/tokenize.rst:25
msgid "The primary entry point is a :term:`generator`:"
msgstr "Le point d'entrée principal est un :term:`générateur <generator>` :"
#: ../Doc/library/tokenize.rst:29
msgid ""
"The :func:`generate_tokens` generator requires one argument, *readline*, "
"which must be a callable object which provides the same interface as the :"
"meth:`~file.readline` method of built-in file objects (see section :ref:"
"`bltin-file-objects`). Each call to the function should return one line of "
"input as a string. Alternately, *readline* may be a callable object that "
"signals completion by raising :exc:`StopIteration`."
msgstr ""
#: ../Doc/library/tokenize.rst:36
msgid ""
"The generator produces 5-tuples with these members: the token type; the "
"token string; a 2-tuple ``(srow, scol)`` of ints specifying the row and "
"column where the token begins in the source; a 2-tuple ``(erow, ecol)`` of "
"ints specifying the row and column where the token ends in the source; and "
"the line on which the token was found. The line passed (the last tuple "
"item) is the *logical* line; continuation lines are included."
msgstr ""
#: ../Doc/library/tokenize.rst:45
msgid "An older entry point is retained for backward compatibility:"
msgstr ""
#: ../Doc/library/tokenize.rst:50
msgid ""
"The :func:`.tokenize` function accepts two parameters: one representing the "
"input stream, and one providing an output mechanism for :func:`.tokenize`."
msgstr ""
#: ../Doc/library/tokenize.rst:53
msgid ""
"The first parameter, *readline*, must be a callable object which provides "
"the same interface as the :meth:`~file.readline` method of built-in file "
"objects (see section :ref:`bltin-file-objects`). Each call to the function "
"should return one line of input as a string. Alternately, *readline* may be "
"a callable object that signals completion by raising :exc:`StopIteration`."
msgstr ""
#: ../Doc/library/tokenize.rst:59
msgid "Added :exc:`StopIteration` support."
msgstr ""
#: ../Doc/library/tokenize.rst:62
msgid ""
"The second parameter, *tokeneater*, must also be a callable object. It is "
"called once for each token, with five arguments, corresponding to the tuples "
"generated by :func:`generate_tokens`."
msgstr ""
#: ../Doc/library/tokenize.rst:66
msgid ""
"All constants from the :mod:`token` module are also exported from :mod:"
"`tokenize`, as are two additional token type values that might be passed to "
"the *tokeneater* function by :func:`.tokenize`:"
msgstr ""
#: ../Doc/library/tokenize.rst:73
msgid "Token value used to indicate a comment."
msgstr "Valeur du jeton utilisée pour indiquer un commentaire."
#: ../Doc/library/tokenize.rst:78
msgid ""
"Token value used to indicate a non-terminating newline. The NEWLINE token "
"indicates the end of a logical line of Python code; NL tokens are generated "
"when a logical line of code is continued over multiple physical lines."
msgstr ""
#: ../Doc/library/tokenize.rst:82
msgid ""
"Another function is provided to reverse the tokenization process. This is "
"useful for creating tools that tokenize a script, modify the token stream, "
"and write back the modified script."
msgstr ""
"Une autre fonction est fournie pour inverser le processus de tokenisation. "
"Ceci est utile pour créer des outils permettant de *tokeniser* un script, "
"de modifier le flux de jetons et de réécrire le script modifié."
#: ../Doc/library/tokenize.rst:89
msgid ""
"Converts tokens back into Python source code. The *iterable* must return "
"sequences with at least two elements, the token type and the token string. "
"Any additional sequence elements are ignored."
msgstr ""
#: ../Doc/library/tokenize.rst:93
msgid ""
"The reconstructed script is returned as a single string. The result is "
"guaranteed to tokenize back to match the input so that the conversion is "
"lossless and round-trips are assured. The guarantee applies only to the "
"token type and token string as the spacing between tokens (column positions) "
"may change."
msgstr ""
"Le script reconstruit est renvoyé sous la forme d'une chaîne unique. Il "
"est garanti que sa re-tokenisation redonne des jetons correspondant à "
"l'entrée, de sorte que la conversion est sans perte et que les "
"allers-retours sont assurés. La garantie ne s'applique qu'au type et à la "
"chaîne des jetons, car l'espacement entre les jetons (positions des "
"colonnes) peut changer."
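# NDT : esquisse (hors documentation d'origine) vérifiant la garantie
# d'aller-retour sur le couple (type, chaîne) :
#
#     import tokenize, StringIO
#
#     src = "if x:\n    y = 1\n"
#     jetons = list(tokenize.generate_tokens(StringIO.StringIO(src).readline))
#     regen = tokenize.untokenize(jetons)
#     rejetons = list(tokenize.generate_tokens(StringIO.StringIO(regen).readline))
#     assert [j[:2] for j in jetons] == [j[:2] for j in rejetons]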
#: ../Doc/library/tokenize.rst:103
msgid ""
"Raised when either a docstring or expression that may be split over several "
"lines is not completed anywhere in the file, for example::"
msgstr ""
"Levée lorsqu'une *docstring* ou une expression potentiellement répartie "
"sur plusieurs lignes n'est pas terminée dans le fichier, par exemple ::"
#: ../Doc/library/tokenize.rst:109
msgid "or::"
msgstr "ou : ::"
#: ../Doc/library/tokenize.rst:115
msgid ""
"Note that unclosed single-quoted strings do not cause an error to be raised. "
"They are tokenized as ``ERRORTOKEN``, followed by the tokenization of their "
"contents."
msgstr ""
#: ../Doc/library/tokenize.rst:119
msgid ""
"Example of a script re-writer that transforms float literals into Decimal "
"objects::"
msgstr ""