Commit 428de65c authored by Trent Nelson

- Issue #719888: Updated tokenize to use a bytes API. generate_tokens has been
  renamed tokenize and now works with bytes rather than strings. A new
  detect_encoding function has been added for determining source file encoding
  according to PEP-0263. Token sequences returned by tokenize always start
  with an ENCODING token which specifies the encoding used to decode the file.
  This token is used to encode the output of untokenize back to bytes.

Credit goes to Michael "I'm-going-to-name-my-first-child-unittest" Foord from Resolver Systems for this work.
parent 112367a9
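
A minimal sketch (not part of the commit) of the bytes API described above; the inline source bytes are invented, but ``tokenize``, ``untokenize`` and ``ENCODING`` are the names this change introduces::

   from io import BytesIO
   from tokenize import ENCODING, tokenize, untokenize

   # tokenize() now wants a readline callable that returns bytes.
   source = b"# -*- coding: utf-8 -*-\nx = 1 + 2\n"
   tokens = list(tokenize(BytesIO(source).readline))

   # The first token is always an ENCODING token naming the decoding used.
   assert tokens[0][0] == ENCODING
   assert tokens[0][1] == 'utf-8'

   # untokenize() uses that ENCODING token to encode its output back to bytes.
   assert isinstance(untokenize(tokens), bytes)
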
@@ -209,3 +209,5 @@ docs@python.org), and we'll be glad to correct the problem.
* Moshe Zadka
* Milan Zamazal
* Cheng Zhang
* Trent Nelson
* Michael Foord
@@ -9,50 +9,34 @@
The :mod:`tokenize` module provides a lexical scanner for Python source code,
implemented in Python. The scanner in this module returns comments as tokens
as well, making it useful for implementing "pretty-printers," including
colorizers for on-screen displays.

The primary entry point is a :term:`generator`:

.. function:: tokenize(readline)

   The :func:`tokenize` generator requires one argument, *readline*, which
   must be a callable object which provides the same interface as the
   :meth:`readline` method of built-in file objects (see section
   :ref:`bltin-file-objects`). Each call to the function should return one
   line of input as bytes.

   The generator produces 5-tuples with these members: the token type; the
   token string; a 2-tuple ``(srow, scol)`` of ints specifying the row and
   column where the token begins in the source; a 2-tuple ``(erow, ecol)`` of
   ints specifying the row and column where the token ends in the source; and
   the line on which the token was found. The line passed is the *logical*
   line; continuation lines are included.

   :func:`tokenize` determines the source encoding of the file by looking for
   a UTF-8 BOM or encoding cookie, according to :pep:`263`.

All constants from the :mod:`token` module are also exported from
:mod:`tokenize`, as are three additional token type values:

.. data:: COMMENT
@@ -62,55 +46,95 @@ the *tokeneater* function by :func:`tokenize`:
.. data:: NL

   Token value used to indicate a non-terminating newline. The NEWLINE token
   indicates the end of a logical line of Python code; NL tokens are generated
   when a logical line of code is continued over multiple physical lines.

.. data:: ENCODING

   Token value that indicates the encoding used to decode the source bytes
   into text. The first token returned by :func:`tokenize` will always be an
   ENCODING token.

Another function is provided to reverse the tokenization process. This is
useful for creating tools that tokenize a script, modify the token stream, and
write back the modified script.

.. function:: untokenize(iterable)

   Converts tokens back into Python source code. The *iterable* must return
   sequences with at least two elements, the token type and the token string.
   Any additional sequence elements are ignored.

   The reconstructed script is returned as a single bytes object, encoded
   using the ENCODING token, which is the first token sequence output by
   :func:`tokenize`. The result is guaranteed to tokenize back to match the
   input so that the conversion is lossless and round-trips are assured. The
   guarantee applies only to the token type and token string, as the spacing
   between tokens (column positions) may change.

:func:`tokenize` needs to detect the encoding of source files it tokenizes.
The function it uses to do this is available:

.. function:: detect_encoding(readline)

   The :func:`detect_encoding` function is used to detect the encoding that
   should be used to decode a Python source file. It requires one argument,
   *readline*, in the same way as the :func:`tokenize` generator.

   It will call *readline* a maximum of twice, and return the encoding used
   (as a string) and a list of any lines (not decoded from bytes) it has
   read in.

   It detects the encoding from the presence of a UTF-8 BOM or an encoding
   cookie as specified in :pep:`263`. If both a BOM and a cookie are present
   but disagree, a :exc:`SyntaxError` will be raised.

   If no encoding is specified, then the default of ``'utf-8'`` will be
   returned.

Example of a script re-writer that transforms float literals into Decimal
objects::

   def decistmt(s):
       """Substitute Decimals for floats in a string of statements.

       >>> from decimal import Decimal
       >>> s = 'print(+21.3e-5*-.1234/81.7)'
       >>> decistmt(s)
       "print (+Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7'))"

       The format of the exponent is inherited from the platform C library.
       Known cases are "e-007" (Windows) and "e-07" (not Windows). Since
       we're only showing 12 digits, and the 13th isn't close to 5, the
       rest of the output should be platform-independent.

       >>> exec(s) #doctest: +ELLIPSIS
       -3.21716034272e-0...7

       Output from calculations with Decimal should be identical across all
       platforms.

       >>> exec(decistmt(s))
       -3.217160342717258261933904529E-7
       """
       result = []
       g = tokenize(BytesIO(s.encode('utf-8')).readline)  # tokenize the string
       for toknum, tokval, _, _, _ in g:
           if toknum == NUMBER and '.' in tokval:  # replace NUMBER tokens
               result.extend([
                   (NAME, 'Decimal'),
                   (OP, '('),
                   (STRING, repr(tokval)),
                   (OP, ')')
               ])
           else:
               result.append((toknum, tokval))
       return untokenize(result).decode('utf-8')
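
For reference, a small usage sketch of :func:`detect_encoding` (not part of the committed docs; the inline source is made up)::

   from io import BytesIO
   from tokenize import detect_encoding

   # A source whose first line carries a PEP 263 coding cookie.
   source = b"# -*- coding: iso-8859-1 -*-\nnom = 'caf\xe9'\n"

   encoding, lines = detect_encoding(BytesIO(source).readline)
   print(encoding)  # 'iso-8859-1', taken from the cookie
   print(lines)     # the raw, undecoded lines consumed (here, just the cookie line)
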
@@ -392,6 +392,9 @@ details.
* The functions :func:`os.tmpnam`, :func:`os.tempnam` and :func:`os.tmpfile`
  have been removed in favor of the :mod:`tempfile` module.

* The :mod:`tokenize` module has been changed to work with bytes. The main
  entry point is now :func:`tokenize.tokenize`, instead of generate_tokens.

.. ======================================================================
.. whole new modules get described in subsections here
...
@@ -1437,7 +1437,9 @@ class IndentSearcher(object):
         _tokenize.tabsize = self.tabwidth
         try:
             try:
-                _tokenize.tokenize(self.readline, self.tokeneater)
+                tokens = _tokenize.generate_tokens(self.readline)
+                for token in tokens:
+                    self.tokeneater(*token)
             except _tokenize.TokenError:
                 # since we cut off the tokenizer early, we can trigger
                 # spurious errors
...
@@ -657,7 +657,9 @@ def getblock(lines):
     """Extract the block of code at the top of the given list of lines."""
     blockfinder = BlockFinder()
     try:
-        tokenize.tokenize(iter(lines).__next__, blockfinder.tokeneater)
+        tokens = tokenize.generate_tokens(iter(lines).__next__)
+        for _token in tokens:
+            blockfinder.tokeneater(*_token)
     except (EndOfBlock, IndentationError):
         pass
     return lines[:blockfinder.last]
...
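
The same migration pattern recurs in the Tools/ hunks further down: the callback-driven ``tokenize.tokenize(readline, tokeneater)`` call is replaced by iterating ``generate_tokens()`` and invoking the callback on each 5-tuple. A standalone sketch of the pattern, with an invented callback and source string::

   from io import StringIO
   import tokenize

   def print_token(toktype, tokstring, start, end, line):
       # Stand-in for an existing tokeneater-style callback.
       print(tokenize.tok_name[toktype], repr(tokstring))

   source = "x = 1\n"

   # Old style (removed by this commit):
   #     tokenize.tokenize(StringIO(source).readline, print_token)
   # New style: drive the callback from the generator yourself.
   for token in tokenize.generate_tokens(StringIO(source).readline):
       print_token(*token)
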
This diff is collapsed.
# -*- coding: latin1 -*-
# IMPORTANT: this file has the utf-8 BOM signature '\xef\xbb\xbf'
# at the start of it. Make sure this is preserved if any changes
# are made! Also note that the coding cookie above conflicts with
# the presence of a utf-8 BOM signature -- this is intended.
# Arbitrary encoded utf-8 text (stolen from test_doctest2.py).
x = 'ЉЊЈЁЂ'
def y():
"""
And again in a comment. ЉЊЈЁЂ
"""
pass
# IMPORTANT: this file has the utf-8 BOM signature '\xef\xbb\xbf'
# at the start of it. Make sure this is preserved if any changes
# are made!
# Arbitrary encoded utf-8 text (stolen from test_doctest2.py).
x = 'ЉЊЈЁЂ'
def y():
"""
And again in a comment. ЉЊЈЁЂ
"""
pass
# -*- coding: utf-8 -*-
# IMPORTANT: unlike the other test_tokenize-*.txt files, this file
# does NOT have the utf-8 BOM signature '\xef\xbb\xbf' at the start
# of it. Make sure this is not added inadvertently by your editor
# if any changes are made to this file!
# Arbitrary encoded utf-8 text (stolen from test_doctest2.py).
x = 'ЉЊЈЁЂ'
def y():
"""
And again in a comment. ЉЊЈЁЂ
"""
pass
# -*- coding: utf-8 -*-
# IMPORTANT: this file has the utf-8 BOM signature '\xef\xbb\xbf'
# at the start of it. Make sure this is preserved if any changes
# are made!
# Arbitrary encoded utf-8 text (stolen from test_doctest2.py).
x = 'ЉЊЈЁЂ'
def y():
"""
And again in a comment. ЉЊЈЁЂ
"""
pass
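
These fixture files exercise :func:`detect_encoding`'s handling of the UTF-8 BOM and the coding cookie. A rough sketch of the behaviour they test, using made-up inline sources instead of the files themselves::

   from codecs import BOM_UTF8
   from io import BytesIO
   from tokenize import detect_encoding

   # A BOM with a matching (or absent) cookie is accepted; the file is decoded as utf-8.
   ok = BOM_UTF8 + b"# -*- coding: utf-8 -*-\nx = 1\n"
   encoding, _ = detect_encoding(BytesIO(ok).readline)
   print(encoding)  # a utf-8 encoding

   # A BOM combined with a conflicting cookie (as in the first fixture above) is rejected.
   conflict = BOM_UTF8 + b"# -*- coding: latin1 -*-\nx = 1\n"
   try:
       detect_encoding(BytesIO(conflict).readline)
   except SyntaxError as exc:
       print("conflicting BOM and coding cookie:", exc)
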
This diff is collapsed.
@@ -752,3 +752,5 @@ Artur Zaprzala
Mike Zarnstorff
Siebren van der Zee
Uwe Zessin
Trent Nelson
Michael Foord
@@ -41,6 +41,12 @@ Library
- Issue #1202: zlib.crc32 and zlib.adler32 now return an unsigned value.

- Issue #719888: Updated tokenize to use a bytes API. generate_tokens has been
  renamed tokenize and now works with bytes rather than strings. A new
  detect_encoding function has been added for determining source file encoding
  according to PEP-0263. Token sequences returned by tokenize always start
  with an ENCODING token which specifies the encoding used to decode the file.
  This token is used to encode the output of untokenize back to bytes.

What's New in Python 3.0a3?
===========================
@@ -175,7 +181,6 @@ Library
- Issue #1578: Problems in win_getpass.

Build
-----
...
@@ -631,7 +631,9 @@ def main():
         try:
             eater.set_filename(filename)
             try:
-                tokenize.tokenize(fp.readline, eater)
+                tokens = tokenize.generate_tokens(fp.readline)
+                for _token in tokens:
+                    eater(*_token)
             except tokenize.TokenError as e:
                 print('%s: %s, line %d, column %d' % (
                     e.args[0], filename, e.args[1][0], e.args[1][1]),
...
@@ -103,7 +103,9 @@ class AppendChecker:
     def run(self):
         try:
-            tokenize.tokenize(self.file.readline, self.tokeneater)
+            tokens = tokenize.generate_tokens(self.file.readline)
+            for _token in tokens:
+                self.tokeneater(*_token)
         except tokenize.TokenError as msg:
             errprint("%r: Token Error: %s" % (self.fname, msg))
             self.nerrors = self.nerrors + 1
...
@@ -173,7 +173,9 @@ class Reindenter:
         self.stats = []

     def run(self):
-        tokenize.tokenize(self.getline, self.tokeneater)
+        tokens = tokenize.generate_tokens(self.getline)
+        for _token in tokens:
+            self.tokeneater(*_token)
         # Remove trailing empty lines.
         lines = self.lines
         while lines and lines[-1] == "\n":
...