Commit b51eaa18 authored by Guido van Rossum

Fixed doc string, added __version__, fixed 1 bug.

Parent fc6f5339
"""Tokenization help for Python programs.

This module compiles a regular expression that recognizes Python
tokens in individual lines of text.  The regular expression handles
everything except indentation, continuations, and triple-quoted
strings.  The function 'tokenize.tokenize()' takes care of these
things for streams of text.  It accepts a readline-like function which
is called repeatedly to come up with the next input line (or "" for
EOF), and a "token-eater" function which is called for each token
found, passing its type, a string containing the token, the line
number, the line, and the starting and ending positions of the token
within the line.  It is designed to match the working of the Python
tokenizer exactly.
"""
__version__ = "Ka-Ping Yee, 4 March 1997, updated by GvR, 6 March 1997"
import string, regex
from token import *
@@ -117,6 +123,7 @@ def tokenize(readline, tokeneater = printtoken):
        endprog = endprogs[token]
        if endprog.search(line, pos) >= 0:           # all on one line
            pos = endprog.regs[0][1]
            token = line[start:pos]
            tokeneater(STRING, token, linenum, line, start, pos)
        else:
            contstr = line[start:]           # multiple lines
...
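The hunk above is the bug fix named in the commit message: the added `token = line[start:pos]` re-slices the token after the closing quote is located, so the full literal is passed to the token-eater; strings whose closing quote is not on the same line fall through to the `contstr` continuation path. The observable behavior, shown here with the modern stdlib `tokenize` module for the sake of a runnable sketch, is that a triple-quoted string spanning physical lines is reported as a single STRING token:

```python
import io
import tokenize

# A triple-quoted string spanning two physical lines: the tokenizer must
# accumulate continuation lines until the closing quotes are found.
source = '"""first\nsecond"""\n'
toks = list(tokenize.generate_tokens(io.StringIO(source).readline))

strings = [t for t in toks if t.type == tokenize.STRING]
# The whole literal, embedded newline included, arrives as one token.
print(strings[0].string)
```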