cpython · Commit b51eaa18
Authored Mar 07, 1997 by Guido van Rossum
Fixed doc string, added __version__, fixed 1 bug.
Parent: fc6f5339
Showing 1 changed file with 18 additions and 11 deletions.
Lib/tokenize.py (+18, -11)
"""tokenize.py (Ka-Ping Yee, 4 March 1997)
This module compiles a regular expression that recognizes Python tokens
in individual lines of text. The regular expression handles everything
except indentation, continuations, and triple-quoted strings. The function
'tokenize.tokenize()' takes care of these things for streams of text. It
accepts a file-like object and a function, uses the readline() method to
scan the file, and calls the function called once for each token found
passing its type, a string containing the token, the line number, the line,
and the starting and ending positions of the token within the line.
It is designed to match the working of the Python tokenizer exactly."""
"""Tokenization help for Python programs.
This module compiles a regular expression that recognizes Python
tokens in individual lines of text. The regular expression handles
everything except indentation, continuations, and triple-quoted
strings. The function 'tokenize.tokenize()' takes care of these
things for streams of text. It accepts a readline-like function which
is called repeatedly to come up with the next input line (or "" for
EOF), and a "token-eater" function which is called for each token
found, passing its type, a string containing the token, the line
number, the line, and the starting and ending positions of the token
within the line. It is designed to match the working of the Python
tokenizer exactly.
"""
__version__
=
"Ka-Ping Yee, 4 March 1997, updated by GvR, 6 March 1997"
import
string
,
regex
from
token
import
*
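The new docstring pins down the calling convention exactly, so a small driver is easy to sketch. The following is an illustration, not part of the commit: a readline-like function serving a one-line buffer (and "" forever after, to signal EOF) plus a token-eater that prints whatever it is handed; the module's own printtoken is the default eater. The buffer contents and names are made up for the example, and the code uses the Python 1.x idiom of the file itself (print statement, default-argument binding instead of closures).

import tokenize

lines = ['x = 1\n']

def readline(lines=lines):
    # Serve the buffer one line at a time; return "" at EOF,
    # as the docstring requires.
    if lines:
        line = lines[0]
        del lines[0]
        return line
    return ''

def eater(type, token, linenum, line, start, pos):
    # Called once per token: its type, its text, the line number, the
    # full source line, and the token's start/end positions in that line.
    print type, token, linenum, start, pos

tokenize.tokenize(readline, eater)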
...
@@ -117,6 +123,7 @@ def tokenize(readline, tokeneater = printtoken):
             endprog = endprogs[token]
             if endprog.search(line, pos) >= 0:            # all on one line
                 pos = endprog.regs[0][1]
+                token = line[start:pos]
                 tokeneater(STRING, token, linenum, line, start, pos)
             else:
                 contstr = line[start:]                    # multiple lines
...
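By the hunk counts (six lines become seven) and the commit message, the one added line above, token = line[start:pos], reads as the "1 bug": without the re-slice, token still held only the opening quotes that selected this branch, so the eater never saw the string's full text. The surrounding decision (finish the string on this line if the end pattern matches, otherwise stash the partial text in contstr and keep reading) can be sketched with the modern re module, since the regex module used in 1997 is long gone; the pattern, function name, and indices below are illustrative only.

import re

# Stand-in for endprogs[token]: consume everything up to and including
# the closing triple quote, allowing backslash escapes.
endprog = re.compile(r'[^"\\]*(?:\\.[^"\\]*)*"""')

def scan_triple(line, start, pos):
    # start: index of the opening quotes; pos: just past them.
    m = endprog.match(line, pos)
    if m:                                        # all on one line
        pos = m.end()
        return line[start:pos], pos, None        # complete string token
    return None, len(line), line[start:]         # multiple lines: contstr

line = 'x = """abc""" + y\n'
print(scan_triple(line, 4, 7))            # ('"""abc"""', 13, None)
print(scan_triple('x = """abc\n', 4, 7))  # (None, 11, '"""abc\n')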