Commit 29bef0bb authored Aug 23, 2006 by Jeremy Hylton
Baby steps towards better tests for tokenize
parent 2214507e
Showing 1 changed file with 46 additions and 3 deletions

Lib/test/test_tokenize.py  +46 -3
"""Tests for the tokenize module.
The tests were originally written in the old Python style, where the
test output was compared to a golden file. This docstring represents
the first steps towards rewriting the entire test as a doctest.
The tests can be really simple. Given a small fragment of source
code, print out a table with the tokens. The ENDMARK is omitted for
brevity.
>>> dump_tokens("1 + 1")
NUMBER '1' (1, 0) (1, 1)
OP '+' (1, 2) (1, 3)
NUMBER '1' (1, 4) (1, 5)
There will be a bunch more tests of specific source patterns.
The tokenize module also defines an untokenize function that should
regenerate the original program text from the tokens. (It doesn't
work very well at the moment.)
>>> roundtrip("if x == 1:
\\
n"
... " print x
\\
n")
if x ==1 :
print x
"""
import os, glob, random
from cStringIO import StringIO
from test.test_support import (verbose, findfile, is_resource_enabled,
                               TestFailed)
-from tokenize import (tokenize, generate_tokens, untokenize,
-                      NUMBER, NAME, OP, STRING)
+from tokenize import (tokenize, generate_tokens, untokenize, tok_name,
+                      ENDMARKER, NUMBER, NAME, OP, STRING)
# Test roundtrip for `untokenize`.  `f` is a file path.  The source code in f
# is tokenized, converted back to source code via tokenize.untokenize(),
...
@@ -24,6 +51,22 @@ def test_roundtrip(f):
    if t1 != t2:
        raise TestFailed("untokenize() roundtrip failed for %r" % f)
def dump_tokens(s):
    """Print out the tokens in s in a table format.

    The ENDMARKER is omitted.
    """
    f = StringIO(s)
    for type, token, start, end, line in generate_tokens(f.readline):
        if type == ENDMARKER:
            break
        type = tok_name[type]
        print "%(type)-10.10s %(token)-10.10r %(start)s %(end)s" % locals()
def roundtrip(s):
    f = StringIO(s)
    print untokenize(generate_tokens(f.readline)),
# This is an example from the docs, set up as a doctest.
def decistmt(s):
    """Substitute Decimals for floats in a string of statements.
...
@@ -105,7 +148,7 @@ def foo():
    # Run the doctests in this module.
    from test import test_tokenize  # i.e., this module
    from test.test_support import run_doctest
-    run_doctest(test_tokenize)
+    run_doctest(test_tokenize, verbose)

    if verbose:
        print 'finished'
...
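
The body of decistmt() is collapsed ("...") in the hunks above. For context only, and not as part of commit 29bef0bb, here is a rough sketch of the "example from the docs" that the comment refers to, as it appears in the Python 2.x tokenize documentation; the exact details are an assumption rather than the committed code.

    # Sketch only, not part of this commit: approximate decistmt() from the
    # Python 2.x tokenize docs, using the same names the file above imports.
    from cStringIO import StringIO
    from tokenize import generate_tokens, untokenize, NUMBER, NAME, OP, STRING

    def decistmt(s):
        """Substitute Decimals for floats in a string of statements."""
        result = []
        # generate_tokens() yields 5-tuples: (type, string, start, end, line).
        for toknum, tokval, _, _, _ in generate_tokens(StringIO(s).readline):
            if toknum == NUMBER and '.' in tokval:
                # Replace NUMBER tokens containing a '.' with Decimal('<literal>').
                result.extend([(NAME, 'Decimal'), (OP, '('),
                               (STRING, repr(tokval)), (OP, ')')])
            else:
                result.append((toknum, tokval))
        # untokenize() accepts (type, string) 2-tuples and rebuilds source text.
        return untokenize(result)

    print decistmt('print +21.3e-5*-.1234/81.7')

The regenerated source wraps each float literal in Decimal('...'), though with only approximate spacing, which is the untokenize() caveat the new module docstring alludes to.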