Batuhan Osman TASKAYA / cpython / Commits / 4b244ef2

Commit 4b244ef2, authored May 23, 2011 by Raymond Hettinger

    Clean-up example.

Parent: b43dd4b8

Showing 1 changed file with 41 additions and 31 deletions:
Doc/library/re.rst (+41, -31)
@@ -1298,24 +1298,27 @@ The text categories are specified with regular expressions. The technique is
 to combine those into a single master regular expression and to loop over
 successive matches::
 
-    Token = collections.namedtuple('Token', 'typ value line column')
+    import collections
+    import re
+
+    Token = collections.namedtuple('Token', ['typ', 'value', 'line', 'column'])
 
     def tokenize(s):
-        keywords = {'IF', 'THEN', 'FOR', 'NEXT', 'GOSUB', 'RETURN'}
-        tok_spec = [
-            ('NUMBER', r'\d+(\.\d*)?'),  # Integer or decimal number
-            ('ASSIGN', r':='),           # Assignment operator
-            ('END', ';'),                # Statement terminator
-            ('ID', r'[A-Za-z]+'),        # Identifiers
-            ('OP', r'[+*\/\-]'),         # Arithmetic operators
-            ('NEWLINE', r'\n'),          # Line endings
-            ('SKIP', r'[ \t]'),          # Skip over spaces and tabs
+        keywords = {'IF', 'THEN', 'ENDIF', 'FOR', 'NEXT', 'GOSUB', 'RETURN'}
+        token_specification = [
+            ('NUMBER',  r'\d+(\.\d*)?'), # Integer or decimal number
+            ('ASSIGN',  r':='),          # Assignment operator
+            ('END',     r';'),           # Statement terminator
+            ('ID',      r'[A-Za-z]+'),   # Identifiers
+            ('OP',      r'[+*\/\-]'),    # Arithmetic operators
+            ('NEWLINE', r'\n'),          # Line endings
+            ('SKIP',    r'[ \t]'),       # Skip over spaces and tabs
         ]
-        tok_re = '|'.join('(?P<%s>%s)' % pair for pair in tok_spec)
-        gettok = re.compile(tok_re).match
+        tok_regex = '|'.join('(?P<%s>%s)' % pair for pair in token_specification)
+        get_token = re.compile(tok_regex).match
         line = 1
         pos = line_start = 0
-        mo = gettok(s)
+        mo = get_token(s)
         while mo is not None:
             typ = mo.lastgroup
             if typ == 'NEWLINE':
@@ -1327,13 +1330,15 @@ successive matches::
                     typ = val
                 yield Token(typ, val, line, mo.start()-line_start)
             pos = mo.end()
-            mo = gettok(s, pos)
+            mo = get_token(s, pos)
         if pos != len(s):
             raise RuntimeError('Unexpected character %r on line %d' %(s[pos], line))
 
-    statements = '''\
-            total := total + price * quantity;
-            tax := price * 0.05;
+    statements = '''
+        IF quantity THEN
+            total := total + price * quantity;
+            tax := price * 0.05;
+        ENDIF;
     '''
 
     for token in tokenize(statements):
@@ -1341,17 +1346,22 @@ successive matches::
 
 The tokenizer produces the following output::
 
-    Token(typ='ID', value='total', line=1, column=8)
-    Token(typ='ASSIGN', value=':=', line=1, column=14)
-    Token(typ='ID', value='total', line=1, column=17)
-    Token(typ='OP', value='+', line=1, column=23)
-    Token(typ='ID', value='price', line=1, column=25)
-    Token(typ='OP', value='*', line=1, column=31)
-    Token(typ='ID', value='quantity', line=1, column=33)
-    Token(typ='END', value=';', line=1, column=41)
-    Token(typ='ID', value='tax', line=2, column=9)
-    Token(typ='ASSIGN', value=':=', line=2, column=13)
-    Token(typ='ID', value='price', line=2, column=16)
-    Token(typ='OP', value='*', line=2, column=22)
-    Token(typ='NUMBER', value='0.05', line=2, column=24)
-    Token(typ='END', value=';', line=2, column=28)
+    Token(typ='IF', value='IF', line=2, column=5)
+    Token(typ='ID', value='quantity', line=2, column=8)
+    Token(typ='THEN', value='THEN', line=2, column=17)
+    Token(typ='ID', value='total', line=3, column=9)
+    Token(typ='ASSIGN', value=':=', line=3, column=15)
+    Token(typ='ID', value='total', line=3, column=18)
+    Token(typ='OP', value='+', line=3, column=24)
+    Token(typ='ID', value='price', line=3, column=26)
+    Token(typ='OP', value='*', line=3, column=32)
+    Token(typ='ID', value='quantity', line=3, column=34)
+    Token(typ='END', value=';', line=3, column=42)
+    Token(typ='ID', value='tax', line=4, column=9)
+    Token(typ='ASSIGN', value=':=', line=4, column=13)
+    Token(typ='ID', value='price', line=4, column=16)
+    Token(typ='OP', value='*', line=4, column=22)
+    Token(typ='NUMBER', value='0.05', line=4, column=24)
+    Token(typ='END', value=';', line=4, column=28)
+    Token(typ='ENDIF', value='ENDIF', line=5, column=5)
+    Token(typ='END', value=';', line=5, column=10)
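For reference, the post-commit example can be assembled into a single runnable script. Note that the middle of the `tokenize` loop body is not visible in this diff (it falls between the two hunks), so that part is a reconstruction from the surrounding context and should be checked against Doc/library/re.rst itself:

```python
import collections
import re

Token = collections.namedtuple('Token', ['typ', 'value', 'line', 'column'])

def tokenize(s):
    keywords = {'IF', 'THEN', 'ENDIF', 'FOR', 'NEXT', 'GOSUB', 'RETURN'}
    token_specification = [
        ('NUMBER',  r'\d+(\.\d*)?'), # Integer or decimal number
        ('ASSIGN',  r':='),          # Assignment operator
        ('END',     r';'),           # Statement terminator
        ('ID',      r'[A-Za-z]+'),   # Identifiers
        ('OP',      r'[+*\/\-]'),    # Arithmetic operators
        ('NEWLINE', r'\n'),          # Line endings
        ('SKIP',    r'[ \t]'),       # Skip over spaces and tabs
    ]
    # One master regex with a named group per token type; lastgroup
    # tells us which alternative matched.
    tok_regex = '|'.join('(?P<%s>%s)' % pair for pair in token_specification)
    get_token = re.compile(tok_regex).match
    line = 1
    pos = line_start = 0
    mo = get_token(s)
    while mo is not None:
        typ = mo.lastgroup
        if typ == 'NEWLINE':
            # Reconstructed from context: track line numbers and the
            # offset where the current line starts, for column numbers.
            line_start = pos
            line += 1
        elif typ != 'SKIP':
            val = mo.group(typ)
            if typ == 'ID' and val in keywords:
                typ = val
            yield Token(typ, val, line, mo.start()-line_start)
        pos = mo.end()
        mo = get_token(s, pos)
    if pos != len(s):
        raise RuntimeError('Unexpected character %r on line %d' %(s[pos], line))

statements = '''
    IF quantity THEN
        total := total + price * quantity;
        tax := price * 0.05;
    ENDIF;
'''

for token in tokenize(statements):
    print(token)
```

Running it reproduces the token stream shown in the tokenizer output hunk, e.g. `Token(typ='IF', value='IF', line=2, column=5)` first and `Token(typ='END', value=';', line=5, column=10)` last.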