Lines Matching refs:tokenize
3 tokenize(readline) is a generator that breaks a stream of bytes into
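
A minimal sketch of driving that generator, assuming an in-memory UTF-8 source; io.BytesIO supplies the bytes-returning readline that tokenize() expects:

    import io
    import tokenize

    readline = io.BytesIO(b"x = 1 + 2\n").readline   # readline must return bytes
    for tok in tokenize.tokenize(readline):
        print(tok)   # TokenInfo tuples, starting with an ENCODING token
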
42 __all__ = token.__all__ + ["tokenize", "generate_tokens", "detect_encoding",
264 token, which is the first token sequence output by tokenize.
274 # Output bytes will tokenize back to the input
275 t1 = [tok[:2] for tok in tokenize(f.readline)]
278 t2 = [tok[:2] for tok in tokenize(readline)]
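
Lines 274-278 state the round-trip guarantee for 2-tuples: untokenize() output re-tokenizes to the same (type, string) pairs, even though spacing may change. A self-contained sketch of that check:

    import io
    import tokenize

    source = b"def f(a, b):\n    return a + b\n"
    t1 = [tok[:2] for tok in tokenize.tokenize(io.BytesIO(source).readline)]
    out = tokenize.untokenize(t1)   # bytes; layout may differ from the input
    t2 = [tok[:2] for tok in tokenize.tokenize(io.BytesIO(out).readline)]
    assert t1 == t2
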
303 in the same way as the tokenize() generator.
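
detect_encoding(), exported in __all__ at line 42, takes its readline argument in the same way. A small sketch; note that cookie names such as "latin-1" come back normalized:

    import io
    import tokenize

    cookie = b"# -*- coding: latin-1 -*-\nx = 1\n"
    encoding, lines_read = tokenize.detect_encoding(io.BytesIO(cookie).readline)
    print(encoding)   # 'iso-8859-1'
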
408 def tokenize(readline):
410 The tokenize() generator requires one argument, readline, which
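
In practice readline usually comes from a file opened in binary mode; "example.py" below is a placeholder path, not a file shipped with the module:

    import tokenize

    with open("example.py", "rb") as f:   # binary mode: tokenize() wants bytes
        for tok in tokenize.tokenize(f.readline):
            print(tok.start, tokenize.tok_name[tok.exact_type], tok.string)
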
518 ("<tokenize>", lnum, pos, line))
619 This has the same API as tokenize(), except that it expects the *readline*
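
generate_tokens(), the function line 619 documents, is the str-based variant: its readline callable returns strings, and no ENCODING token is emitted. A sketch using io.StringIO:

    import io
    import tokenize

    readline = io.StringIO("y = 'text'\n").readline
    for tok in tokenize.generate_tokens(readline):
        print(tokenize.tok_name[tok.type], repr(tok.string))
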
643 parser = argparse.ArgumentParser(prog='python -m tokenize')
646 help='the file to tokenize; defaults to stdin')
656 tokens = list(tokenize(f.readline))
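
Lines 643-656 implement the command-line entry point; per the help string above, the file argument defaults to stdin. Hypothetical invocations (the -e/--exact flag exists in current CPython, but treat it as an assumption on older versions):

    $ python -m tokenize example.py            # one token per line
    $ echo "x = 1" | python -m tokenize        # reads stdin when no file is given
    $ python -m tokenize -e example.py         # exact operator names, e.g. EQUAL
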