My understanding is that the Python interpreter already has enough information 
when bytecode-compiling a .py file to determine which names correspond to local 
variables in functions. That suggests it has enough information to identify all 
valid names in a .py file and, in particular, to identify which names are not 
valid.
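
Indeed, the compiler's symbol-table pass is exposed in the standard library. A 
small sketch using the stdlib symtable module (the source string and names here 
are just illustrative) shows it already classifies each name as local or not:

```python
import symtable

SRC = """
def f(x):
    y = x + 1
    return y + z
"""

top = symtable.symtable(SRC, "<example>", "exec")
func = top.get_children()[0]  # the symbol table for f
scopes = {s.get_name(): s.is_local() for s in func.get_symbols()}
print(scopes)  # x and y are local; z is not (it is looked up as a global)
```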

If broken name references were detected at compile time, it would eliminate a 
huge class of errors before the program ever runs: missing imports, calls to 
misspelled top-level functions, references to misspelled local variables.
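
Today, none of these are flagged until the offending code actually executes. A 
minimal sketch (the typo is deliberate): a function body referencing an 
undefined name compiles, and even defines, without complaint, and only fails 
with NameError when called:

```python
src = (
    "def greet():\n"
    "    return mesage  # typo: 'message' was intended\n"
)

code = compile(src, "<example>", "exec")  # no error at compile time
ns = {}
exec(code, ns)                            # defining greet is also fine
try:
    ns["greet"]()                         # the error surfaces only here
except NameError as exc:
    print("raised only when called:", exc)
```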

Of course, running a full type checker like mypy would catch still more errors 
(misspelled method calls, type mismatches, and so on). But if a wide variety of 
name errors are cheap to detect at compile time, is there any particular reason 
it is not done?

- David

P.S. Here are some uncommon language features that interfere with identifying 
all valid names. In their absence, one might expect an invalid name to be a 
syntax error:

* import *
* manipulating locals() or globals()
* manipulating a frame object
* eval
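
To illustrate the second item: a name that is assigned nowhere in the source 
text can still resolve at run time, so a compile-time check would reject valid 
programs. A minimal sketch (the names are made up for the example):

```python
def use_injected():
    # 'injected_at_runtime' is assigned nowhere in this source text,
    # so a static name check would have to flag this line...
    return injected_at_runtime

# ...yet the program is perfectly valid: the binding is created
# dynamically before the function is ever called.
globals()["injected_at_runtime"] = 42
print(use_injected())  # prints 42
```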
-- 
https://mail.python.org/mailman/listinfo/python-list
