On 03/07/2015 04:58 AM, Steven D'Aprano wrote:
> On Fri, Mar 06, 2015 at 08:00:20PM -0500, Ron Adam wrote:

>> Have you considered doing this by having different magic numbers in the
>> .pyc file for standard, -O, and -OO compiled bytecode files?  Python
>> already checks that number and recompiles the files if it isn't what it
>> expects.  And it wouldn't require any naming conventions or new cache
>> directories.  It seems to me it would be much easier to do as well.
> And it would fail to solve the problem. The problem isn't just that the
> .pyo file can contain the wrong byte-code for the optimization level,
> that's only part of the problem. Another issue is that you cannot have
> pre-compiled byte-code for multiple different optimization levels. You
> can have a "no optimization" byte-code file, the .pyc file, but only one
> "optimized" byte-code file at the same time.
>
> Brett's proposal will allow -O optimized and -OO optimized byte-code
> files to co-exist, as well as setting up a clear naming convention for
> future optimizers in either the Python compiler or third-party
> optimizers.
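
(For reference, the existing check I had in mind is roughly the following; "spam.py" is just a placeholder module and this assumes a cached .pyc already exists on disk.)

    # Rough sketch of the current mechanism: the first four bytes of a cached
    # .pyc hold the magic number, which the import system compares against the
    # running interpreter's importlib.util.MAGIC_NUMBER before trusting it.
    import importlib.util

    pyc = importlib.util.cache_from_source("spam.py")  # "spam.py" is a placeholder
    with open(pyc, "rb") as f:
        magic = f.read(4)

    print("bytecode matches this interpreter:", magic == importlib.util.MAGIC_NUMBER)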

So all the different versions can be generated ahead of time. I think that is the main difference.
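
To make that concrete, here is roughly how I understand the proposed naming would look, assuming the optimization parameter to importlib.util.cache_from_source() from the PEP, and again using "spam.py" as a placeholder:

    # Sketch (assuming the PEP's proposed optimization parameter): the
    # optimization level becomes part of the cache file name, so bytecode for
    # every level can be written ahead of time and co-exist in __pycache__.
    import importlib.util

    for tag in ("", "1", "2"):
        print(importlib.util.cache_from_source("spam.py", optimization=tag))

    # Expected names (cpython-35 tag shown only as an example):
    #   __pycache__/spam.cpython-35.pyc
    #   __pycache__/spam.cpython-35.opt-1.pyc
    #   __pycache__/spam.cpython-35.opt-2.pyc

Something like py_compile.compile(path, optimize=level), or running compileall under each -O flag, could then populate all of them up front.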

My suggestion would cause a recompile of all dependent Python files when different optimisation levels are used in different projects, which may be worse than not generating bytecode files at all. OK.


A few questions...

Can a submodule use an optimisation level that is different from the file that imports it? (Other than the case this is trying to solve.)

Is there a way to specify that an imported module not use any optimisation level, or to always use a specific optimisation level?

Is there a way to run tests with all the different optimisation levels?
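
For that last one, the only approach I can think of right now is to re-run the suite once per level, something like this rough sketch (it assumes a unittest-discoverable test layout):

    # Rough sketch: re-invoke the interpreter with each optimization flag and
    # run the same test suite under it.
    import subprocess
    import sys

    for flags in ([], ["-O"], ["-OO"]):
        print("Running tests with flags:", flags)
        subprocess.check_call([sys.executable] + flags + ["-m", "unittest", "discover"])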


Cheers,
   Ron
