New submission from Tom Goddard <godd...@cgl.ucsf.edu>:

Bytecode compiling large Python files uses an unexpectedly large amount of memory. For example, compiling a file containing a list of 5 million integers uses about 2 Gbytes of memory, while the Python file itself is only about 40 Mbytes: the memory used is 50 times the file size. The resulting list consumes about 400 Mbytes of memory, so compiling the byte code uses about 5 times the memory of the final list object. Can the byte-code compilation be made more memory efficient?
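For reference, here is one way to observe the peak memory in-process instead of watching top (a minimal Python 3 sketch, not from the original report; it assumes a Unix platform where the resource module is available, and note that ru_maxrss is reported in bytes on Mac OS X but in kilobytes on Linux):

    import py_compile
    import resource

    # Write the 5-million-integer list literal incrementally, so that
    # generating test.py stays cheap and the peak-memory reading below
    # mostly reflects the byte-compilation step.
    with open('test.py', 'w') as f:
        f.write('x = [')
        for i in range(5000000):
            f.write('%d,' % i)
        f.write(']\n')

    # Byte-compile in-process, then report the peak resident set size.
    py_compile.compile('test.py')
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print('peak RSS:', peak)  # bytes on Mac OS X, kilobytes on Linux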
The application that creates similarly large Python files is a molecular graphics program called UCSF Chimera that my lab develops. It writes session files which are Python code. Sessions of a size reasonable for a machine's physical memory cannot be byte-compiled without thrashing, crippling the interactivity of all software running on the machine.

Here is Python 2 code to produce the test file test.py containing a list of 5 million integers:

    print >>open('test.py','w'), 'x = ', repr(range(5000000))

I tried importing the test.py file with Python 2.5, 2.6.1 and 3.0.1 on Mac OS 10.5.6. In each case, when the test.pyc file was not present, the python process as monitored by the unix "top" command took about 1.7 Gb RSS and 2.2 Gb VSZ on a MacBook Pro which has 2 Gb of memory.

----------
components: Interpreter Core
messages: 84108
nosy: goddard
severity: normal
status: open
title: Byte-code compilation uses excessive memory
type: performance
versions: Python 2.6

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue5557>
_______________________________________