Konstantin Svist <fry....@gmail.com> added the comment:

This issue sounds very interesting to me for a somewhat different reason.
My problem is that I'm trying to run multiple worker processes on separate CPUs/cores 
using os.fork(). In short, the data set (~2GB) is the same for all of them, and each 
forked child treats it as strictly read-only.
Right after the fork the data is shared and fits in RAM nicely, but within a few 
minutes each child process walks over a large part of the data set, which updates the 
objects' reference counts. Those writes trigger copy-on-write, so the touched pages 
get duplicated for each process: RAM usage jumps from 15GB to 30GB and the advantage 
of forking is gone.
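A minimal sketch of the pattern I mean is below; load_dataset() and process_chunk() 
are hypothetical stand-ins for my real code, but the fork/refcount behavior is the 
same:

    import os

    def load_dataset():
        # Hypothetical: build the large, effectively read-only structure
        # in the parent so the children can share its pages after fork().
        return [tuple(range(10)) for _ in range(1000)]

    def process_chunk(data, worker_id, num_workers):
        # Hypothetical read-only pass over this worker's share of the data.
        total = 0
        for i, item in enumerate(data):
            if i % num_workers == worker_id:
                total += len(item)
        return total

    def main(num_workers=4):
        data = load_dataset()        # shared copy-on-write after fork()
        pids = []
        for worker_id in range(num_workers):
            pid = os.fork()
            if pid == 0:
                # Child: even this read-only traversal writes to each
                # object's ob_refcnt, so the kernel copies the touched
                # pages and per-child memory grows.
                process_chunk(data, worker_id, num_workers)
                os._exit(0)
            pids.append(pid)
        for pid in pids:
            os.waitpid(pid, 0)

    if __name__ == "__main__":
        main()

Nothing in the children mutates the data set itself; the only writes are the 
reference-count updates done by the interpreter.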

It would be great if there were an option to store the reference counts separately 
from specific data structures, since it's obviously a bad idea to turn such a thing 
on by default for everything and everyone.

----------
nosy: +Fry-kun

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue9942>
_______________________________________