Luis added the comment:

Thanks for the answer, although I still think I haven't made myself fully 
understood here; allow me to paraphrase:
"...You need some means of transferring objects between processes, and pickling 
is the Python standard serialization method" 

Yes, but the question that stands is why Pool has to push the workers' 
arguments through a multiprocessing.Queue to load and spin up the workers 
(therefore pickling and unpickling those arguments), when it could rely on 
inheritance at that point and only create a Queue for the workers' return 
values.

This applies to the "fork" start method, not to "spawn"; I'm not sure about 
"forkserver". A sketch of the difference I mean is below.
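For illustration only, here is a minimal sketch, assuming Linux and the "fork" 
start method (the names BIG, use_inherited and use_argument are just 
placeholders, and the payload is scaled down so it actually runs):

    import multiprocessing as mp

    # A large object created before the pool starts. Under the "fork" start
    # method every worker inherits it via copy-on-write, so in principle no
    # pickling is needed to make it visible to the workers.
    BIG = bytes(100 * 1024 * 1024)  # 100 MiB stand-in; imagine > 4 GiB

    def use_inherited(i):
        # Reads the inherited global -- nothing large crosses the task queue.
        return i + len(BIG)

    def use_argument(chunk):
        # Receives the data as an argument -- Pool pickles `chunk` into its
        # internal queue before the worker ever sees it.
        return len(chunk)

    if __name__ == "__main__":
        mp.set_start_method("fork")  # POSIX only
        with mp.Pool(2) as pool:
            print(pool.map(use_inherited, range(4)))  # no pickling of BIG
            print(pool.map(use_argument, [BIG] * 4))  # BIG pickled per task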

Plus, I'm not trying to avoid inheritance; I'm trying to use it with Pools and 
large arguments, as forking theoretically allows. Instead, at the moment I'm 
forced to use Processes with a Queue for the results, as shown in the code 
above.
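This is not the exact code from earlier in the thread, just a minimal sketch 
of that kind of workaround, assuming the "fork" start method on POSIX (again 
with a scaled-down payload):

    import multiprocessing as mp

    BIG = bytes(100 * 1024 * 1024)  # stand-in for the > 4 GiB payload

    def worker(idx, results):
        # BIG is inherited through fork; only the small result goes
        # through the queue, so nothing large is ever pickled.
        results.put((idx, len(BIG) + idx))

    if __name__ == "__main__":
        mp.set_start_method("fork")
        results = mp.Queue()
        procs = [mp.Process(target=worker, args=(i, results)) for i in range(4)]
        for p in procs:
            p.start()
        out = [results.get() for _ in procs]  # drain before joining
        for p in procs:
            p.join()
        print(sorted(out))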

"OverflowError: cannot serialize a bytes object larger than 4 GiB" is just what 
allows us to expose this behavior, cause the Pool pickles the arguments 
without, in my opinion, having to do so.
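For reference, that limit comes from pickle protocols below 4 (protocol 4, 
PEP 3154, lifted it). A minimal way to reproduce the error itself, assuming a 
machine with enough RAM for the ~5 GiB object:

    import pickle

    big = bytes(5 * 1024**3)  # just over 4 GiB of zeros

    try:
        # Protocols < 4 store the length of a bytes object in 32 bits,
        # which is what surfaces as the error when Pool pickles the argument.
        pickle.dumps(big, protocol=3)
    except OverflowError as exc:
        print(exc)  # cannot serialize a bytes object larger than 4 GiB

    # pickle.dumps(big, protocol=4) would succeed, but with fork the point
    # is that the data should not need to be pickled at all.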

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue23979>
_______________________________________