Jesse Noller <jnol...@gmail.com> added the comment:

My results don't match yours (8 cores, Mac OS X):

-------- testing multiprocessing on  8 cores ----------

100000 elements map() time 0.0444118976593 s
100000 elements pool.map() time 0.0366489887238 s
100000 elements pool.apply_async() time 24.3125801086 s

Now, this could be for a variety of reasons: more cores, a different OS 
(which means a different speed at which processes can be forked), and so 
on. As Antoine/Amaury point out, you really need a use case that is large 
enough to offset the cost of forking the processes in the first place.
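For reference, here is a minimal modern sketch of the kind of benchmark being compared in this thread. This is a reconstruction, not the script attached to the issue; the trivial f() is an assumption (the real test function isn't quoted in this message):

```python
import time
from multiprocessing import get_context

def f(x):
    # Trivial per-item work (an assumption -- the actual function isn't
    # shown here). With work this cheap, process startup and IPC
    # dominate the measurements.
    return x * x

def bench(n):
    data = list(range(n))
    ctx = get_context("fork")  # POSIX-only; fork is what OS X and Linux used here

    t0 = time.time()
    serial = list(map(f, data))
    t_map = time.time() - t0

    with ctx.Pool() as pool:
        t0 = time.time()
        parallel = pool.map(f, data)  # work is shipped to workers in chunks
        t_pool_map = time.time() - t0

        t0 = time.time()
        # One task (and one result round-trip) per element -- this is
        # why apply_async() is orders of magnitude slower in the
        # numbers above.
        handles = [pool.apply_async(f, (x,)) for x in data]
        asynced = [h.get() for h in handles]
        t_async = time.time() - t0

    assert serial == parallel == asynced
    return t_map, t_pool_map, t_async
```

The three timings correspond to the map(), pool.map(), and pool.apply_async() rows in the output above.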

I also ran this on an 8-core Ubuntu box with kernel 2.6.22.19, py2.6.1, 
and 16 GB of RAM:

-------- testing multiprocessing on  8 cores ----------

100000 elements map() time 0.0258889198303 s
100000 elements pool.map() time 0.0339770317078 s
100000 elements pool.apply_async() time 11.0373139381 s

OS X is pretty snappy when it comes to forking. 

Now, if you switch from the example you provided to Amaury's example, you 
see a significant difference:

OS X, 8 cores:

-------- testing multiprocessing on  8 cores ----------

100000 elements map() time 30.704061985 s
100000 elements pool.map() time 4.95880293846 s
100000 elements pool.apply_async() time 23.6090102196 s

Ubuntu, kernel 2.6.22.19 and py2.6.1:

-------- testing multiprocessing on  8 cores ----------

100000 elements map() time 38.3818569183 s
100000 elements pool.map() time 5.65878105164 s
100000 elements pool.apply_async() time 14.1757941246 s
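The remaining gap between pool.map() and pool.apply_async(), even with the heavier workload, is mostly down to chunking: pool.map() ships the iterable to the workers in batches, while apply_async() pays one IPC round-trip per call. You can see the same effect with pool.map() alone by forcing chunksize=1. A sketch, with an assumed stand-in work function rather than Amaury's actual code:

```python
import time
from multiprocessing import get_context

def work(x):
    # Stand-in for a moderately expensive per-item function; the
    # actual function from Amaury's example isn't quoted here.
    return sum(i * i for i in range(500))

def compare_chunking(n=20000):
    ctx = get_context("fork")  # POSIX-only start method
    with ctx.Pool() as pool:
        t0 = time.time()
        pool.map(work, range(n))               # default: large chunks
        chunked = time.time() - t0

        t0 = time.time()
        pool.map(work, range(n), chunksize=1)  # one IPC round-trip per item
        unchunked = time.time() - t0
    return chunked, unchunked
```

With a work function this cheap, the chunksize=1 run is typically several times slower than the default, mirroring the pool.map() vs pool.apply_async() gap in the numbers above.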

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue5000>
_______________________________________