I don't think it's due to the warmup of the JIT. Here's a simpler example.
import time
import multiprocessing

def do_nothing(): pass

if __name__ == '__main__':
    time1 = time.time()
    do_nothing()                              # direct call, no pool
    time2 = time.time()
    pool = multiprocessing.Pool(processes=1)  # pool creation
    time3 = time.time()
    res = pool.apply_async(do_nothing)        # first call through the pool
    res.get()
    time4 = time.time()
    print('direct call:    %f' % (time2 - time1))
    print('pool creation:  %f' % (time3 - time2))
    print('first pool use: %f' % (time4 - time3))
Nearly all of the time is spent creating the pool, before the multiprocessing.Pool
object is used.
See the attachment or this link for the code: http://pastie.org/2614925
On Thu, Sep 29, 2011 at 8:24 PM, Josh Ayers wrote:
> I think the slowdown you're seeing is due to the time it takes to create
> new processes. This seems to be quite a bit slower in PyPy than in CPython.
I think the slowdown you're seeing is due to the time it takes to create new
processes. This seems to be quite a bit slower in PyPy than in CPython.
However, once the process pool is created and has been used once, the
execution time vs. process count behaves as expected.
I attached a modified version of the script that demonstrates this.
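Since the attachment isn't reproduced here, here's a minimal sketch of that idea
(the worker function and sizes are illustrative, not taken from the attachment):
create the pool, use it once to pay the one-time startup cost, then time only the
calls that matter.

import time
import multiprocessing

def square(n):
    return n * n

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)
    pool.map(square, range(4))        # warm-up: absorbs pool startup cost
    start = time.time()
    pool.map(square, range(100000))   # timed against an already-warm pool
    print('warm pool map: %f seconds' % (time.time() - start))
    pool.close()
    pool.join()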
Is there anything egregiously stupid in my code? Any ideas on better methods for
efficiently storing and retrieving binary data from disk under PyPy?
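(For what it's worth, one generic approach, sketched here on the assumption that
the data is numeric; the helper names are hypothetical and this is not the method
from the attached code: keep the values in an array.array and move them with
tofile/fromfile, so the disk transfer happens in one bulk call instead of element
by element in interpreted code.)

from array import array

def save_doubles(path, values):
    buf = array('d', values)      # pack floats into a contiguous C buffer
    with open(path, 'wb') as f:
        buf.tofile(f)             # one bulk write of the raw bytes

def load_doubles(path, count):
    buf = array('d')
    with open(path, 'rb') as f:
        buf.fromfile(f, count)    # one bulk read back into the buffer
    return buf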
Thanks in advance for your help.
Sincerely,
Josh Ayers