I switched to using numpy for the matrix multiply, and while the multiply itself is now much faster overall, there is still no speed-up from using more than one Python thread. If I look at top while running two or more threads, both cores are at 100% and there is no idle time on the system.
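
Roughly, the test looks like this (a simplified sketch, not my actual benchmark; the matrix size, repeat count, and use of np.dot are just illustrative):

import time
import threading
import numpy as np

N = 2000
A = np.random.rand(N, N)
B = np.random.rand(N, N)

def work(repeats):
    # Each iteration spends its time inside numpy's C code (np.dot).
    for _ in range(repeats):
        np.dot(A, B)

def timed(num_threads, total_repeats=4):
    # Keep the total amount of work constant and split it across threads.
    per_thread = total_repeats // num_threads
    threads = [threading.Thread(target=work, args=(per_thread,))
               for _ in range(num_threads)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

for n in (1, 2):
    print("%d thread(s): %.2f seconds" % (n, timed(n)))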

I did a quick Google search and didn't find anything conclusive about numpy releasing the GIL. The most conclusive and recent reference I found was:

http://mail.python.org/pipermail/python-list/2007-October/463148.html

I found some other references where people expressed concern about numpy releasing the GIL, because other C extensions could call numpy and unexpectedly have the GIL released out from under them (or something like that).
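
One way to probe this empirically, rather than relying on the mailing-list archives, is to run a pure-Python counter in a background thread while np.dot runs in the main thread: if the counter keeps advancing during the multiply, the GIL must have been released for the duration of the call. A rough sketch (sizes here are just illustrative):

import threading
import numpy as np

N = 3000
A = np.random.rand(N, N)
B = np.random.rand(N, N)

count = 0
stop = False

def spin():
    # A pure-Python loop can only make progress while it holds the GIL.
    global count
    while not stop:
        count += 1

t = threading.Thread(target=spin)
t.start()
np.dot(A, B)   # if this releases the GIL, the spinner keeps counting
stop = True
t.join()
print("spinner iterations during dot(): %d" % count)

Comparing that count against a run where the spinner runs alone for the same wall-clock time makes the result easier to interpret.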

On May 15, 2008, at 6:43 PM, Nick Coghlan wrote:

Tom Pinckney wrote:
All the discussion recently about pyprocessing got me interested in actually benchmarking Python's multiprocessing performance, to see if reality matched my expectations about what would scale up and what would not. I knew Python threads wouldn't be good for compute-bound problems, but I was curious to see how well they worked for I/O-bound problems. The short answer is that for I/O-bound problems, Python threads worked just as well as using multiple operating system processes.
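
(The kind of comparison I ran looks roughly like the following; this is a simplified sketch rather than the actual benchmark, and the URL list, worker count, and use of urllib.request/multiprocessing are placeholder assumptions:

import time
import threading
import multiprocessing
import urllib.request

URLS = ["http://www.python.org/"] * 8   # placeholder workload

def fetch(url):
    # Almost all of the time here is spent blocked on the network,
    # during which CPython releases the GIL.
    with urllib.request.urlopen(url) as resp:
        resp.read()

def run_threads():
    workers = [threading.Thread(target=fetch, args=(u,)) for u in URLS]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

def run_processes():
    workers = [multiprocessing.Process(target=fetch, args=(u,)) for u in URLS]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

if __name__ == "__main__":
    for name, fn in (("threads", run_threads), ("processes", run_processes)):
        start = time.time()
        fn()
        print("%s: %.2f seconds" % (name, time.time() - start))

Because each worker is blocked on the network most of the time, the thread and process versions finish in roughly the same wall-clock time.)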

Interesting - given that your example compute-bound problem happened to be a matrix multiply, I'd be curious what the results are when using Python threads with numpy to do the same thing (my understanding is that numpy will usually release the GIL while doing serious number-crunching).

Cheers,
Nick.

--
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---------------------------------------------------------------
           http://www.boredomandlaziness.org
