Threads in Python are slower than a single thread on multicore machines (any 
modern computer). Because of the GIL, two threads on two cores can end up 
almost twice as slow instead of twice as fast. Yet there are advantages: if a 
thread blocks (because it is streaming data or doing a computation), a 
multithreaded server is still responsive, while a non-threaded server can 
only do one thing at a time.
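
A minimal sketch of the effect described above (names and the iteration count are illustrative; actual timings vary by machine). On CPython, two threads doing pure-Python CPU-bound work typically take about as long as, or longer than, running the same work sequentially:

```python
import threading
import time

def count_down(n):
    # Pure-Python CPU-bound loop; it holds the GIL while running,
    # so two such threads cannot actually run in parallel on CPython.
    while n > 0:
        n -= 1

N = 2_000_000

# Sequential: run the work twice in one thread.
start = time.perf_counter()
count_down(N)
count_down(N)
sequential = time.perf_counter() - start

# Threaded: run the same two units of work in two threads.
start = time.perf_counter()
t1 = threading.Thread(target=count_down, args=(N,))
t2 = threading.Thread(target=count_down, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.3f}s, threaded: {threaded:.3f}s")
```

On CPython the threaded run gives no speedup for this workload; on an interpreter without a GIL (e.g. Jython) or for I/O-bound work, the picture is different.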

In some languages, concurrency on multicore machines means speed. In Python 
this is not true for threads.

One can tweak things to get faster benchmarks for simple apps by serializing 
all requests, but this is not good in a real-life scenario where you need 
concurrency.

On Wednesday, 3 October 2012 08:09:12 UTC-5, Niphlod wrote:
>
> Just for the sake of discussion: are you all saying that in threaded 
> servers on Cpython the best thing is to have a single thread ? Why on hell 
> should anyone support a thread-pool in the beginning if that's the case? 
> Cherrypy existed long before pypy, so it's not a matter of "we predisposed 
> a Thread-pool just for pypy and jython". Also, everywhere on the net is 
> recommended to have a rather "high" number of threads when cherrypy/rocket 
> is used in production (as in "use 10 to 64 min threads on cPython").
> Is it only for concurrent long-running requests (i.e. 500-700ms)?
>
> PS: ab -c 1000 -n 10000 on tornado-motor fails after ~7000 requests, 
> cherrypy finishes without issues (both with 1-1 (89 rps) and 10-20 
> threads(18 rps)).
>
