Re: [python-tulip] Re: FrameworkBenchmarks Round 11 results are available

2016-02-14 Thread Ludovic Gasc
Hi Yury,

Thanks for the remark; I'll test with two different machines next week.

Have a nice day.
--
Ludovic Gasc (GMLudo)
http://www.gmludo.eu/

2016-02-14 20:42 GMT+01:00 Yury Selivanov :

> Hi Ludovic,
>
> I’m usually highly sceptical about any network benchmark run over localhost.
> I’d suggest rerunning your benchmarks on two different machines
> connected over a gigabit network.  Please make sure that your server
> process loads the CPU to 100%, and ideally use several client processes.
>
> Yury
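Yury's point about loading the server CPU to 100% can be checked during a run by sampling the server process's CPU time from /proc. A minimal, Linux-only, stdlib-only sketch (`cpu_share` is a hypothetical helper for illustration, not part of wrk or the benchmark suite):

```python
import os
import time

def cpu_share(pid, interval=1.0):
    """Approximate CPU utilization of process `pid` over `interval` seconds.

    Samples utime+stime from /proc/<pid>/stat (Linux only); a return value
    of 1.0 means one core kept fully busy.
    """
    ticks = os.sysconf("SC_CLK_TCK")  # clock ticks per second

    def cpu_seconds():
        with open("/proc/%d/stat" % pid) as f:
            # Split after the ")" closing the comm field, so spaces in the
            # process name cannot shift the field positions.
            fields = f.read().rsplit(") ", 1)[1].split()
        # fields[11] and fields[12] are utime and stime (fields 14 and 15
        # in proc(5), counted from 1 including pid and comm).
        return (int(fields[11]) + int(fields[12])) / ticks

    before = cpu_seconds()
    time.sleep(interval)
    return (cpu_seconds() - before) / interval
```

Run it against the server PID while wrk is hammering the server; if the value stays well below 1.0 per worker process, the benchmark is measuring something other than the server's CPU.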
>
> > On Feb 14, 2016, at 1:31 PM, Ludovic Gasc  wrote:
> >
> > Hi,
> >
> > Here are some values I found while attempting to update the AsyncIO
> test suite for FrameworkBenchmarks:
> https://www.techempower.com/benchmarks/#section=intro
> >
> > Long story short: after updating to Python 3.5, updating the dependencies
> and switching to async/await syntax, on my test setup I lose around 15% of
> throughput.
> > You can see the difference between old test setup and now:
> https://github.com/Eyepea/FrameworkBenchmarks/commit/a40c82ed720a53e04ebdafd7db90302ca67b5226
> >
> > If somebody has a suggestion for spotting my mistake, or for finding the new
> bottleneck, be my guest.
> >
> > However, even if the absolute values aren't exact, the relative values
> between the tests should give us an idea of the trend.
> >
> > I launched each test 5 times and kept the best run.
> >
> > The baseline values on this setup, before changing anything (Python 3.4.2,
> aiohttp 0.16.3, aiopg 0.7.0):
> >
> > $ wrk -t8 -c256 -d1m http://127.0.0.1:8080/queries?queries=20
> > Running 1m test @ http://127.0.0.1:8080/queries?queries=20
> >   8 threads and 256 connections
> >   Thread Stats   Avg      Stdev     Max   +/- Stdev
> >     Latency   216.97ms  105.14ms   1.14s    75.31%
> >     Req/Sec   148.88     17.97   206.00     69.38%
> >   71647 requests in 1.00m, 54.98MB read
> > Requests/sec:   1194.19
> > Transfer/sec:  0.92MB
> >
> > Now I upgrade only Python, from 3.4 to 3.5 (Python 3.5.1, aiohttp
> 0.16.3, aiopg 0.7.0):
> >
> > $ wrk -t8 -c256 -d1m http://127.0.0.1:8080/queries?queries=20
> > Running 1m test @ http://127.0.0.1:8080/queries?queries=20
> >   8 threads and 256 connections
> >   Thread Stats   Avg      Stdev     Max   +/- Stdev
> >     Latency   237.25ms  118.33ms   1.17s    74.74%
> >     Req/Sec   134.77     13.24   171.00     66.98%
> >   65051 requests in 1.00m, 49.92MB read
> > Requests/sec:   1084.09
> > Transfer/sec:    851.93KB
> >
> > And now I update aiohttp and aiopg (Python 3.5.1, aiohttp 0.21.1,
> aiopg 0.9.2):
> >
> > $ wrk -t8 -c256 -d1m http://127.0.0.1:8080/queries?queries=20
> > Running 1m test @ http://127.0.0.1:8080/queries?queries=20
> >   8 threads and 256 connections
> >   Thread Stats   Avg      Stdev     Max   +/- Stdev
> >     Latency   254.25ms  181.80ms   1.56s    75.51%
> >     Req/Sec   129.43     24.09   204.00     68.37%
> >   62122 requests in 1.00m, 46.25MB read
> > Requests/sec:   1035.44
> > Transfer/sec:    789.32KB
> >
> > And now, I use async/await syntax:
> >
> > $ wrk -t8 -c256 -d1m http://127.0.0.1:8080/queries?queries=20
> > Running 1m test @ http://127.0.0.1:8080/queries?queries=20
> >   8 threads and 256 connections
> >   Thread Stats   Avg      Stdev     Max   +/- Stdev
> >     Latency   259.27ms  121.86ms 842.74ms   70.02%
> >     Req/Sec   126.29     17.70   207.00     75.82%
> >   60740 requests in 1.00m, 45.22MB read
> > Requests/sec:   1014.01
> > Transfer/sec:    772.99KB
> >
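To separate interpreter and event-loop overhead from aiohttp and aiopg, a stdlib-only micro-benchmark of coroutine scheduling can be run under each interpreter version. A sketch (this is not the FrameworkBenchmarks code, and the absolute numbers will vary by machine; only the relative change between interpreters is interesting):

```python
import asyncio
import time

async def noop():
    # Trivial coroutine: the cost of awaiting it is mostly
    # interpreter + coroutine machinery overhead.
    return 1

async def bench(n):
    t0 = time.perf_counter()
    total = 0
    for _ in range(n):
        total += await noop()
    elapsed = time.perf_counter() - t0
    return total, elapsed

# new_event_loop/run_until_complete works on Python 3.4 and 3.5 alike.
loop = asyncio.new_event_loop()
try:
    total, elapsed = loop.run_until_complete(bench(100000))
finally:
    loop.close()
print("%.0f awaits/sec" % (total / elapsed))
```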
> > Finally, to be sure it isn't a problem with my hardware (CPU too hot after a
> while...), I relaunched the first test without any of the updates:
> >
> > $ wrk -t8 -c256 -d1m http://127.0.0.1:8080/queries?queries=20
> > Running 1m test @ http://127.0.0.1:8080/queries?queries=20
> >   8 threads and 256 connections
> >   Thread Stats   Avg      Stdev     Max   +/- Stdev
> >     Latency   220.24ms   98.73ms 798.70ms   72.78%
> >     Req/Sec   147.85     19.10   215.00     70.30%
> >   70967 requests in 1.00m, 54.46MB read
> > Requests/sec:   1183.54
> > Transfer/sec:  0.91MB
> >
> > Thanks for your remarks.
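For quick comparison, the per-step relative drops can be computed straight from the Requests/sec figures in the runs above (plain arithmetic, no extra tooling):

```python
# Requests/sec from the wrk runs above, in the order the changes were applied.
baseline    = 1194.19  # Python 3.4.2, aiohttp 0.16.3, aiopg 0.7.0
py35        = 1084.09  # Python 3.5.1 only
deps        = 1035.44  # + aiohttp 0.21.1, aiopg 0.9.2
async_await = 1014.01  # + async/await syntax

def drop_pct(before, after):
    """Relative throughput drop from `before` to `after`, in percent."""
    return round(100.0 * (before - after) / before, 1)

print(drop_pct(baseline, py35))         # Python 3.5 alone    -> 9.2
print(drop_pct(py35, deps))             # dependency upgrades -> 4.5
print(drop_pct(deps, async_await))      # async/await syntax  -> 2.1
print(drop_pct(baseline, async_await))  # total               -> 15.1
```

This matches the "around 15%" total, with the Python 3.5 upgrade itself as the biggest single step.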
> >
> > On Monday, December 28, 2015 at 2:54:06 PM UTC+1, Ludovic Gasc wrote:
> > Hi everybody,
> >
> > For now, I can't contribute as much as I'd like, because I'm handling a lot
> of personal and professional changes.
> >
> > However, I continue to keep an eye on things.
> > FrameworkBenchmarks Round 11 results are available:
> >
> https://www.techempower.com/benchmarks/#section=data-r11=peak=fortune=1kw
> >
> > If you don't know what FrameworkBenchmarks is:
> https://www.techempower.com/benchmarks/#section=intro
> >
> > No big surprises: AsyncIO+aiohttp continues to have good results, except
> for plaintext; I need to dig into where the bottleneck is.
> >
> > For the next round, I've a small todo list:
> > 1. Add tests with MySQL, because MySQL is more optimized in the techempower
> setup than PostgreSQL, and all the other Python frameworks with better
> results use MySQL.
> > 2. Upgrade aiohttp, because the latest version has several performance
> improvements: https://github.com/KeepSafe/aiohttp/releases/tag/v0.20.0
> > 3. Use Python 3.5 with async/await; apparently I can hope for some
> performance improvements.
> > 4. If I have the time, take a try with MicroPython.
> >
> > If somebody wants to help me or has suggestions, be my guest.

Re: [python-tulip] Re: FrameworkBenchmarks Round 11 results are available

2016-02-14 Thread Yury Selivanov
Hi Ludovic,

I’m usually highly sceptical about any network benchmark run over localhost.  I’d
suggest rerunning your benchmarks on two different machines connected over a
gigabit network.  Please make sure that your server process loads the CPU to
100%, and ideally use several client processes.

Yury
