I don't know if bumping up the number of processors from 1 to 4 makes 
sense. I have a dual-core Mac mini. The VM may be doing something funny.

I changed to 2 processors and we're back to the 10x performance 
discrepancy. So whether it's 1 or 2 processors makes very little difference.

Apache:Begin...
Apache:Elapsed time: 2.27643203735
Apache:Elapsed time: 6.1853530407
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.270731925964
Apache:Elapsed time: 0.80504989624
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.292776823044
Apache:Elapsed time: 0.856013059616
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.28355884552
Apache:Elapsed time: 0.832424879074
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.310907125473
Apache:Elapsed time: 0.810643911362
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.282160043716
Apache:Elapsed time: 0.809345960617
Apache:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0269491672516
Gunicorn:Elapsed time: 0.0727801322937
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0269680023193
Gunicorn:Elapsed time: 0.0745708942413
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0281398296356
Gunicorn:Elapsed time: 0.0747048854828
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0501861572266
Gunicorn:Elapsed time: 0.0854380130768
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0284719467163
Gunicorn:Elapsed time: 0.0778048038483
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.026153087616
Gunicorn:Elapsed time: 0.0714471340179
Gunicorn:Percentage fill: 60.0


On Monday, 17 March 2014 12:21:33 UTC-4, horridohobbyist wrote:
>
> I bumped up the number of processors from 1 to 4. Here are the results:
>
> Apache:Begin...
> Apache:Elapsed time: 2.31899785995
> Apache:Elapsed time: 6.31404495239
> Apache:Percentage fill: 60.0
> Apache:Begin...
> Apache:Elapsed time: 0.274327039719
> Apache:Elapsed time: 0.832695960999
> Apache:Percentage fill: 60.0
> Apache:Begin...
> Apache:Elapsed time: 0.277992010117
> Apache:Elapsed time: 0.875190019608
> Apache:Percentage fill: 60.0
> Apache:Begin...
> Apache:Elapsed time: 0.284713983536
> Apache:Elapsed time: 0.82108092308
> Apache:Percentage fill: 60.0
> Apache:Begin...
> Apache:Elapsed time: 0.289800882339
> Apache:Elapsed time: 0.850221157074
> Apache:Percentage fill: 60.0
> Apache:Begin...
> Apache:Elapsed time: 0.287453889847
> Apache:Elapsed time: 0.822550058365
> Apache:Percentage fill: 60.0
> Gunicorn:Begin...
> Gunicorn:Elapsed time: 1.9300968647
> Gunicorn:Elapsed time: 5.28614592552
> Gunicorn:Percentage fill: 60.0
> Gunicorn:Begin...
> Gunicorn:Elapsed time: 0.315547943115
> Gunicorn:Elapsed time: 0.944733142853
> Gunicorn:Percentage fill: 60.0
> Gunicorn:Begin...
> Gunicorn:Elapsed time: 0.321009159088
> Gunicorn:Elapsed time: 0.95100903511
> Gunicorn:Percentage fill: 60.0
> Gunicorn:Begin...
> Gunicorn:Elapsed time: 0.310179948807
> Gunicorn:Elapsed time: 0.930527925491
> Gunicorn:Percentage fill: 60.0
> Gunicorn:Begin...
> Gunicorn:Elapsed time: 0.311529874802
> Gunicorn:Elapsed time: 0.939922809601
> Gunicorn:Percentage fill: 60.0
> Gunicorn:Begin...
> Gunicorn:Elapsed time: 0.308799028397
> Gunicorn:Elapsed time: 0.932448863983
> Gunicorn:Percentage fill: 60.0
>
> WTF. Now, both Apache and Gunicorn are slow. *Equally slow!*
>
> I am befuddled. I think I'll go get stinking drunk...
>
>
> On Monday, 17 March 2014 11:58:07 UTC-4, Cliff Kachinske wrote:
>>
>> Apparently the number of cores is adjustable. Try this link.
>>
>>
>> http://download.parallels.com/desktop/v5/docs/en/Parallels_Desktop_Users_Guide/23076.htm
>>
>> On Monday, March 17, 2014 10:02:13 AM UTC-4, horridohobbyist wrote:
>>>
>>> Parallels VM running on a 2.5GHz dual-core Mac mini. I really don't know 
>>> what Parallels uses.
>>>
>>>
>>> On Monday, 17 March 2014 00:05:58 UTC-4, Massimo Di Pierro wrote:
>>>>
>>>> What kind of VM is this? What is the host platform? How many CPU cores? 
>>>> Is the VM using all the cores? The only thing I can think of is the GIL 
>>>> and the fact that multithreaded code in Python gets slower the more 
>>>> cores you have. On my laptop, with two cores, I do not see any slowdown. 
>>>> Rocket preallocates a thread pool; the rationale is that it reduces 
>>>> latency. Perhaps you can also try Rocket this way:
>>>>
>>>> web2py.py --minthreads=1 --maxthreads=1
>>>>
>>>> This will reduce the number of worker threads to 1. Rocket also runs a 
>>>> background non-worker thread that monitors worker threads and kills 
>>>> them if they get stuck.
>>>>
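The GIL effect described above can be sketched with a small standalone script (not from this thread; Python 3 syntax, and the iteration count is an arbitrary choice): running the same CPU-bound loop four times sequentially versus in four threads takes roughly the same total time, because only one thread executes Python bytecode at once.

```python
import threading
import time

def work(n=200000):
    # CPU-bound loop in the same spirit as the thread's benchmark
    x = 0.0
    for i in range(1, n):
        x += (float(i + 10) * (i + 25) + 175.0) / 3.14
    return x

def timed(fn):
    # wall-clock time of a single call
    start = time.time()
    fn()
    return time.time() - start

def sequential(count=4):
    # run the workload count times, one after another
    return timed(lambda: [work() for _ in range(count)])

def threaded(count=4):
    # run the workload count times, once per thread, concurrently
    threads = [threading.Thread(target=work) for _ in range(count)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

if __name__ == '__main__':
    print("sequential, 4x work: %.3fs" % sequential())
    print("4 threads,  4x work: %.3fs" % threaded())
```

Under CPython the two totals are typically comparable (threads add scheduling overhead rather than speedup for pure-Python arithmetic); a workload that releases the GIL, such as I/O or a C extension, would behave differently.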
>>>> On Sunday, 16 March 2014 20:22:45 UTC-5, horridohobbyist wrote:
>>>>>
>>>>> Using gunicorn (Thanks, Massimo), I ran the full web2py Welcome code:
>>>>>
>>>>> Welcome: elapsed time: 0.0511929988861
>>>>> Welcome: elapsed time: 0.0024790763855
>>>>> Welcome: elapsed time: 0.00262713432312
>>>>> Welcome: elapsed time: 0.00224614143372
>>>>> Welcome: elapsed time: 0.00218415260315
>>>>> Welcome: elapsed time: 0.00213503837585
>>>>>
>>>>> Oddly enough, it's slightly faster! But still 37% slower than the 
>>>>> command line execution.
>>>>>
>>>>> I'd really, really, **really** like to know why the shipping code is 
>>>>> 10x slower...
>>>>>
>>>>>
>>>>> On Sunday, 16 March 2014 21:13:56 UTC-4, horridohobbyist wrote:
>>>>>>
>>>>>> Okay, I did the calculations test in my Linux VM using command line 
>>>>>> (fred0), Flask (hello0), and web2py (Welcome).
>>>>>>
>>>>>> fred0: elapsed time: 0.00159001350403
>>>>>> fred0: elapsed time: 0.0015709400177
>>>>>> fred0: elapsed time: 0.00156021118164
>>>>>> fred0: elapsed time: 0.0015971660614
>>>>>> fred0: elapsed time: 0.00315999984741
>>>>>>
>>>>>> hello0: elapsed time: 0.00271105766296
>>>>>> hello0: elapsed time: 0.00213503837585
>>>>>> hello0: elapsed time: 0.00195693969727
>>>>>> hello0: elapsed time: 0.00224900245667
>>>>>> hello0: elapsed time: 0.00205492973328
>>>>>>
>>>>>> Welcome: elapsed time: 0.0484869480133
>>>>>> Welcome: elapsed time: 0.00296783447266
>>>>>> Welcome: elapsed time: 0.00293898582458
>>>>>> Welcome: elapsed time: 0.00300216674805
>>>>>> Welcome: elapsed time: 0.00312614440918
>>>>>>
>>>>>> The Welcome discrepancy is just under 2x, not nearly as bad as 10x in 
>>>>>> my shipping code.
>>>>>>
>>>>>>
>>>>>> On Sunday, 16 March 2014 17:52:00 UTC-4, Massimo Di Pierro wrote:
>>>>>>>
>>>>>>> In order to isolate the problem one must take it in steps. This is a 
>>>>>>> good test, but you must first perform it with the code you proposed 
>>>>>>> before:
>>>>>>>
>>>>>>> def test():
>>>>>>>     t = time.time
>>>>>>>     start = t()
>>>>>>>     x = 0.0
>>>>>>>     for i in range(1,5000):
>>>>>>>         x += (float(i+10)*(i+25)+175.0)/3.14
>>>>>>>     debug("elapsed time: "+str(t()-start))
>>>>>>>     return
>>>>>>>
>>>>>>> I would like to know the results about this test code first.
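A standalone version of that snippet, runnable outside web2py (swapping the debug() helper for print, which is an assumption about what debug does there), would be:

```python
import time

def test(iterations=5000):
    # same arithmetic loop as the proposed test, timed with time.time
    t = time.time
    start = t()
    x = 0.0
    for i in range(1, iterations):
        x += (float(i + 10) * (i + 25) + 175.0) / 3.14
    elapsed = t() - start
    print("elapsed time: " + str(elapsed))
    return elapsed

if __name__ == '__main__':
    test()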
>>>>>>>
>>>>>>> The other code you are using performs an import:
>>>>>>>
>>>>>>>     from shippackage import Package
>>>>>>>
>>>>>>>
>>>>>>> Now that is something that is very different between web2py and 
>>>>>>> Flask, for example. In web2py the import is executed at every request 
>>>>>>> (although it should be cached by Python), while in Flask it is 
>>>>>>> executed only once. This should also not cause a performance 
>>>>>>> difference, but it is a different test than the one above.
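Python's import caching is easy to verify directly: after the first import a module sits in sys.modules, so re-importing it is essentially a dict lookup rather than a re-execution. A minimal check (using json purely as an example module):

```python
import sys
import time
import timeit

# first import: finds and executes the module (unless something already loaded it)
start = time.time()
import json
first = time.time() - start

# repeated imports only hit the sys.modules cache
cached = timeit.timeit('import json', number=100000)

print('json' in sys.modules)  # True once imported
print("first: %.6fs, 100k cached re-imports: %.6fs" % (first, cached))
```

If web2py's per-request imports were genuinely slow, it would have to be because of something layered on top of this cache (such as custom_import), not the import statement itself.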
>>>>>>>
>>>>>>> TL;DR: we should separately test Python code execution (which may be 
>>>>>>> affected by threading) and import statements (which may be affected 
>>>>>>> by web2py's custom_import and/or odd module behavior).
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sunday, 16 March 2014 08:47:13 UTC-5, horridohobbyist wrote:
>>>>>>>>
>>>>>>>> I've conducted a test with Flask.
>>>>>>>>
>>>>>>>> fred.py is the command line program.
>>>>>>>> hello.py is the Flask program.
>>>>>>>> default.py is the Welcome controller.
>>>>>>>> testdata.txt is the test data.
>>>>>>>> shippackage.py is a required module.
>>>>>>>>
>>>>>>>> fred.py:
>>>>>>>> 0.024 second
>>>>>>>> 0.067 second
>>>>>>>>
>>>>>>>> hello.py:
>>>>>>>> 0.029 second
>>>>>>>> 0.073 second
>>>>>>>>
>>>>>>>> default.py:
>>>>>>>> 0.27 second
>>>>>>>> 0.78 second
>>>>>>>>
>>>>>>>> The Flask program is slightly slower than the command line. 
>>>>>>>> However, the Welcome app is about 10x slower!
>>>>>>>>
>>>>>>>> *Web2py is much, much slower than Flask.*
>>>>>>>>
>>>>>>>> I conducted the test in a Parallels VM running Ubuntu Server 12.04 
>>>>>>>> (1GB memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB.
>>>>>>>>
>>>>>>>>
>>>>>>>> I can't quite figure out how to use gunicorn.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote:
>>>>>>>>>
>>>>>>>>> I'll see what I can do. It will take time for me to learn how to 
>>>>>>>>> use another framework.
>>>>>>>>>
>>>>>>>>> As for trying a different web server, my (production) Linux server 
>>>>>>>>> is intimately reliant on Apache. I'd have to learn how to use another 
>>>>>>>>> web 
>>>>>>>>> server, and then try it in my Linux VM.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:
>>>>>>>>>>
>>>>>>>>>> Are you able to replicate the exact task in another web 
>>>>>>>>>> framework, such as Flask (with the same server setup)?
>>>>>>>>>>
>>>>>>>>>> On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist 
>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Well, putting back all my apps hasn't widened the discrepancy. 
>>>>>>>>>>> So I don't know why my previous web2py installation was so slow.
>>>>>>>>>>>
>>>>>>>>>>> While the Welcome app with the calculations test shows a 2x 
>>>>>>>>>>> discrepancy, the original app that initiated this thread now shows 
>>>>>>>>>>> a 13x 
>>>>>>>>>>> discrepancy instead of 100x. That's certainly an improvement, but 
>>>>>>>>>>> it's 
>>>>>>>>>>> still too slow.
>>>>>>>>>>>
>>>>>>>>>>> The size of the discrepancy depends on the code that is 
>>>>>>>>>>> executed. Clearly, what I'm doing in the original app (performing 
>>>>>>>>>>> permutations) is more demanding than mere arithmetical operations. 
>>>>>>>>>>> Hence, 
>>>>>>>>>>> 13x vs 2x.
>>>>>>>>>>>
>>>>>>>>>>> I anxiously await any resolution to this performance issue, 
>>>>>>>>>>> whether it be in WSGI or in web2py. I'll check in on this thread 
>>>>>>>>>>> periodically...
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Interestingly, now that I've got a fresh install of web2py with 
>>>>>>>>>>>> only the Welcome app, my Welcome vs command line test shows a 
>>>>>>>>>>>> consistent 2x 
>>>>>>>>>>>> discrepancy, just as you had observed.
>>>>>>>>>>>>
>>>>>>>>>>>> My next step is to gradually add back all the other apps I had 
>>>>>>>>>>>> in web2py (I had 8 of them!) and see whether the discrepancy grows 
>>>>>>>>>>>> with the 
>>>>>>>>>>>> number of apps. That's the theory I'm working on.
>>>>>>>>>>>>
>>>>>>>>>>>> Yes, yes, I know, according to the Book, I shouldn't have so 
>>>>>>>>>>>> many apps installed in web2py. This apparently affects 
>>>>>>>>>>>> performance. But the 
>>>>>>>>>>>> truth is, most of those apps are hardly ever executed, so their 
>>>>>>>>>>>> existence 
>>>>>>>>>>>> merely represents a static overhead in web2py. In my mind, this 
>>>>>>>>>>>> shouldn't 
>>>>>>>>>>>> widen the discrepancy, but you never know.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> @mcm: you got me worried. Your test function was clocking a 
>>>>>>>>>>>>> hell of a lot lower than the original script. But then I found 
>>>>>>>>>>>>> out why: one order of magnitude fewer iterations (5000 vs 
>>>>>>>>>>>>> 50000). Once that was corrected, it got the exact same clock 
>>>>>>>>>>>>> times as "my app" (i.e. the function directly in the 
>>>>>>>>>>>>> controller). I also stripped out the logging part, making the 
>>>>>>>>>>>>> app just return the result, and no visible change to the 
>>>>>>>>>>>>> timings happened.
>>>>>>>>>>>>>
>>>>>>>>>>>>> @hh: glad we at least have some ground to stand on. 
>>>>>>>>>>>>> @mariano: compiled or not, it doesn't seem to change the mean; 
>>>>>>>>>>>>> a compiled app just has lower variance. 
>>>>>>>>>>>>>
>>>>>>>>>>>>> @all: jlundell definitely hit on something. Times are much 
>>>>>>>>>>>>> lower when threads are set to 1.
>>>>>>>>>>>>>
>>>>>>>>>>>>> BTW: if I change "originalscript.py" to 
>>>>>>>>>>>>>
>>>>>>>>>>>>> # -*- coding: utf-8 -*-
>>>>>>>>>>>>> import time
>>>>>>>>>>>>> import threading
>>>>>>>>>>>>>
>>>>>>>>>>>>> def test():
>>>>>>>>>>>>>     start = time.time()
>>>>>>>>>>>>>     x = 0.0
>>>>>>>>>>>>>     for i in range(1,50000):
>>>>>>>>>>>>>         x += (float(i+10)*(i+25)+175.0)/3.14
>>>>>>>>>>>>>     res = str(time.time()-start)
>>>>>>>>>>>>>     print "elapsed time: "+ res + '\n'
>>>>>>>>>>>>>
>>>>>>>>>>>>> if __name__ == '__main__':
>>>>>>>>>>>>>     t = threading.Thread(target=test)
>>>>>>>>>>>>>     t.start()
>>>>>>>>>>>>>     t.join()
>>>>>>>>>>>>>
>>>>>>>>>>>>> I'm getting timings really close to the "wsgi environment, 1 
>>>>>>>>>>>>> thread only" tests, i.e. 
>>>>>>>>>>>>> 0.23 min, 0.26 max, ~0.24 mean
>>>>>>>>>>>>>
>>>>>>>>>>>>>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
