On Tue, Mar 8, 2016 at 12:33 PM, BartC <b...@freeuk.com> wrote:
> On 08/03/2016 01:23, Chris Angelico wrote:
>>
>> On Tue, Mar 8, 2016 at 12:00 PM, BartC <b...@freeuk.com> wrote:
>>>
>>> Yes of course it does. As does 'being slow'. Take another microbenchmark:
>>>
>>> def whiletest():
>>>     i = 0
>>>     while i <= 100000000:
>>>         i += 1
>>>
>>> whiletest()
>>>
>>> Python 2.7:  8.4 seconds
>>> Python 3.1: 12.5 seconds
>>> Python 3.4: 18.0 seconds
>>>
>>> Even if you don't care about speed, you must admit that there appears
>>> to be something peculiar going on here: why would 3.4 take more than
>>> twice as long as 2.7? What do they keep doing to 3.x to cripple it on
>>> each new version?
>>
>>
>> How do your benchmarks compare on this code:
>>
>> pass
>
>
> Let me ask you a follow-on question first: how slow does a new Python
> version have to be before even you would take notice?
>
> Compared with 2.7, 3.4 above is spending nearly an extra ten seconds
> doing... what? I can't understand why someone just wouldn't care.

Performance matters when you actually have something useful to
measure. Startup performance matters enormously if interpreter startup
is what you're doing a lot of (for example, the "feel" of Mercurial
depends heavily on Python startup performance). It matters not a whit
if your process keeps running for a long time and handles many
requests (for example, a web server).
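
As a rough illustration (a sketch, not a rigorous benchmark; it
assumes a 'python3' binary on your PATH), you can isolate startup cost
like this:

    import subprocess
    import time

    # "python3" is assumed to be on your PATH; substitute whichever
    # binary you are testing. Time ten cold starts of an empty program
    # and keep the best; everything measured here is startup overhead.
    timings = []
    for _ in range(10):
        t0 = time.perf_counter()
        subprocess.run(["python3", "-c", "pass"], check=True)
        timings.append(time.perf_counter() - t0)
    print("best startup: %.3f s" % min(timings))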

You need to be VERY clear about exactly what you're measuring. Are you
using the 'timeit' module to measure execution of one line of code?
Are you putting your code into a file and running that with
/usr/bin/time? Are you putting the code into IDLE and running it in a
loop with 'exec' and using time.time() around the outside? Your
numbers do not concern me *because they mean nothing*.
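
To spell out why the method matters (a minimal sketch using only the
stdlib): timing the function with 'timeit' measures just the loop,
while running a script under /usr/bin/time also charges you for
interpreter startup and imports, so the two numbers answer different
questions.

    import timeit

    def whiletest():
        i = 0
        while i <= 100000000:
            i += 1

    # One timed call of the function itself. This deliberately excludes
    # interpreter startup, so it answers a different question than
    # `/usr/bin/time python whiletest.py` on the same code would.
    print("%.1f s" % timeit.timeit(whiletest, number=1))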

ChrisA