On 08/03/2016 03:39, Terry Reedy wrote:
On 3/7/2016 8:33 PM, BartC wrote:
On Tue, Mar 8, 2016 at 12:00 PM, BartC <b...@freeuk.com> wrote:

def whiletest():
    i = 0
    while i <= 100000000:
        i += 1

whiletest()

Python 2.7:  8.4 seconds
Python 3.1: 12.5 seconds
Python 3.4: 18.0 seconds

Even if you don't care about speed, you must admit that there appears
to be something peculiar going on here: why would 3.4 take more than
twice as long as 2.7? What do they keep doing to 3.x to cripple it on
each new version?

Let me ask you a follow-on question first: how slow does a new Python
version have to be before even you would take notice?

We now run a suite of benchmarks daily on the latest versions of 2.7 and
default (3.6), comparing each branch not with the other but with its own
previous days, to detect performance changes within either branch.

I verified your result with the installed 64-bit 2.7.11 versus 3.5.1:

import time

def test():
    i = 0
    while i <= 100000000:
        i += 1

start = time.time()
test()
print(time.time() - start)

4.4 seconds (2.7) versus 10.7 seconds (3.5).
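For steadier numbers on micro-benchmarks like this, timeit is worth
using; a minimal sketch (loop bound scaled down here so the example runs
quickly):

import timeit

# timeit disables garbage collection while the statement runs, removing
# one source of noise compared with a bare time.time() measurement.
stmt = "i = 0\nwhile i <= 1000000:\n    i += 1\n"

# The statement executes `number` times; divide by it for a per-run time.
print(timeit.timeit(stmt, number=10) / 10)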

Running the loop at top level instead of inside the function doubled the
time.  Replacing the globals dict lookup with the function's locals array
lookup really helps.
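You can see the difference with the dis module (a minimal sketch; nothing
here actually runs the big loop, it only disassembles it): inside a
function the loop variable compiles to LOAD_FAST/STORE_FAST, while at
module level the same statements go through LOAD_NAME/STORE_NAME, i.e.
dict lookups.

import dis

def in_function():
    i = 0
    while i <= 100000000:
        i += 1

# Inside a function: i lives in the fast locals array (LOAD_FAST/STORE_FAST).
dis.dis(in_function)

# At module level: the same statements use the namespace dict
# (LOAD_NAME/STORE_NAME), which is the extra cost described above.
dis.dis(compile("i = 0\nwhile i <= 100000000:\n    i += 1\n", "<module>", "exec"))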

Next, I replaced the body of the function with
     for i in range(100000000): pass

This time, 3.5 wins: 3.8 seconds (2.7) versus 2.7 seconds (3.5).  Whoops,
unfair: in 2.7, range builds a whole list first. Change range to xrange
in 2.7 and the time drops to 1.5 seconds.
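A sketch of a comparable test that runs unchanged on both branches (it
picks up the lazy xrange when it exists, plain range on 3.x; exact
timings will vary by machine):

import time

try:
    loop_range = xrange   # Python 2: lazy, no 100-million-element list
except NameError:
    loop_range = range    # Python 3: range is already lazy

def fortest():
    for i in loop_range(100000000):
        pass

start = time.time()
fortest()
print(time.time() - start)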

Neither version optimizes the do-nothing loop away.  A future version
of CPython might.  There are people working on improving the speed of
CPython.  Integer operations are not their focus, though, because heavy
integer work can be done in numpy.  Text operations have been getting
work, and at least one person is actively working on an AST optimizer.
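To illustrate the numpy point (a sketch assuming numpy is installed;
scaled down to 10 million elements so it stays around 80 MB):

import numpy as np

# The per-element arithmetic runs in C in a single pass over the array,
# instead of going through the interpreter once per integer.
a = np.zeros(10000000, dtype=np.int64)
a += 1
print(a.sum())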


For reference, all of the tips of the trade are given here: https://wiki.python.org/moin/PythonSpeed/PerformanceTips
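One of the classic tips on that page, as a small sketch (the gain is
modest and varies by version): bind attribute and global lookups to local
names before a hot loop instead of repeating them on every iteration.

def upper_all(words):
    # Local bindings: each lookup is done once, not once per iteration.
    upper = str.upper
    result = []
    append = result.append
    for w in words:
        append(upper(w))
    return result

print(upper_all(["spam", "eggs"]))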

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list
