Re: [Tutor] Astonishing timing result

2008-06-27 Thread Dick Moores


At 05:57 AM 6/26/2008, Kent Johnson wrote:
On Thu, Jun 26, 2008 at 3:18 AM,
Dick Moores [EMAIL PROTECTED] wrote:
 I thought I'd use this to compare the 2 ways of string concatenation.
 Ever since I began to learn Python I've been told that only one of these
 is the proper and efficient one to use, and especially so if the string
 to be stitched together is a very long one.
String concatenation was optimized in Python 2.4. You might like to
try this test in Python 2.3. See the last note here:

http://www.python.org/doc/2.4.4/whatsnew/node12.html#SECTION000121

Kent
Interesting. 
Instead I've tried to find out if it's true what Alex Martelli writes on
p. 484 in the section "Building up a string from pieces" in
his _Python in a Nutshell_, 2nd ed., which covers Python 2.4.x.
==
The single Python anti-idiom that's likeliest to kill your
program's performance, to the point that you should _never_ use it, is to
build up a large string from pieces by looping on string concatenation
statements such as big_string+=piece. Python strings
are immutable, so each such concatenation means that Python must free the
M bytes previously allocated for big_string, and allocate and fill
M+K bytes for the new version. Doing this repeatedly in a loop, you end
up with roughly O(N**2) performance, where N is the total number of
characters. More often than not, O(N**2) performance where O(N) is
available is a performance disaster. On some platforms, things may be
even bleaker due to memory fragmentation effects caused by freeing many
memory areas of progressively larger sizes.
To achieve O(N) performance, accumulate intermediate pieces in a list
rather than build up the string piece by piece. Lists, unlike strings,
are mutable, so appending to a list has O(1) performance (amortized).
Change each occurrence of big_string += piece
into temp_list.append(piece). Then, when you're done
accumulating, use the following to build your desired string result
in O(N) time:
  big_string = ''.join(temp_list)

==
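The two idioms Martelli contrasts can be sketched side by side (a minimal sketch in modern Python 3; the function names are illustrative, not from the book):

```python
def build_with_concat(pieces):
    # Anti-idiom: each += may reallocate the growing string
    # (O(N**2) worst case; CPython 2.4+ often optimizes it in place).
    big_string = ''
    for piece in pieces:
        big_string += piece
    return big_string

def build_with_join(pieces):
    # Recommended idiom: accumulate in a list, join once -- O(N).
    temp_list = []
    for piece in pieces:
        temp_list.append(piece)
    return ''.join(temp_list)

pieces = [str(n) for n in range(1000)]
assert build_with_concat(pieces) == build_with_join(pieces)
```

Both build the same string; only the growth pattern of the intermediate storage differs.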

Please see

http://py77.python.pastebin.com/f324475f5. 
It's probably obvious to you what I did. But just in case it's not, I
successively modified lines 14 and 29 so that the length of b varied
from 6,000 to 60,000,000 for both ch1() and ch2(). The outputs show that
the longer b (Martelli's big_string) is, the greater the
performance hit taken by string concatenation (function ch2() ) compared
to the other kind (function ch1() ), the time ratios ranging from
about 1 to about 9.
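The pastebin link may no longer resolve; a rough sketch of such a comparison (modern Python 3, with the function names ch1/ch2 and far smaller sizes assumed for illustration) would be:

```python
import timeit

def ch1(n):
    # join idiom: accumulate pieces, join once
    return ''.join(str(i) for i in range(n))

def ch2(n):
    # += idiom: repeated concatenation onto a growing string
    s = ''
    for i in range(n):
        s += str(i)
    return s

for n in (6000, 60000):
    t1 = min(timeit.repeat(lambda: ch1(n), number=10, repeat=3))
    t2 = min(timeit.repeat(lambda: ch2(n), number=10, repeat=3))
    print('n=%d  join=%.4fs  concat=%.4fs  ratio=%.2f' % (n, t1, t2, t2 / t1))
```

The ratio's growth with n is the point; absolute times depend entirely on the interpreter and machine.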
Dick Moores
Win XP Pro, Python 2.5.1






___
Tutor maillist  -  Tutor@python.org
http://mail.python.org/mailman/listinfo/tutor


Re: [Tutor] Astonishing timing result

2008-06-27 Thread Kent Johnson
On Fri, Jun 27, 2008 at 6:48 AM, Dick Moores [EMAIL PROTECTED] wrote:

 Instead I've tried to find out if it's true what Alex Martelli writes on p.
 484 in the section "Building up a string from pieces" in his _Python in a
 Nutshell_, 2nd ed., which covers Python 2.4.x.

You might be interested in this, complete with a picture:
http://personalpages.tds.net/~kent37/blog/arch_m1_2004_08.html#e55

and this followup:
http://personalpages.tds.net/~kent37/blog/arch_m1_2004_08.html#e56

Kent


Re: [Tutor] Astonishing timing result

2008-06-27 Thread Dick Moores

At 04:28 AM 6/27/2008, Kent Johnson wrote:

On Fri, Jun 27, 2008 at 6:48 AM, Dick Moores [EMAIL PROTECTED] wrote:

 Instead I've tried to find out if it's true what Alex Martelli writes on p.
 484 in the section "Building up a string from pieces" in his _Python in a
 Nutshell_, 2nd ed., which covers Python 2.4.x.

You might be interested in this, complete with a picture:
http://personalpages.tds.net/~kent37/blog/arch_m1_2004_08.html#e55

and this followup:
http://personalpages.tds.net/~kent37/blog/arch_m1_2004_08.html#e56


Good stuff, Kent!

Dick



Re: [Tutor] Astonishing timing result

2008-06-26 Thread Dick Moores


At 05:52 PM 6/24/2008, Dick Moores wrote:
At 05:35 PM 6/24/2008, Kent
Johnson wrote:
On Tue, Jun 24, 2008 at 5:20 PM,
Dick Moores [EMAIL PROTECTED] wrote:
 Basically, I'm not worried, just curious. Not about the small differences,
 but why did the use of the standard if __name__ == '__main__' result
 in such speed?
Because __name__ is not equal to '__main__', so you were basically
skipping the whole test.
Ah.
The most common cause of unexpected timing
results is tests that don't do what you think they do.
The test code is not run in the main module. You can dig into the
timeit module if you want the details.
OK, I'll dig.
While digging I came across this at the bottom of

http://docs.python.org/lib/node808.html:

To give the timeit module access to functions you define, you
can pass a setup parameter which contains an import statement:

def test():
    "Stupid test function"
    L = []
    for i in range(100):
        L.append(i)

if __name__=='__main__':
    from timeit import Timer
    t = Timer("test()", "from __main__ import test")
    print t.timeit()

=

I thought I'd use this to compare the 2 ways of string
concatenation. Ever since I began to learn Python I've been told that
only one of these is the proper and efficient one to use, and especially
so if the string to be stitched together is a very long one. So I set up
two tests in separate scripts, with functions z1() and z2(). The string
is 100,000 7-digit ints converted to strings and strung together, to make
a string of 700,000 digits. I thought this should be long enough to
be a good test. 
def z1():
    lst = []
    for y in range(100000):
        m = str(randint(1000000, 9999999))
        lst.append(m)
    return ''.join(lst)

if __name__=='__main__':
    from random import randint
    from timeit import Timer
    t = Timer("z1()", "from __main__ import z1")
    print t.timeit(number=10)

OUTPUTS:
9.95473754346
9.74315730072


def z2():
    astr = ''
    for y in range(100000):
        m = str(randint(1000000, 9999999))
        astr += m
    return astr

if __name__=='__main__':
    from random import randint
    from timeit import Timer
    t = Timer("z2()", "from __main__ import z2")
    print t.timeit(number=10)

OUTPUTS:
10.8160584655
10.605619988


The proper way did win, but by less than 10 percent.

I also tested z1() and z2() (slightly modified) with timeit at the
command line (see

http://py77.python.pastebin.com/f60e746fe), with similar
results.

Dick Moores






Re: [Tutor] Astonishing timing result

2008-06-26 Thread Kent Johnson
On Thu, Jun 26, 2008 at 3:18 AM, Dick Moores [EMAIL PROTECTED] wrote:

 I thought I'd use this to compare the 2 ways of string concatenation. Ever
 since I began to learn Python I've been told that only one of these is the
 proper and efficient one to use, and especially so if the string to be
 stitched together is a very long one.

String concatenation was optimized in Python 2.4. You might like to
try this test in Python 2.3. See the last note here:
http://www.python.org/doc/2.4.4/whatsnew/node12.html#SECTION000121

Kent


Re: [Tutor] Astonishing timing result

2008-06-25 Thread Kent Johnson
On Wed, Jun 25, 2008 at 1:16 AM, Dick Moores [EMAIL PROTECTED] wrote:
 At 07:00 PM 6/24/2008, Marilyn Davis wrote:

 Has anyone ever timed the difference between using a function that was
 imported with:

 from my_module import MyFunction

 and:

 import my_module

 Here are 2 comparisons: http://py77.python.pastebin.com/f53ab3769, and
  http://py77.python.pastebin.com/f68346b28

 I don't see a significant difference.

I wouldn't expect much. The only difference is the extra attribute
lookup in the second form. Attribute lookup is slow enough to be
measurable and fast enough that you will only care if you are doing it
a lot of times.
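Kent's point is easy to measure with timeit (a Python 3 sketch; the exact numbers vary by machine, and math.sqrt merely stands in for any module attribute):

```python
import timeit
import math
from math import sqrt

# Qualified call: one extra attribute lookup on the module per iteration.
t_qualified = min(timeit.repeat('math.sqrt(2.0)', globals=globals(),
                                number=100000, repeat=5))
# Bare name: the function object was bound once, at import time.
t_bare = min(timeit.repeat('sqrt(2.0)', globals=globals(),
                           number=100000, repeat=5))
print('math.sqrt: %.4fs   sqrt: %.4fs' % (t_qualified, t_bare))
```

The difference is real but tiny per call, so it only matters in tight loops.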

Kent


Re: [Tutor] Astonishing timing result

2008-06-25 Thread Lie Ryan
I'm a bit curious about how you do the timing. I think there is a flaw
in how you measured the time. I made this code and the result is
inconclusive.

## CODE: test.py
#!/usr/bin/env python

import imported
import time
from imported import *


def b():
    a = 1

r = range(500)
t_a, t_b, t_c, t_d = 1000, 1000, 1000, 1000
for n in xrange(20):
    # a - direct, no function call
    start = time.time()
    for _ in r:
        a = 1
    end = time.time()
    t_A = end - start

    # b - function call
    start = time.time()
    for _ in r:
        b()
    end = time.time()
    t_B = end - start

    # c - imported module
    start = time.time()
    for _ in r:
        imported.c()
    end = time.time()
    t_C = end - start

    # d - imported function
    start = time.time()
    for _ in r:
        c()
    end = time.time()
    t_D = end - start

    t_a = min(t_A, t_a)
    t_b = min(t_A, t_b)
    t_c = min(t_A, t_c)
    t_d = min(t_A, t_d)

print t_a
print t_b
print t_c
print t_d

## CODE: imported.py
def c():
    a = 1

## OUTPUT
# 1.02956604958
# 1.02956604958
# 1.02956604958
# 1.02956604958





Re: [Tutor] Astonishing timing result

2008-06-25 Thread Kent Johnson
On Wed, Jun 25, 2008 at 12:05 PM, Lie Ryan [EMAIL PROTECTED] wrote:

t_a = min(t_A, t_a)
t_b = min(t_A, t_b)
t_c = min(t_A, t_c)
t_d = min(t_A, t_d)

What is this for? It should at least be t_B, t_C, t_D.

 ## OUTPUT
 # 1.02956604958
 # 1.02956604958
 # 1.02956604958
 # 1.02956604958

It's *very easy* to write bogus timing tests, as this thread
demonstrates. Some protections:
- when comparing different implementations of a function, make sure
each implementation returns the correct result by checking the return
value. You probably want to make this check outside the actual timing
test.
- when your results don't make sense, suspect your tests.

Kent


Re: [Tutor] Astonishing timing result

2008-06-25 Thread Lie Ryan
On Wed, 2008-06-25 at 12:56 -0400, Kent Johnson wrote:
 On Wed, Jun 25, 2008 at 12:05 PM, Lie Ryan [EMAIL PROTECTED] wrote:
 
 t_a = min(t_A, t_a)
 t_b = min(t_A, t_b)
 t_c = min(t_A, t_c)
 t_d = min(t_A, t_d)
 
 What is this for? It should at least be t_B, t_C, t_D.

A common pitfall in benchmarking is averaging the benchmark results. That
is WRONG, FLAT WRONG. Why? Variations in how long a piece of code takes to
run are caused by the environment, not the code itself. The correct way to
benchmark is to take the lowest time (i.e. the min() function),
since the lowest one is the one least interfered with by the
environment.
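This min-of-several-runs approach is exactly what timeit.repeat is built for; the timeit documentation likewise recommends taking the minimum. A short Python 3 sketch:

```python
import timeit

# repeat() returns one total time per run; report the minimum, since the
# larger values measure interference from the environment, not the code.
times = timeit.repeat('[i * i for i in range(100)]', number=1000, repeat=5)
print('runs:', ['%.4f' % t for t in times])
print('best: %.4f' % min(times))
```

Averaging the five runs would fold scheduler noise and cache effects into the result; the minimum is the cleanest observation.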

  ## OUTPUT
  # 1.02956604958
  # 1.02956604958
  # 1.02956604958
  # 1.02956604958
 
 It's *very easy* to write bogus timing tests, as this thread
 demonstrates. Some protections:
 - when comparing different implementations of a function, make sure
 each implementation returns the correct result by checking the return
 value. 

Since the purpose of the test is to benchmark the difference of where
the code is located, we should use a very simple function, that doesn't
even do much of anything, thus 'a = 1'. If that simple code is
substituted with anything else, I'm still confident that the result
won't be far off.

 You probably want to make this check outside the actual timing
 test.

Actually the timing is all equal because of the timer's resolution. I
don't have a high-precision timer on hand.

 - when your results don't make sense, suspect your tests.



 Kent



Re: [Tutor] Astonishing timing result

2008-06-25 Thread Kent Johnson
On Wed, Jun 25, 2008 at 2:06 PM, Lie Ryan [EMAIL PROTECTED] wrote:
 On Wed, 2008-06-25 at 12:56 -0400, Kent Johnson wrote:
 On Wed, Jun 25, 2008 at 12:05 PM, Lie Ryan [EMAIL PROTECTED] wrote:

 t_a = min(t_A, t_a)
 t_b = min(t_A, t_b)
 t_c = min(t_A, t_c)
 t_d = min(t_A, t_d)

 What is this for? It should at least be t_B, t_C, t_D.

 A common pitfall in benchmarking is averaging the benchmark result. That
 is WRONG, FLAT WRONG.

Yes, I agree. I missed the outer loop that this is in. But your code
is still WRONG, FLAT WRONG!
 t_b = min( *** t_A ***, t_b) // should be t_B, etc.

  ## OUTPUT
  # 1.02956604958
  # 1.02956604958
  # 1.02956604958
  # 1.02956604958

 It's *very easy* to write bogus timing tests, as this thread
 demonstrates. Some protections:
 - when comparing different implementations of a function, make sure
 each implementation returns the correct result by checking the return
 value.

 Actually the timing is all equal because of the timer's resolution. I
 don't have a high-precision timer on hand.

Or maybe they are all equal because they are all t_A...

Kent


Re: [Tutor] Astonishing timing result

2008-06-25 Thread Lie Ryan
On Wed, 2008-06-25 at 15:53 -0400, Kent Johnson wrote:
 On Wed, Jun 25, 2008 at 2:06 PM, Lie Ryan [EMAIL PROTECTED] wrote:
  On Wed, 2008-06-25 at 12:56 -0400, Kent Johnson wrote:
  On Wed, Jun 25, 2008 at 12:05 PM, Lie Ryan [EMAIL PROTECTED] wrote:
 
  t_a = min(t_A, t_a)
  t_b = min(t_A, t_b)
  t_c = min(t_A, t_c)
  t_d = min(t_A, t_d)
 
  What is this for? It should at least be t_B, t_C, t_D.
 
  A common pitfall in benchmarking is averaging the benchmark result. That
  is WRONG, FLAT WRONG.
 
 Yes, I agree. I missed the outer loop that this is in. But your code
 is still WRONG, FLAT WRONG!
  t_b = min( *** t_A ***, t_b) // should be t_B, etc.

Ah, yes, sorry, a slip of the hand when copying the code.

The corrected timing.

Outer loop: 10x
Inner Loop: 500x
per Innerloop   Overall
a | 1.05028605461 | 10.6743688583
b | 2.21457099915 | 22.3394482136
c | 3.53437685966 | 35.6701157093
d | 2.5965359211  | 26.1492891312

Overall Running Time: 94.8337771893

Well, it's obvious that the direct method is the fastest, simply because it
bypasses the function-call and module-name-lookup overhead. Method c
(module) is the slowest because the name lookup is done twice, first the
module's name and then the function's name inside the module, then the
function call. But anyway, considering that this overhead of (3.5 - 1 = 2.5
seconds) is accumulated over 5,000,000 iterations, it is silly to use
method a (avoiding functions and methods) for reasons of speed. The
difference between methods a and c is 2.5 seconds / 5,000,000 = 0.0000005
second = 0.5 microsecond. (DISCLAIMER: Timing is valid on my machine
only)

Sure, at a glance the saving seems good enough: a 1:3.5 ratio, i.e. the
direct method takes 28.6% of the time, a saving of 71.4%. But remember that
most functions are much more complex than this 'a = 1'. To put it into perspective:

a = n**2
Outer loop: 10x
Inner Loop: 500x
a | 2.1795668602 | 21.9916498661
b | 3.4880130291 | 35.1593179703
c | 4.97427606583 | 50.6705505848
d | 3.84812307358 | 39.1990897655

time: 43%, saving 57%

'a = math.sqrt(n ** 2 + n ** 2)'
'print 1'
Outer loop: 10x
Inner Loop: 5x
a | 0.805603027344 | 8.24900960922
b | 0.921233177185 | 9.31604623795
c | 1.03809094429 | 10.4301710129
d | 0.956300973892 | 9.58661794662
Total Time:  37.582244873

time: 78%, saving: 22%

'print 1'
Outer loop: 10x
Inner Loop: 5x
per Innerloop   Overall
a | 0.573838949203 | 6.04536104202
b | 0.578473091125 | 6.05607891083
c | 0.579005002975 | 6.08867025375
d | 0.570523023605 | 5.93990397453

Negligible.

So unless your function is extremely simple like 'a = 1', there is no
benefit in avoiding functions/methods. A single print statement (print is
a very slow function/statement) would immediately nullify the speed
gain. Even a function of intermediate complexity would make the saving
useless, and come to think of it, nobody would turn 'a = 1' into
a function, right?

   ## OUTPUT
   # 1.02956604958
   # 1.02956604958
   # 1.02956604958
   # 1.02956604958
 
  It's *very easy* to write bogus timing tests, as this thread
  demonstrates. Some protections:
  - when comparing different implementations of a function, make sure
  each implementation returns the correct result by checking the return
  value.
 
  Actually the timing is all equal because of the timer's resolution. I
  don't have a high-precision timer on hand.
 
 Or maybe they are all equal because they are all t_A...
 
 Kent



Re: [Tutor] Astonishing timing result

2008-06-25 Thread Marilyn Davis
On Tue, June 24, 2008 10:16 pm, Dick Moores wrote:

 At 07:00 PM 6/24/2008, Marilyn Davis wrote:


 Has anyone ever timed the difference between using a function that was
 imported with:

 from my_module import MyFunction

 and:


 import my_module

 Here are 2 comparisons: http://py77.python.pastebin.com/f53ab3769,
 and  http://py77.python.pastebin.com/f68346b28

 I don't see a significant difference.

Good.  Thank you.

I'm attaching another astonishing timing result, also wrong.

It's probably always true that if a timing result is astonishing, there's
a mistake somewhere, maybe in your thinking.

This one compares using os.popen, os.listdir, and subprocess.Popen.

Marilyn Davis




 Dick




#!/usr/bin/env python
"""lab13_1.py -- Adding up the file sizes in the current directory,
three ways, and comparing them."""
import os
import subprocess
__pychecker__ = 'no-local'

def AccuracyTest():
    print "os.listdir:", AddFilesOsListdir()
    print "os.popen:  ", AddFilesOsPopen()
    print "subprocess:", AddFilesSubprocess()

def AddFilesOsListdir():
    total = 0
    files = os.listdir('.')
    for f in files:
        if os.path.isdir('./' + f):
            continue
        total += os.path.getsize('./' + f)
    return total

def AddFilesOsPopen():
    return TotalLsSize(os.popen("ls -al"))

def AddFilesSubprocess():
    return TotalLsSize(subprocess.Popen(["ls", "-al"],
                                        stdout=subprocess.PIPE).stdout)

def ProfileTest():
    for i in range(100):
        AddFilesOsListdir()
        AddFilesOsPopen()
        AddFilesSubprocess()

def TotalLsSize(file_obj):
    total = 0
    for line in file_obj:
        if line[0] == 'd':
            continue
        parts = line.split()
        if len(parts) != 9:
            continue
        total += int(parts[4])
    return total

def main():
    AccuracyTest()
    import profile
    profile.run('ProfileTest()')

if __name__ == '__main__':
    main()

$ lab13_1.py
os.listdir: 26298
os.popen:   26298
subprocess: 26298
 30376 function calls in 1.872 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
      101    0.004    0.000    0.004    0.000  :0(WEXITSTATUS)
      101    0.012    0.000    0.012    0.000  :0(WIFEXITED)
      101    0.000    0.000    0.000    0.000  :0(WIFSIGNALED)
       77    0.004    0.000    0.004    0.000  :0(append)
      300    0.016    0.000    0.016    0.000  :0(close)
      200    0.012    0.000    0.012    0.000  :0(fcntl)
      100    0.004    0.000    0.004    0.000  :0(fdopen)
      100    0.068    0.001    0.068    0.001  :0(fork)
      200    0.012    0.000    0.012    0.000  :0(isinstance)
     5400    0.100    0.000    0.100    0.000  :0(len)
      100    0.032    0.000    0.032    0.000  :0(listdir)
      200    0.000    0.000    0.000    0.000  :0(pipe)
      100    0.108    0.001    0.108    0.001  :0(popen)
        1    0.000    0.000    0.000    0.000  :0(range)
      100    0.016    0.000    0.016    0.000  :0(read)
       78    0.004    0.000    0.004    0.000  :0(remove)
        1    0.004    0.004    0.004    0.004  :0(setprofile)
     5400    0.104    0.000    0.104    0.000  :0(split)
     5300    0.236    0.000    0.236    0.000  :0(stat)
      178    0.004    0.000    0.004    0.000  :0(waitpid)
        1    0.000    0.000    1.868    1.868  <string>:1(<module>)
      100    0.156    0.002    0.872    0.009  lab13_1.py:12(AddFilesOsListdir)
      100    0.024    0.000    0.440    0.004  lab13_1.py:21(AddFilesOsPopen)
      100    0.036    0.000    0.540    0.005  lab13_1.py:24(AddFilesSubprocess)
... The rest of the output is irrelevant.
It seems fishy that os.listdir() takes longer than both subprocess.Popen()
and os.popen().  Maybe somehow we are comparing apples and oranges?
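One plausible explanation (my reading, not stated in the thread): profile charges only work done in the parent Python process. os.popen and subprocess.Popen delegate the directory walk to a child ls process whose effort never appears in the profile, while os.listdir plus os.path.getsize performs every stat call in-process. Comparing wall-clock time, as in this Python 3 sketch, gives a fairer picture:

```python
import os
import time

def add_files_listdir(path='.'):
    # All the stat() calls happen in this process, so a CPU profiler
    # charges them to us; the ls-based variants do this work in a child.
    total = 0
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isfile(full):
            total += os.path.getsize(full)
    return total

start = time.perf_counter()  # wall clock: includes time spent in children
size = add_files_listdir('.')
elapsed = time.perf_counter() - start
print('%d bytes in %.4f s' % (size, elapsed))
```

Under wall-clock measurement, the in-process version generally comes out ahead, since it avoids forking and parsing ls output entirely.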
$


Re: [Tutor] Astonishing timing result

2008-06-24 Thread Kent Johnson
On Tue, Jun 24, 2008 at 1:43 PM, Dick Moores [EMAIL PROTECTED] wrote:
 Output:
 t1 is 0.000104, no function
 t2 is 5.87e-006, function explicit
 t3 is 0.000126, function imported
 t1/t2 is 17.8
 t1/t3 is 0.827
 t3/t2 is 21.5

 Now, I'd heard that code in a function runs faster than the same code not in
 a function, but even so I was surprised at the t1/t2 ratio of 17.8.

 The astonishing (to me, anyway) result was the t3/t2 ratio. I had no idea
 that importing from mycalc slowed a script down at all, let alone by a
 factor of 21!

Note that t1 and t3 are pretty close to each other. Perhaps you should
be suspicious of t2. What if __name__ != '__main__' ?

Kent


Re: [Tutor] Astonishing timing result

2008-06-24 Thread Dick Moores

At 12:44 PM 6/24/2008, Kent Johnson wrote:

On Tue, Jun 24, 2008 at 1:43 PM, Dick Moores [EMAIL PROTECTED] wrote:
 Output:
 t1 is 0.000104, no function
 t2 is 5.87e-006, function explicit
 t3 is 0.000126, function imported
 t1/t2 is 17.8
 t1/t3 is 0.827
 t3/t2 is 21.5

 Now, I'd heard that code in a function runs faster than the same 
code not in

 a function, but even so I was surprised at the t1/t2 ratio of 17.8.

 The astonishing (to me, anyway) result was the t3/t2 ratio. I had no idea
 that importing from mycalc slowed a script down at all, let alone by a
 factor of 21!

Note that t1 and t3 are pretty close to each other. Perhaps you should
be suspicious of t2. What if __name__ != '__main__' ?


With that,
t1 is 0.000104, no function
t2 is 0.000117, function explicit
t3 is 0.000113, function imported
t1/t2 is 0.885
t1/t3 is 0.914
t3/t2 is 0.969

Explain?

Dick




Re: [Tutor] Astonishing timing result

2008-06-24 Thread broek


- Message from [EMAIL PROTECTED] -
Date: Tue, 24 Jun 2008 13:44:00 -0700
From: Dick Moores [EMAIL PROTECTED]


At 12:44 PM 6/24/2008, Kent Johnson wrote:

On Tue, Jun 24, 2008 at 1:43 PM, Dick Moores [EMAIL PROTECTED] wrote:

Output:
t1 is 0.000104, no function
t2 is 5.87e-006, function explicit
t3 is 0.000126, function imported
t1/t2 is 17.8
t1/t3 is 0.827
t3/t2 is 21.5

Now, I'd heard that code in a function runs faster than the same

code not in

a function, but even so I was surprised at the t1/t2 ratio of 17.8.

The astonishing (to me, anyway) result was the t3/t2 ratio. I had no idea
that importing from mycalc slowed a script down at all, let alone by a
factor of 21!


Note that t1 and t3 are pretty close to each other. Perhaps you should
be suspicious of t2. What if __name__ != '__main__' ?


With that,
t1 is 0.000104, no function
t2 is 0.000117, function explicit
t3 is 0.000113, function imported
t1/t2 is 0.885
t1/t3 is 0.914
t3/t2 is 0.969

Explain?

Dick



Hey Dick,

I'm not too clear on what it is that you want explained.

It seems to me that the difference between t2 and t3 is 1) so small
as to be most likely due to (effectively) random fluctuations of your
environment (the demands that other processes were making on your
system at the time) and 2) so small as to not be worth worrying
about (http://c2.com/cgi/wiki?PrematureOptimization).


I'd further wager that if you repeat the timing a few times, you'll  
find that on some runs t2 is less than t3.


Best,

Brian vdB


Re: [Tutor] Astonishing timing result

2008-06-24 Thread Dick Moores

At 02:06 PM 6/24/2008, [EMAIL PROTECTED] wrote:


- Message from [EMAIL PROTECTED] -
Date: Tue, 24 Jun 2008 13:44:00 -0700
From: Dick Moores [EMAIL PROTECTED]


At 12:44 PM 6/24/2008, Kent Johnson wrote:

On Tue, Jun 24, 2008 at 1:43 PM, Dick Moores [EMAIL PROTECTED] wrote:

Output:
t1 is 0.000104, no function
t2 is 5.87e-006, function explicit
t3 is 0.000126, function imported
t1/t2 is 17.8
t1/t3 is 0.827
t3/t2 is 21.5

Now, I'd heard that code in a function runs faster than the same

code not in

a function, but even so I was surprised at the t1/t2 ratio of 17.8.

The astonishing (to me, anyway) result was the t3/t2 ratio. I had no idea
that importing from mycalc slowed a script down at all, let alone by a
factor of 21!


Note that t1 and t3 are pretty close to each other. Perhaps you should
be suspicious of t2. What if __name__ != '__main__' ?


With that,
t1 is 0.000104, no function
t2 is 0.000117, function explicit
t3 is 0.000113, function imported
t1/t2 is 0.885
t1/t3 is 0.914
t3/t2 is 0.969

Explain?

Dick



Hey Dick,

I'm not too clear on what it is that you want explained.


Well, Kent suggested trying   if __name__ != '__main__' . Why would 
that make such a difference?



It seems to me that the difference between t2 and t3 is 1) so small
as to be most likely due to (effectively) random fluctuations of your
environment (the demands that other processes were making on your
system at the time) and 2) so small as to not be worth worrying
about (http://c2.com/cgi/wiki?PrematureOptimization).


Basically, I'm not worried, just curious. Not about the small
differences, but why did the use of the standard if __name__ ==
'__main__' result in such speed?  This was not a fluke. Before
posting, I got similar results with different functions, albeit not
quite as extreme.


Am I not doing the timing correctly?

Dick




Re: [Tutor] Astonishing timing result

2008-06-24 Thread Marilyn Davis
On Tue, June 24, 2008 2:06 pm, [EMAIL PROTECTED] wrote:

 - Message from [EMAIL PROTECTED] -
 Date: Tue, 24 Jun 2008 13:44:00 -0700
 From: Dick Moores [EMAIL PROTECTED]


 At 12:44 PM 6/24/2008, Kent Johnson wrote:

 On Tue, Jun 24, 2008 at 1:43 PM, Dick Moores [EMAIL PROTECTED] wrote:

 Output:
 t1 is 0.000104, no function t2 is 5.87e-006, function explicit t3 is
 0.000126, function imported
 t1/t2 is 17.8 t1/t3 is 0.827 t3/t2 is 21.5

 Now, I'd heard that code in a function runs faster than the same

 code not in
 a function, but even so I was surprised at the t1/t2 ratio of 17.8.


 The astonishing (to me, anyway) result was the t3/t2 ratio. I had
 no idea that importing from mycalc slowed a script down at all, let
 alone by a factor of 21!

 Note that t1 and t3 are pretty close to each other. Perhaps you
 should be suspicious of t2. What if __name__ != '__main__' ?

 With that,
 t1 is 0.000104, no function t2 is 0.000117, function explicit t3 is
 0.000113, function imported
 t1/t2 is 0.885 t1/t3 is 0.914 t3/t2 is 0.969

 Explain?

Does this mean that  if __name__ == '__main__'  takes the extra time? And
that that brings t2 in line with the others? And that the difference
represents the time it takes to set up a code-block?

Something like that?

Marilyn Davis


 Dick



 Hey Dick,


 I'm not too clear on what it is that you want explained.


 It seems to me that the difference between t2 and t3 is 1) so small
 as to be most likely due to (effectively) random fluctuations of your
 environment (the demands that other processes were making on your system
 at the time) and 2) so small as to not be worth worrying about
 (http://c2.com/cgi/wiki?PrematureOptimization).


 I'd further wager that if you repeat the timing a few times, you'll
 find that on some runs t2 is less than t3.

 Best,


 Brian vdB


Re: [Tutor] Astonishing timing result

2008-06-24 Thread Dick Moores

At 04:49 PM 6/24/2008, Marilyn Davis wrote:


Does this mean that  if __name__ == '__main__'  takes the extra time? and
that that brings t2 in line with the others?


I don't think so.  Please refer to the code again: 
http://py77.python.pastebin.com/f152b6c14.  Line 21  is   if 
__name__ == '__main__':   .  Changing  this line to
if __name__ != '__main__':  increases the time dramatically.  But 
maybe you meant   if __name__ != '__main__':   ?  If so, you must be 
correct. But what's going on here??  Hey, Kent?



 and that the difference
represents the time it takes to set up a code-block?


What's a code-block?

Dick



Something like that?

Marilyn Davis




Re: [Tutor] Astonishing timing result

2008-06-24 Thread Kent Johnson
On Tue, Jun 24, 2008 at 5:20 PM, Dick Moores [EMAIL PROTECTED] wrote:

 Basically, I'm not worried, just curious. Not about the small differences,
 but why did the use of the standard if __name__ == '__main__' result
 in such speed?

Because __name__ is not equal to '__main__', so you were basically
skipping the whole test. The most common cause of unexpected timing
results is tests that don't do what you think they do.

The test code is not run in the main module. You can dig into the
timeit module if you want the details.
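Kent's explanation can be demonstrated directly: the statement passed to Timer runs in timeit's own namespace, where __name__ is not '__main__', so a guarded statement is skipped entirely. A Python 3 sketch of the symptom:

```python
import timeit

# Inside timeit's namespace __name__ is not '__main__', so the guarded
# branch never runs and the timing looks suspiciously fast -- exactly
# the effect seen with t2 in this thread.
guarded = """
if __name__ == '__main__':
    total = sum(range(1000))
"""
t_guarded = timeit.timeit(guarded, number=1000)
t_real = timeit.timeit('total = sum(range(1000))', number=1000)
print('guarded: %.6fs   unguarded: %.6fs' % (t_guarded, t_real))
```

The guarded version only pays for the comparison itself, which is why it appears many times faster than the real work.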

 Am I not doing the timing correctly?

Right.

Kent


Re: [Tutor] Astonishing timing result

2008-06-24 Thread Dick Moores

At 05:35 PM 6/24/2008, Kent Johnson wrote:

On Tue, Jun 24, 2008 at 5:20 PM, Dick Moores [EMAIL PROTECTED] wrote:

 Basically, I'm not worried, just curious. Not about the small differences,
 but why did the use of the standard if __name__ == '__main__' result
 in such speed?

Because __name__ is not equal to '__main__', so you were basically
skipping the whole test.


Ah.


The most common cause of unexpected timing
results is tests that don't do what you think they do.

The test code is not run in the main module. You can dig into the
timeit module if you want the details.


OK, I'll dig.

Thanks,

Dick 




Re: [Tutor] Astonishing timing result

2008-06-24 Thread Dick Moores

At 07:00 PM 6/24/2008, Marilyn Davis wrote:


Has anyone ever timed the difference between using a function that was
imported with:

from my_module import MyFunction

and:

import my_module


Here are 2 comparisons: http://py77.python.pastebin.com/f53ab3769, 
and  http://py77.python.pastebin.com/f68346b28


I don't see a significant difference.

Dick 

