Re: [Python-Dev] Python Benchmarks

2006-06-13 Thread M.-A. Lemburg
FYI: I've just checked in pybench 2.0 under Tools/pybench/. Please give it a go and let me know whether the new calibration strategy and default timers result in better repeatability of the benchmark results. I've tested the release extensively on Windows and Linux and found that the test times a

Re: [Python-Dev] Python Benchmarks

2006-06-13 Thread M.-A. Lemburg
Fredrik, could you check whether the get_machine_details() function is causing the hang on your machine? Does anyone else observe this as well? I'm about to check in version 2.0 of pybench, but would like to get this resolved first, if possible. Thanks, -- Marc-Andre Lemburg eGenix.com Prof

Re: [Python-Dev] Python Benchmarks

2006-06-09 Thread M.-A. Lemburg
Fredrik Lundh wrote: > M.-A. Lemburg wrote: > >> You can download a current snapshot from: >> >> http://www.egenix.com/files/python/pybench-2.0-2006-06-09.zip > > believe it or not, but this hangs on my machine, under 2.5 trunk. and > it hangs hard; neither control-c, break, or the task manager

Re: [Python-Dev] Python Benchmarks

2006-06-09 Thread Fredrik Lundh
M.-A. Lemburg wrote: > You can download a current snapshot from: > > http://www.egenix.com/files/python/pybench-2.0-2006-06-09.zip believe it or not, but this hangs on my machine, under 2.5 trunk. and it hangs hard; neither control-c, break, or the task manager manages to kill it. if it's any

Re: [Python-Dev] Python Benchmarks

2006-06-09 Thread M.-A. Lemburg
Fredrik Lundh wrote: > M.-A. Lemburg wrote: > >> The results were produced by pybench 2.0 and use time.time >> on Linux, plus a different calibration strategy. As a result >> these timings are a lot more repeatable than with pybench 1.3 >> and I've confirmed the timings using several runs to make

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread Thomas Wouters
On 6/8/06, Georg Brandl <[EMAIL PROTECTED]> wrote: Does 4.0 show a general slowdown on your test machines? I saw a drop of average Pystones from 44000 to 4 and from 42000 to 39000 on my boxes switching from GCC 3.4.6 to 4.1.1. Yep, looks like it does. Don't have time to run more extensive tests, t

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread M.-A. Lemburg
Thomas Wouters wrote: > On 6/8/06, M.-A. Lemburg <[EMAIL PROTECTED]> wrote: > >> Perhaps it's a new feature in gcc 4.0 that makes the slow-down I see >> turn into a speedup :-) > > > It seems so. I tested with gcc 2.95, 3.3 and 4.0 on FreeBSD 4.10 (only > machine I had available with those gcc v

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread Georg Brandl
Thomas Wouters wrote: > > > On 6/8/06, *M.-A. Lemburg* <[EMAIL PROTECTED]> wrote: > > Perhaps it's a new feature in gcc 4.0 that makes the slow-down I see > turn into a speedup :-) > > > It seems so. I tested with gcc 2.95, 3.3 and 4.0 on FreeBSD 4.10 (onl

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread Thomas Wouters
On 6/8/06, M.-A. Lemburg <[EMAIL PROTECTED]> wrote: Perhaps it's a new feature in gcc 4.0 that makes the slow-down I see turn into a speedup :-) It seems so. I tested with gcc 2.95, 3.3 and 4.0 on FreeBSD 4.10 (only machine I had available with those gcc versions) and both 2.95 and 4.0 show a 10-20%

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread M.-A. Lemburg
Thomas Wouters wrote: > On 6/8/06, M.-A. Lemburg <[EMAIL PROTECTED]> wrote: > >> All this on AMD64, Linux2.6, gcc3.3. > > > FWIW, my AMD64, linux 2.6, gcc 4.0 machine reports 29.0-29.5 usec for 2.5, > 30.0-31.0 for 2.4 and 30.5-31.5 for 2.3, using the code you attached. In > other words, 2.5 is

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread Thomas Wouters
On 6/8/06, M.-A. Lemburg <[EMAIL PROTECTED]> wrote: All this on AMD64, Linux2.6, gcc3.3. FWIW, my AMD64, linux 2.6, gcc 4.0 machine reports 29.0-29.5 usec for 2.5, 30.0-31.0 for 2.4 and 30.5-31.5 for 2.3, using the code you attached. In other words, 2.5 is definitely not slower here. At least, not

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread Fredrik Lundh
M.-A. Lemburg wrote: > Huh ? They do show the speedups you achieved at the sprint. the results you just posted appear to show a 20% slowdown for function calls, and a 10% speedup for exceptions. both things were optimized at the sprint, and the improvements were confirmed on several machines.

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread M.-A. Lemburg
Fredrik Lundh wrote: > M.-A. Lemburg wrote: > >> The pybench results match those of timeit.py on my test machine >> in both cases. > > but they don't match the timeit results on similar machines, nor do they > reflect what was done at the sprint. Huh ? They do show the speedups you achieved at

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread Fredrik Lundh
M.-A. Lemburg wrote: > The results were produced by pybench 2.0 and use time.time > on Linux, plus a different calibration strategy. As a result > these timings are a lot more repeatable than with pybench 1.3 > and I've confirmed the timings using several runs to make sure. can you check in 2.0 ?

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread Fredrik Lundh
M.-A. Lemburg wrote: > The pybench results match those of timeit.py on my test machine > in both cases. but they don't match the timeit results on similar machines, nor do they reflect what was done at the sprint. > Tools/pybench> ~/projects/Python/Installation/bin/python Calls.py > 10 loops,

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread M.-A. Lemburg
Fredrik Lundh wrote: > M.-A. Lemburg wrote: > >> The pybench results match those of timeit.py on my test machine >> in both cases. I just mixed up the headers when I wrote the email. > > on a line by line basis ? No idea what you mean ? I posted the corrected version after Nick told me about the

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread Fredrik Lundh
M.-A. Lemburg wrote: > The pybench results match those of timeit.py on my test machine > in both cases. I just mixed up the headers when I wrote the email. on a line by line basis ? > Test names    minimum run-time    average run-time > th

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread M.-A. Lemburg
Fredrik Lundh wrote: > M.-A. Lemburg wrote: > >> I put the headings for the timeit.py output on the >> wrong blocks. Thanks for pointing this out. > > so how do you explain the Try/Except results, where timeit and pybench > seems to agree? The pybench results match those of timeit.py on my tes

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread Fredrik Lundh
M.-A. Lemburg wrote: > I put the headings for the timeit.py output on the > wrong blocks. Thanks for pointing this out. so how do you explain the Try/Except results, where timeit and pybench seems to agree?

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread Fredrik Lundh
M.-A. Lemburg wrote: > Still, here's the timeit.py measurement of the PythonFunctionCall > test (note that I've scaled down the test in terms of number > of rounds for timeit.py): > > Python 2.4: > 10 loops, best of 3: 21.9 msec per loop > 10 loops, best of 3: 21.8 msec per loop > 10 loops, best
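
For readers who want to reproduce this kind of measurement, here is a minimal sketch of what timeit does in the runs quoted above: repeat the timing and keep only the best run. The function and iteration counts below are placeholders, not the actual pybench test body; the snippet targets the Python 2.4/2.5 of this thread but also runs on Python 3.

    import timeit

    # Time a trivial call, repeating 3 times and keeping the best result,
    # which is what "N loops, best of 3" in the output above refers to.
    setup = "def f(a, b, c): pass"
    stmt = "f(1, 2, 3)"

    timer = timeit.Timer(stmt, setup)
    best = min(timer.repeat(repeat=3, number=100000))
    print("best of 3: %.3f usec per loop" % (best * 1e6 / 100000))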

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread M.-A. Lemburg
Nick Coghlan wrote: > M.-A. Lemburg wrote: >> Still, here's the timeit.py measurement of the PythonFunctionCall >> test (note that I've scaled down the test in terms of number >> of rounds for timeit.py): >> Python 2.5 as of last night: >> 10 loops, best of 3: 21.9 msec per loop >> 10 loops, best

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread Nick Coghlan
M.-A. Lemburg wrote: > Still, here's the timeit.py measurement of the PythonFunctionCall > test (note that I've scaled down the test in terms of number > of rounds for timeit.py): > > Python 2.4: > 10 loops, best of 3: 21.9 msec per loop > 10 loops, best of 3: 21.8 msec per loop > 10 loops, best o

Re: [Python-Dev] Python Benchmarks

2006-06-08 Thread M.-A. Lemburg
Fredrik Lundh wrote: > M.-A. Lemburg wrote: > >> Some more interesting results from comparing Python 2.4 (other) against >> the current SVN snapshot (this): > > been there, done that, found the results lacking. > > we spent a large part of the first NFS day to investigate all > reported slowdown

Re: [Python-Dev] Python Benchmarks

2006-06-07 Thread Fredrik Lundh
M.-A. Lemburg wrote: > Some more interesting results from comparing Python 2.4 (other) against > the current SVN snapshot (this): been there, done that, found the results lacking. we spent a large part of the first NFS day to investigate all reported slowdowns, and found that only one slowdown c

Re: [Python-Dev] Python Benchmarks

2006-06-07 Thread M.-A. Lemburg
M.-A. Lemburg wrote: > Some more interesting results from comparing Python 2.4 (other) against > the current SVN snapshot (this): Here's the list again, this time without wrapping (sigh): Test names    minimum run-time    average run-time    t

Re: [Python-Dev] Python Benchmarks

2006-06-07 Thread M.-A. Lemburg
Some more interesting results from comparing Python 2.4 (other) against the current SVN snapshot (this): Test names    minimum run-time    average run-time    this    other    diff    this    other    diff -

Re: [Python-Dev] Python Benchmarks

2006-06-07 Thread Fredrik Lundh
M.-A. Lemburg wrote: > One interesting difference I found while testing on Windows > vs. Linux is that the StringMappings test has quite a different > run-time on both systems: around 2500ms on Windows vs. 590ms > on Linux (on Python 2.4). UnicodeMappings doesn't show such > a significant differen

Re: [Python-Dev] Python Benchmarks

2006-06-07 Thread M.-A. Lemburg
Steve Holden wrote: > M.-A. Lemburg wrote: > [...] >> Overall, time.clock() on Windows and time.time() on Linux appear >> to give the best repeatability of tests, so I'll make those the >> defaults in pybench 2.0. >> >> In short: Tim wins, I lose. >> >> Was a nice experiment, though ;-) >> > Perhap

Re: [Python-Dev] Python Benchmarks

2006-06-07 Thread Steve Holden
M.-A. Lemburg wrote: [...] > Overall, time.clock() on Windows and time.time() on Linux appear > to give the best repeatability of tests, so I'll make those the > defaults in pybench 2.0. > > In short: Tim wins, I lose. > > Was a nice experiment, though ;-) > Perhaps so, but it would have been ni
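
The per-platform default described above can be expressed in a few lines; this mirrors what timeit.default_timer did in the Python 2.x standard library and is only a sketch of the planned pybench 2.0 behaviour, not its actual code:

    import sys, time

    # time.clock() is a high-resolution wall-clock timer on Windows but CPU
    # time on Unix; time.time() has better resolution on Linux. (time.clock
    # was removed in Python 3.8; this matches the Python 2.x timeit default.)
    if sys.platform == 'win32':
        default_timer = time.clock
    else:
        default_timer = time.time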

Re: [Python-Dev] Python Benchmarks

2006-06-07 Thread M.-A. Lemburg
Michael Chermside wrote: > Marc-Andre Lemburg writes: >> Using the minimum looks like the way to go for calibration. >> >> I wonder whether the same is true for the actual tests; since >> you're looking for the expected run-time, the minimum may >> not necessarily be the choice. > > No, you're not

Re: [Python-Dev] Python Benchmarks

2006-06-06 Thread M.-A. Lemburg
M.-A. Lemburg wrote: > FWIW, these are my findings on the various timing strategies: Correction (due to a bug in my pybench dev version): > * Windows: > > time.time() > - not usable; I get timings with an error interval of roughly 30% > > GetProcessTimes() > - not usable; I get timi

Re: [Python-Dev] Python Benchmarks

2006-06-06 Thread Fredrik Lundh
M.-A. Lemburg wrote: > * Linux: > > time.clock() > - not usable; I get timings with error interval of about 30% > with differences in steps of 100ms > resource.getrusage() > - error interval of less than 10%; overall < 0.5% > with differences in steps of 10ms hmm. I cou
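
For reference, a minimal sketch of reading process time via resource.getrusage() on Unix, as discussed above; the helper name and the busy loop are just placeholders:

    import resource

    def process_time():
        usage = resource.getrusage(resource.RUSAGE_SELF)
        return usage.ru_utime + usage.ru_stime   # user + system CPU seconds

    start = process_time()
    for i in range(1000000):                     # placeholder workload
        pass
    print("CPU seconds used: %.6f" % (process_time() - start))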

Re: [Python-Dev] Python Benchmarks

2006-06-06 Thread M.-A. Lemburg
FWIW, these are my findings on the various timing strategies: * Windows: time.time() - not usable; I get timings with an error interval of roughly 30% GetProcessTimes() - not usable; I get timings with an error interval of up to 100% with differences in steps of 15.626ms tim
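
One way to arrive at error-interval and step-size figures like these is to estimate each timer's effective granularity empirically. A rough sketch, with a made-up helper name (and note that time.clock only exists up to Python 3.7):

    import time

    def timer_granularity(timer, samples=10000):
        # Smallest non-zero step the timer ever reports in a tight loop.
        smallest = None
        last = timer()
        for i in range(samples):
            t = timer()
            if t != last:
                step = t - last
                if smallest is None or step < smallest:
                    smallest = step
                last = t
        return smallest

    print("time.time()  step: %r" % timer_granularity(time.time))
    print("time.clock() step: %r" % timer_granularity(time.clock))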

Re: [Python-Dev] Python Benchmarks

2006-06-06 Thread Fredrik Lundh
M.-A. Lemburg wrote: > This example is a bit misleading, since chances are high that > the benchmark will get a good priority bump by the scheduler. which makes it run infinitely fast ? what planet are you buying your hardware on ? ;-)

Re: [Python-Dev] Python Benchmarks

2006-06-06 Thread M.-A. Lemburg
Fredrik Lundh wrote: > Martin v. Löwis wrote: > >>> since process time is *sampled*, not measured, process time isn't exactly >>> in- >>> vulnerable either. >> I can't share that view. The scheduler knows *exactly* what thread is >> running on the processor at any time, and that thread won't chan

Re: [Python-Dev] Python Benchmarks

2006-06-06 Thread M.-A. Lemburg
Fredrik Lundh wrote: > M.-A. Lemburg wrote: > >> I just had an idea: if we could get each test to run >> inside a single time slice assigned by the OS scheduler, >> then we could benefit from the better resolution of the >> hardware timers while still keeping the noise to a >> minimum. >> >> I sup

Re: [Python-Dev] Python Benchmarks

2006-06-05 Thread Steve Holden
M.-A. Lemburg wrote: > Fredrik Lundh wrote: > >>M.-A. Lemburg wrote: >> >> >>>Seriously, I've been using and running pybench for years >>>and even though tweaks to the interpreter do sometimes >>>result in speedups or slow-downs where you wouldn't expect >>>them (due to the interpreter using the P

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread Andrew Dalke
On 6/3/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: > - I would average the timings of runs instead of taking the minimum value as > sometimes benchmarks could be running code that is not deterministic in its > calculations (could be using random numbers that affect convergence). I would rewr

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread john . m . camara
Here are my suggestions: - While running benchmarks don't listen to music, watch videos, use the keyboard/mouse, or run anything other than the benchmark code. Seems like common sense to me. - I would average the timings of runs instead of taking the minimum value as sometimes benchmarks c

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread Martin v. Löwis
Tim Peters wrote: > Without the sleep, it gets charged 6 CPU seconds. With the sleep, 0 > CPU seconds. > > But life would be more boring if people believed you the first time ;-) This only proves that it uses clock ticks for the accounting, and not something with higher resolution. To find out w

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread Martin v. Löwis
Tim Peters wrote: >> then >> process time *is* measured, not sampled, on any modern operating >> system: it is updated whenever the scheduler schedules a different >> thread. > > That doesn't seem to agree with, e.g., > >http://lwn.net/2001/0412/kernel.php3 > > under "No more jiffies?": [..

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread Tim Peters
[Fredrik Lundh] > but it's always the thread that runs when the timer interrupt > arrives that gets the entire jiffy time. for example, this script runs > for ten seconds, usually without using any process time at all: > > import time > for i in range(1000): > for i in rang
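
The preview cuts the script off; the following is only an illustrative reconstruction of the effect being described, not Fredrik's exact code: do short bursts of work but be asleep whenever the timer interrupt fires, so jiffy-based accounting charges the process (almost) no CPU time.

    import time

    for i in range(1000):
        for j in range(10000):   # a little real work per iteration
            pass
        time.sleep(0.01)         # sleep across the next timer tick

    # Running this under `time python script.py` on a jiffy-based kernel can
    # report close to zero user/system time even though the loop burns CPU.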

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread Fredrik Lundh
Tim Peters wrote: > Maybe this varies by Linux flavor or version? While the article above > was published in 2001, Googling didn't turn up any hint that Linux > jiffies have actually gone away, or become better loved, since then. well, on x86, they have changed from 10 ms in 2.4 to 1 ms in early

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread Tim Peters
[Fredrik Lundh] >> ... >> since process time is *sampled*, not measured, process time isn't exactly in- >> vulnerable either. [Martin v. Löwis] > I can't share that view. The scheduler knows *exactly* what thread is > running on the processor at any time, and that thread won't change > until the s

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread Fredrik Lundh
Martin v. Löwis wrote: > Sure: when a thread doesn't consume its entire quantum, accounting > becomes difficult. Still, if the scheduler reads the current time > when scheduling, it measures the time consumed. yeah, but the point is that it *doesn't* read the current time: all the system does it

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread Andrew Dalke
Tim: > A lot of things get mixed up here ;-) The _mean_ is actually useful > if you're using a poor-resolution timer with a fast test. In which case discrete probability distributions are better than my assumption of a continuous distribution. I looked at the distribution of times for 1,000 repe

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread Martin v. Löwis
Fredrik Lundh wrote: >> it is updated whenever the scheduler schedules a different thread. > > updated with what? afaik, the scheduler doesn't have to wait for a > timer interrupt to reschedule things (think blocking, or interrupts that > request rescheduling, or new processes, or...) -- but it

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread Fredrik Lundh
Martin v. Löwis wrote: >> since process time is *sampled*, not measured, process time isn't exactly in- >> vulnerable either. > > I can't share that view. The scheduler knows *exactly* what thread is > running on the processor at any time, and that thread won't change > until the scheduler makes

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread Michael Hudson
Greg Ewing <[EMAIL PROTECTED]> writes: > Tim Peters wrote: > >> I liked benchmarking on Crays in the good old days. ... > > Test times were reproducible to the >> nanosecond with no effort. Running on a modern box for a few >> microseconds at a time is a way to approximate that, provided you

Re: [Python-Dev] Python Benchmarks

2006-06-03 Thread Martin v. Löwis
Fredrik Lundh wrote: > since process time is *sampled*, not measured, process time isn't exactly in- > vulnerable either. I can't share that view. The scheduler knows *exactly* what thread is running on the processor at any time, and that thread won't change until the scheduler makes it change. So

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Josiah Carlson
Greg Ewing <[EMAIL PROTECTED]> wrote: > > Tim Peters wrote: > > > I liked benchmarking on Crays in the good old days. ... > > Test times were reproducible to the > > nanosecond with no effort. Running on a modern box for a few > > microseconds at a time is a way to approximate that, provide

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Greg Ewing
A.M. Kuchling wrote: > (At work we're trying to move toward this approach for doing realtime > audio: devote one CPU to the audio computation and use other CPUs for > I/O, web servers, and whatnot.) Speaking of creative uses for multiple CPUs, I was thinking about dual-core Intel Macs the other d

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Greg Ewing
Tim Peters wrote: > I liked benchmarking on Crays in the good old days. ... > Test times were reproducible to the > nanosecond with no effort. Running on a modern box for a few > microseconds at a time is a way to approximate that, provided you > measure the minimum time with a high-resolutio

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread A.M. Kuchling
On Fri, Jun 02, 2006 at 07:44:07PM -0400, Tim Peters wrote: > Fortran code could scream. Test times were reproducible to the > nanosecond with no effort. Running on a modern box for a few > microseconds at a time is a way to approximate that, provided you > measure the minimum time with a high-re

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Tim Peters
[MAL] >>> Using the minimum looks like the way to go for calibration. [Terry Reedy] >> Or possibly the median. [Andrew Dalke] > Why? I can't think of why that's more useful than the minimum time. A lot of things get mixed up here ;-) The _mean_ is actually useful if you're using a poor-resolut
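
A small sketch of the distinction being made: over repeated runs, the minimum estimates the undisturbed run-time, while the mean also absorbs whatever noise the machine added. The workload below is a placeholder.

    import time

    def measure():
        t0 = time.time()
        for i in range(100000):   # placeholder workload
            pass
        return time.time() - t0

    runs = [measure() for i in range(20)]
    print("min : %.6f s" % min(runs))
    print("mean: %.6f s" % (sum(runs) / len(runs)))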

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Andrew Dalke
On 6/2/06, M.-A. Lemburg <[EMAIL PROTECTED]> wrote: > It's interesting that even pressing a key on your keyboard > will cause forced context switches. When niceness was first added to multiprocessing OSes people found their CPU intensive jobs would go faster by pressing enter a lot.

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Andrew Dalke
On 6/2/06, Terry Reedy <[EMAIL PROTECTED]> wrote: > Hardly a setting in which to run comparison tests, seems to me. The point though was to show that the time distribution is non-Gaussian, so intuition based on that doesn't help. > > Using the minimum looks like the way to go for calibration. > >

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread M.-A. Lemburg
M.-A. Lemburg wrote: That's why the timers being used by pybench will become a parameter that you can then select to adapt pybench to the OS you're running pybench on. >>> Wasn't that decision a consequence of the problems found during >>> the sprint? >> It's a consequence of a disc

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Fredrik Lundh
M.-A. Lemburg wrote: > I just had an idea: if we could get each test to run > inside a single time slice assigned by the OS scheduler, > then we could benefit from the better resolution of the > hardware timers while still keeping the noise to a > minimum. > > I suppose this could be achieved by:
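
A rough sketch of how the quoted idea might be approximated: yield the CPU right before the timed region so the measurement starts at the beginning of a fresh scheduler time slice. Whether this actually helps is OS-dependent, and it is not something pybench does; the helper name is made up.

    import time

    def timed_run(func):
        time.sleep(0)             # yield; resume at the start of a new slice
        t0 = time.time()
        func()
        return time.time() - t0

    print("%.6f s" % timed_run(lambda: sum(range(100000))))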

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Fredrik Lundh
Terry Reedy wrote: > But even better, the way to go to run comparison timings is to use a system > with as little other stuff going on as possible. For Windows, this means > rebooting in safe mode, waiting until the system is quiescent, and then run > the timing test with *nothing* else active

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Terry Reedy
"M.-A. Lemburg" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] >> Granted, I hit a couple of web pages while doing this and my spam >> filter processed my mailbox in the background... Hardly a setting in which to run comparison tests, seems to me. > Using the minimum looks like the

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Michael Chermside
Marc-Andre Lemburg writes: > Using the minimum looks like the way to go for calibration. > > I wonder whether the same is true for the actual tests; since > you're looking for the expected run-time, the minimum may > not necessarily be the choice. No, you're not looking for the expected run-time.

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread M.-A. Lemburg
M.-A. Lemburg wrote: >>> That's why the timers being used by pybench will become a >>> parameter that you can then select to adapt pybench to >>> the OS you're running pybench on. >> Wasn't that decision a consequence of the problems found during >> the sprint? > It's a consequence of a discuss

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Fredrik Lundh
M.-A. Lemburg wrote: > I believe that using wall-clock timers > for benchmarking is not a good approach due to the high > noise level. Process time timers typically have a lower > resolution, but give a better picture of the actual > run-time of your code and also don't exhibit as much noise > as

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread M.-A. Lemburg
Andrew Dalke wrote: > M.-A. Lemburg: >> The approach pybench is using is as follows: > ... >> The calibration step is run multiple times and is used >> to calculate an average test overhead time. > > One of the changes that occured during the sprint was to change this > algorithm > to use the be

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Andrew Dalke
M.-A. Lemburg: The approach pybench is using is as follows: ... The calibration step is run multiple times and is used to calculate an average test overhead time. One of the changes that occured during the sprint was to change this algorithm to use the best time rather than the average. Us
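
A minimal sketch of the calibration change described here: run the empty "overhead" version of a test several times and subtract the best (minimum) time instead of the average. The names are illustrative, not pybench's actual API.

    import time

    def calibrate(overhead_func, runs=20):
        times = []
        for i in range(runs):
            t0 = time.time()
            overhead_func()
            times.append(time.time() - t0)
        return min(times)         # the best run approximates the true overhead

    overhead = calibrate(lambda: None)
    print("calibrated overhead: %.9f s" % overhead)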

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread M.-A. Lemburg
Fredrik Lundh wrote: > M.-A. Lemburg wrote: > >> Of course, but then changes to try-except logic can interfere >> with the performance of setting up method calls. This is what >> pybench then uncovers. > > I think the only thing PyBench has uncovered is that you're convinced that > it's > always

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Fredrik Lundh
M.-A. Lemburg wrote: > Of course, but then changes to try-except logic can interfere > with the performance of setting up method calls. This is what > pybench then uncovers. I think the only thing PyBench has uncovered is that you're convinced that it's always right, and everybody else is always

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread M.-A. Lemburg
Fredrik Lundh wrote: > M.-A. Lemburg wrote: > >> Seriously, I've been using and running pybench for years >> and even though tweaks to the interpreter do sometimes >> result in speedups or slow-downs where you wouldn't expect >> them (due to the interpreter using the Python objects), >> they are r

Re: [Python-Dev] Python Benchmarks

2006-06-01 Thread Fredrik Lundh
M.-A. Lemburg wrote: > Seriously, I've been using and running pybench for years > and even though tweaks to the interpreter do sometimes > result in speedups or slow-downs where you wouldn't expect > them (due to the interpreter using the Python objects), > they are reproducible and often enough h

Re: [Python-Dev] Python Benchmarks

2006-05-31 Thread M.-A. Lemburg
Fredrik Lundh wrote: > M.-A. Lemburg wrote: > >> AFAIK, there were no real issues with pybench, only with the >> fact that time.clock() (the timer used by pybench) is wall-time >> on Windows and thus an MP3-player running in the background >> will cause some serious noise in the measurements > >

Re: [Python-Dev] Python Benchmarks

2006-05-31 Thread M.-A. Lemburg
[EMAIL PROTECTED] wrote: > MAL> I'm aware of that thread, but Fredrik only posted some vague > MAL> comment to the checkins list, saying that they couldn't use > MAL> pybench. I asked for some more details, but he didn't get back to > MAL> me. > > I'm pretty sure I saw him (or mayb

Re: [Python-Dev] Python Benchmarks

2006-05-31 Thread Fredrik Lundh
M.-A. Lemburg wrote: > AFAIK, there were no real issues with pybench, only with the > fact that time.clock() (the timer used by pybench) is wall-time > on Windows and thus an MP3-player running in the background > will cause some serious noise in the measurements oh, please; as I mentioned back

Re: [Python-Dev] Python Benchmarks

2006-05-31 Thread skip
MAL> I'm aware of that thread, but Fredrik only posted some vague MAL> comment to the checkins list, saying that they couldn't use MAL> pybench. I asked for some more details, but he didn't get back to MAL> me. I'm pretty sure I saw him (or maybe Andrew Dalke) post some timing com

Re: [Python-Dev] Python Benchmarks

2006-05-31 Thread M.-A. Lemburg
[EMAIL PROTECTED] wrote: > MAL> Could you please forward such questions to me ? > > I suppose, though what question were you referring to? Not sure - I thought you knew ;-) > I was referring to > Fredrik's thread about stringbench vs pybench for string/unicode tests, > which I thought was p

Re: [Python-Dev] Python Benchmarks

2006-05-31 Thread skip
MAL> Could you please forward such questions to me ? I suppose, though what question were you referring to? I was referring to Fredrik's thread about stringbench vs pybench for string/unicode tests, which I thought was posted to python-dev. I assumed you were aware of the issue. Skip

Re: [Python-Dev] Python Benchmarks

2006-05-31 Thread M.-A. Lemburg
[EMAIL PROTECTED] wrote: > (This is more appropriate for comp.lang.python/[EMAIL PROTECTED]) > > Niko> After reading through recent Python mail regarding dictionaries > Niko> and exceptions, I wondered, what is the current state of the art > Niko> in Python benchmarks? > > Pybench was

Re: [Python-Dev] Python Benchmarks

2006-05-31 Thread skip
(This is more appropriate for comp.lang.python/[EMAIL PROTECTED]) Niko> After reading through recent Python mail regarding dictionaries Niko> and exceptions, I wondered, what is the current state of the art Niko> in Python benchmarks? Pybench was recently added to the repository and

[Python-Dev] Python Benchmarks

2006-05-31 Thread Niko Matsakis
Hello, After reading through recent Python mail regarding dictionaries and exceptions, I wondered, what is the current state of the art in Python benchmarks? I've tried before to find a definitive set of Python benchmarks but failed. There doesn't seem to be an up-to-date reference, though