Re: A Perl Task - Benchmarking

2004-03-06 Thread Sebastian Riedel
Hi Leo,

Attached is a quick'n'dirty parrotbench. Instead of a complicated
test harness it uses bash to make the time measurements, so that
new languages are very simple to add.
Currently it's just a proof of concept, but if you like it I will make
a better version with pretty printing, extended reports and so on.
Here's an example run:

                           parrot   perl     python  ruby
addit                      8.469    7.379    -       -
arriter                    -        1.657    -       -
bench_newp                 1.827    -        -       -
fib                        -        0.594    -       -
freeze                     0.783    1.65     -       -
gc_alloc_new               0.191    -        -       -
gc_alloc_reuse             4.068    -        -       -
gc_generations             6.363    -        -       -
gc_header_new              1.168    -        -       -
gc_header_reuse            5.772    -        -       -
gc_waves_headers           1.302    -        -       -
gc_waves_sizeable_data     1.074    -        -       -
gc_waves_sizeable_headers  3.702    -        -       -
oo1                        3.571    1.189    0.689   -
primes                     27.991   383.851  -       -
primes2                    17.325   -        44.379  -
primes2_p                  29.753   -        -       -
prop                       0.14     -        -       -
shared_ref                 0.552    11.563   -       -
stress                     1.988    0.905    -       -
stress1                    27.539   17.312   -       -
stress2                    3.908    3.440    -       -
stress3                    19.050   -        -       -
utf8                       0.13     -        -       -
vpm                        -        40.057   -       -

Cheers,
Sebastian
Leopold Toetsch wrote:

I had a short look at perlbench from CPAN. It inspired the following
idea:

examples/benchmarks/* has a bunch of programs e.g.
  oo1.pasm
  oo1.pl
  oo1.py
  stress.pasm
  stress.pl
  ...
Now, just as perlbench is able to compare the run times of different perl
versions, the goal of this task is to provide a script that compares
different interpreters and finally spits out:

         Parrot-j  Parrot-C  Perl  Python  Ruby
oo1      100%      103%      75%   50%     -
mops     100%      200%      4%
stress   ...       -         -
or some such.

To simplify the task, we could of course move used tests into a 
separate directory. Unavailable interpreters (or missing scripts for 
that language) are just skipped.

Any takers?

leo





parrotbench.patch.gz
Description: application/tgz


Re: A Perl Task - Benchmarking

2004-03-07 Thread Sebastian Riedel
Leopold Toetsch wrote:

Sebastian Riedel <[EMAIL PROTECTED]> wrote:
 
  objective-ook? - SCNR
:)

 

Attached is a quick'n'dirty parrotbench. Instead of a complicated
test harness it uses bash to make the time measurements, so that
new languages are very simple to add.
   

bash isn't really available on all systems, so it would be better to use
one of the time functions.
 

Attached is a new version using times()
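
(For illustration only, here is a minimal sketch of the times()-based
approach; it is not the actual parrotbench code, and the benchmark path in
the last line is just an example.)

use strict;
use warnings;

# Sketch only, not the real parrotbench.pl: time a child process with
# times() instead of shelling out to bash's `time`.  Entries 2 and 3 of
# the list returned by times() are the accumulated CPU time of children.
sub time_command {
    my ($command) = @_;
    my ( $cuser0, $csys0 ) = ( times() )[ 2, 3 ];
    system($command);
    my ( $cuser1, $csys1 ) = ( times() )[ 2, 3 ];
    return ( $cuser1 - $cuser0 ) + ( $csys1 - $csys0 );
}

# hypothetical invocation; the benchmark path is just an example
printf "%.3fs\n", time_command('./parrot examples/benchmarks/oo1.pasm > /dev/null');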

 

Currently it's just a proof of concept, but if you like it I will make
a better version with pretty printing, extended reports and so on.
   

Good. Please have a look at lib/Parrot/Test.pm:_run_command and
Config.pm for executable names. Maybe a config file could simplify the
task (so the user can put in the executable names once).
 

Working on it

 

Here's an example run:
   

$ perl tools/dev/parrotbench.pl -regex '(oo|str|mops).*' \
    -parrot='./parrot -j' -perl=`which perl` -python=`which python` \
    -ruby=`which ruby`

         parrot   perl     python   ruby
mops     0.260    96.140   9.830    9.860
oo1      1.700    0.820    0.510    -
oo2      8.410    4.60     2.400    -
stress   0.980    0.640    -        -
stress1  13.970   12.400   -        -
stress2  1.670    2.450    -        -
stress3  10.540   -        -        -

(Python and Ruby mops are running 1/10th of the loops - files linked into
examples/benchmarks)
Nice. Yes please.

Anyone out there who speaks Ruby and can translate the tests for which
we have a '.pl' file?
Thanks,
leo
 

Cheers,
Sebastian


parrotbench.patch.gz
Description: application/tgz


Re: A Perl Task - Benchmarking

2004-03-07 Thread Sebastian Riedel
Sebastian Riedel wrote:

Leopold Toetsch wrote:

Sebastian Riedel <[EMAIL PROTECTED]> wrote:
 
  objective-ook? - SCNR
:)

 

Attached is a quick'n'dirty parrotbench. Instead of a complicated
test harness it uses bash to make the time measurements, so that
new languages are very simple to add.
  


bash isn't really available on all systems, so it would be better to use
one of the time functions.
 

Attached is a new version using times()

 

Currently it's just a proof of concept, but if you like it I will make
a better version with pretty printing, extended reports and so on.
  


Good. Please have a look at lib/Parrot/Test.pm:_run_command and
Config.pm for executable names. Maybe a config file could simplify the
task (so the user can put in the executable names once).
 

Working on it
The attached version should do most of the things you wanted.

[EMAIL PROTECTED]:~/parrot$ tools/dev/parrotbench.pl -regex oo -conf ../parrotbench.conf

      parrot  perl  python  ruby
oo1   100%    39%   23%     -
oo2   100%    40%   22%     -

I studied the config system, and now I wonder if it would make sense
to write a configure step to probe for the enemies, or is that overkill?

 

Here's an example run:
  


$ perl tools/dev/parrotbench.pl -regex '(oo|str|mops).*' \
    -parrot='./parrot -j' -perl=`which perl` -python=`which python` \
    -ruby=`which ruby`

         parrot   perl     python   ruby
mops     0.260    96.140   9.830    9.860
oo1      1.700    0.820    0.510    -
oo2      8.410    4.60     2.400    -
stress   0.980    0.640    -        -
stress1  13.970   12.400   -        -
stress2  1.670    2.450    -        -
stress3  10.540   -        -        -

(Python and Ruby mops are running 1/10th of the loops - files linked into
examples/benchmarks)
Nice. Yes please.

Anyone out there who speaks Ruby and can translate the tests for which
we have a '.pl' file?
Thanks,
leo
 

Cheers,
Sebastian
Cheers,
Sebastian


parrotbench.patch.gz
Description: application/tgz


Re: A Perl Task - Benchmarking

2004-03-08 Thread Sebastian Riedel
Leopold Toetsch wrote:

Sebastian Riedel <[EMAIL PROTECTED]> wrote:

 

Sebastian Riedel wrote:
   

 

The attached version should do most of the things you wanted.
   

 

[EMAIL PROTECTED]:~/parrot$ tools/dev/parrotbench.pl -regex oo -conf ../parrotbench.conf

      parrot  perl  python  ruby
oo1   100%    39%   23%     -
oo2   100%    40%   22%     -
   

Good. Just some more notes:

I'd like the program to take a config like this:

parrot-C: ./parrot -C
parrot-j: ./parrot -j
perl:   /usr/bin/perl
perl-58_th: /opt/perl-th/bin/perl
...
scheme: /usr/bin/rep: .scm
that is, to be able to compare several different programs. Timings should
be relative to the first given program.
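
(A purely illustrative sketch of that rule, using the oo1 timings that show
up later in this thread; this is not parrotbench.pl's actual code.)

use strict;
use warnings;

# Illustrative only: report each program's time as a percentage of the
# first program's time; lower is better.
my @order   = ( 'parrot', 'python', 'perl' );
my %seconds = ( parrot => 3.030, python => 0.690, perl => 1.200 );  # oo1 timings

my $base = $seconds{ $order[0] };
for my $program (@order) {
    printf "%-8s %3.0f%%\n", $program, 100 * $seconds{$program} / $base;
}
# prints roughly: parrot 100%, python 23%, perl 40%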
 

The attached patch adds this; conf files now look like this:

parrot: /home/sri/parrot/parrot: .pasm .imc
ruby: /usr/bin/ruby: .rb
python: /usr/bin/python: .py
python-C: /usr/bin/python -C: .py
perl: /usr/bin/perl -w: .pl
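
(For illustration, a minimal sketch of parsing such "name: command:
extensions" lines; the data structure is an assumption, not parrotbench.pl's
actual implementation, and a command containing ':' would need extra care.)

use strict;
use warnings;

# Sketch only: parse "name: command: extensions" lines into a hash.
my %programs;
while ( my $line = <DATA> ) {
    chomp $line;
    next unless $line =~ /\S/;
    my ( $name, $command, $extensions ) = split /\s*:\s*/, $line, 3;
    $programs{$name} = {
        command    => $command,
        extensions => [ split ' ', $extensions ],
    };
}

__DATA__
parrot: /home/sri/parrot/parrot: .pasm .imc
perl: /usr/bin/perl -w: .pl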
Output looks like this (could be prettier):

[EMAIL PROTECTED]:~/parrot$ tools/dev/parrotbench.pl -b 'oo|addit' -e 'python|parrot|perl'

        parrot(pasm)  parrot(imc)  python(py)  perl(pl)
addit   100%          131%         -           87%
addit2  -             100%         -           -
oo1     100%          -            23%         39%
oo2     100%          -            21%         39%

 

I studied the config system, and now I wonder if it would make sense
to write a configure step to probe for the enemies, or is that overkill?
   

^^^
Please ...
 

Sorry, just kidding... everybody loves ruby!!! :)

No config step necessary. The -conf option is good enough.

leo

 

Cheers,
Sebastian


parrotbench.patch.gz
Description: application/tgz


Re: A Perl Task - Benchmarking

2004-03-08 Thread Sebastian Riedel
Abhijit A. Mahabal wrote:

A very basic newbie-ish question...

 

[EMAIL PROTECTED]:~/parrot$ tools/dev/parrotbench.pl -regex oo -conf ../parrotbench.conf

      parrot  perl  python  ruby
oo1   100%    39%   23%     -
oo2   100%    40%   22%     -
   

Are bigger numbers more desirable (as they would be if they mean the number
of times the code runs per second), or are lower numbers more desirable (as
they would be if they mean the time taken per run)? So what do these
numbers mean?
 

All numbers are relative to the first one; lower is better (for oo1 below,
perl's 1.200s is about 39% of parrot's 3.030s, and python's 0.690s about 23%).

The -t switch explains it:

[EMAIL PROTECTED]:~/parrot$ tools/dev/parrotbench.pl -b oo -t

      parrot(pasm)  parrot(imc)  python(py)  perl(pl)
oo1   3.030s        -            0.690s      1.200s
oo2   15.020s       -            3.310s      6.020s

Yes, parrot doesn't look so good at the moment; maybe Dan will get his
cream pie at OSCON. :)

--Abhijit

 

Cheers,
Sebastian Riedel


Re: A Perl Task - Benchmarking

2004-03-08 Thread Sebastian Riedel
Dan Sugalski wrote:

At 10:10 PM +0100 3/8/04, Sebastian Riedel wrote:

Abhijit A. Mahabal wrote:

A very basic newbie-ish question...


[EMAIL PROTECTED]:~/parrot$ tools/dev/parrotbench.pl -regex oo -conf ../parrotbench.conf

      parrot  perl  python  ruby
oo1   100%    39%   23%     -
oo2   100%    40%   22%     -
 

Are bigger numbers more desirable (as they would be if they mean the number
of times the code runs per second), or are lower numbers more desirable (as
they would be if they mean the time taken per run)? So what do these
numbers mean?

All numbers are relative to the first one; lower is better.


Can we add a caption to the output? Otherwise I'll end up forgetting.

Will be added in the next version.

The -t switch explains it:

[EMAIL PROTECTED]:~/parrot$ tools/dev/parrotbench.pl -b oo -t

      parrot(pasm)  parrot(imc)  python(py)  perl(pl)
oo1   3.030s        -            0.690s      1.200s
oo2   15.020s       -            3.310s      6.020s

Yes, parrot doesn't look so good at the moment; maybe Dan will get his
cream pie at OSCON. :)


Well... we'll see about that. :) There are a few tricks yet to be
played here, and this is definitely a first cut of the object
implementation. There's an awful lot of unnecessary indirection at the
moment, which needs fixing up.

We also need to implement a method cache, which'll probably increase 
parrot's speed just a touch, if every single other late-binding OO 
language is anything to judge by. :)
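
(As a rough illustration of what a method cache buys a late-binding object
system; the sketch below is plain Perl and has nothing to do with Parrot's
actual implementation.)

use strict;
use warnings;

# Illustration only: cache the result of a method lookup per
# (class, method name) pair so the inheritance walk happens once.
my %method_cache;

sub find_method {
    my ( $class, $name ) = @_;
    my $key = join '::', $class, $name;
    unless ( exists $method_cache{$key} ) {
        $method_cache{$key} = $class->can($name);   # slow path: walks @ISA
    }
    return $method_cache{$key};                     # fast path afterwards
}
# a real cache also has to be invalidated when @ISA or a method table changes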
Seems Larry knows well how to motivate you. :)

Cheers,
Sebastian


Re: A Perl Task - Benchmarking

2004-03-09 Thread Sebastian Riedel
Leopold Toetsch wrote:

Sebastian Riedel <[EMAIL PROTECTED]> wrote:

 

The attached patch adds this; conf files now look like this:
   

 

parrot: /home/sri/parrot/parrot: .pasm .imc
   

Good.

 

ruby: /usr/bin/ruby: .rb
python: /usr/bin/python: .py
python-C: /usr/bin/python -C: .py
   

 ^
That's probably parrot-C, anyway:
 

Just an example. ;)

Output looks like this (could be prettier):
   

Yes ;)

 

        parrot(pasm)  parrot(imc)  python(py)  perl(pl)
   

 ^^^

These should just be one column: some benchmarks are written in PASM,
some in PIR, but they are totally equivalent and run through the same
parrot. OTOH:

parrot-j: ./parrot -j: .imc .pasm
parrot-C: ./parrot -C: .imc .pasm
Should give two columns, one for 'parrot -j' and one for 'parrot -C' -
these are two different run loops with different timings.
So a line in the config specifies one program (with possibly multiple
file extensions) and corresponds to one column of the timing report.
 

Also, when comparing (percentage output), a benchmark shouldn't be run
if there is only one program that can run it.
 

Attached patch should fix that all.

[ please provide patches against the parrot root directory ]
 

Roger that.

Applied and thanks,
leo
 

Sebastian


parrotbench.patch.gz
Description: application/tgz


Re: [PATCH] Prettifying parrotbench output

2004-03-09 Thread Sebastian Riedel
chromatic wrote:

On Tue, 2004-03-09 at 05:15, Luke Palmer wrote:

 

-system "$pathes{$names[$i]} $directory/"
-  . "$benchmark.$suffixes[$i][$j]"
-  . '>/dev/null';
   

File::Spec has a devnull() method.  I'd use that to improve portability,
though I'm never sure how shell redirection breaks on weird platforms.
-- c

 

Thanks for the tip!
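
(A minimal sketch of how the File::Spec tip could be applied; the helper and
the example command are made up, and only the idea of replacing the
hard-coded '>/dev/null' comes from the quoted patch.)

use strict;
use warnings;
use File::Spec;

# Sketch only: use File::Spec->devnull() instead of a hard-coded
# '>/dev/null' so the redirection target is portable.
sub run_silently {
    my ($command) = @_;
    my $devnull = File::Spec->devnull();   # '/dev/null' on Unix, 'nul' on Win32
    return system("$command > $devnull");
}

# hypothetical benchmark invocation
run_silently('./parrot examples/benchmarks/oo1.pasm');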

Cheers,
Sebastian


Re: [PATCH] Prettifying parrotbench output

2004-03-09 Thread Sebastian Riedel
Leopold Toetsch wrote:

Luke Palmer <[EMAIL PROTECTED]> wrote:
 

Leopold Toetsch writes:
   

Must be something wrong here. The output is now totally messed up.
 

 

It's possible that your terminal isn't wide enough.
   

80 columns is wide enough for me ;)

 

Indeed, something needs to be modified to keep the output to 72 columns,
somehow.
   

Yep. But before messing with the format, please fix the columns, as I've
described in another mail.
 

It's fixed with my last patch.

Getting the table pretty will be a PITA.
Maybe we should just use a Benchmark.pm-like format?
Luke
   

leo

 

Cheers,
Sebastian


Re: Website maintainer needed!

2004-04-21 Thread Sebastian Riedel
Dan Sugalski wrote:

So, parrotcode.org's getting a bit crusty in its content (though with 
a spiffy-keen new look if you've not looked in a while) and we need to 
fix that.

Rather than putting this on my essentially infinitely long todo list, 
this'd be a good spot for someone who wants to get involved to, well, 
get involved. Spiff the place up, swamp out the older bits, and 
generally get it more nifty. (like, say, adding info on the tinderbox, 
links to parrot stuff around the web, and various whatnots like that)

Volunteers very welcome--chime in, we'll hook you up with Robert and 
the appropriate access, and you can have at it.
*chime*

I could also sacrifice some time to that.

Cheers,
Sebastian


Re: What happened to

2004-05-24 Thread Sebastian Riedel
Dan Sugalski wrote:
At 11:55 AM -0700 5/24/04, Joshua Gatcomb wrote:
The FAQ at http://www.parrotcode.org

That's a good question, and one worth poking around at. Volunteers? 
(Those things are autogenerated from files in the repository, so it's 
likely something broke there)
Yes, it's a wrong URL in the template; I will fix it.
I'm already reworking the whole docs section, so it will be back very soon.

Also - is there any reason why some messages I send to
the list don't make it?  I am not sure who does
maintenance on the list but the message I sent today
in regards to JIT on Cygwin did not make it.

Well, if you posted from an account other than the one subscribed it 
can take a while before someone sees it in the moderation queue and 
deals with it.

In this case, though, I think it's more likely perl.org got 
overwhelmed with spam and virus mail again, as I saw quite a few of my 
own messages stuck in my server's outbound queue. Unfortunately there 
are a *lot* of perl.org email addresses out there, and when the waves 
of virus/spam mail crash down on it, well... it gets a bit behind and 
mail sits in local queues. This isn't anything unusual, unfortunately. 
(The perl.org machines also have a somewhat heavyweight front-end on 
them, so when they get blasted with a few dozen messages a second 
things tend to get behind)



Re: Benchmark Stuff

2004-08-20 Thread Sebastian Riedel
Joshua Gatcomb wrote:
I recently noticed that the benchmarks in
examples/benchmarks were running significantly slower.
I update Cygwin and Parrot daily - so there have been
a lot of changes to account for.  I idly asked on IRC
if anyone was regularly tracking benchmark performance
because I was feeling lazy.

Dan said not that he was aware of but if I was willing
to whip something up in Perl he would be more than
happy to do so on a regular basis.  Well, I can't get
Pg working on Cygwin anymore (I should have regression
tests every time I update), so after messing with it for
an hour I decided to use SQLite. I also couldn't get
File::Basename to work correctly, so I gave up and
rolled my own regex. Since at this point it was no
longer portable, I just threw caution to the wind and
coded it *nix-centric.
It assumes the parrot executable is in your $PATH environment variable.
It assumes it is being run from examples/benchmarks.
It is also a very quick hack because I was on my lunch
break.
 

Take a look at tools/dev/parrotbench.pl
It already does most of the things you want; you just have to parse its
output and feed it to your database.
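
(A hedged sketch of that parsing step; the column layout is assumed from the
-t examples earlier in this thread, and the database insert itself is left
out.)

use strict;
use warnings;

# Sketch only: turn parrotbench.pl-style "-t" output into a hash of
# timings, keyed by benchmark name and interpreter column.
my @columns;
my %results;
while ( my $line = <DATA> ) {
    my @fields = split ' ', $line;
    next unless @fields;
    if ( !@columns ) {                    # first non-empty line: column names
        @columns = @fields;
        next;
    }
    my $benchmark = shift @fields;
    for my $i ( 0 .. $#columns ) {
        my $value = $fields[$i];
        next if !defined($value) || $value eq '-';
        $value =~ s/s$//;                 # strip the trailing 's' from seconds
        $results{$benchmark}{ $columns[$i] } = $value;
    }
}
# %results could now be fed into a database (e.g. via DBD::SQLite)

__DATA__
      parrot(pasm)  parrot(imc)  python(py)  perl(pl)
oo1   3.030s        -            0.690s      1.200s
oo2   15.020s       -            3.310s      6.020s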

Enjoy
Joshua Gatcomb
a.k.a. Limbic~Region
 

Cheers,
Sebastian




Re: Towards 0.1.1 - timetable

2004-10-05 Thread Sebastian Riedel
Leopold Toetsch:
> - nice release name wanted

Firebird or Phoenix

sebastian