On Tuesday, 23 November 2010 at 14:43, Jon Gentle wrote:
There were some discussions last week about the speed of parrot and
being able to benchmark it. As it turns out, I had time for a quick and dirty
project, so I built http://isparrotfastyet.com/ . I whipped up a quick SQLite
database and Catalyst app, and thanks to dukeleto, I'm using Tool::Bench to
actually generate the benchmarks. I've posted the code on GitHub
at https://github.com/atrodo/itfy and I went ahead and attempted to populate it
with benchmarks from each release tag since 2.0.0.
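For the curious, the population loop is conceptually along these lines. This is
a simplified, untested sketch rather than the actual itfy code: it times bare
startup directly with Time::HiRes instead of going through Tool::Bench, and it
assumes the RELEASE_x_y_z tag naming plus a do-nothing empty.pir.

#!/usr/bin/env perl
# Simplified sketch, not the actual itfy code: time bare parrot startup
# for every release tag since 2.0.0. Assumes this runs inside a parrot
# git checkout, that release tags are named RELEASE_x_y_z, and that
# empty.pir holds a do-nothing ".sub main :main / .end" program.
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my @tags = grep { /^RELEASE_[2-9]_/ } split /\n/, qx{git tag};

for my $tag (@tags) {
    system(qw(git checkout -q), $tag) == 0 or die "checkout $tag failed";
    if (system('perl Configure.pl > /dev/null && make -s') != 0) {
        warn "build failed for $tag, skipping\n";
        next;
    }

    # Run the empty program a few times and keep the best wall-clock time.
    my $best;
    for (1 .. 5) {
        my $t0 = [gettimeofday];
        system('./parrot', 'empty.pir');
        my $elapsed = tv_interval($t0);
        $best = $elapsed if !defined $best || $elapsed < $best;
    }
    printf "%s: %.3f s startup\n", $tag, $best;
}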
Obviously, it's still pretty young and it has a few flaws. First of all, there
is no way to submit benchmarks to it, and in fact, at this point, it's run
manually. Second, the data points need some context information so it's
obvious exactly what they represent.
Perhaps most importantly, it only has one real benchmark. The rakudo start time
benchmark is broken since I don't have a good way to check out a previous
rakudo that matches the parrot checkout for a release. It's not a problem when
I do daily benchmarking, and I could do it manually for the previous releases,
but I'd rather not.
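The best approximation I can think of is date matching: check out the newest
rakudo commit that predates the parrot release tag. An untested sketch of the
idea, assuming sibling parrot/ and rakudo/ git checkouts:

#!/usr/bin/env perl
# Untested sketch: find the rakudo commit that was current when a given
# parrot release was tagged, by matching commit dates. Assumes sibling
# parrot/ and rakudo/ checkouts and a tag name like RELEASE_2_0_0.
use strict;
use warnings;

my $tag = shift @ARGV or die "usage: $0 PARROT_RELEASE_TAG\n";

# Committer date of the parrot release tag.
chdir 'parrot' or die "no parrot checkout: $!";
chomp(my $date = qx{git log -1 --format=%ci $tag});
die "unknown tag $tag\n" unless $date;

# Newest rakudo commit on master that is no younger than that date.
chdir '../rakudo' or die "no rakudo checkout: $!";
chomp(my $sha = qx{git rev-list -n 1 --before="$date" master});
die "no rakudo commit before $date\n" unless $sha;

system(qw(git checkout -q), $sha) == 0 or die "checkout failed";
print "rakudo at $sha matches parrot $tag ($date)\n";

Date matching is only an approximation, of course, since the rakudo commit
from that date isn't guaranteed to actually build against that parrot release.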
If anyone has any questions or suggestions, I'm all ears. Even better, pull
requests are welcome.

Perhaps it would be helpful to reuse existing code. The PyPy developers
have http://speed.pypy.org/, which is an instance
of https://github.com/tobami/codespeed