On Sat, Dec 31, 2011 at 12:29:40PM +0200, Maciej Fijalkowski wrote:

Hi Maciej,

> Overall great work, but I have to point out one thing - if you want us to
> look into benchmarks, you have to provide a precise benchmark and a precise
> way to run it (few examples would be awesome), otherwise for people who are
> not aware of the language this might pose a challenge.

At the moment, I'm not even sure I know what representative benchmarks might
be - it's certainly something I'd like advice on!

I did mention one simple benchmark in my message, which is "make regress" -
it compiles the compiler, standard library, examples, and tests. This
exercises most (not all, but most) of the infrastructure, so it's a pretty
decent test (though it may not be entirely JIT friendly, as it's lots of
small-ish tests; whether the JIT will warm up sufficiently is an open
question in my mind). To run this test, I'd suggest something like:

  $ make clean ; cd vm ; make ; cd .. ; time make regress

The reason for that order is that it:

  1) cleans everything (so that it is all rebuilt later);
  2) compiles the VM on its own (we don't want to include that in the
     timings!);
  3) executes "make regress" and times it (without including the VM build
     in the figures).

If you're interested in creating new micro-benchmarks, the language manual
is hopefully instructive:

  http://convergepl.org/documentation/1.2/quick_intro/

Let's say, for argument's sake, that you think testing integer addition in a
loop is a good idea. This program will do the trick:

  func main():
      i := 0
      while i < 100000:
          i += 1

You can then compile it:

  $ convergec -m f.cv

and then execute it:

  $ time converge f

That way, you know you're not including the time needed to compile the
program.
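As an aside, if you're worried that a sufficiently clever JIT might notice
the loop's result is never used and optimise it away, a variant which prints
the final value keeps it honest. A minimal sketch, assuming Sys::println
from the standard library (as shown in the quick intro):

  import Sys

  func main():
      i := 0
      while i < 100000:
          i += 1
      // Printing i ensures the loop's work stays observable.
      Sys::println(i)

You'd compile and time this in exactly the same way as above.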
Yours,

Laurie
--
Personal                                  http://tratt.net/laurie/
The Converge programming language         http://convergepl.org/
                                          https://github.com/ltratt
                                          http://twitter.com/laurencetratt