On Wed, Jul 2, 2014 at 11:43 AM, kilon alios <kilon.al...@gmail.com> wrote:

> 0.00000000549181 seconds per message is just an insanely low number. This is
> not even in the realm of nanoseconds. Is this number real? Because I am
> very skeptical that one can get this kind of performance from a dynamic
> language even with a JIT VM.
>

Here's a measurement on my 2.2GHz MacBook Pro (Core i7):

| r | {[r := 34 benchFib] timeToRun. r}   "==> #(140 18454929)"

benchFib is nfib, a Fibonacci variant that adds one for each call, so the
result it answers is the number of calls made to compute it.

benchFib
    ^self < 2
        ifTrue: [1]
        ifFalse: [(self - 1) benchFib + (self - 2) benchFib + 1]
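
As a quick sanity check (assuming the stock Integer>>benchFib shown above is
installed), evaluating the receiver on its own answers exactly the call count
that appears as the second element of the measurement:

34 benchFib   "==> 18454929, i.e. the number of benchFib calls"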

So that's about 18.5 million calls in 140 milliseconds, or about 132 calls per
microsecond.  That's indeed in the realm of nanoseconds, roughly 7.6 nsecs per
call.
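
If you want to reproduce that arithmetic in the image itself, here's a minimal
sketch (assuming, as in Squeak and the Pharo images of that era, that
BlockClosure>>timeToRun answers whole milliseconds; the exact figures will of
course vary with the machine and VM):

| calls millis |
millis := [calls := 34 benchFib] timeToRun.   "e.g. 140 on the machine above"
Transcript
    show: 'calls: ', calls printString; cr;
    show: 'calls/usec: ', (calls / millis / 1000.0) printString; cr;     "~132"
    show: 'nsecs/call: ', (millis * 1000000.0 / calls) printString; cr   "~7.6"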



>
> On Wed, Jul 2, 2014 at 9:08 PM, Camille Teruel <camille.ter...@gmail.com>
> wrote:
>
>>
>> A Spur image :) awesome!!
>> Thanks Esteban & Guille!
>> I'll start adapting the new class builder to Spur next Wednesday when I
>> come back from holidays.
>>
>> On 2 juil. 2014, at 16:13, Esteban Lorenzano <esteba...@gmail.com> wrote:
>>
>> > Hi,
>> >
>> > I’ve been working on preparing Pharo to run with the new Spur VM… and
>> > finally this week Guille and I sat down together and made a huge advance :)
>> > here is a screenshot of Pharo 4 running with Spur:
>> > <Screen Shot 2014-07-02 at 15.58.17.png>
>> > for more information: the same image before migration, with a regular CogVM,
>> > gives these numbers: '1297023432 bytecodes/sec; 161029354 sends/sec', so
>> > that means the tiny benchmarks run at 166% of the speed of the old VM…
>> > Of course this is just one benchmark, but I’m very impressed :)
>> >
>> > Now, it is still not usable, but I’m confident I can set up a Jenkins job
>> > very soon :)
>> >
>> > Esteban
>> >
>> > ps: before you start asking: Spur WILL NOT be available for Pharo 3; it
>> > will be part of Pharo 4.
>> >
>> >
>> >
>>
>>
>>
>
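
(For anyone who wants to compare on their own machine: the 'bytecodes/sec;
sends/sec' figures Esteban quotes above look like the output of the standard
tiny benchmarks. Assuming the stock Integer>>tinyBenchmarks is present, as in
standard Squeak and Pharo images, they can be reproduced by evaluating:

0 tinyBenchmarks   "answers a string like '... bytecodes/sec; ... sends/sec'"

The receiver is ignored; the method simply times a bytecode-heavy loop and a
send-heavy benchFib run.)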


-- 
best,
Eliot
