On Wed, 22 Dec 2010, Igor Stasenko wrote:

2010/12/22 Levente Uzonyi <le...@elte.hu>:
On Wed, 22 Dec 2010, Stéphane Ducasse wrote:

Hi guys

Ideally I would love to be able to use accessors as the abstraction layer
they can give us: the fact that we could avoid offset-based bytecodes means
we could reuse methods a lot more (in special cases - mixins and others).
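
To make that concrete, a rough sketch of the two styles (the class and trait
names here are made up):

Circle>>area
	"Direct instance variable access: compiled to an offset-based
	 bytecode, so this method is tied to Circle's instance layout."
	^ radius squared * Float pi

TCircular>>area
	"Accessor-based: nothing but message sends, no offsets, so the
	 same body could live in a trait or mixin and be reused by any
	 class that answers #radius."
	^ self radius squared * Float pi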

It's simply a bad idea. If you don't want instance variables, just change
the VM's object representation, but then don't call your system Smalltalk
anymore. ;)

Why? For me Smalltalk is a syntax and everything is an object. The
rest is optional.

Aren't instance variables part of the syntax? Or is Self Smalltalk?



Btw without instance variables you don't need mixins, cause you have traits.

If you only want mixins (instead of stateful traits), then there's at least
one mixin implementation for Squeak out there.


Now I have a question: does the JIT or the shortcut (not sure if this is in
the StackVM) blur the cost of accessors vs. direct accesses?

Bytecodes are still 10-12x faster than sends with Cog.

Even those which are optimized by the JIT?
I mean, consider:

| pt |
pt := 1@2.
[ pt x ] bench

'2.789668866226755e6 per second.'


| pt |
pt := 1@2.
[ pt xx ] bench
'2.642108378324335e6 per second.'

where Point>>xx is:
xx
^ self x
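
For reference, Point>>x in a stock image is just the plain accessor (if I
remember correctly), so xx costs one extra send on top of it:

x
	^ x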

So, what do you mean by 10-12 times faster?


Your benchmark has several flaws. It uses #bench, which is a message send itself and does several other sends, block activations and so on. Just evaluate
[] bench.
to see the problem.
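
Something like this shows how much of each iteration is bench's own overhead
rather than the send you are trying to measure (a rough sketch, print each
bench line separately; the exact numbers are machine-dependent):

| pt |
pt := 1@2.
[ ] bench.	"rate of the benchmarking machinery with an empty block"
[ pt x ] bench	"same machinery plus exactly one accessor send per iteration"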

Here is the benchmark I based my claim of a 10-12x performance difference on:

0 tinyBenchmarks.
'540940306 bytecodes/sec; 50274171 sends/sec'
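
That is, dividing the two rates:

(540940306 / 50274171) asFloat	"about 10.76"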

It shows a 10.76x difference. You may say that it's inaccurate, so I wrote another one myself: http://leves.web.elte.hu/squeak/SendBenchmark.st

To run it, evaluate the following:
SendBenchmark run.

My result is:
#(#(109 16) #(105 17) #(105 18) #(108 18) #(106 19)).
To get the difference (may not work in Pharo):
#(#(109 16) #(105 17) #(105 18) #(108 18) #(106 19)) sum in: [ :sum |
        sum first / sum second roundTo: 0.01 ].
6.06

So it's 6x faster to use instance variables than accessors.


Levente


Levente

P.S.: IIRC one of V8's optimizations is to use a common representation
(class) for objects that have the same slots (instance variables).



Did anybody run a benchmark of
       self x vs x in Cog recently
on a real app?

Stef




--
Best regards,
Igor Stasenko AKA sig.
