Hi Thierry,

On Oct 25, 2014, at 2:52 AM, Thierry Goubier <thierry.goub...@gmail.com> wrote:

> On 24/10/2014 19:50, Eliot Miranda wrote:
>> 
>> 
>> On Fri, Oct 24, 2014 at 10:12 AM, Thierry Goubier
>> <thierry.goub...@gmail.com> wrote:
>> 
>>    On 24/10/2014 19:07, Eliot Miranda wrote:
>> 
>>        On Fri, Oct 24, 2014 at 7:34 AM, Esteban Lorenzano
>>        <esteba...@gmail.com> wrote:
>> 
>> 
>>                 On 24 Oct 2014, at 16:21, Thierry Goubier
>>                 <thierry.goub...@gmail.com> wrote:
>> 
>>                 2014-10-24 15:50 GMT+02:00 Clément Bera
>>                 <bera.clem...@gmail.com>:
>> 
>> 
>>                     The current x2 speed boost is due only to Spur,
>>                     not to Sista. Sista will provide additional
>>                     performance, but we still have things to do
>>                     before it is ready for production.
>> 
>>                     The performance gain reported is due to (from
>>                     most important to least important):
>>                     - the new GC has less overhead; 30% of the
>>                       execution time used to be spent in the GC.
>>                     - the new object format speeds up some VM-internal
>>                       caches (especially inline caches for message
>>                       sends, thanks to the class-table indirection
>>                       for object classes).
>>                     - the new object format allows some C code to be
>>                       converted into machine-code routines, including
>>                       block creation, context creation and primitive
>>                       #at:put:; this is faster because switching from
>>                       jitted code to C and back to jitted code
>>                       generates a little overhead.
>>                     - characters are now immediate objects, which
>>                       speeds up String accessing.
>>                     - the new object format has a larger hash, which
>>                       speeds up big hashed collections such as big
>>                       Sets and Dictionaries.
>>                     - become is faster.
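As an aside, the immediate-characters point can be sketched in a few lines of C. This is a hypothetical illustration of pointer tagging, not Spur's actual tag assignments: the low bits of the word distinguish immediates from heap pointers, so a character needs no heap allocation at all.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative pointer tagging: low bits distinguish immediates from
   heap pointers.  The tag values here are invented, not Spur's. */
#define TAG_BITS 2
#define TAG_MASK ((1u << TAG_BITS) - 1)
#define TAG_CHAR 2u               /* hypothetical immediate-character tag */

typedef uintptr_t oop;            /* an "ordinary object pointer" word   */

static oop char_to_oop(uint32_t codePoint) {
    /* Encode the code point directly in the word: no heap allocation,
       no GC work, and String accessing needs no object dereference. */
    return ((oop)codePoint << TAG_BITS) | TAG_CHAR;
}

static int oop_is_char(oop o)       { return (o & TAG_MASK) == TAG_CHAR; }
static uint32_t oop_to_char(oop o)  { return (uint32_t)(o >> TAG_BITS); }
```

Because heap objects are word-aligned, their low tag bits are always zero, so the check is a single mask-and-compare.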
>> 
>> 
>>                 All this is really cool :) And if I remember well,
>>            there is 64
>>                 bitness coming as well.
>> 
>>                 Will Spur also cover ARM ?
>> 
>> 
>>             Spur is an object format; it does not have anything to
>>             do with the underlying architecture (well, at least in
>>             theory… Eliot should be able to say more on this).
>>             Cog, on the other hand, is a JIT, and it has everything
>>             to do with the architecture, so it is difficult to have
>>             it running on ARM (but there is work in that direction,
>>             so we hope it will be there eventually).
>> 
>>             It looks like there is a misunderstanding (probably not
>>             you, Thierry, but since I’ve seen it from time to time,
>>             I take the chance to clarify): Spur is not a replacement
>>             for Cog; the two are orthogonal (in fact, Spur runs in
>>             the Stack VM too).
>>             The real new VM is not the “Spur” VM; it is the
>>             “Cog+Spur” VM.
>> 
>> 
>>        +1.  Spur changes the object representation, so it has a new
>>        heap layout, a new layout for objects, and a new garbage
>>        collector.  Because the object format is simpler, it allows
>>        the Cog JIT to generate machine-code versions of more
>>        operations, in particular basicNew, basicNew: and closure and
>>        context creation.  This is the main reason for the speedups
>>        in Cog+Spur.  As far as the Stack VM goes, if you see
>>        speedups for Stack+Spur vs Stack+V3, that's all due to the
>>        Spur object representation & GC, because there's no JIT.
>> 
>>        Now at the moment the Cog JIT only has an x86 back end in
>>        production.  Tim Rowledge is working on finishing the ARM
>>        back end started by Lars Wassermann in the GSoC a few years
>>        ago.  So soonish we should be able to have Cog+V3 or
>>        Cog+Spur on e.g. Android.
>> 
>>        As part of 64-bit Spur I will be doing a back end for x86-64.
>> 
>> 
>>    Which is then a 64-bit Spur+Cog+Sista, right?
>> 
>> 
>> It should be usable for Spur+Cog or Spur+Cog+Sista.  It depends on
>> how quickly I can write the 64-bit Spur and how quickly Clément can
>> put Sista into production.
> 
> Ok.
> 
>>        And Doug McPherson is also in the mix, having written the
>>        ARM version of the new FFI plugin; he is going to be building
>>        Stack ARM VMs and soon enough Cog ARM VMs.
>> 
>> 
>>    Thanks for all this news; it is really great.
>> 
>> 
>> Yes, I'm very excited.  It's so great to have strong collaborators
>> like Clément, Ronie, Doug and Tim.  But there's lots of room for
>> more to join us.  For a really cool project, how about grabbing Bert
>> Freudenberg's VMMakerJS Squeak-vm-on-JavaScript, extracting the
>> event handling and rendering part, and connecting it to the Cog VM
>> via sockets to give us a really fast web plugin?
> 
> Is that one really needed? I had the feeling web plugins were so last year 
> and that we could just use Amber, or even a remote desktop client in the 
> web browser with Squeak/Pharo RDP support (which is a bit more general 
> than a really fast web plugin).

If Amber is useful then a Squeak/Pharo/Scratch/whatever plugin is useful.  
Amber has limitations (no thisContext, no become, no run-time class 
redefinition, no instance migration thereafter, limited performance, 
especially for non-local return), but it is still useful enough for people to 
go through the extra deployment and verification step to check that their 
code still works under Amber.  If one can't live with those limitations, or 
doesn't want to pay the cost of the extra deployment step, then a plugin 
solves that problem.

Don't mistake an absence for a lack of need.  That there has been no "plugin" 
solution for a few years doesn't mean one isn't needed.  That people have put 
significant effort into alternatives like Amber and Jalapeño proves it is 
useful.


> Moreover, I see that the only thing with potential linked to the web today 
> is handling tablets/smartphones, and this is partly why I'm asking about 
> ARM support (also embedding in small stuff, like Cortex M0-M4 devices with 
> BLE, solar cells and batteries). For now, the only uses we have for 
> Smalltalk there are as C code generation / deployment IDEs on the desktop 
> (aka B. Pottier's Netgen), and as a back end in web deployment with 
> Seaside + others.

There doesn't need to be an either/or in a growing community.


> 
> Thierry
> 
