> But we are in 2013 and if we have one thing, then it's memory.

I wouldn't be opposed to having the sources (or the AST or whatever ;-) in 
[object-]memory, while retaining the option of _also_ still writing all the 
changes into an external changelog (in case the VM, or in the less rare case, 
Windows crashes).

But no, we don't have "enough memory" (although now is perhaps the time to 
announce that "64GB ought to be enough for anybody" ;-))).

> (the $25 Raspi comes with 256 MB, the $35 one with 512 MB.)

Incidentally, ~512 MB is the maximum you can get on Windows (beyond that the 
VM crashes after an ugly ~30-second freeze; I checked sometime around the 2.0 
release), and this is not enough for some applications.
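
If you want to see where your own setup tops out, here is a rough probe (just 
a sketch, untested across VMs; run it in a throwaway image, because on an 
affected VM the end is a hard freeze/crash rather than a clean error):

    | chunks total |
    chunks := OrderedCollection new.
    total := 0.
    "Grab 16 MB chunks until allocation fails; on an affected Windows VM
     expect the freeze/crash somewhere around 512 MB."
    [ [ chunks add: (ByteArray new: 16 * 1024 * 1024).
        total := total + 16.
        Transcript show: total printString , ' MB'; cr ] repeat ]
        on: Error
        do: [ :e | Transcript show: 'gave up at ~' , total printString , ' MB'; cr ]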

The Moose people complained a few years back (they were unable, on Windows, to 
load a large but otherwise fine image saved on a Mac); the only solution 
offered was "change a #DEFINE or makefile or something like that and rebuild 
the VM, then pray it works".

Currently I avoid using Pharo whenever I even suspect the app might come 
anywhere near that limit (to avoid nasty surprises / hasty rewrites later).

The only problems I've heard of concerning DLLs mapped at "weird" offsets in 
the address space (this was cited as the reason for the memory limit) were 
related to trojan horses / stealth viruses tampering with the system, and 
anti-virus / anti-malware software doing basically the same ;-))) Or something 
getting loaded really early, getting the DLL - in that case it was 
kernel32.dll - mapped at an unusual offset which then got reused for all other 
applications. That one was mentioned in a context where, in the end, they were 
unable to allocate a sufficiently large block of contiguous memory because 
something was sitting in the middle of the address space (a DLL sitting at, 
say, 0x40000000 caps the largest free contiguous block below 1 GB, no matter 
how much total address space is free) - and I _think_ it was concerning LuaJIT.

Would it be possible to prod Jenkins into producing (additional) Windows 
builds with the limit lifted (or set to 2 GB; optionally perhaps a 1 GB 
build)? Then interested testers could try them - the problem was reported as 
manifesting only on some computers, while others ran completely fine. I think 
somebody mentioned that thanks to Jenkins it is/will be child's play to 
add/clone another build.

IMO it would be great to do some larger-scale testing of whether this problem 
still manifests itself and, if so, whether it could be worked around or traced 
to whatever fucks Windows up so that it loads a DLL into the middle of the 
address space ... And a lot more people would participate if "build your own 
VM" were an optional step (I know, it's not that hard, and setting up MinGW or 
MSVC is not that hard either, but it takes a few hours to get it all up & 
running).

I've "played with" (used to solve actual problems / write applications which 
are used in production, i just got to choose the language ...) a LOT of prog. 
langs, and I don't remember this problem mentioned elsewhere.

For example SBCL, Smalltalk/X, Factor, SWI-Prolog and Haskell (a few examples 
that are very different, on the inside as much as on the outside) are able to 
utilize the whole 2 GB (if your Windows is configured with the standard 
2 GB/2 GB user/kernel space split). Even that abomination that starts with the 
letter J and is touted as The_Only_VM_and_Language is able to utilize ~1.5 GB 
on a 32-bit system.

I'd be able & willing to test on XP/32 and Win7/32+64. I can even test on ~300 
machines, although there are only a few hardware configurations, and all are 
installed from the same template and mostly run the same software and 
antivirus. So it's probably not nearly as interesting as 30 totally random 
machines ...

Does anybody else think this sounds like a good idea? 

-----Original Message-----
From: pharo-project-boun...@lists.gforge.inria.fr 
[mailto:pharo-project-boun...@lists.gforge.inria.fr] On Behalf Of Marcus Denker
Sent: Saturday, April 27, 2013 7:08 PM
To: Pharo-project@lists.gforge.inria.fr
Subject: Re: [Pharo-project] Opal Decompiler status


On Apr 27, 2013, at 6:42 PM, Nicolas Cellier 
<nicolas.cellier.aka.n...@gmail.com> wrote:

> Thanks Marcus.
> I'm among the skeptics concerning the AST, but I'd like to be proven wrong, 
> because the AST level would certainly simplify things a lot.
> My main reservation is that interruption can occur at the BC level, so for 
> debugging purposes, what are your plans?
> Will you move the PC to some AST node boundary (if that is possible given 
> the side effects of some BCs)?
> 
For the debugger, everything stays as it is (until Pharo 4 or so). If you look 
at it, the current debugger never decompiles if there is source available.
It *compiles* to get 
        -> the mapping BC -> text 
        -> the information on how temporary variables in the bytecode are 
actually related to temps in the source.
            (This is very complex: temps can e.g. be stored on the heap if 
they are written to in a closure, and when they are just
             read they have a different offset in each closure they appear in. 
Conversely, some temps in the bytecode are used
            to store the array that holds the variables that are on the heap; 
see the sketch below.)
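
A minimal sketch of the "stored on the heap" case (this assumes the standard 
full-closure semantics; "tempVector" is just the informal name for that array):

    makeCounter
        "counter is assigned inside the block, so it cannot stay in a plain
         stack slot; the compiler moves it into a heap-allocated indirection
         array (the 'tempVector') shared by the method and the closure, and
         the block reads/writes it through that extra indirection."
        | counter |
        counter := 0.
        ^ [ counter := counter + 1 ]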

The old compiler recorded mappings while compiling, which were then encoded. 
Quite complex, at least for my tiny brain.

So the new one just keeps the AST annotated with all the needed info; this is 
much easier to debug (and we do have the
memory these days, and as we cache the AST, it's even fast).
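
From the image side it looks roughly like this (a sketch; #ast and 
#sourceNodeForPC: are the selectors I'd expect Opal to offer, so treat them 
as assumptions):

    | method |
    method := Integer >> #factorial.
    method ast.                                  "the cached, annotated AST"
    method sourceNodeForPC: method initialPC.    "map a bytecode pc to its AST node"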

So the debugger needs the compiler. The decompiler now exists just to produce 
text so that we can call the compiler
in the case there is no .sources. The debugger *always* compiles to get the 
mappings; as soon as there is source code
the decompiler will never be used. (And even if the decompiler is used, the 
compiler is called right afterwards on its results so
one can record the mappings.)

So if you make sure there is always source code (or a representation with the 
right amount of metadata), you don't need the decompiler.

So at the start it will be the source code. And yes, this takes memory. But 
we are in 2013 and if we have one thing, then it's memory.
(The $25 Raspi comes with 256 MB, the $35 one with 512 MB.)

We could have shipped 2.0 with 8 MB of useless Monticello metadata and nobody 
would even have realized it (like we
now have megabytes of fonts in the image). Yet the source is special. I really 
wonder why. 

(And yes, there should be solutions to avoid keeping unused data in main 
memory, and solutions to share data that is the same
across multiple images. But for all kinds of stuff, not just source code.)

        Marcus 
