On Tue, May 16, 2017 at 11:24 PM, Eliot Miranda <eliot.mira...@gmail.com> wrote:
> Hi Ben,
>
>
>> On May 16, 2017, at 7:42 AM, Ben Coman <b...@openinworld.com> wrote:
>>
>> On Tue, May 2, 2017 at 10:36 PM, Raffaello Giulietti
>> <raffaello.giulie...@lifeware.ch> wrote:
>>> Hello,
>>>
>>> I'm on Pharo 6 (32 bit)/Windows 10 (64 bit).
>>>
>>> I'm trying to load and use the (32 bit) JVM DLL into a running Pharo image
>>> via the UFFI.
>>>
>>> Everything works correctly, including JVM method invocations, except when
>>> trying to set the minimum and maximum JVM heap size by passing the
>>> -Xms<size> and -Xmx<size> options upon JVM creation. With these options,
>>> Pharo simply crashes without leaving any trace.
>>>
>>> Apparently, memory management requests from the JVM interfere with Pharo's
>>> own memory management.
>>>
>>> Please note that we successfully use the same mechanism for both VisualWorks
>>> (32 bit)/Windows 10 (64 bit) and Gemstone 64 bit/Linux, so it seems it has
>>> to do with a Pharo limitation somehow.
>>>
>>> Further, I couldn't find any documentation on how to increase Pharo's
>>> working set.
>>>
>>> So the questions are:
>>> * What is the most probable cause of the crash described above?
>>> * Where is there more doc about Pharo's memory configuration settings?
>>
>> I just bumped into this which you may find interesting...
>> https://github.com/OpenSmalltalk/opensmalltalk-vm/blob/Cog/platforms/unix/vm/sqUnixMemory.c#L57-L60
>> * The upshot of all this is that Squeak will claim (and hold on to)
>> * ALL of the available virtual memory (or at least 75% of it) when
>> * it starts up. If you can't live with that, use the -memory
>> * option to allocate a fixed size heap.
>
> Please be careful to read the code.

Yes, guilty.  I was skimming the code looking for something else when it
caught my eye. I didn't stop to examine it, just dropped the link here.
Thanks for the correction.

cheers -ben

> This is the situation on *non-Spur* VMs (Pharo 5 and earlier, Squeak 4.x and 
> earlier).  It is *not* the case on Spur.
>
> In Spur, at start-up the VM allocates enough memory for the heap plus one
> "growth increment" (currently 16MB) and new space (current default about 5MB
> on 32 bits, 9MB on 64 bits), plus the native code zone (1MB on x86, about
> 1.4MB on ARM and x64).
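
(Taking those numbers at face value, that is only roughly 16 + 5 + 1 MB, i.e.
about 22 MB of start-up overhead beyond the heap itself on a 32-bit image,
rather than a claim on most of the available address space.)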
>
> Spur *does not* reserve address space for the heap.  It requests memory for
> the heap in segments (default 16MB; controllable via a vmParameterAt:put:
> send).  It returns those segments to the OS when the GC frees segments (the
> threshold being controllable via a vmParameterAt:put: send).
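
For anyone who wants to poke at this from the image side, here is a minimal
sketch of what those vmParameterAt:put: sends might look like in Pharo (in
Squeak the equivalent is Smalltalk vmParameterAt:put:). The indices used
below, 24 for the shrink threshold and 25 for the grow headroom, are my
assumption from the vmParameterAt: comment, so please double-check them
against your VM before relying on them:

    "Read the current settings (assumed indices 25 and 24)."
    Smalltalk vm parameterAt: 25.  "headroom requested from the OS when the heap grows"
    Smalltalk vm parameterAt: 24.  "free-space threshold above which segments are returned"

    "Ask for 32 MB growth headroom, and give memory back once more than 64 MB is free."
    Smalltalk vm parameterAt: 25 put: 32 * 1024 * 1024.
    Smalltalk vm parameterAt: 24 put: 64 * 1024 * 1024.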
>
>>
>> cheers -ben
>>
>
