On Feb 4, 2007, at 8:13 PM, Ivan Volosyuk wrote:

On 2/4/07, Geir Magnusson Jr. <[EMAIL PROTECTED]> wrote:

On Feb 4, 2007, at 8:41 AM, Gregory Shimansky wrote:

> Leo Li wrote:
>> On 2/4/07, Gregory Shimansky <[EMAIL PROTECTED]> wrote:
>>>
>>> I see no difference from the approach I've suggested already. If we
>>> have to take care of all the native resource allocation inside the
>>> classlib APIs, then it makes no difference which code calls GC, be
>>> it gc_native_malloc or the API native code like this:
>>>
>>> void *p = malloc(size);
>>> if (NULL == p)
>>> {
>>>   gc_force_gc();
>>>   p = malloc(size);
>>>   if (NULL == p)
>>>   {
>>>       jclass oom = jni_env->FindClass("java/lang/OutOfMemoryError");
>>>       jni_env->ThrowNew(oom, "native allocation failed");
>>>       return;
>>>   }
>>> }
>>
>>
>> But I am worried that it may be too late to force GC when memory
>> allocation fails. It takes some time to complete the release of
>> resources, for example in finalizers, and that period of time is
>> not predictable, so the malloc may still fail... :(
>> Furthermore, if we wait until memory allocation fails before forcing
>> GC, more than one thread might encounter such a problem and force GC
>> at the same time. I am not sure whether that would cause problems in
>> the VM.
>> So I think it may be wiser to trigger a GC when the free memory
>> ratio is low, before the failure in malloc actually occurs.
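
For illustration, a minimal sketch of such a proactive check, assuming
usage is tracked against a native budget; the budget, ratio, counter and
helper name below are made up, only gc_force_gc() comes from the snippet
above:

#include <stddef.h>

/* Hypothetical proactive check: trigger GC before malloc can fail,
 * based on a tracked native budget rather than on allocation failure. */
#define NATIVE_BUDGET_BYTES  (256u * 1024u * 1024u)  /* assumed limit */
#define GC_TRIGGER_RATIO     0.85                    /* assumed ratio */

extern void gc_force_gc(void);          /* GC hook from the snippet above */
static size_t native_used_bytes = 0;    /* maintained by the allocator    */

static void maybe_trigger_gc(size_t incoming)
{
    double projected = (double)(native_used_bytes + incoming);
    if (projected > (double)NATIVE_BUDGET_BYTES * GC_TRIGGER_RATIO)
        gc_force_gc();   /* collect while malloc can still succeed */
}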
>
> Yes, I shouldn't have called the above code a "solution". I just
> wanted to point out to Ivan that there is no difference between
> taking care of memory allocation in GC code or in classlib code.
>
> You've found another problem with this approach: just running GC is
> not enough in this case, because finalization may not be done in time
> to free the needed native resources.
>
> Speaking in DRLVM terms, the above code could also contain a call to
> vm_run_pending_finalizers, but it is still not a solution for the
> multithreaded case.
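
For concreteness, a single-threaded sketch of that combination; the
wrapper name and the one-retry policy are illustrative, only
gc_force_gc() and vm_run_pending_finalizers() come from the discussion:

#include <stdlib.h>

extern void gc_force_gc(void);               /* hooks named in the */
extern void vm_run_pending_finalizers(void); /* discussion above   */

/* Sketch: retry a native allocation after forcing GC and draining
 * pending finalizers, so native resources held by unreachable objects
 * get a chance to be released before we give up. Still racy with
 * other threads, as noted above. */
static void *malloc_with_gc_retry(size_t size)
{
    void *p = malloc(size);
    if (NULL == p)
    {
        gc_force_gc();
        vm_run_pending_finalizers();
        p = malloc(size);    /* one retry; may still be NULL */
    }
    return p;                /* caller throws OutOfMemoryError on NULL */
}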

I'm getting confused about where people think this problem needs to
be managed.  Am I misunderstanding in thinking that there should be
some awareness of this in the classlib or VM natives?  It seems like
it can be localized.

I think that if whoever calls malloc() finds that the system is out
of memory, then it's too late.  I always thought that VMs manage
their conceptual native "heap" (all native resources consumed from
the OS, including, as Robin mentioned, file handles) in a proactive
manner, like they do their Java heap.  It's clear now that they don't.

So when a caller to malloc() gets a null, it should simply throw an
OOM, assuming that the runtime environment did all it could to
prevent that.

Can this be done through portlib, letting the VM's implementation of
it manage things to whatever degree of sophistication is warranted?

Actually, the portlib is the lowest layer we rely on. If we can have
a callback to the VM's GC here, we can place the code in the portlib.
I would also distinguish memory allocations which are associated with
Java objects from those which aren't. There is no need to account for
resources which will not be freed by garbage collection.
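
Something like the following rough sketch, perhaps; the callback type
and function names are invented for illustration and are not existing
portlib API:

#include <stdlib.h>

/* Hypothetical port-layer hook: the VM registers a callback that the
 * port layer invokes when a native allocation is about to fail. */
typedef void (*port_oom_callback_t)(void);

static port_oom_callback_t port_oom_callback = NULL;

void port_set_oom_callback(port_oom_callback_t cb)
{
    port_oom_callback = cb;      /* e.g. a function that forces GC */
}

/* Hypothetical allocation entry point: retry once after asking the VM
 * to release native resources (GC plus finalization, at its choice). */
void *port_mem_allocate(size_t size)
{
    void *p = malloc(size);
    if (NULL == p && NULL != port_oom_callback)
    {
        port_oom_callback();
        p = malloc(size);
    }
    return p;
}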

That's true. I was just thinking about keeping a clear idea of native resource usage, but that's actually a separate concept from coupling to Java objects and their lifecycle.


As was mentioned we have different types of critical system resources.
It makes sense to make a list of such resources:
  malloc'ed memory
  mmap'ed memory
  shared memory (?)
  file descriptors (files, sockets)
  XWin / GDI resources (?)
  more?

What would be the accounting strategy for these resources? We could
allocate them via some functions in this facility, or we could only
account for them here and call GC using some built-in criteria.
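
As a sketch of the second option (account only, and call GC by a
built-in criterion), with the resource kinds taken from the list above
and everything else assumed:

#include <stddef.h>

/* Hypothetical accounting-only facility: track usage per resource kind
 * and call GC when a kind crosses its (assumed) threshold. The kinds,
 * thresholds and function names are illustrative. */
enum resource_kind { RES_MALLOC, RES_MMAP, RES_FD, RES_GUI, RES_KIND_COUNT };

extern void gc_force_gc(void);               /* GC hook, as above */

static size_t used[RES_KIND_COUNT];
static const size_t threshold[RES_KIND_COUNT] = {
    256 * 1024 * 1024,   /* malloc'ed bytes   */
    512 * 1024 * 1024,   /* mmap'ed bytes     */
    1000,                /* file descriptors  */
    500                  /* XWin/GDI handles  */
};

void account_acquire(enum resource_kind kind, size_t amount)
{
    used[kind] += amount;
    if (used[kind] > threshold[kind])
        gc_force_gc();   /* built-in criterion: usage over threshold */
}

void account_release(enum resource_kind kind, size_t amount)
{
    used[kind] -= amount;
}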

It's clear to me that accounting and "free when the associated Java object is freed" are two different notions...

geir


--
Ivan
Intel Enterprise Solutions Software Division
