You are welcome to it.  How should I get the file to you?  It's quite large
at ~180 MB.  If you don't have a solution at hand, I will talk to Steve.  I
think we have a public FTP server I can put it on, I just don't remember
who/what/where.


On Sun, May 16, 2010 at 8:31 PM, Adam Murdoch <[email protected]> wrote:

>
>
> On 14/05/10 4:09 AM, John Murph wrote:
>
> Adam,
>
> I ran with -XX:+HeapDumpOnOutOfMemoryError and got a ~180 MB dump file.  I
> can get it to you if you like, but Steve's and my analysis only turned up the
> concerns I expressed above.  About 25 MB was the "seralizedValue" byte array
> of BTreePersistentIndexedCache$DataBlock.  About 30 MB (I think) was the
> file full paths stored as String keys in
> DefaultFileSnapshotter$FileCollectionSnapshotImpl.  This does not fully
> explain the OOM when running with -Xmx256m.
>
>
> Indeed. Some code that has a transient need for 30-odd MB is not
> necessarily a memory leak when it's only 10% of the heap. It's only a
> problem if it is not releasing the heap (which may or may not be happening
> in this case). I'm curious, what is using the other 226MB of heap? Gradle
> shouldn't, in general, be hanging on to much heap at all. Perhaps if I could
> look at the heap dump, something might be apparent.
>
>
>   I bumped it to 300m and it still was not enough, but 350m was.  So, we
> are running right now with -Xmx384m, but at some point that will probably
> start failing as well.  I don't think this is a "drop everything" type of
> problem, but I would like to get it addressed before 1.0 ships.
>
>
> On Wed, May 12, 2010 at 2:21 AM, Adam Murdoch <[email protected]> wrote:
>
>>
>>
>> On 11/05/10 7:36 AM, John Murph wrote:
>>
>>> We are getting OOMs occasionally.  These seem to happen for many (if not
>>> all) of the builds in our project, but not consistently for any of
>>> them.  Each of these builds is run with a -Xmx of 256m.  An example stack
>>> trace is given below.  It seems related to caching of task archives, and my
>>> .bin file is ~30 MB right now.  I think (but might be wrong) that this file
>>> is cached in memory (with other Java overhead), meaning that a lot of memory
>>> might be used by this cache.
>>>
>>
>>  Chunks of it are loaded into memory as they are needed, then discarded. The
>> whole file is not loaded into memory.
>>
>>
>>   Do you guys think this is a potential problem area, or should I be
>>> looking elsewhere?
>>>
>>
>>  It might be a problem. Or it might be something else and you are seeing
>> the trigger rather than the cause. Can you run Gradle with
>> GRADLE_OPTS=-XX:+HeapDumpOnOutOfMemoryError and we can see what the heap
>> dump reveals?
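[Note for anyone following along: assuming the standard `gradle` launcher picks up GRADLE_OPTS from the environment, setting that option might look like this on a Unix shell. The dump path is just an example location, not anything Gradle-specific.]

```shell
# Ask the launcher JVM to write an .hprof heap dump when an OOM occurs.
export GRADLE_OPTS="-XX:+HeapDumpOnOutOfMemoryError"
# Optionally control where the dump file lands (example path):
#   export GRADLE_OPTS="$GRADLE_OPTS -XX:HeapDumpPath=/tmp"
# Then run the build as usual, e.g.:
#   gradle build
```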
>>
>>
>>
>>  P.S. It looks like default serialization is used for much of this data,
>>> leading to a bloated .bin file.  I zipped it and got ~4.7 MB for the same
>>> file, and looking into the .bin file shows lots of repeated information.
>>>  First, the class names of the serialized objects get repeated a lot.
>>>  Secondly, the pathing for the input/output files are the same so much of
>>> that is also redundant.  This might not be related to the OOMs, but it
>>> seemed interesting.
>>>
>>
>>  These are good points.
>>
>> Default serialization is easy, particularly when things are changing. At
>> some point we could look at customising the serialization, to make things
>> more efficient and/or to get some better control over versioning. Right now,
>> it's good enough, I think. Same with the absolute paths - we could encode
>> them relative to some base dir, say, and save some space, but it's good
>> enough for now.
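[Side note on the relative-path idea above: a minimal sketch of what encoding paths against a base dir could look like. All names here (RelativePathCodec, encode/decode) are made up for illustration; this is not Gradle's API.]

```java
// Hypothetical sketch: store file paths relative to a base directory
// (e.g. the project dir) instead of absolute, and expand them on read.
public class RelativePathCodec {

    // Strip the base dir prefix; fall back to the absolute path for
    // files that live outside the base dir.
    static String encode(String baseDir, String absolutePath) {
        String prefix = baseDir.endsWith("/") ? baseDir : baseDir + "/";
        return absolutePath.startsWith(prefix)
                ? absolutePath.substring(prefix.length())
                : absolutePath;
    }

    // Rebuild the absolute path; keys that are still absolute pass through.
    static String decode(String baseDir, String key) {
        return key.startsWith("/") ? key : baseDir + "/" + key;
    }

    public static void main(String[] args) {
        String base = "/home/user/project";
        // Relative key is much shorter than the repeated absolute prefix.
        System.out.println(encode(base, base + "/src/Foo.java")); // src/Foo.java
    }
}
```

Since every snapshotted file shares the same long project prefix, storing only the suffix would remove most of the redundancy John observed in the .bin file.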
>>
>>
>> --
>> Adam Murdoch
>> Gradle Developer
>> http://www.gradle.org
>>
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe from this list, please visit:
>>
>>   http://xircles.codehaus.org/manage_email
>>
>>
>>
>
>
> --
> John Murph
> Automated Logic Research Team
>
>
> --
> Adam Murdoch
> Gradle Developer
> http://www.gradle.org
>
>


-- 
John Murph
Automated Logic Research Team
