On Thu, 4 Mar 2021 13:22:47 GMT, Lin Zang <lz...@openjdk.org> wrote:

>> Thanks for the explanation in the CR. That helps a lot. I didn't have time 
>> to get through the review today, but will do so tomorrow.
>
> Hi Chris,
> Thanks a lot, I am still wip to reduce the memory consumption. So I think you 
> could help review after my next update :)
> 
> BRs,
> Lin

Updated with a new patch that reduces the memory consumption issue.
As mentioned in the CR, there is an internal buffer used for the segmented heap dump. 
The dumped object data are first written into this buffer and then flush()ed once 
the segment size is known. When the internal buffer is full, the current implementation 
does the following:

- allocate a larger buffer, copy the data from the old buffer into the new one, and then 
use it as the internal buffer.

This can cause large memory consumption because the old buffer's data are 
copied, and the old buffer cannot be freed until the next GC. 

For example, if the internal buffer's length is 1MB, then when it is full a new 2MB 
buffer is allocated, so there is actually 3MB of memory in use (old buffer + new 
buffer). In this case, for the ~4GB large array, the dumper keeps allocating new 
buffers and copying, which is both CPU- and memory-intensive and causes 
the timeout issue.
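To make the problem concrete, here is a minimal sketch (not the actual JDK code) of the grow-and-copy strategy described above; the class name and sizes are hypothetical. Note that during the copy both the old and the new buffer are live, so the peak footprint is oldSize + newSize:

```java
import java.util.Arrays;

// Illustrative sketch of the old grow-and-copy buffer strategy.
public class GrowCopyBuffer {
    private byte[] buf;
    private int pos;

    public GrowCopyBuffer(int initialSize) {
        buf = new byte[initialSize];
    }

    public void write(byte b) {
        if (pos == buf.length) {
            // A buffer of twice the size is allocated and the old
            // contents are copied; both arrays are live during the
            // copy (e.g. 1MB old + 2MB new = 3MB peak).
            buf = Arrays.copyOf(buf, buf.length * 2);
        }
        buf[pos++] = b;
    }

    public int size() { return pos; }
    public int capacity() { return buf.length; }
}
```

For a ~4GB array this doubling repeats many times, and each doubling re-copies everything written so far.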

This patch optimizes it by maintaining an array list of byte[]. When the old buffer is 
full, it is saved into the list and a new one is created and used as the 
internal buffer. In this case, the above example takes only 2MB (1MB for the old buffer, 
saved in the list, and 1MB for the new buffer).
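The segmented approach can be sketched as follows (again illustrative, not the actual patch; class and method names are hypothetical). No copy happens when a segment fills, and at most one empty segment is allocated beyond the data already written:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the segmented (list of byte[]) buffer.
public class SegmentedBuffer {
    private final int segmentSize;
    private final List<byte[]> full = new ArrayList<>();
    private byte[] current;
    private int pos;

    public SegmentedBuffer(int segmentSize) {
        this.segmentSize = segmentSize;
        this.current = new byte[segmentSize];
    }

    public void write(byte b) {
        if (pos == segmentSize) {
            full.add(current);              // save the full segment, no copy
            current = new byte[segmentSize]; // fresh segment of the same size
            pos = 0;
        }
        current[pos++] = b;
    }

    public long size() {
        return (long) full.size() * segmentSize + pos;
    }

    // Flush all saved segments, then the partial one, to the stream.
    public void writeTo(OutputStream out) throws IOException {
        for (byte[] seg : full) out.write(seg);
        out.write(current, 0, pos);
    }
}
```

Because the saved segments are never re-copied, the 1MB example above peaks at 2MB instead of 3MB, and the cost of writing N bytes stays linear rather than quadratic.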

Together with the "write through" mode introduced in this PR, by which all 
arrays are written through to the underlying stream so that no extra buffer is 
required, the PR fixes the LargeArray issue and also saves memory.
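The idea behind write-through mode can be sketched like this (a hypothetical helper, not the PR's actual code): when the encoded size of a record is known up front, as it is for a primitive array, the length can be written first and the elements streamed directly, with no staging buffer at all:

```java
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative sketch of "write through": the size of an int[] record
// is known before any data is written (4 bytes per element), so it can
// bypass the internal buffer entirely.
public class WriteThroughDemo {
    static void writeArrayThrough(DataOutputStream out, int[] arr)
            throws IOException {
        out.writeInt(arr.length * 4);   // length field written up front
        for (int v : arr) out.writeInt(v); // elements streamed directly
    }
}
```

This avoids buffering a ~4GB array in memory just to learn its size before flushing.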

Thanks!
Lin

-------------

PR: https://git.openjdk.java.net/jdk/pull/2803
