On Fri, 5 Mar 2021 11:05:57 GMT, Lin Zang <lz...@openjdk.org> wrote:

>> Hi Chris,
>> Thanks a lot, I am still working on reducing the memory consumption. So I 
>> think you could help review after my next update :)
>> 
>> BRs,
>> Lin
>
> Updated the patch to reduce memory consumption.
> As mentioned in the CR, an internal buffer is used for the segmented heap 
> dump. The dumped object data are first written into this buffer and then 
> flush()ed once the size is known. When the internal buffer is full, the 
> current implementation does the following:
> 
> - allocate a larger buffer, copy the data from the old buffer into the new 
> one, and then use it as the internal buffer.
> 
> This can cause large memory consumption because the old buffer's data are 
> copied, and the old buffer cannot be freed until the next GC. 
> 
> For example, if the internal buffer's length is 1MB, then when it is full a 
> new 2MB buffer is allocated, so 3MB of memory is actually in use (old buffer 
> + new buffer). In this case, for the ~4GB large array, it keeps allocating 
> new buffers and copying, which is both CPU- and memory-intensive and causes 
> the timeout issue.
> 
> This patch optimizes it by keeping an array list of byte[]. When the old 
> buffer is full, it is saved into the list and a new one is created and used 
> as the internal buffer. With this approach, the above example takes only 2MB 
> (1MB for the old buffer, saved in the list, and 1MB for the new buffer).
> 
> Together with the "write through" mode introduced in this PR, by which all 
> arrays are written through to the underlying stream so that no extra buffer 
> is required, this PR could help fix the LargeArray issue and also save memory.
> 
> Thanks!
> Lin
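The segmented-buffer idea described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual JDK implementation; class and method names (`SegmentedBuffer`, `flushTo`) and the 1MB segment size are assumptions for the example. When the active segment fills up, it is parked in a list and a fresh segment is allocated, so the peak extra allocation is one empty segment rather than "old buffer + doubled buffer":

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the byte[]-list approach described in the patch.
class SegmentedBuffer {
    private static final int SEGMENT_SIZE = 1 << 20; // 1MB per segment (illustrative)
    private final List<byte[]> fullSegments = new ArrayList<>();
    private byte[] current = new byte[SEGMENT_SIZE];
    private int pos = 0;

    void write(byte[] data, int off, int len) {
        while (len > 0) {
            if (pos == current.length) {
                // Segment full: park it and start a new one. The old data is
                // not copied, unlike the grow-and-copy scheme, so peak extra
                // memory is one fresh segment instead of old + doubled buffer.
                fullSegments.add(current);
                current = new byte[SEGMENT_SIZE];
                pos = 0;
            }
            int n = Math.min(len, current.length - pos);
            System.arraycopy(data, off, current, pos, n);
            pos += n;
            off += n;
            len -= n;
        }
    }

    long size() {
        return (long) fullSegments.size() * SEGMENT_SIZE + pos;
    }

    // Once the total size is known, flush all segments to the real stream.
    void flushTo(OutputStream out) throws IOException {
        for (byte[] seg : fullSegments) {
            out.write(seg);
        }
        out.write(current, 0, pos);
        fullSegments.clear();
        pos = 0;
    }
}
```

The design point is that buffered bytes are never recopied as the total grows; they are only written out once, during the final flush when the segment length is known.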

As discussed in CR https://bugs.openjdk.java.net/browse/JDK-8262386, the byte[] 
list is more of an optimization. I will revert it in this PR and create a 
separate CR and PR for it. 

Thanks,
Lin

-------------

PR: https://git.openjdk.java.net/jdk/pull/2803
