Hi,

On 2016-02-08 13:28, Dmitry Samersoff wrote:
Andreas,

Sorry for the delay.

The code changes look good to me.

But the behavior of the non-segmented dump is not clear to me (line 1074): if the dump
is not segmented, then the size of the dump is always less than max_bytes,
and the code below (line 1086) will never be executed.

I think today we could always write a segmented heap dump and
significantly simplify the logic.

Do you mean removing support for non-segmented dumps entirely?
Could I do that as part of this fix? And would it require a CCC request?
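
My reading of the two cases, as a sketch only (the names and the exact
shape of the code around lines 1074 and 1086 are my assumptions, not
quotes from the webrev):

    if (is_segmented_dump) {
      // each HEAP DUMP SEGMENT record carries its own u4 length,
      // so a segment is closed and a new one is started before it
      // would exceed max_bytes (the code around line 1086)
    } else {
      // a single HEAP DUMP record; as you note, its size is by
      // construction always below max_bytes, so the segment
      // rollover code below is never reached on this branch
    }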


Also, I think that current_record_length() doesn't need that many asserts.
One assert(dump_end == (size_t)current_offset(), "checking"); is enough.

If that's OK with you, I'd like to keep the assert(dump_len <= max_juint, "bad dump length"); in case a similar problem ever shows up on some other code path. I have no problem with removing the dump_start assert, though.
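
For reference, with only those two asserts the function would look
roughly like this (a sketch; the dump_start/dump_end bookkeeping shown
here is assumed, not copied from the webrev):

    size_t current_record_length() {
      // the record being written spans [dump_start, dump_end)
      assert(dump_end == (size_t)current_offset(), "checking");
      size_t dump_len = dump_end - dump_start;
      // kept: the length must fit in the hprof u4 record length field
      assert(dump_len <= max_juint, "bad dump length");
      return dump_len;
    }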

Regards,
Andreas


-Dmitry

On 2016-02-01 19:20, Andreas Eriksson wrote:
Hi,

Please review this fix for dumping of long arrays.

Bug:
8144732: VM_HeapDumper hits assert with bad dump_len
https://bugs.openjdk.java.net/browse/JDK-8144732

Webrev:
http://cr.openjdk.java.net/~aeriksso/8144732/webrev.00/

Problem:
The hprof format uses a u4 as its record length field, but an array's
dump data (counted in bytes) can be larger than a u4 can express.
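As a concrete example (assuming 8-byte identifiers on a 64-bit VM): an
object[] with 1,073,741,823 elements needs 1,073,741,823 * 8 =
8,589,934,584 bytes of element data, roughly twice what the u4 length
field can express (max_juint = 4,294,967,295).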

Fix:
Truncate the dump length of the array using a new function,
calculate_array_max_length, which for a given array returns the number
of elements we can dump. That length is then used to truncate arrays
that are too long.
Whenever an array is truncated, a warning is printed:
Java HotSpot(TM) 64-Bit Server VM warning: cannot dump array of type
object[] with length 1,073,741,823; truncating to length 536,870,908
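
Roughly, the idea behind the new function is the following (a sketch
only; the real signature and header accounting are in the webrev, and
header_size/type2aelembytes are my shorthand here):

    // how many elements of this array fit in one hprof record?
    size_t max_bytes = max_juint - header_size;       // u4 limit minus record header
    size_t elem_size = type2aelembytes(element_type); // bytes per element
    size_t max_len   = max_bytes / elem_size;
    if ((size_t)array->length() > max_len) {
      // print the truncation warning and dump only max_len elements
    }

With 8-byte object identifiers that works out to 536,870,908 elements,
i.e. 536,870,908 * 8 = 4,294,967,264 bytes plus a small record header,
just under max_juint, which matches the truncated length in the warning
above.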

Much of the change consists of moving functions needed by
calculate_array_max_length into the DumpWriter and DumperSupport
classes so that they can be accessed from there.

Regards,
Andreas
