Hi Ralf,

Thanks for providing this enhancement to write gzip-compressed heap dumps in parallel!

I reckon it's safe to say that the coding is sophisticated. It would be awesome if you
could sketch, in a source code comment, how HeapDumper, DumpWriter and CompressionBackend
work together to produce the gzipped dump. Just enough to get somebody started, should
they ever have to track down a bug -- an unlikely event, I know ;)

Please find the details of my review below.

Thanks, Richard.
// Not Reviewer

--

### src/hotspot/share/services/diagnosticCommand.cpp

510   _gzip_level("-gz-level", "The compression level from 0 (store) to 9 (best) when writing in gzipped format.",
511               "INT", "FALSE", "1") {

    "FALSE" should be probably false.

### src/hotspot/share/services/diagnosticCommand.hpp
Ok.

### src/hotspot/share/services/heapDumper.cpp

390: Typo: initized

415: Typo: GZipComressor

477: Could you please add a comment explaining how the "HPROF BLOCKSIZE" comment is helpful?

539: Member variables of WriteWork are missing the '_' prefix.

546: Just a comment: WriteWork::in_max is actually a compile-time constant. It would be
     nice if it could be declared as such. One could use templates for this, but then my
     favourite IDE (Eclipse CDT) no longer shows me references and call hierarchies, so I
     don't think it is worth it.
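
     For what it's worth, a minimal sketch of the kind of declaration I had in mind (names
     and sizes are made up, this is not the actual heapDumper.cpp code):

    #include <cstddef>

    // Sketch only: the per-buffer input capacity as a compile-time constant
    // member instead of a plain runtime field or a template parameter.
    struct WriteWork {
      static const size_t in_max = 1 * 1024 * 1024;  // hypothetical 1 MB input capacity

      char*  _in;       // input buffer, in_max bytes large
      size_t _in_used;  // bytes of _in currently filled
    };

    // The constant can then be used in compile-time contexts, e.g.:
    static_assert(WriteWork::in_max >= 4096, "buffers must hold at least one page");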

591: Typo: Removes the first element. Returns NULL is empty.

663: _writer, _compressor, _lock could be const.

762: assert(_active, "Must be active");

     It appears to me that the assertion would fail if an error occurred while creating
     the CompressionBackend.

767: I think _current->in_used doesn't take into account the final 9 bytes that are written
     in DumperSupport::end_of_dump() after the last dump segment has been finished.
     You could call get_new_buffer() instead of the if clause.

903: Typo: Check if we don not waste more than _max_waste

1064: DumpWriter::DumpWriter()

     There doesn't seem to be enough error handling if _buffer cannot be allocated.
     E.g. DumpWriter::write_raw() at line 1091 will enter an endless loop.
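
     Just to sketch the kind of guard I mean (all names are made up, this is not the actual
     DumpWriter code): record the allocation failure and let write_raw() bail out on it:

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>

    // Sketch: remember the allocation failure in the constructor and make
    // write_raw() bail out instead of spinning forever on a NULL buffer.
    class DumpWriterSketch {
      char*       _buffer;   // output buffer (may be NULL if allocation failed)
      size_t      _size;     // capacity of _buffer
      size_t      _pos;      // current write position within _buffer
      const char* _error;    // first error encountered, NULL if none

      void flush() {
        // Hand the filled buffer to the backend; a real implementation could
        // also set _error here on I/O failures.
        _pos = 0;
      }

    public:
      explicit DumpWriterSketch(size_t buffer_size)
        : _buffer((char*)std::malloc(buffer_size)), _size(buffer_size),
          _pos(0), _error(NULL) {
        if (_buffer == NULL) {
          _error = "Could not allocate buffer for dump writer";
        }
      }

      ~DumpWriterSketch() { std::free(_buffer); }

      bool has_error() const { return _error != NULL; }

      void write_raw(const void* s, size_t len) {
        while (len > 0) {
          if (has_error()) return;  // without this check a NULL buffer loops forever
          size_t n = (len < _size - _pos) ? len : _size - _pos;
          std::memcpy(_buffer + _pos, s, n);
          _pos += n;
          s = (const char*)s + n;
          len -= n;
          if (_pos == _size) flush();
        }
      }
    };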

2409: A comment on why Shenandoah is not supported would be good.
      In general I'd say it is good and natural to use the GC worker threads.

### src/hotspot/share/services/heapDumper.hpp
Ok.

### src/java.base/share/native/libzip/zip_util.c

I'm not familiar with zlib, but here are my .02€ :)

1610: It will be hard to beat zlib_block_alloc() and zlib_block_free() performance-wise.
      But have you measured the performance gain? In other words: is it worth it? :)

1655: The result of deflateBound() seems to depend on the header comment, which has not
      been set at this point. Could this be an issue, given that ZIP_GZip_Fully() can take
      a comment?

1658: deflateEnd() should not be called if deflateInit2Wrapper() failed. Otherwise I think
      this could lead to a double free() call.
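
      To make the suggested call order concrete, here is a hedged sketch against plain zlib
      (deflateInit2(), deflateSetHeader(), deflateBound() and deflateEnd() are the real zlib
      calls; everything else, including the function itself, is made up and not the actual
      zip_util.c code):

    #include <cstring>
    #include <zlib.h>

    // Sketch: gzip-compress with an optional header comment. deflateEnd() is
    // only reached after a successful deflateInit2(), and deflateBound() is
    // queried after the gzip header (including the comment) has been set.
    static int gzip_bound_sketch(const char* comment, uLong src_len, uLong* bound) {
      z_stream strm;
      memset(&strm, 0, sizeof(strm));

      // windowBits 15 + 16 selects the gzip wrapper.
      int ret = deflateInit2(&strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED, 15 + 16,
                             8, Z_DEFAULT_STRATEGY);
      if (ret != Z_OK) {
        return ret;  // do NOT call deflateEnd() here: the stream was never set up
      }

      if (comment != NULL) {
        gz_header header;
        memset(&header, 0, sizeof(header));
        header.comment = (Bytef*)comment;  // becomes part of the gzip header
        deflateSetHeader(&strm, &header);
        // (in real code, 'header' must stay alive until deflate() has emitted the header)
      }

      // Only now does deflateBound() know about the header it has to emit.
      *bound = deflateBound(&strm, src_len);

      // ... the deflate() loop would go here ...

      return deflateEnd(&strm);  // matching cleanup for the successful init
    }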

### test/hotspot/jtreg/serviceability/dcmd/gc/HeapDumpCompressedTest.java

66: Maybe additionally check the exit value?

73: It's unclear to me why this fails. Because the dump already exists? Because the level
    is invalid? Reading the comment I'd expect success, not failure.

### test/hotspot/jtreg/serviceability/dcmd/gc/HeapDumpCompressedTestEpsilon.java
Ok.

### test/hotspot/jtreg/serviceability/dcmd/gc/HeapDumpCompressedTestShenandoah.java
Ok.

### test/hotspot/jtreg/serviceability/dcmd/gc/HeapDumpCompressedTestZ.java
Ok.

### test/lib/jdk/test/lib/hprof/parser/GzipRandomAccess.java
Ok.

### test/lib/jdk/test/lib/hprof/parser/HprofReader.java
Ok.

### test/lib/jdk/test/lib/hprof/parser/Reader.java

93: Is the created GzipRandomAccess instance closed somewhere?

-----Original Message-----
From: serviceability-dev <serviceability-dev-boun...@openjdk.java.net> On 
Behalf Of Schmelter, Ralf
Sent: Freitag, 13. März 2020 12:43
To: Ioi Lam <ioi....@oracle.com>; Langer, Christoph <christoph.lan...@sap.com>; 
Yasumasa Suenaga <suen...@oss.nttdata.com>; serguei.spit...@oracle.com; 
hotspot-runtime-...@openjdk.java.net runtime 
<hotspot-runtime-...@openjdk.java.net>
Cc: serviceability-dev@openjdk.java.net
Subject: [CAUTION] RE: RFR(L) 8237354: Add option to jcmd to write a gzipped 
heap dump

Hi,

I have updated the webrev: 
http://cr.openjdk.java.net/~rschmelter/webrevs/8237354/webrev.1/

It has the following significant changes:

- The jcmd now uses two separate flags. The -gz flag is now a boolean flag which toggles the
compression on/off, and the new -gz-level flag can be used to change the compression level. I
tried to change the jlong flag coding to allow the old behavior (only one flag, which acts both
as a boolean flag and a jlong flag), but decided against it, since it changes the semantics of
a jlong flag. And I don't expect the -gz-level flag to be used all that much.

- I no longer use my own threads. Instead I use the WorkGang returned from
CollectedHeap::get_safepoint_workers(). This works fine, apart from Shenandoah GC, which runs
into assertions when calling the CollectedHeap::object_iterate() method from a worker thread.
I'm not sure if the assertion is too strong, but since the GC is currently experimental, I
switch back to single threading in this case (as would be the case for serial GC or epsilon
GC). Using the worker threads removes the problems the original code had regarding destruction
of the monitor used.

- The reported number of bytes is now the one written to disk.

Best regards,
Ralf

-----Original Message-----
From: Ioi Lam <ioi....@oracle.com> 
Sent: Dienstag, 25. Februar 2020 18:03
To: Langer, Christoph <christoph.lan...@sap.com>; Schmelter, Ralf 
<ralf.schmel...@sap.com>; Yasumasa Suenaga <suen...@oss.nttdata.com>; 
serguei.spit...@oracle.com; hotspot-runtime-...@openjdk.java.net runtime 
<hotspot-runtime-...@openjdk.java.net>
Cc: serviceability-dev@openjdk.java.net
Subject: Re: RFR(L) 8237354: Add option to jcmd to write a gzipped heap dump

Hi Christoph,

This sounds fair. I will remove my objection :-)

Thanks
- Ioi
