probably need the named pipe name to be unique "per" dump... s/pipe/$PID/
- Larry
On 2/19/20 9:40 AM, Ioi Lam wrote:
On 2/19/20 7:24 AM, Schmelter, Ralf wrote:
Hi Ioi,
This seems to be an edge case (where your environment has more
RAM than disk)
I would not say it's an edge case. Especially in a cloud environment,
your container does not need much free disk space, since the data is
stored in a database and logging goes to stdout.
I think it would be better to handle this outside of the JVM
(using a named pipe and an external program such as the parallel gzip
"pigz") to limit the maintenance overhead of the JVM.
But then you would have to implement writing the heap dump to a named
pipe (and not only on Unix, but on Windows too). And you would still
want to do the writing in background threads, so most of the code
would stay. You would also need something like netcat on Windows. And
it doesn't cover writing a heap dump on OutOfMemoryError via the
HeapDumpOnOutOfMemoryError VM flag.
And you should compress the hprof file in a specific way, since that
makes direct random access to the gzipped hprof file much faster.
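What I have in mind is, roughly, writing the compressed data as many
independent gzip members of bounded uncompressed size, so a reader can
keep an index and inflate only the chunk it needs. A rough Java sketch
of the idea (the class name, chunk size and index handling are made up
for illustration, not the proposed implementation):

import java.io.*;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPOutputStream;

// Sketch: compress input in fixed-size chunks, each as an independent gzip
// member, and remember where each member starts. Concatenated members are
// still a valid gzip file, but a reader with the index can seek to the
// member containing a given uncompressed offset and inflate only that chunk.
public class ChunkedGzip {
    static final int CHUNK_SIZE = 1024 * 1024; // 1 MB uncompressed per member

    public static void main(String[] args) throws IOException {
        List<long[]> index = new ArrayList<>(); // {uncompressed offset, compressed offset}
        try (InputStream in = new FileInputStream(args[0]);
             FileOutputStream out = new FileOutputStream(args[0] + ".gz")) {
            byte[] buf = new byte[CHUNK_SIZE];
            long uncompressed = 0;
            int n;
            while ((n = readChunk(in, buf)) > 0) {
                index.add(new long[] { uncompressed, out.getChannel().position() });
                GZIPOutputStream member = new GZIPOutputStream(out);
                member.write(buf, 0, n);
                member.finish(); // completes this gzip member without closing 'out'
                uncompressed += n;
            }
        }
        // 'index' would have to be stored somewhere (e.g. in a side file).
    }

    private static int readChunk(InputStream in, byte[] buf) throws IOException {
        int total = 0;
        while (total < buf.length) {
            int n = in.read(buf, total, buf.length - total);
            if (n < 0) break;
            total += n;
        }
        return total;
    }
}

The point is just that the compressed file stays a normal .gz file that
gunzip understands, while tools that know about the chunking can seek
in it.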
Note that I think it is a good idea to be able to write the dump to a
non-file destination. But removing the compression will not save much
code and will make the handling messier.
I was thinking of doing something like this:
$ mkfifo /tmp/pipe
$ cat /tmp/pipe | gzip -c - > /tmp/zipped &
$ jcmd $PID GC.heap_dump filename=/tmp/pipe
You can replace the "> /tmp/zipped" part with a program that reads
from stdin and sends it over the network.
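For example, the program at the end of the pipe could be as simple as
this Java sketch that forwards stdin to a TCP collector (the host, port
and class name are made up):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Reads the (already compressed) dump from stdin and forwards it over TCP.
public class StreamToCollector {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("dump-collector.example.com", 9000);
             OutputStream out = socket.getOutputStream();
             InputStream in = System.in) {
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
        }
    }
}

which would be used like

$ cat /tmp/pipe | gzip -c - | java StreamToCollector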
I tried the above with a recent JDK build (with your changes in
JDK-8234510: Remove file seeking requirement for writing a heap dump),
but it doesn't seem to work, probably because we need to change this
code a little bit (with "don't replace existing file", the open
presumably fails because the FIFO already exists):
http://hg.openjdk.java.net/jdk/jdk/file/7ef41e83066b/src/hotspot/share/services/heapDumper.cpp#l465
DumpWriter::DumpWriter(const char* path) :
  _fd(-1), _bytes_written(0), _pos(0),
  _in_dump_segment(false), _error(NULL) {
  ...
  _fd = os::create_binary_file(path, false); // don't replace existing file <<<
I also saw a post saying that the JVM can write to named pipes on
Windows:
https://stackoverflow.com/questions/634564/how-to-open-a-windows-named-pipe-from-java
There's no built-in mkfifo command on Windows, but the above link
points to a .NET example that creates a named pipe and uses that to
communicate with the JVM.
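Once such a pipe exists, the idea is that \\.\pipe\<name> can be opened
like an ordinary file path. A made-up Java sketch of the writing side
(the pipe name "heapdump" and the file-copy loop are only for
illustration; the real dumper writes from native code):

import java.io.FileInputStream;
import java.io.FileOutputStream;

// Assumes an external program (e.g. the .NET example from the link above)
// has already created a pipe server named "heapdump". The Java side can then
// open \\.\pipe\heapdump like a regular file and stream data into it.
public class WindowsPipeWriter {
    public static void main(String[] args) throws Exception {
        try (FileInputStream in = new FileInputStream(args[0]);
             FileOutputStream out = new FileOutputStream("\\\\.\\pipe\\heapdump")) {
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
        }
    }
}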
I don't know whether this will be a better solution than your proposed
changes, but I think it should be explored as a possible alternative.
It does seem to require a little work to get your whole data
collection system working, but it also seems more flexible and
extensible.
Thanks
- Ioi
Best regards,
Ralf
-----Original Message-----
From: Ioi Lam <ioi....@oracle.com>
Sent: Wednesday, 19 February 2020 01:16
To: serguei.spit...@oracle.com; Schmelter, Ralf
<ralf.schmel...@sap.com>; hotspot-runtime-...@openjdk.java.net
runtime <hotspot-runtime-...@openjdk.java.net>
Cc: Laurence Cable <larry.ca...@oracle.com>;
serviceability-dev@openjdk.java.net
Subject: Re: RFR(L) 8237354: Add option to jcmd to write a gzipped
heap dump
Hi Ralf,
We are usually pretty picky about adding new features into the JVM. This
seems to be an edge case (where your environment has more RAM than
disk). I think it would be better to handle this outside of the JVM
(using a named pipe and an external program such as the parallel gzip
"pigz") to limit the maintenance overhead of the JVM.
This would also have the benefit that you can do it with almost no local
storage -- you can read from the named pipe, optionally compress the
data, and send that over the network.
Thanks
- Ioi