On 6/21/19 12:14 PM, Peter Levart wrote:
On 6/21/19 8:38 PM, Chris Plummer wrote:
On 6/21/19 8:57 AM, David Holmes wrote:
Hi Peter,
On 21/06/2019 7:55 am, Peter Levart wrote:
As far as I know, cron jobs that cleanup /tmp typically remove
files that have not been modified for a while.
On Fedora for example, there is a systemd timer that triggers once
per day and executes systemd-tmpfiles which manages volatile and
temporary files and directories. The configuration for /tmp is the
following:
# Clear tmp directories separately, to make them easier to override
q /tmp 1777 root root 10d
q /var/tmp 1777 root root 30d
The age field (10 days for /tmp) has the following meaning:
The age of a file system entry is determined from its last
modification timestamp (mtime), its last access timestamp (atime),
and (except for directories) its last status change
timestamp (ctime). Any of these three (or two) values will
prevent cleanup if it is more recent than the current time minus
the age field.
So the solution could be for the attach thread (if it is already
started) to update the mtime or ctime of the .java_pid<pid> socket file
periodically so the cleanup job would leave it alone.
What do you think?
I'm not keen on having the attach listener thread periodically
wake up just to do this. Can we not change permissions to protect the
file from external deletion and ensure the VM cleans it up itself?
I think the JVM has a mechanism for cleaning up stale java_pid files
on startup (deleting those for which there is currently no java
process running). We need to make sure that changing permissions does
not break that.
Rather than having the JVM update the mtime on the file, perhaps the
onus should be on the sysadmin who is running scripts to clean up
these files in the first place. The scripts could first update the mtime
on any java_pid files for which there is a java process still running.
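A minimal sketch of such an admin-side refresher, assuming a POSIX shell and the /tmp/.java_pid<pid> naming convention (the function name and directory parameter are illustrative, not part of any JDK tooling):

```shell
#!/bin/sh
# Hypothetical cron-side refresher: bump the mtime of every
# .java_pid<pid> file whose owning process is still alive, so an
# age-based /tmp cleanup job never sees it as stale.
refresh_java_pid_files() {
    dir=$1
    for sock in "$dir"/.java_pid*; do
        [ -e "$sock" ] || continue              # glob matched nothing
        pid=${sock##*/.java_pid}                # pid is the filename suffix
        case $pid in *[!0-9]*) continue;; esac  # skip non-numeric suffixes
        # kill -0 only tests for existence; note it also fails with EPERM
        # for processes owned by other users, so run this as root.
        if kill -0 "$pid" 2>/dev/null; then
            touch -c "$sock"                    # refresh mtime (-c: never create)
        fi
    done
}
```

Run periodically (e.g. from cron, more often than the cleanup job's age threshold) as root, so that sockets belonging to other users' JVMs are refreshed too.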
Problem is that those scripts are usually already set up by default.
OTOH it is relatively easy to create exceptions for files with a
particular pattern. If /tmp/.java_pid* files are already managed by the
java launcher at every execution, then they need not be managed by a
cron job. So there exists a workaround. It would only be more
convenient if it were not needed.
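On systems using systemd-tmpfiles, such an exception could be a drop-in like the following (the file name is hypothetical; per tmpfiles.d(5), an 'x' line excludes paths matching the glob from age-based cleanup):

```
# /etc/tmpfiles.d/java-attach.conf (hypothetical drop-in)
# 'x' lines exclude matching paths from age-based cleanup
x /tmp/.java_pid*
```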
I'm starting to second-guess my thinking that there is any such
.java_pid removal support. I thought I saw it once, but can't find it
now. All I can find is that AttachListener::vm_start() will remove any
stale .java_pid file in place for the current pid. And judging by the
large collection of .java_pid files on my Linux dev box, I doubt there
is any removal.
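As a quick way to check that observation, a shell sketch (names illustrative) that lists .java_pid files whose owning process no longer exists:

```shell
#!/bin/sh
# List .java_pid<pid> files in a directory for which no process with
# that pid is running anymore. Directory is parameterized for
# illustration; a real check would scan /tmp.
list_stale_java_pid_files() {
    dir=$1
    for f in "$dir"/.java_pid*; do
        [ -e "$f" ] || continue                 # glob matched nothing
        pid=${f##*/.java_pid}                   # pid is the filename suffix
        case $pid in *[!0-9]*) continue;; esac  # skip non-numeric suffixes
        # Caveat: kill -0 fails with EPERM for other users' processes,
        # so run as root to avoid false positives.
        kill -0 "$pid" 2>/dev/null || printf '%s\n' "$f"
    done
}
```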
What I might have been remembering is perfdata file removal. In
/tmp/hsperfdata_<uid>/ I'm only seeing perfdata files for existing java
processes.
Chris
Regards, Peter
Chris
David
Regards, Peter
On 6/20/19 10:49 PM, David Holmes wrote:
Sorry it took me a while to understand the specifics of the
problem. :)
David
On 20/06/2019 3:37 am, nijiaben wrote:
Yes Alan, I mean this
------------------ Original ------------------
*From: * "Alan Bateman"<alan.bate...@oracle.com>;
*Date: * Thu, Jun 20, 2019 02:54 PM
*To: * "nijiaben"<nijia...@perfma.com>; "David
Holmes"<david.hol...@oracle.com>;
"serviceability-dev"<serviceability-dev@openjdk.java.net>;
"jdk8u-dev"<jdk8u-...@openjdk.java.net>;
"hotspot-runtime-dev"<hotspot-runtime-...@openjdk.java.net>;
*Subject: * Re: A Bug about the JVM Attach mechanism
On 20/06/2019 05:10, nijiaben wrote:
> :
> I know this mechanism, but can we provide a means of recovery to
> avoid the unavailability caused by accidental deletion?
>
Are you concerned about tmpreaper or cron jobs that periodically
clean up /tmp? There may indeed be an issue for applications that run
for weeks or months. If someone is using jmap, jcmd, or other tools
using the attach API then it will trigger the attach listener to start.
When they come back in a few weeks then the .java_pid<pid> file may have
been removed so they cannot attach. Is this what you are pointing out?
-Alan