On 02/10/2015 12:35 PM, Peter Levart wrote:
ProcessHandle.completableFuture().cancel(true) forcibly destroys
(destroyForcibly()) the process *and* vice versa: destroy[Forcibly]()
cancels the CompletableFuture. I don't know if this is the best way -
I can't decide yet. In particular, it would be hard for the
implementation to achieve atomicity of both destroying the process and
cancelling the future; races are inevitable. So it would be better to
think of a process (and the ProcessHandle representing it) as the 1st
stage in a processing pipeline, where
ProcessHandle.completableFuture() is its dependent stage, tracking
real changes of the process. The behaviour would then be
something like the following:
- ProcessHandle.destroy[Forcibly]() triggers destruction of the
process, which in turn (when successful) triggers completion of the
CompletableFuture - exceptionally, with a CompletionException wrapping
an exception that indicates destruction of the process
(ProcessDestroyedException?).
- ProcessHandle.completableFuture().cancel(true/false) just cancels
the CompletableFuture and does not do anything to the process itself.
In that variant it would perhaps be more appropriate for
ProcessHandle.completableFuture() to be a "factory" for
CompletableFutures, so that each call returns a new, independent
instance.
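To illustrate the factory variant, here is a minimal sketch using plain CompletableFuture. The names dependentOf and exitSource are hypothetical stand-ins for the internal machinery, not proposed API; the point is just that cancelling one factory-produced instance affects neither the source nor its siblings:

```java
import java.util.concurrent.CompletableFuture;

public class FactoryDemo {
    // Hypothetical factory: each call returns a new dependent future derived
    // from the internal source that is completed when the process exits.
    static CompletableFuture<Long> dependentOf(CompletableFuture<Long> source) {
        return source.thenApply(pid -> pid);
    }

    public static void main(String[] args) {
        // Stand-in for the internal per-process completion source.
        CompletableFuture<Long> exitSource = new CompletableFuture<>();

        CompletableFuture<Long> f1 = dependentOf(exitSource);
        CompletableFuture<Long> f2 = dependentOf(exitSource);

        f1.cancel(true); // cancels f1 only; neither the source nor f2 is touched
        System.out.println(f1.isCancelled()); // true
        System.out.println(f2.isCancelled()); // false

        exitSource.complete(42L);             // simulate process exit
        System.out.println(f2.join());        // 42
    }
}
```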
What do you think?
Contemplating this a little more: perhaps the singleton-per-pid
CompletionStage could be OK if it were a "mirror" of the real process
state. For that purpose, instead of .completableFuture(), the method
would be:
public CompletionStage<ProcessHandle> completionStage()
Returns a CompletionStage<ProcessHandle> for the process. The
CompletionStage provides supporting dependent functions and actions that
are run upon process completion.
Returns:
a CompletionStage<ProcessHandle> for the ProcessHandle; the same
instance is returned for each unique pid.
This would provide the cleanest API, I think, as CompletionStage does
not have any cancel(), complete(), obtrudeXXX() or get() methods. One
could still obtain a CompletableFuture by calling .toCompletableFuture()
on the CompletionStage, but that future would be a 2nd-stage future
(as if obtained via .thenApply(x -> x)), which would not propagate
cancel(true) to process destruction.
The implementation could still use CompletableFuture under the hood,
but expose it wrapped in a delegating CompletionStage proxy.
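The key property - that toCompletableFuture().cancel(true) cannot reach the internal future, and hence cannot destroy the process - can be sketched with a plain dependent stage. completionStageOver is a hypothetical stand-in for the implementation detail, not proposed API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class StageDemo {
    // Expose the internal future only as a dependent (2nd-stage)
    // CompletionStage, so user-level cancellation cannot reach the internal
    // future (and therefore cannot trigger process destruction).
    static CompletionStage<Long> completionStageOver(CompletableFuture<Long> internal) {
        return internal.thenApply(x -> x); // a delegating 2nd stage
    }

    public static void main(String[] args) {
        // Stand-in for the future the implementation completes on process exit.
        CompletableFuture<Long> internal = new CompletableFuture<>();
        CompletionStage<Long> stage = completionStageOver(internal);

        // A user can still obtain a CompletableFuture, but it is a 2nd-stage one:
        boolean cancelled = stage.toCompletableFuture().cancel(true);
        System.out.println(cancelled);         // true: the 2nd-stage future is cancelled
        System.out.println(internal.isDone()); // false: the internal future is untouched
    }
}
```

A full delegating proxy would additionally prevent toCompletableFuture() from ever handing out the internal instance; for the cancellation property alone, the thenApply(x -> x) trick is enough.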
So the javadoc might be written as:
public abstract void destroy()
Kills the process. Whether the process represented by this Process
object is forcibly terminated or not is implementation dependent. If the
process is not alive, no action is taken.
If/when the process dies as the result of calling destroy(), the
completionStage() completes exceptionally with CompletionException,
wrapping ProcessDestroyedException.
public abstract ProcessHandle destroyForcibly()
Kills the process. The process represented by this ProcessHandle object
is forcibly terminated. If the process is not alive, no action is taken.
If/when the process dies as the result of calling destroyForcibly(), the
completionStage() completes exceptionally with CompletionException,
wrapping ProcessDestroyedException.
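Assuming the hypothetical ProcessDestroyedException above (it is not an existing JDK class, so the sketch defines its own), a dependent stage could distinguish destruction from normal completion roughly like this - describe and the stand-in future are illustrative only:

```java
import java.util.concurrent.CompletableFuture;

public class DestroyDemo {
    // Hypothetical exception from the proposal; not an existing JDK class.
    static class ProcessDestroyedException extends RuntimeException {}

    // How a dependent stage could tell destruction apart from other outcomes.
    // Dependent stages see the failure wrapped in CompletionException, so we
    // inspect the cause.
    static String describe(CompletableFuture<Long> stage) {
        return stage
            .thenApply(pid -> "exited normally")
            .exceptionally(t -> t.getCause() instanceof ProcessDestroyedException
                                ? "destroyed"
                                : "failed: " + t)
            .join();
    }

    public static void main(String[] args) {
        // Stand-in for completionStage(), completed exceptionally by destroy().
        CompletableFuture<Long> stage = new CompletableFuture<>();
        stage.completeExceptionally(new ProcessDestroyedException());
        System.out.println(describe(stage)); // destroyed
    }
}
```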
But I'm still unsure whether it would be better for the
completionStage() to complete normally in any case - unless the fact
that the process died as a result of being killed could be reliably
communicated regardless of who was responsible for the killing (either
via ProcessHandle.destroy() or via a KILL/TERMINATE signal originating
from outside the VM).
Peter