Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-20 Thread Roger Riggs

Hi Peter,

yes, your patch does reduce the code quite a bit except where the 
complexity is needed.


The commonPool does have a soft limit (256/system property) on the
number of compensating threads it will start.
That limit would only come into play for subclasses of Process that did
not properly override onExit to delegate to the underlying builtin
Process implementation.

So, probably not a big risk.

It might be desirable to put the loop and exception handling inside the
ManagedBlocker, but the difference is only in the creation of additional
inner class instances in the case of InterruptedExceptions.
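
A minimal sketch of a ManagedBlocker around Process.waitFor(), with the
retry loop and InterruptedException handling kept inside block(); the names
are illustrative only, not the actual webrev code:

    import java.util.concurrent.ForkJoinPool;

    // Sketch only: lets a ForkJoinPool compensate while waitFor() blocks.
    class WaitForBlocker implements ForkJoinPool.ManagedBlocker {
        private final Process process;
        private Integer exitValue;      // null until the process has exited
        private boolean interrupted;

        WaitForBlocker(Process process) { this.process = process; }

        @Override
        public boolean block() {
            try {
                exitValue = process.waitFor();  // may block; the pool can add a spare thread
            } catch (InterruptedException e) {
                interrupted = true;             // remember and let managedBlock retry
            }
            return isReleasable();
        }

        @Override
        public boolean isReleasable() {
            return exitValue != null;
        }

        int awaitExit() throws InterruptedException {
            ForkJoinPool.managedBlock(this);    // loops on block() until releasable
            if (interrupted) Thread.currentThread().interrupt();
            return exitValue;
        }
    }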

Thanks, Roger



On 5/20/2015 4:39 AM, Peter Levart wrote:

Hi Roger,

I looked at Martin's idea and I think that we don't need the
AsyncExecutor at all (it already sounds like I hate it ;-). Using
ManagedBlocker, a ForkJoinPool can compensate and grow its pool as
needed when Process.waitFor() blocks. So we could leverage this
feature and simplify things even further:


http://cr.openjdk.java.net/~plevart/jdk9-sandbox/JDK-8046092-branch/webrev.03/ 



Passing a commonPool() to xxxAsync() methods is unneeded as the
default is exactly the same. If CompletableFuture ever gets a feature
to specify a default Executor for all its descendants, then we can
revisit this if needed.


What do you think?

Regards, Peter

On 05/19/2015 10:15 PM, Roger Riggs wrote:
The webrev, javadoc, and specdiffs have been updated to address 
recent recommendations:


Please review and comment:

Webrev:
http://cr.openjdk.java.net/~rriggs/webrev-ph/ (May 19)

javadoc:
http://cr.openjdk.java.net/~rriggs/ph-apidraft/ (May 19)

Diffs of the spec/javadoc from previous draft:
http://cr.openjdk.java.net/~rriggs/ph-diffs-2015-05-19/overview-summary.html 



Thanks, Roger






Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-20 Thread Peter Levart

Hi Roger,

I looked at Martin's idea and I think that we don't need the
AsyncExecutor at all (it already sounds like I hate it ;-). Using
ManagedBlocker, a ForkJoinPool can compensate and grow its pool as
needed when Process.waitFor() blocks. So we could leverage this feature
and simplify things even further:


http://cr.openjdk.java.net/~plevart/jdk9-sandbox/JDK-8046092-branch/webrev.03/

Passing a commonPool() to xxxAsync() methods is unneeded as the default
is exactly the same. If CompletableFuture ever gets a feature to specify
a default Executor for all its descendants, then we can revisit this if
needed.
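
As a concrete illustration of that point, the two calls below behave the
same by default, since the common pool is already the default async
executor (a small sketch only):

    // Equivalent by default: the common pool is the default async executor.
    CompletableFuture<String> cf = CompletableFuture.completedFuture("done");
    cf.thenApplyAsync(String::length);
    cf.thenApplyAsync(String::length, ForkJoinPool.commonPool());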


What do you think?

Regards, Peter

On 05/19/2015 10:15 PM, Roger Riggs wrote:
The webrev, javadoc, and specdiffs have been updated to address recent 
recommendations:


Please review and comment:

Webrev:
http://cr.openjdk.java.net/~rriggs/webrev-ph/ (May 19)

javadoc:
http://cr.openjdk.java.net/~rriggs/ph-apidraft/ (May 19)

Diffs of the spec/javadoc from previous draft:
http://cr.openjdk.java.net/~rriggs/ph-diffs-2015-05-19/overview-summary.html 



Thanks, Roger




Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-19 Thread Ivan Gerasimov



On 19.05.2015 23:15, Roger Riggs wrote:
The webrev, javadoc, and specdiffs have been updated to address recent 
recommendations:


Please review and comment:

Webrev:
http://cr.openjdk.java.net/~rriggs/webrev-ph/ (May 19)


src/java.base/windows/classes/java/lang/ProcessImpl.java :

37 import java.lang.Override;

must be some over-obliging IDE? :-)

Sincerely yours,
Ivan


javadoc:
http://cr.openjdk.java.net/~rriggs/ph-apidraft/ (May 19)

Diffs of the spec/javadoc from previous draft:
http://cr.openjdk.java.net/~rriggs/ph-diffs-2015-05-19/overview-summary.html 



Thanks, Roger






Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-19 Thread Roger Riggs

Hi Peter,

On 5/19/2015 4:34 AM, Peter Levart wrote:

Hi Roger,

On 05/18/2015 11:49 PM, Roger Riggs wrote:
The earlier spec specifically used the common pool but, from a
specification point of view, it allows more flexibility in the
implementation not to bind onExit to the commonPool.
It's not clear that the attributes of threads used to handle output or
managing Processes are the same as those attributed to the commonPool.

The default sizing of the commonPool is based on the number of processors
and does not grow (except via the ManagedBlocker mechanism).
The number of threads waiting for processes is unrelated to the number
of CPUs.
It is simpler to maintain to use a separate pool and not depend on
consistent assumptions between Process handling and the commonPool.


I'm not advocating to use commonPool for waiting on the process 
exit(s). We have special low-stack-size reaper threads for this 
purpose and in future there could be just one thread maybe. I'm also 
not advocating to use commonPool in default Process.onExit() 
implementation where the thread must wait for process to exit. I'm 
just questioning the use of commonPool vs. AsyncExecutor in 
ProcessImpl.onExit() and ProcessHandleImpl.onExit() where there is an 
async CompletableFuture stage inserted to shield from exposing 
internal low-stack-size threads.


The code that is executed by commonPool (or AsyncExecutor) for onExit()
continuations doesn't wait for the process to exit. The number of
processes spawned does not matter directly. Only the rate of process 
exits does. The code executed will be a cleanup-type / 
post-exit-actions-type of code which seems to be typical code for 
commonPool. Code dealing with process input/output will typically be 
executed concurrently with the Process while the process is still 
running and not by onExit() continuations.

makes sense



On 05/18/2015 11:49 PM, Roger Riggs wrote:
One alternative is for onExit to return a CF subclass that maps
synchronous methods to the corresponding async methods.  Each call to
onExit would return a new proxy that would delegate to a new CF created
from the ExitCompletion.handle(...).
For all requests that involved calling app code it would delegate to the
corresponding async method.  For example,

    CompletableFuture<ProcessHandle> cf =
        ProcessHandleImpl.completion(getPid(), false)
            .handle((exitStatus, unusedThrowable) -> this);
    return new CompletionProxy(cf);

The CompletionProxy can ensure that all CFs that call application
code are Async.
It would override the synchronous methods and redirect to the
corresponding Async method.

    public <U> CompletableFuture<U> thenApply(Function<? super ProcessHandle, ? extends U> fn) {
        return thenApplyAsync(fn);
    }

The CompletionProxy can also supply the Executor as needed unless
supplied by the application.
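
A rough sketch of the shape being described (CompletionProxy is the
hypothetical name from this thread; only two overrides are shown and a real
class would cover the rest of the synchronous continuation methods):

    import java.util.concurrent.CompletableFuture;
    import java.util.function.Consumer;
    import java.util.function.Function;

    // Sketch: synchronous continuation methods are redirected to their Async
    // variants so user code never runs on the internal completing thread.
    class CompletionProxy extends CompletableFuture<ProcessHandle> {

        CompletionProxy(CompletableFuture<ProcessHandle> source) {
            // complete this proxy when the internal completion finishes
            source.whenComplete((value, ex) -> {
                if (ex != null) completeExceptionally(ex); else complete(value);
            });
        }

        @Override
        public <U> CompletableFuture<U> thenApply(
                Function<? super ProcessHandle, ? extends U> fn) {
            return thenApplyAsync(fn);       // force the async variant
        }

        @Override
        public CompletableFuture<Void> thenAccept(Consumer<? super ProcessHandle> action) {
            return thenAcceptAsync(action);  // force the async variant
        }
        // ...and likewise for the other synchronous continuation methods
    }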


It adds a simple delegation but removes the extra task dispatch
except where needed.


Does that work as you would expect?

Thanks, Roger


I thought about that too. Unfortunately, there are also xxxAsync(...,
Executor) methods that appear to attach asynchronous continuations.
And they do, if the passed-in Executor dispatches to separate threads. But
users can pass in a "SynchronousExecutor" like the following:


public class SynchronousExecutor implements Executor {
    @Override
    public void execute(Runnable command) {
        command.run();
    }
}

...which makes xxxAsync(..., Executor) methods equivalent to 
synchronous-continuation-attaching methods.
It might be a bit interventionist, but it's possible to check the concrete
type of the executors and allow direct use of those that are built-in;
or to replace unknown ones with a built-in.
It would be lighter weight than always queuing a task and potentially
starting up a new thread.
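
Purely as an illustration of that idea (the helper name and the choice of
fallback are hypothetical, not from the webrev):

    import java.util.concurrent.Executor;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.ThreadPoolExecutor;

    class ExecutorScreen {
        // Sketch: allow executors known to dispatch to their own threads;
        // replace anything unknown (e.g. a synchronous executor) with a
        // built-in pool so user code cannot run on internal threads.
        static Executor screened(Executor userSupplied) {
            if (userSupplied instanceof ForkJoinPool
                    || userSupplied instanceof ThreadPoolExecutor) {
                return userSupplied;
            }
            return ForkJoinPool.commonPool();   // hypothetical built-in fallback
        }
    }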


It would be nice to have an API that guaranteed division of threads 
between internal-code and user-code threads, but such API would have 
to be constructed around something different than Executor interface. 
I think what we have with insertion of an async stage is the best we 
can do with CompletableFuture API.

ok, the implementation can be updated if something new becomes available.


The only question remaining is whether to use AsyncExecutor in 
ProcessImpl and ProcessHandleImpl .onExit() methods for executing the 
inserted async stage (and any synchronous continuation of it), or only 
in default implementation of Process.onExit() for waiting on process 
exit. The AsyncExecutor properties are such that it doesn't pre-start 
any threads and if not used, doesn't represent any system overhead. 
This is good if it is normally not used and would make it a good 
candidate for default Process.onExit() implementation which is used 
only in uncommon custom Process implementations. Using it in
ProcessImpl and ProcessHandleImpl .onExit() methods means it is used
after each onExit() call, albeit potentially only briefly.

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-19 Thread Roger Riggs

Hi Paul,

a couple of followup responses...

On 5/18/2015 5:16 PM, Paul Sandoz wrote:




  251 /**
  252  * Kills the subprocess forcibly. The subprocess represented by this
  253  * {@code Process} object is forcibly terminated.
  254  * Forcible process destruction is defined as the immediate 
termination of a
  255  * process, whereas normal termination allows a process to shut down 
cleanly.
  256  * If the process is not alive, no action is taken.
  257  * 
  258  * The {@link java.util.concurrent.CompletableFuture} from {@link 
#onExit} is
  259  * {@link java.util.concurrent.CompletableFuture#complete completed}
  260  * when the process has terminated.
  261  *

insert @implSpec

  262  * The default implementation of this method invokes {@link 
#destroy}
  263  * and so may not forcibly terminate the process.

insert @implNote

  264  * Concrete implementations of this class are strongly encouraged to 
override
  265  * this method with a compliant implementation.

legacy spec  - move up before other @ tags

  266  * Invoking this method on {@code Process} objects returned by
  267  * {@link ProcessBuilder#start} and {@link Runtime#exec} forcibly 
terminate
  268  * the process.
  269  *

Use @ImplNote?

Most of this is spec, not informational.

Sorry i meant @implSpec, including for the "Concrete impls..." bit.

Perhaps I misunderstood the narrow scope of @implSpec.
The @implSpec is only for the specific implementation of the method 
being declared.

The concrete implementations are internal subclasses and are out of that scope.
...


  361  * If the process is {@link #isAlive not alive} the {@link 
CompletableFuture}
  362  * is immediately {@link 
java.util.concurrent.CompletableFuture#complete completed}.

s/is immediately/is returned/ ?

The important action to be specified is that the CF is completed() immediately.
Potentially it could say it is completed before the CF is returned from 
onExit().


Yeah, that is what i was trying to get across, it returns with an already 
completed CF.

ok;
 * If the process is not alive the CompletableFuture returned has 
been completed.







  406  * @return a {@code CompletableFuture} for the Process;
  407  * a unique instance is returned for each call to {@code onExit}.
  408  *

Any need to mention about unique instances?

yes, since the hierarchy of the CF instances is visible to the app and it makes
a difference whether a unique CF is returned for each call or whether
the same CF is returned from each call.

Perhaps say "Returns a new CF..." and "@return a new {@code ..." ?

ok




  429 /**
  430  * Returns a ProcessHandle for the Process.
  431  *

Some methods talk about "the subprocess" where as others talk about "the Process" and 
others "the process".

That variation of terminology predates this update.

I think it's best to try and be consistent throughout the class e.g. keep with 
the existing term or change to consistently use a new term.




  456 /**
  457  * Returns a snapshot of information about the process.
  458  *
  459  *  An {@link ProcessHandle.Info} instance has various accessor 
methods
  460  * that return information about the process, if the process is alive 
and
  461  * the information is available, otherwise {@code null} is returned.
  462  *
  463  * @implSpec
  464  * This implementation returns information about the process as:
  465  * {@link #toHandle toHandle().info()}.
  466  *
  467  * @return a snapshot of information about the process; non-null

Dangling "non-null". Do you mean it's can never be null or ", otherwise null if the 
process is not alive or such information is not available"? I suspect the former now all Info 
methods return Optional.

It is always non-null.

Ok, so the first paragraph needs tweaking.
ok, the description of the methods of the info class are better left to 
the Info class. ...


   86 /**
   87  * Returns the native process ID of the process. The native process 
ID is an
   88  * identification number that the operating system assigns to the 
process.
   89  *
   90  * @return the native process ID of the process
   91  * @throws UnsupportedOperationException if the implementation
   92  * does not support this operation
   93  */
   94 long getPid();

In what cases could this throw a USO if the static "of" method did not?

It's in an interface that might have arbitrary implementations. In a theoretical
Process implementation for remote processes there might not be a viable pid
or the access controls might be such that the PID is meaningless or restricted.


Ok, i think i get it: a call to ProcessHandle.getPid obtained from
ProcessHandle.of will never throw a USO but other implementations of
ProcessHandle might.

yes


  326  * @return an {@code Optional} for the process to be
  327 

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-19 Thread Peter Levart

Hi Roger,

On 05/18/2015 11:49 PM, Roger Riggs wrote:
The earlier spec specifically used the common pool but from a 
specification point of view
it allows more flexibility in the implementation not to bind onExit to 
the commonPool.
Its not clear that the attributes of threads used to handle output or 
managing Processes

are the same as those attributed to the commonPool.

The default sizing of the commonPool is based on the number of processors
and does not grow (except via the ManagedBlockers) mechanism.
The number of threads waiting for processes is unrelated to the number 
of CPUs.
It is simpler to maintain to use a separate pool and not depend on 
consistent

assumptions between Process handling and the commonPool.


I'm not advocating to use commonPool for waiting on the process exit(s). 
We have special low-stack-size reaper threads for this purpose and in 
future there could be just one thread maybe. I'm also not advocating to 
use commonPool in default Process.onExit() implementation where the 
thread must wait for process to exit. I'm just questioning the use of 
commonPool vs. AsyncExecutor in ProcessImpl.onExit() and 
ProcessHandleImpl.onExit() where there is an async CompletableFuture 
stage inserted to shield from exposing internal low-stack-size threads.


The code that is executed by commonPool (or AsyncExecutor) for onExit()
continuations doesn't wait for the process to exit. The number of processes
spawned does not matter directly. Only the rate of process exits does. 
The code executed will be a cleanup-type / post-exit-actions-type of 
code which seems to be typical code for commonPool. Code dealing with 
process input/output will typically be executed concurrently with the 
Process while the process is still running and not by onExit() 
continuations.
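
To make the distinction concrete, typical usage along those lines might look
like this (a sketch, assuming the draft API where Process.onExit() returns a
CompletableFuture for the Process; the command is just an example):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    class OnExitUsage {
        public static void main(String[] args) throws IOException {
            Process p = new ProcessBuilder("ls").start();

            // Output is consumed concurrently, while the process is running,
            // not from an onExit() continuation.
            try (BufferedReader out = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                out.lines().forEach(System.out::println);
            }

            // The continuation only performs post-exit cleanup/bookkeeping,
            // i.e. a short-lived task of the kind the common pool expects.
            p.onExit()
             .thenAccept(proc -> System.out.println("exited with " + proc.exitValue()))
             .join();   // only so this small demo waits for the continuation
        }
    }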


On 05/18/2015 11:49 PM, Roger Riggs wrote:
One alternative is for onExit to return a CF subclass that maps 
synchronous methods
to the corresponding async methods.  Each call to onExit would return 
a new proxy
that would delegate to a new CF created from the 
ExitCompletion.handle(...).
For all requests that involved calling app code it would delegate to 
the corresponding

async method.  For example,

CompletableFuture cf =
ProcessHandleImpl.completion(getPid(), false)
.handle((exitStatus, unusedThrowable) -> this);
return new CompletionProxy(cf);

The CompletionProxy can ensure that all CFs that call application code
are Async.
It would override the synchronous methods and redirect to the
corresponding Async method.

    public <U> CompletableFuture<U> thenApply(Function<? super ProcessHandle, ? extends U> fn) {
        return thenApplyAsync(fn);
    }

The CompletionProxy can also supply the Executor as needed unless
supplied by the application.


It adds a simple delegation but removes the extra task dispatch except
where needed.


Does that work as you would expect?

Thanks, Roger


I thought about that too. Unfortunately, there are also xxxAsync(...,
Executor) methods that appear to attach asynchronous continuations. And
they do, if the passed-in Executor dispatches to separate threads. But users
can pass in a "SynchronousExecutor" like the following:


public class SynchronousExecutor implements Executor {
    @Override
    public void execute(Runnable command) {
        command.run();
    }
}

...which makes xxxAsync(..., Executor) methods equivalent to 
synchronous-continuation-attaching methods.


It would be nice to have an API that guaranteed division of threads 
between internal-code and user-code threads, but such API would have to 
be constructed around something different than Executor interface. I 
think what we have with insertion of an async stage is the best we can 
do with CompletableFuture API.


The only question remaining is whether to use AsyncExecutor in 
ProcessImpl and ProcessHandleImpl .onExit() methods for executing the 
inserted async stage (and any synchronous continuation of it), or only 
in default implementation of Process.onExit() for waiting on process 
exit. The AsyncExecutor properties are such that it doesn't pre-start 
any threads and if not used, doesn't represent any system overhead. This 
is good if it is normally not used and would make it a good candidate 
for default Process.onExit() implementation which is used only in 
uncommon custom Process implementations. Using it in ProcessImpl and
ProcessHandleImpl .onExit() methods means it is used after each
onExit() call, albeit potentially only briefly, so there's a chance that
only one thread will ever be started and will die after every 60s of 
inactivity. commonPool() is no different from that perspective, but 
there is a chance, being a common pool, that it is used for other 
purposes too, so potentially less thread starts and stops can be 
achieved. There's also the lack of uniformity if AsyncExecutor is used 
which can have surprising effects. For example, a u
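
An executor with the properties described above (no pre-started threads,
on-demand threads that die after 60s of inactivity) could look roughly like
the following; this is a sketch only, not the webrev's AsyncExecutor, and the
thread factory is simplified:

    import java.util.concurrent.Executor;
    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    class AsyncExecutorSketch {
        // No core threads are pre-started; threads are created on demand and
        // terminate after 60 seconds of inactivity, so an unused executor has
        // essentially no footprint.
        static final Executor EXECUTOR = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE,
                60L, TimeUnit.SECONDS,
                new SynchronousQueue<>(),
                r -> {
                    Thread t = new Thread(r);   // the real code would use InnocuousThread
                    t.setDaemon(true);
                    return t;
                });
    }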

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-18 Thread Roger Riggs

Hi Peter,

On 5/15/2015 6:35 AM, Peter Levart wrote:

Hi Roger,

On 05/14/2015 03:44 PM, Roger Riggs wrote:

Hi Peter,

On 5/14/15 8:19 AM, Peter Levart wrote:

Hi Roger,

The new API using Optional(s) looks fine. In particular for the 
ProcessHandle returning methods. They now either return 
Stream or Optional.


At some point in the development of this API, the implementation 
introduced the AsyncExecutor to execute synchronous continuations of 
the onExit() returned CompletableFuture(s). What was the main 
motivation for this given that:
- previously, ForkJoinPool.commonPool() was used instead that by 
default possesses some similar characteristics (Innocuous threads 
when SecurityManager is active)

The AsyncExecutor also uses InnocuousThreads.


So that's not what is lost or gained by AsyncExecutor when comparing 
it with commonPool().
Are there other good attributes of commonPool that would be missed by 
using a separate pool?




- this AsyncExecutor is only effective for 1st "wave" of synchronous
continuations. Asynchronous continuations and synchronous
continuations following them will still use ForkJoinPool.commonPool()
Unfortunately, the common ForkJoinPool assumes that tasks queued to
it complete relatively quickly and free the thread.  It does not grow
the number of threads and is not appropriate for tasks that block for
indefinite periods as might be needed to wait for a Process to exit.


Ok, AsyncExecutor might be required for default implementation of 
Process.onExit(). But waiting on process exit in ProcessImpl and 
ProcessHandle is performed by internal 32K stack-sized reaper threads 
and what we did in ProcessImpl.onExit() was this (before the 
AsyncExecutor):


@Override
public CompletableFuture<Process> onExit() {
    return ProcessHandleImpl.completion(pid, false)
        .handleAsync((exitStatus, unusedThrowable) -> this);
}

Which means that FJP.commonPool() thread is employed after 
ProcessHandleImpl.completion() is completed. Meaning that just user 
code is run by commonPool() and that code does not wait for process to 
exit because it already did exit. User code might run for extended 
period of time, but any use of CompletableFuture is susceptible to 
that abuse. Are you afraid that users of Process API are not common 
CompletableFuture users and are not aware of commonPool() constraints?
The earlier spec specifically used the common pool but, from a
specification point of view, it allows more flexibility in the
implementation not to bind onExit to the commonPool.
It's not clear that the attributes of threads used to handle output or
managing Processes are the same as those attributed to the commonPool.

The default sizing of the commonPool is based on the number of processors
and does not grow (except via the ManagedBlocker mechanism).
The number of threads waiting for processes is unrelated to the number
of CPUs.
It is simpler to maintain to use a separate pool and not depend on
consistent assumptions between Process handling and the commonPool.





Would an alternative be to define two overloaded onExit() methods in
the style of CompletableFuture itself?


CompletableFuture<Process> onExit();
CompletableFuture<Process> onExit(Executor executor);

...and give the user a chance to supply its own Executor if the
default ForkJoinPool.commonPool() does not fit?
It is only one more method in PH and Process but that function is
available from CompletableFuture, though perhaps not as conveniently.
The onExit method returns a CompletableFuture that has the entire
complement of synchronous and async methods available to it.  The
application can control where subsequent computations are performed.


That's true for Async methods. Synchronous continuations are 
executed by whichever thread executed the previous stage. We insert an 
asynchronous stage to shield the internal 32K threads from user code. 
What we do by executing that stage in AsyncExecutor might be desirable 
when the user attaches a synchronous continuation. But when he 
attaches an asynchronous one, the thread from AsyncExecutor is used 
just for mapping the result into Process instance and triggering the 
execution of next stage. Can we eliminate this unnecessary 
asynchronous stage?
One alternative is for onExit to return a CF subclass that maps
synchronous methods to the corresponding async methods.  Each call to
onExit would return a new proxy that would delegate to a new CF created
from the ExitCompletion.handle(...).
For all requests that involved calling app code it would delegate to the
corresponding async method.  For example,

    CompletableFuture<ProcessHandle> cf =
        ProcessHandleImpl.completion(getPid(), false)
            .handle((exitStatus, unusedThrowable) -> this);
    return new CompletionProxy(cf);

The CompletionProxy can ensure that all CFs that call application code
are Async.
It would override the synchronous methods and redirect to the
corresponding Async method.

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-18 Thread Paul Sandoz

On May 18, 2015, at 10:49 PM, Roger Riggs  wrote:

> Hi Paul,
> 
> Thanks for the comments.
> 
> On 5/18/2015 7:58 AM, Paul Sandoz wrote:
>> Hi Roger,
>> 
>> I mostly focused on the specification.
>> 
>> Paul.
>> 
>> 
>> Process
>> --
>> 
>>  35  * {@code Process} provides control of native processes started by
>>  36  * ProcessBuilder.start and Runtime.exec.
>> 
>> Link to those methods?
> Links are not preferred in the first sentence used in the class summary.
> They are linked in the paragraph that follows

Ok, seems odd to me but oh well..


>> 
>> 
>>   92 /**
>>   93  * Default constructor for Process.
>>   94  */
>>   95 public Process() {}
>> 
>> Is that redundant?
> It seemed good form to document the constructor exists.
> The implicit public zero-arg constructor can't have javadoc.

But there is nothing to say :-)


>> 
>> 
>>  251 /**
>>  252  * Kills the subprocess forcibly. The subprocess represented by this
>>  253  * {@code Process} object is forcibly terminated.
>>  254  * Forcible process destruction is defined as the immediate 
>> termination of a
>>  255  * process, whereas normal termination allows a process to shut 
>> down cleanly.
>>  256  * If the process is not alive, no action is taken.
>>  257  * 
>>  258  * The {@link java.util.concurrent.CompletableFuture} from {@link 
>> #onExit} is
>>  259  * {@link java.util.concurrent.CompletableFuture#complete completed}
>>  260  * when the process has terminated.
>>  261  *
> insert @implSpec
>>  262  * The default implementation of this method invokes {@link 
>> #destroy}
>>  263  * and so may not forcibly terminate the process.
> insert @implNote
>>  264  * Concrete implementations of this class are strongly encouraged 
>> to override
>>  265  * this method with a compliant implementation.
> legacy spec  - move up before other @ tags
>>  266  * Invoking this method on {@code Process} objects returned by
>>  267  * {@link ProcessBuilder#start} and {@link Runtime#exec} forcibly 
>> terminate
>>  268  * the process.
>>  269  *
>> 
>> Use @ImplNote?
> Most of this is spec, not informational.

Sorry i meant @implSpec, including for the "Concrete impls..." bit.


>> 
>> 
>>  270  * Note: The subprocess may not terminate immediately.
>>  271  * i.e. {@code isAlive()} may return true for a brief period
>>  272  * after {@code destroyForcibly()} is called. This method
>>  273  * may be chained to {@code waitFor()} if needed.
>>  274  *
>> 
>> Use @apiNote?
> seems reasonable
> But the resulting javadoc breaks up the flow of information. the API note 
> that a developer
> would want to know about is far from the rest of the method description.

Ok.


>> 
>> 
>>  361  * If the process is {@link #isAlive not alive} the {@link 
>> CompletableFuture}
>>  362  * is immediately {@link 
>> java.util.concurrent.CompletableFuture#complete completed}.
>> 
>> s/is immediately/is returned/ ?
> The important action to be specified is that the CF is completed() 
> immediately.
> Potentially it could say it is completed before the CF is returned from 
> onExit().
> 

Yeah, that is what i was trying to get across, it returns with an already 
completed CF.


>> 
>> 
>>  406  * @return a {@code CompletableFuture} for the Process;
>>  407  * a unique instance is returned for each call to {@code onExit}.
>>  408  *
>> 
>> Any need to mention about unique instances?
> yes, since the hierarchy of the CF instances is visible to the app and it 
> makes
> a difference whether a unique CF is returned for each call or whether
> the same CF is returned from each call.

Perhaps say "Returns a new CF..." and "@return a new {@code ..." ?


>> 
>> 
>>  429 /**
>>  430  * Returns a ProcessHandle for the Process.
>>  431  *
>> 
>> Some methods talk about "the subprocess" where as others talk about "the 
>> Process" and others "the process".
> That variation of terminology predates this update.

I think it's best to try and be consistent throughout the class e.g. keep with 
the existing term or change to consistently use a new term.


>> 
>> 
>>  456 /**
>>  457  * Returns a snapshot of information about the process.
>>  458  *
>>  459  *  An {@link ProcessHandle.Info} instance has various accessor 
>> methods
>>  460  * that return information about the process, if the process is 
>> alive and
>>  461  * the information is available, otherwise {@code null} is returned.
>>  462  *
>>  463  * @implSpec
>>  464  * This implementation returns information about the process as:
>>  465  * {@link #toHandle toHandle().info()}.
>>  466  *
>>  467  * @return a snapshot of information about the process; non-null
>> 
>> Dangling "non-null". Do you mean it can never be null or ", otherwise null 
>> if the process is not alive or such information is not available"? I suspect 
>> the former now that all Info methods return Optional.

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-18 Thread Roger Riggs

Hi Paul,

Thanks for the comments.

On 5/18/2015 7:58 AM, Paul Sandoz wrote:

Hi Roger,

I mostly focused on the specification.

Paul.


Process
--

  35  * {@code Process} provides control of native processes started by
  36  * ProcessBuilder.start and Runtime.exec.

Link to those methods?

Links are not preferred in the first sentence used in the class summary.
They are linked in the paragraph that follows



   92 /**
   93  * Default constructor for Process.
   94  */
   95 public Process() {}

Is that redundant?

It seemed good form to document the constructor exists.
The implicit public zero-arg constructor can't have javadoc.



  251 /**
  252  * Kills the subprocess forcibly. The subprocess represented by this
  253  * {@code Process} object is forcibly terminated.
  254  * Forcible process destruction is defined as the immediate 
termination of a
  255  * process, whereas normal termination allows a process to shut down 
cleanly.
  256  * If the process is not alive, no action is taken.
  257  * 
  258  * The {@link java.util.concurrent.CompletableFuture} from {@link 
#onExit} is
  259  * {@link java.util.concurrent.CompletableFuture#complete completed}
  260  * when the process has terminated.
  261  *

insert @implSpec

  262  * The default implementation of this method invokes {@link 
#destroy}
  263  * and so may not forcibly terminate the process.

insert @implNote

  264  * Concrete implementations of this class are strongly encouraged to 
override
  265  * this method with a compliant implementation.

legacy spec  - move up before other @ tags

  266  * Invoking this method on {@code Process} objects returned by
  267  * {@link ProcessBuilder#start} and {@link Runtime#exec} forcibly 
terminate
  268  * the process.
  269  *

Use @ImplNote?

Most of this is spec, not informational.



  270  * Note: The subprocess may not terminate immediately.
  271  * i.e. {@code isAlive()} may return true for a brief period
  272  * after {@code destroyForcibly()} is called. This method
  273  * may be chained to {@code waitFor()} if needed.
  274  *

Use @apiNote?

seems reasonable
But the resulting javadoc breaks up the flow of information. The API
note that a developer would want to know about is far from the rest of
the method description.



  361  * If the process is {@link #isAlive not alive} the {@link 
CompletableFuture}
  362  * is immediately {@link 
java.util.concurrent.CompletableFuture#complete completed}.

s/is immediately/is returned/ ?
The important action to be specified is that the CF is completed() 
immediately.
Potentially it could say it is completed before the CF is returned from 
onExit().





  390  * When the {@link #waitFor()} returns successfully the 
CompletableFuture is
  391  * {@link java.util.concurrent.CompletableFuture#complete completed} 
regardless
  392  * of the exit status of the process.

s/When the/When /

fixed



  406  * @return a {@code CompletableFuture} for the Process;
  407  * a unique instance is returned for each call to {@code onExit}.
  408  *

Any need to mention about unique instances?
yes, since the hierarchy of the CF instances is visible to the app and
it makes a difference whether a unique CF is returned for each call or
whether the same CF is returned from each call.



  429 /**
  430  * Returns a ProcessHandle for the Process.
  431  *

Some methods talk about "the subprocess" where as others talk about "the Process" and 
others "the process".

That variation of terminology predates this update.



  437  * @implSpec
  438  * This implementation throws an instance of
  439  * {@link java.lang.UnsupportedOperationException} and performs no 
other action.
  440  * Sub-types should override this method to provide a ProcessHandle 
for the
  441  * process.  The methods {@link #getPid}, {@link #info}, {@link 
#children},
  442  * and {@link #allChildren}, unless overridden, operate on the 
ProcessHandle.

IIRC i incorrectly suggested Sub-types rather than Subclasses. There are other places 
with similar requirements where we can use the same language (e.g. destroyForcibly and 
onExit) "Subclasses should override this method"

Fixed



  456 /**
  457  * Returns a snapshot of information about the process.
  458  *
  459  *  An {@link ProcessHandle.Info} instance has various accessor 
methods
  460  * that return information about the process, if the process is alive 
and
  461  * the information is available, otherwise {@code null} is returned.
  462  *
  463  * @implSpec
  464  * This implementation returns information about the process as:
  465  * {@link #toHandle toHandle().info()}.
  466  *
  467  * @return a snapshot of information about the process; non-null

Dangling "non-null". Do you mean it's can never 

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-18 Thread Paul Sandoz
Hi Roger,

I mostly focused on the specification.

Paul.


Process
--

 35  * {@code Process} provides control of native processes started by
 36  * ProcessBuilder.start and Runtime.exec.

Link to those methods?


  92 /**
  93  * Default constructor for Process.
  94  */
  95 public Process() {}

Is that redundant?


 251 /**
 252  * Kills the subprocess forcibly. The subprocess represented by this
 253  * {@code Process} object is forcibly terminated.
 254  * Forcible process destruction is defined as the immediate 
termination of a
 255  * process, whereas normal termination allows a process to shut down 
cleanly.
 256  * If the process is not alive, no action is taken.
 257  * 
 258  * The {@link java.util.concurrent.CompletableFuture} from {@link 
#onExit} is
 259  * {@link java.util.concurrent.CompletableFuture#complete completed}
 260  * when the process has terminated.
 261  *
 262  * The default implementation of this method invokes {@link 
#destroy}
 263  * and so may not forcibly terminate the process.
 264  * Concrete implementations of this class are strongly encouraged to 
override
 265  * this method with a compliant implementation.
 266  * Invoking this method on {@code Process} objects returned by
 267  * {@link ProcessBuilder#start} and {@link Runtime#exec} forcibly 
terminate
 268  * the process.
 269  *

Use @ImplNote?


 270  * Note: The subprocess may not terminate immediately.
 271  * i.e. {@code isAlive()} may return true for a brief period
 272  * after {@code destroyForcibly()} is called. This method
 273  * may be chained to {@code waitFor()} if needed.
 274  *

Use @apiNote?


 361  * If the process is {@link #isAlive not alive} the {@link 
CompletableFuture}
 362  * is immediately {@link 
java.util.concurrent.CompletableFuture#complete completed}.

s/is immediately/is returned/ ?


 390  * When the {@link #waitFor()} returns successfully the 
CompletableFuture is
 391  * {@link java.util.concurrent.CompletableFuture#complete completed} 
regardless
 392  * of the exit status of the process.

s/When the/When /


 406  * @return a {@code CompletableFuture} for the Process;
 407  * a unique instance is returned for each call to {@code onExit}.
 408  *

Any need to mention about unique instances?


 429 /**
 430  * Returns a ProcessHandle for the Process.
 431  *

Some methods talk about "the subprocess" where as others talk about "the 
Process" and others "the process".


 437  * @implSpec
 438  * This implementation throws an instance of
 439  * {@link java.lang.UnsupportedOperationException} and performs no 
other action.
 440  * Sub-types should override this method to provide a ProcessHandle 
for the
 441  * process.  The methods {@link #getPid}, {@link #info}, {@link 
#children},
 442  * and {@link #allChildren}, unless overridden, operate on the 
ProcessHandle.

IIRC i incorrectly suggested Sub-types rather than Subclasses. There are other 
places with similar requirements where we can use the same language (e.g. 
destroyForcibly and onExit) "Subclasses should override this method"


 456 /**
 457  * Returns a snapshot of information about the process.
 458  *
 459  *  An {@link ProcessHandle.Info} instance has various accessor 
methods
 460  * that return information about the process, if the process is alive 
and
 461  * the information is available, otherwise {@code null} is returned.
 462  *
 463  * @implSpec
 464  * This implementation returns information about the process as:
 465  * {@link #toHandle toHandle().info()}.
 466  *
 467  * @return a snapshot of information about the process; non-null

Dangling "non-null". Do you mean it can never be null or ", otherwise null if 
the process is not alive or such information is not available"? I suspect the 
former now that all Info methods return Optional.


 480  * Note that processes are created and terminate asynchronously.
 481  * There is no guarantee that a process is {@link #isAlive alive}.
 482  * 

s/terminate/terminated/


ProcessHandle
--

  79  * For example, EPERM on Linux.

I presume you are referring to native process-related operations that return an 
error code of EPERM?


  86 /**
  87  * Returns the native process ID of the process. The native process ID 
is an
  88  * identification number that the operating system assigns to the 
process.
  89  *
  90  * @return the native process ID of the process
  91  * @throws UnsupportedOperationException if the implementation
  92  * does not support this operation
  93  */
  94 long getPid();

In what cases could this throw a USO if the static "of" method did not?


 167 /**
 168  * Returns a snapshot of all processes visible to the current process.
 169  * 
 170  * No

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-15 Thread Martin Buchholz
On Thu, May 14, 2015 at 6:44 AM, Roger Riggs  wrote:

>
>> At some point in the development of this API, the implementation
>> introduced the AsyncExecutor to execute synchronous continuations of the
>> onExit() returned CompletableFuture(s). What was the main motivation for
>> this given that:
>> - previously, ForkJoinPoll.commonPool() was used instead that by default
>> possesses some similar characteristics (Innocuous threads when
>> SecurityManager is active)
>>
> The AsyncExecutor also uses InnocuousThreads.
>
>  - this AsyncExecutor is only effective for 1st "wave" of synchronous
>> continuations. Asynchronous continuations and synchronous continuations
>> following them will still use ForkJoinPoll.commonPool()
>>
> Unfortunately, the common ForkJoinPool assumes that tasks queued to it
> complete relatively
> quickly and free the thread.  It does not grow the number of threads and
> is not appropriate
> for tasks that block for indefinite periods as might be needed to wait for a
> Process to exit.
>
>
Given that there's no known alternative to tying up a thread to execute
waitpid, there's only a small potential benefit to using forkjoinpool.  But
it might be reasonable if you use the ManagedBlocker mechanism to inform
the pool that you are about to block.  I liked the fact that we could use
threads with a small stack size to run waitpid in the special purpose
threadpool, but I don't know how valuable that is in practice.


Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-15 Thread Peter Levart

Hi Roger,

On 05/15/2015 12:35 PM, Peter Levart wrote:

public class AsyncCompletableFuture<T> ... {

    /**
     * Returns a new CompletionStage that, when this stage completes
     * normally, completes with the given replacementResult.
     *
     * @param replacementResult the successful completion value of the returned
     *        CompletionStage
     * @param <U> the value's type
     * @return the new CompletionStage
     */
    public <U> AsyncCompletableFuture<U> thenReplace(U replacementResult) { ...


I can ask on the concurrency-interest list about the feasibility of 
such [Async]CompletableFuture split. Would you be interested in using 
AsyncCompletableFuture if it was available?



...on second thought, .xxxAsync(..., Executor) methods on
[Async]CompletableFuture do not necessarily attach just asynchronous
continuations if a SynchronousExecutor is passed as the Executor parameter, so
we wouldn't get any guarantee of thread isolation. Only exposing
xxxAsync() methods without a custom Executor, on the other hand, precludes
the possibility of supplying a custom executor.


Sorry, but this was not a good idea.

Regards, Peter



Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-15 Thread Peter Levart

Hi Roger,

On 05/14/2015 03:44 PM, Roger Riggs wrote:

Hi Peter,

On 5/14/15 8:19 AM, Peter Levart wrote:

Hi Roger,

The new API using Optional(s) looks fine. In particular for the 
ProcessHandle returning methods. They now either return 
Stream or Optional.


At some point in the development of this API, the implementation 
introduced the AsyncExecutor to execute synchronous continuations of 
the onExit() returned CompletableFuture(s). What was the main 
motivation for this given that:
- previously, ForkJoinPool.commonPool() was used instead that by 
default possesses some similar characteristics (Innocuous threads 
when SecurityManager is active)

The AsyncExecutor also uses InnocuousThreads.


So that's not what is lost or gained by AsyncExecutor when comparing it 
with commonPool().




- this AsyncExecutor is only effective for 1st "wave" of synchronous
continuations. Asynchronous continuations and synchronous
continuations following them will still use ForkJoinPool.commonPool()
Unfortunately, the common ForkJoinPool assumes that tasks queued to it
complete relatively quickly and free the thread.  It does not grow the
number of threads and is not appropriate for tasks that block for
indefinite periods as might be needed to wait for a Process to exit.


Ok, AsyncExecutor might be required for default implementation of 
Process.onExit(). But waiting on process exit in ProcessImpl and 
ProcessHandle is performed by internal 32K stack-sized reaper threads 
and what we did in ProcessImpl.onExit() was this (before the AsyncExecutor):


@Override
public CompletableFuture<Process> onExit() {
    return ProcessHandleImpl.completion(pid, false)
        .handleAsync((exitStatus, unusedThrowable) -> this);
}

Which means that FJP.commonPool() thread is employed after 
ProcessHandleImpl.completion() is completed. Meaning that just user code 
is run by commonPool() and that code does not wait for process to exit 
because it already did exit. User code might run for extended period of 
time, but any use of CompletableFuture is susceptible to that abuse. Are 
you afraid that users of Process API are not common CompletableFuture 
users and are not aware of commonPool() constraints?




Would an alternative be to define two overloaded onExit() methods in
the style of CompletableFuture itself?


CompletableFuture<Process> onExit();
CompletableFuture<Process> onExit(Executor executor);

...and give the user a chance to supply its own Executor if the
default ForkJoinPool.commonPool() does not fit?
It is only one more method in PH and Process but that function is
available from CompletableFuture, though perhaps not as conveniently.
The onExit method returns a CompletableFuture that has the entire
complement of synchronous and async methods available to it.  The
application can control where subsequent computations are performed.


That's true for Async methods. Synchronous continuations are 
executed by whichever thread executed the previous stage. We insert an 
asynchronous stage to shield the internal 32K threads from user code. 
What we do by executing that stage in AsyncExecutor might be desirable 
when the user attaches a synchronous continuation. But when he attaches 
an asynchronous one, the thread from AsyncExecutor is used just for 
mapping the result into Process instance and triggering the execution of 
next stage. Can we eliminate this unnecessary asynchronous stage?


What if we stopped pretending that the user can execute a
synchronous onExit() continuation when in fact that continuation is
always attached to an asynchronous stage so it is actually asynchronous?
What if CompletableFuture had a public superclass called
AsyncCompletableFuture that only exposed xxxAsync() continuation
methods which returned normal CompletableFuture(s)? Such a class would by
definition shield the thread that completes an AsyncCompletableFuture
from user code that attaches continuations. And user code would have
exclusive control on what type of thread executes it. Isn't this a
desirable property that jsr166 could provide? It could be useful in
other scenarios too.


The only problem with that approach is that we need a mapping stage that
"attaches" the Process[Handle] instance to
AsyncCompletableFuture (which is generic and returns an exit
status) to get AsyncCompletableFuture. Such an operation
could be provided by AsyncCompletableFuture as an "immediate" operation
- not taking any lambda function which needs a thread to be executed,
but a replacement object instead:


public class AsyncCompletableFuture<T> ... {

    /**
     * Returns a new CompletionStage that, when this stage completes
     * normally, completes with the given replacementResult.
     *
     * @param replacementResult the successful completion value of the returned
     *        CompletionStage
     * @param <U> the value's type
     * @return the new CompletionStage
     */
    public <U> AsyncCompletableFuture<U> thenReplace(U replacementResult) { ...

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-14 Thread Roger Riggs

Hi Alan,

For something to log from an inoperative destroy call, are you thinking
it should throw an exception?  In that case the return value could be
void and the exception message could expose the OS-specific reason.

None of the currently available exceptions seems appropriate;
a bare RuntimeException might do but perhaps a new exception class is
needed.


Thanks, Roger



On 5/14/15 8:30 AM, Alan Bateman wrote:



On 13/05/2015 16:10, Roger Riggs wrote:

Hi Alan,

Yes, the destroy use looks a bit odd. In Process, destroyForcibly
returns this so that the application can fluently wait for the 
termination to complete.

Returning Optional<ProcessHandle> enables the caller to fluently
perform an action on the process when it does exit.

ProcessHandle ph = ...;
ph.destroy().ifPresent(p -> p.onExit( ...));

Optional.isPresent() provides a boolean if that's needed.
Optional is good and the usages with parent(), etc. are nice update to 
the API. Chaining methods invocations is good too.


It's just destroy that seems a bit much as it's too easy to mix up 
"present" and "alive", e.g.: the process was present so we attempted 
to destroy it, it may or may not be present now. In that context using 
ifPresent in this example is confusing to read because it's really 
"was present, on its way to not being present" (if you know what I 
mean). That's the only one that might need to be looked at again to 
see if there are better candidates. Related to this is that the 
destroy methods don't have a way to communicate anything useful to log 
when they fail. With Process no longer implementing ProcessHandle then 
there is an opportunity to look at that again.


-Alan.





Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-14 Thread Roger Riggs

Hi Peter,

On 5/14/15 8:19 AM, Peter Levart wrote:

Hi Roger,

The new API using Optional(s) looks fine. In particular for the 
ProcessHandle returning methods. They now either return 
Stream or Optional.


At some point in the development of this API, the implementation 
introduced the AsyncExecutor to execute synchronous continuations of 
the onExit() returned CompletableFuture(s). What was the main 
motivation for this given that:
- previously, ForkJoinPool.commonPool() was used instead that by 
default possesses some similar characteristics (Innocuous threads when 
SecurityManager is active)

The AsyncExecutor also uses InnocuousThreads.

- this AsyncExecutor is only effective for 1st "wave" of synchronous
continuations. Asynchronous continuations and synchronous
continuations following them will still use ForkJoinPool.commonPool()
Unfortunately, the common ForkJoinPool assumes that tasks queued to it
complete relatively quickly and free the thread.  It does not grow the
number of threads and is not appropriate for tasks that block for
indefinite periods as might be needed to wait for a Process to exit.
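
For illustration, this is the kind of usage the common pool is not designed
for (a sketch; manyProcesses is a placeholder):

    // Each task blocks in waitFor(), tying up a common-pool worker; the pool
    // is sized to the CPU count and will not grow for plain blocking tasks.
    for (Process p : manyProcesses) {
        CompletableFuture.runAsync(() -> {
            try {
                p.waitFor();                    // may block indefinitely
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });                                     // uses ForkJoinPool.commonPool() by default
    }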




Would an alternative be to define two overloaded onExit() methods in
the style of CompletableFuture itself?


CompletableFuture<Process> onExit();
CompletableFuture<Process> onExit(Executor executor);

...and give the user a chance to supply its own Executor if the
default ForkJoinPool.commonPool() does not fit?
It is only one more method in PH and Process but that function is
available from CompletableFuture, though perhaps not as conveniently.
The onExit method returns a CompletableFuture that has the entire
complement of synchronous and async methods available to it.  The
application can control where subsequent computations are performed.

That convenience method could be added later when the use case and
frequency are clearer.


Is there expectation that ForkJoinPool.commonPool() will not fit in
the common case?
I don't have a sense of how onExit would be used or how often it would
be used.

The common FJP seems like a good start for computations to be performed
after the Process has exited.  I'd wait and see what needs can be
articulated, hopefully within the JDK 9 timeframe.

Thanks, Roger



Regards, Peter

On 05/13/2015 04:16 PM, Roger Riggs wrote:

Hi,

Are there any comments about the use of java.util.Optional in the 
ProcessHandle API?

Or a review of the changes?

Thanks, Roger


On 5/11/2015 11:49 AM, Roger Riggs wrote:

Please review clarifications and updates to the proposed Process API.

A few loose ends in the ProcessHandle API were identified.

1) The ProcessHandle.parent() method currently returns null if the 
parent cannot
be determined and the ProcessHandle.of(PID) method returns null if 
the PID does not exist.

It has been suggested to return an Optional to make
these methods more flexible and allow a fluent style and work better 
with streams.


2) The behavior of ProcessHandle.destroy and destroyForcibly is different
than Process.destroy and destroyForcibly.  Those functions always
succeed because the processes are children of the spawning process.

In contrast, ProcessHandle.destroy and destroyForcibly are requests to
destroy the process and may not succeed due to operating system
restrictions such as the process not being a child or not having enough
privilege.
The description of the methods needs to be clarified that it is a
request to destroy and it may not succeed. In that case the destroy and
destroyForcibly methods should indicate that the request was not
successful.  In particular, the caller may not want to wait for the
process to terminate (it's not going to).

The proposed update is to return an Optional<ProcessHandle>.
It can be streamed and can take advantage of the conditional
operations on the Optional.


3) Using Optional is also attractive for the return values of the
information about a ProcessHandle, since not all values are available
from every OS.
The return values of Info.command, arguments, startInstant,
totalDuration, and user are proposed to be updated to return Optional.
It allows for more compact code and fewer explicit checks for null.

Please review and comment:

Webrev:
http://cr.openjdk.java.net/~rriggs/webrev-ph/

javadoc:
http://cr.openjdk.java.net/~rriggs/ph-apidraft/

Diffs of the spec/javadoc from previous draft:
http://cr.openjdk.java.net/~rriggs/ph-diffs-2015-05-11/overview-summary.html 



Thanks, Roger











Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-14 Thread Alan Bateman



On 13/05/2015 16:10, Roger Riggs wrote:

Hi Alan,

Yes, the destroy use looks a bit odd. In Process, destroyForcibly
returns this so that the application can fluently wait for the 
termination to complete.

Returning Optional<ProcessHandle> enables the caller to fluently
perform an action on the process when it does exit.

ProcessHandle ph = ...;
ph.destroy().ifPresent(p -> p.onExit( ...));

Optional.isPresent() provides a boolean if that's needed.
Optional is good and the usages with parent(), etc. are nice update to 
the API. Chaining methods invocations is good too.


It's just destroy that seems a bit much as it's too easy to mix up 
"present" and "alive", e.g.: the process was present so we attempted to 
destroy it, it may or may not be present now. In that context using 
ifPresent in this example is confusing to read because it's really "was 
present, on its way to not being present" (if you know what I mean). 
That's the only one that might need to be looked at again to see if 
there are better candidates. Related to this is that the destroy methods 
don't have a way to communicate anything useful to log when they fail. 
With Process no longer implementing ProcessHandle then there is an 
opportunity to look at that again.


-Alan.



Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-14 Thread Peter Levart

Hi Roger,

The new API using Optional(s) looks fine. In particular for the 
ProcessHandle returning methods. They now either return 
Stream or Optional.


At some point in the development of this API, the implementation 
introduced the AsyncExecutor to execute synchronous continuations of the 
onExit() returned CompletableFuture(s). What was the main motivation for 
this given that:
- previously, ForkJoinPool.commonPool() was used instead that by default 
possesses some similar characteristics (Innocuous threads when 
SecurityManager is active)
- this AsyncExecutor is only effective for 1st "wave" of synchronous 
continuations. Asynchronous continuations and synchronous continuations 
following them will still use ForkJoinPool.commonPool()


Would an alternative be to define two overloaded onExit() methods in the
style of CompletableFuture itself?


CompletableFuture<Process> onExit();
CompletableFuture<Process> onExit(Executor executor);

...and give the user a chance to supply its own Executor if the default
ForkJoinPool.commonPool() does not fit?


Is there expectation that ForkJoinPool.commonPool() will not fit in the 
common case?


Regards, Peter

On 05/13/2015 04:16 PM, Roger Riggs wrote:

Hi,

Are there any comments about the use of java.util.Optional in the 
ProcessHandle API?

Or a review of the changes?

Thanks, Roger


On 5/11/2015 11:49 AM, Roger Riggs wrote:

Please review clarifications and updates to the proposed Process API.

A few loose ends in the ProcessHandle API were identified.

1) The ProcessHandle.parent() method currently returns null if the 
parent cannot
be determined and the ProcessHandle.of(PID) method returns null if 
the PID does not exist.

It has been suggested to return an Optional to make
these methods more flexible and allow a fluent style and work better 
with streams.


2) The behavior of ProcessHandle.destroy and destroyForcibly is different
than Process.destroy and destroyForcibly.  Those functions always
succeed because the processes are children of the spawning process.

In contrast, ProcessHandle.destroy and destroyForcibly are requests to
destroy the process and may not succeed due to operating system
restrictions such as the process not being a child or not having enough
privilege.
The description of the methods needs to be clarified that it is a
request to destroy and it may not succeed. In that case the destroy and
destroyForcibly methods should indicate that the request was not
successful.  In particular, the caller may not want to wait for the
process to terminate (it's not going to).

The proposed update is to return an Optional<ProcessHandle>.
It can be streamed and can take advantage of the conditional
operations on the Optional.


3) Using Optional is also attractive for the return values of the
information about a ProcessHandle, since not all values are available
from every OS.
The return values of Info.command, arguments, startInstant,
totalDuration, and user are proposed to be updated to return Optional.
It allows for more compact code and fewer explicit checks for null.

Please review and comment:

Webrev:
   http://cr.openjdk.java.net/~rriggs/webrev-ph/

javadoc:
   http://cr.openjdk.java.net/~rriggs/ph-apidraft/

Diffs of the spec/javadoc from previous draft:
http://cr.openjdk.java.net/~rriggs/ph-diffs-2015-05-11/overview-summary.html 



Thanks, Roger









Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-13 Thread Roger Riggs

Hi Alan,

Yes, the destroy use looks a bit odd. In Process, destroyForcibly
returns this so that the application can fluently wait for the 
termination to complete.

Returning Optional<ProcessHandle> enables the caller to fluently
perform an action on the process when it does exit.

ProcessHandle ph = ...;
ph.destroy().ifPresent(p -> p.onExit( ...));

Optional.isPresent() provides a boolean if that's needed.
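Spelled out a little more (a sketch against the Optional-returning destroy() proposed in this draft, not a shipped API):

    // Request termination and, only if the request could be issued,
    // attach an action to run when the process actually exits.
    void stopAndReport(ProcessHandle ph) {
        ph.destroy()                                    // Optional is empty if the request failed
          .map(ProcessHandle::onExit)                   // CompletableFuture completes at exit
          .ifPresent(cf -> cf.thenAccept(h ->
                  System.out.println("exited: " + h)));
    }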

Thanks, Roger


On 5/13/2015 10:55 AM, Alan Bateman wrote:

On 13/05/2015 15:16, Roger Riggs wrote:

Hi,

Are there any comments about the use of java.util.Optional in the 
ProcessHandle API?

Or a review of the changes?

Having parent return Optional looks good.

Having all methods in ProcessHandle.Info return Optional is probably 
right; it gives us a bit more flexibility compared to info() returning 
Optional<Info>.


I'm not sure about the destroy* methods; the result of a destroy is 
essentially a boolean so this feels a little odd.


-Alan.




Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-13 Thread Alan Bateman

On 13/05/2015 15:16, Roger Riggs wrote:

Hi,

Are there any comments about the use of java.util.Optional in the 
ProcessHandle API?

Or a review of the changes?

Having parent return Optional looks good.

Having all methods in ProcessHandle.Info return Optional is probably 
right; it gives us a bit more flexibility compared to info() returning 
Optional<Info>.


I'm not sure about the destroy* methods; the result of a destroy is 
essentially a boolean so this feels a little odd.


-Alan.


Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-13 Thread Roger Riggs

Hi,

Are there any comments about the use of java.util.Optional in the 
ProcessHandle API?

Or a review of the changes?

Thanks, Roger


On 5/11/2015 11:49 AM, Roger Riggs wrote:

Please review clarifications and updates to the proposed Process API.

A few loose ends in the ProcessHandle API were identified.

1) The ProcessHandle.parent() method currently returns null if the 
parent cannot
be determined and the ProcessHandle.of(PID) method returns null if the 
PID does not exist.

It has been suggested to return an Optional to make
these methods more flexible, allow a fluent style, and work better 
with streams.


2) The behavior of ProcessHandle.destroy and destroyForcibly is different
from Process.destroy and destroyForcibly.  Those methods always succeed
because the processes are children of the spawning process.

In contrast, ProcessHandle.destroy and destroyForcibly are requests to
destroy the process and may not succeed due to operating system 
restrictions, such
as the process not being a child or not having enough privilege.
The description of the methods needs to be clarified: it is a 
request to destroy
and it may not succeed. In that case the destroy and destroyForcibly 
methods
should indicate that the request was not successful.  In particular, 
the caller
may not want to wait for the process to terminate (it's not going to).

The proposed update is to return an Optional<ProcessHandle>.
It can be streamed and can take advantage of the conditional 
operations on the Optional.


3) Using Optional is also attractive for the return values of the 
information
about a ProcessHandle, since not all values are available from every OS.
The return values of Info.command, arguments, startInstant, 
totalDuration, and user
are proposed to be updated to return Optional.
It allows for more compact code and fewer explicit checks for null.

Please review and comment:

Webrev:
   http://cr.openjdk.java.net/~rriggs/webrev-ph/

javadoc:
   http://cr.openjdk.java.net/~rriggs/ph-apidraft/

Diffs of the spec/javadoc from previous draft:
http://cr.openjdk.java.net/~rriggs/ph-diffs-2015-05-11/overview-summary.html 



Thanks, Roger







Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-07 Thread Roger Riggs

fyi, with respect to using the start time to uniquely identify a process.

It turns out that on Linux[1] looking at st_mtime from 
stat(/proc/pid/status)

doesn't always produce the same value (and the inode number can change).
When spawning a large number of processes (< 500),
it appears the pseudo inode entries are not kept around and are 
re-initialized.


In a brief peek at the Linux code in fs/proc/inode.c it appears always 
to initialize

st_mtime/st_atime/st_ctime from the current time.
It seems a slightly more expensive read and parse of the file will be 
needed

to get and check the start time.
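A rough Java sketch of that read-and-parse step, assuming Linux's /proc/<pid>/stat layout (the start time is field 22, in clock ticks since boot; the parse skips past the last ')' because the comm field may itself contain spaces or parentheses):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    final class ProcStartTime {
        static long startTicks(long pid) throws IOException {
            String stat = new String(Files.readAllBytes(Paths.get("/proc/" + pid + "/stat")));
            String afterComm = stat.substring(stat.lastIndexOf(')') + 2);  // skip ") "
            String[] fields = afterComm.split("\\s+");
            return Long.parseLong(fields[19]);   // field 22 overall = index 19 after comm
        }
    }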

fyi, Roger

[1] Ubuntu 14.04.  (Linux-3.13.0) sources



On 4/17/2015 4:05 AM, Peter Levart wrote:


At least on Linux (/proc/<pid>/stat) and Solaris (/proc/<pid>/status) 
it is not necessary to open those special files and read them. Just 
doing stat() on them and using the st_mtime will get you the process 
start time. I see AIX shares native code with Linux (unix), so perhaps 
AIX does the same. Mac OSX and Windows have special calls...




Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-24 Thread Roger Riggs

Hi Alan,

On 4/24/2015 12:21 PM, Alan Bateman wrote:

On 24/04/2015 16:49, Roger Riggs wrote:

:


I'm not sure about the @implSpec in 
Process::supportsNormalTermination. Shouldn't that just specify that 
the default implementation throws UOE. An @implNote could comment on 
how ProcessBuilder works. Same comment on Process::toHandle.


There needs to be a specification for what ProcessBuilder does; 
@implNote cannot provide testable assertions.
Right, I'm not suggesting that, just pointing out @implSpec in the 
webrev doesn't specify how the default implementation works. It 
instead specifies what ProcessBuilder#start returns. Are the javadoc 
and webrev in sync? That might be part of the confusion on my part.
Right, Paul pointed this out as well and I got the javadoc out of sync 
with the webrev.
The javadoc was generated more recently.  I should have updated them at 
the same time or not at all.




...






The parent is optional and maybe it would make sense for parent() to 
return Optional. The javadoc for this method, isAlive 
too, mention "zombie state". I think that needs a bit of description 
to generally explain what it means.

Will providing Optional make it easier to use?

Perhaps the mention of zombie should be removed; it is os specific; 
though the description of the state may be useful.
If you keep it then some commentary would be helpful as the reader 
might not understand the term.

will do.


The parent is optional so something to consider.

I'm open to well reasoned recommendations.






This may have been discussed previously but totalCpuDuration might 
be clearer if renamed to totalCpuTimeAsDuration. Also startInstant 
would be clearer (to me anyway) as startTimeAsInstant.
I would have preferred not to include the return type at all instead of 
making the method name longer.
Okay although not having "CpuTime" and "StartTime" in the method names 
means the names will be less clear.
These started out as totalCpuTime() and startTime(). Those seemed clear 
enough to me.
When you suggest 'AsDuration', is the qualification to indicate the 
return type or the general nature of a time interval?

Thanks, Roger



-Alan




Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-24 Thread Alan Bateman

On 24/04/2015 16:49, Roger Riggs wrote:

:


I'm not sure about the @implSpec in 
Process::supportsNormalTermination. Shouldn't that just specify that 
the default implementation throws UOE. An @implNote could comment on 
how ProcessBuilder works. Same comment on Process::toHandle.


There needs to be a specification for what ProcessBuilder does; 
@implNote cannot provide testable assertions.
Right, I'm not suggesting that, just pointing out @implSpec in the 
webrev doesn't specify how the default implementation works. It instead 
specifies what ProcessBuilder#start returns. Are the javadoc and webrev 
in sync? That might be part of the confusion on my part.




In Process::destroy then the wording is adjusted to use the term 
"normally terminated". The word "normally" doesn't seem right to me, 
"gracefully" might work better. I'm tempted to suggest 
supportsNormalTermination be renamed to supportsGracefulTermination 
if you haven't discounted it already.
There have been a variety of comments on the name and function and 
suggestions.
I arrived at 'normal' because it had a clearer mapping to the 
underlying OS behavior.
The descriptions of SIGTERM vary, but usually include a description of 
it as the normal way to terminate a process.

Okay, if you've settled on this.





The parent is optional and maybe it would make sense for parent() to 
return Optional. The javadoc for this method, isAlive 
too, mention "zombie state". I think that needs a bit of description 
to generally explain what it means.

Will providing Optional make it easier to use?

Perhaps the mention of zombie should be removed; it is os specific; 
though the description of the state may be useful.
If you keep it then some commentary would be helpful as the reader might 
not understand the term.


The parent is optional so something to consider.





This may have been discussed previously but totalCpuDuration might be 
clearer if renamed to totalCpuTimeAsDuration. Also startInstant would 
be clearer (to me anyway) as startTimeAsInstant.
I would have preferred not to include the return type at all instead of 
making the method name longer.
Okay although not having "CpuTime" and "StartTime" in the method names 
means the names will be less clear.


-Alan


Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-24 Thread Roger Riggs

Hi Alan,

On 4/24/2015 8:06 AM, Alan Bateman wrote:

On 17/04/2015 20:12, Roger Riggs wrote:
The webrev for ProcessAPI updates has been updated to reflect recent 
comments.

Please  review and comment by April 23rd.

The updates include:
 - Renaming Process/ProcessHandle supportsDestroyForcibly to 
supportsNormalTermination

   and updating related descriptions
- ProcessHandle.destroy/destroyForcibly descriptions are more consistent

- ProcessHandle.destroy now returns ProcessHandle to enable fluent use.
- Corrected description of default implementation ProcessHandle.onExit

The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/

The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph

I'm finally getting to this.

One comment on terminology is that ProcessBuilder uses phrases like 
"Starts a new process" whereas the changes to Process (and the new 
ProcessHandle) have switched to "spawn". It's not a big deal but would 
be good to have consistency.
Spawn seemed more consistent with the underlying OS, but I can use "starts 
a process".


A typo at around L77 of Process, "iecr" has crept in the class 
description.

will fix.


I'm not sure about the @implSpec in 
Process::supportsNormalTermination. Shouldn't that just specify that 
the default implementation throws UOE. An @implNote could comment on 
how ProcessBuilder works. Same comment on Process::toHandle.


There needs to be a specification for what ProcessBuilder does; 
@implNote cannot provide testable assertions.


If @implSpec is scoped to exactly the behavior of the specific method, then
the behavior of ProcessBuilder will need to be in the main method 
description

as it is done for the existing methods. For example,
"Processes returned from ProcessBuilder override the default 
implementation ..."


In Process::destroy then the wording is adjusted to use the term 
"normally terminated". The word "normally" doesn't seem right to me, 
"gracefully" might work better. I'm tempted to suggest 
supportsNormalTermination be renamed to supportsGracefulTermination if 
you haven't discounted it already.
There have been a variety of comments on the name and function and 
suggestions.
I arrived at 'normal' because it had a clearer mapping to the underlying 
OS behavior.
The descriptions of SIGTERM vary, but usually include a description of 
it as the normal way to terminate a process.




The parent is optional and maybe it would make sense for parent() to 
return Optional. The javadoc for this method, isAlive 
too, mention "zombie state". I think that needs a bit of description 
to generally explain what it means.

Will providing Optional make it easier to use?

Perhaps the mention of zombie should be removed; it is os specific; 
though the description of the state may be useful.


This may have been discussed previously but totalCpuDuration might be 
clearer if renamed to totalCpuTimeAsDuration. Also startInstant would 
be clearer (to me anyway) as startTimeAsInstant.
I would have preferred not include the return type at all instead of 
making the method name longer.


A passing comment on the webrev (still working through it) is that it 
would be good to keep the line length on the javadoc somewhat 
consistent in the file.

I can work on that.

Thanks, Roger



-Alan





Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-24 Thread Alan Bateman

On 17/04/2015 20:12, Roger Riggs wrote:
The webrev for ProcessAPI updates has been updated to reflect recent 
comments.

Please  review and comment by April 23rd.

The updates include:
 - Renaming Process/ProcessHandle supportsDestroyForcibly to 
supportsNormalTermination

   and updating related descriptions
- ProcessHandle.destroy/destroyForcibly descriptions are more consistent

- ProcessHandle.destroy now returns ProcessHandle to enable fluent use.
- Corrected description of default implementation ProcessHandle.onExit

The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/

The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph

I'm finally getting to this.

One comment on terminology is that ProcessBuilder uses phrases like 
"Starts a new process" whereas the changes to Process (and the new 
ProcessHandle) have switched to "spawn". It's not a big deal but would 
be good to have consistency.


A typo at around L77 of Process, "iecr" has crept in the class description.

I'm not sure about the @implSpec in Process::supportsNormalTermination. 
Shouldn't that just specify that the default implementation throws UOE. 
An @implNote could comment on how ProcessBuilder works. Same comment on 
Process::toHandle.


In Process::destroy then the wording is adjusted to use the term 
"normally terminated". The word "normally" doesn't seem right to me, 
"gracefully" might work better. I'm tempted to suggest 
supportsNormalTermination be renamed to supportsGracefulTermination if 
you haven't discounted it already.


The parent is optional and maybe it would make sense for parent() to 
return Optional. The javadoc for this method, isAlive 
too, mention "zombie state". I think that needs a bit of description to 
generally explain what it means.


This may have been discussed previously but totalCpuDuration might be 
clearer if renamed to totalCpuTimeAsDuration. Also startInstant would be 
clearer (to me anyway) as startTimeAsInstant.


A passing comment on the webrev (still working through it) is that it 
would be good to keep the line length on the javadoc somewhat consistent 
in the file.


-Alan



Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-21 Thread Roger Riggs

Hi Thomas,

Good point and more robust.

Thanks, Roger


On 4/21/2015 11:49 AM, Thomas Stüfe wrote:

Hi Roger,

Small remark: I see you use the same pattern at a number of places 
when reading information from /proc:


 342 ret = fread(&psinfo, 1, (sizeof psinfo), fp);
 343 fclose(fp);
 344 if (ret < 0) {
 345 return;
 346 }

A better way would be to check for the whole size you intended to read:

 342 ret = fread(&psinfo, 1, (sizeof psinfo), fp);
 343 fclose(fp);
 344 if (ret < sizeof(psinfo)) {
 345 return;
 346 }

/proc may get stale while you read it, so make sure fread actually 
filled the whole output structure, otherwise you may act on partially 
random memory content. This is not theoretical :) I had exactly this 
problem on Solaris when reading structures (in our case 
/proc/self/lstatus) from /proc.


Kind Regards, Thomas





On Fri, Apr 17, 2015 at 9:12 PM, Roger Riggs wrote:


The webrev for ProcessAPI updates has been updated to reflect
recent comments.
Please  review and comment by April 23rd.

The updates include:
 - Renaming Process/ProcessHandle supportsDestroyForcibly to
supportsNormalTermination
   and updating related descriptions
- ProcessHandle.destroy/destroyForcibly descriptions are more consistent
- ProcessHandle.destroy now returns ProcessHandle to enable fluent
use.
- Corrected description of default implementation ProcessHandle.onExit

The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/


The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph


Issue: JDK-8077350
 Process API
Updates Implementation

The code is in the jdk9 sandbox on branch JDK-8046092-branch.

Two issues raised have been filed to be fixed after the current
commit:
 - JDK-8078099 
(process) ProcessHandle should uniquely identify processes
 - JDK-8078108 
(process) ProcessHandle.isAlive should be robust






Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-21 Thread Paul Sandoz

On Apr 21, 2015, at 5:20 PM, Roger Riggs  wrote:

> Hi Paul,
> 
> The onExit() @implNote recommends the subclass delegate to the underlying 
> process.

I know, but is that recommendation just for our (Oracle's) JDK? Is it subject 
to change? What if another JDK vendor decides to have a different @implNote 
without such a recommendation?

Sorry to keep labouring the point but i still think you need to have some form 
of impl spec for this method.

Paul.

> For a simple subclass acting as a proxy or decorator that will provide the 
> best implementation.
> 

> I'll add the onExit()  method to the class level recommendation for 
> Subclasses of Process.
> 
> Thanks, Roger.



Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-21 Thread Thomas Stüfe
Hi Roger,

Small remark: I see you use the same pattern at a number of places when
reading information from /proc:

 342 ret = fread(&psinfo, 1, (sizeof psinfo), fp);
 343 fclose(fp);
 344 if (ret < 0) {
 345 return;
 346 }

A better way would be to check for the whole size you intended to read:

 342 ret = fread(&psinfo, 1, (sizeof psinfo), fp);
 343 fclose(fp);
 344 if (ret < sizeof(psinfo)) {
 345 return;
 346 }

/proc may get stale while you read it, so make sure fread actually filled
the whole output structure, otherwise you may act on partially random
memory content. This is not theoretical :) I had exactly this problem on
Solaris when reading structures (in our case /proc/self/lstatus) from
/proc.

Kind Regards, Thomas





On Fri, Apr 17, 2015 at 9:12 PM, Roger Riggs  wrote:

> The webrev for ProcessAPI updates has been updated to reflect recent
> comments.
> Please  review and comment by April 23rd.
>
> The updates include:
>  - Renaming Process/ProcessHandle supportsDestroyForcibly to
> supportsNormalTermination
>and updating related descriptions
> - ProcessHandle.destroy/destroyForcibly descriptions are more consistent
> - ProcessHandle.destroy now returns ProcessHandle to enable fluent use.
> - Corrected description of default implementation ProcessHandle.onExit
>
> The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/
>
> The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph
>
> Issue: JDK-8077350 
> Process API Updates Implementation
>
> The code is in the jdk9 sandbox on branch JDK-8046092-branch.
>
> Two issues raised have been filed to be fixed after the current commit:
>  - JDK-8078099 
> (process) ProcessHandle should uniquely identify processes
>  - JDK-8078108 
> (process) ProcessHandle.isAlive should be robust
>


Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-21 Thread Roger Riggs

Hi Paul,

The onExit() @implNote recommends the subclass delegate to the 
underlying process.
For a simple subclass acting as a proxy or decorator that will provide 
the best implementation.


I'll add the onExit()  method to the class level recommendation for 
Subclasses of Process.


Thanks, Roger.

On 4/21/2015 9:16 AM, Paul Sandoz wrote:

On Apr 21, 2015, at 2:57 PM, Roger Riggs  wrote:


Hi Paul,

On 4/21/2015 8:29 AM, Paul Sandoz wrote:

There are statements in Process about the specified behavior of Processes 
created by ProcessBuilder. That's why I included them in the @implSpec clause. 
If @implSpec is only for the specifics of the method itself then where should 
the behavior of ProcessBuilder created instances be specified?

I think the @implNote you have on Process.onExit about Processes from 
ProcessBuilder being more efficient is fine, but that @implNote also appears to 
mix stuff that is @implSpec for the method itself.

I don't see anything in that @implNote that is spec; just informative.

Process.onExit triggering waitFor to be called asynchronously in another thread 
seems more spec-like to me (without defining what the executor of the CF is, 
but we can probably specify what it is not i.e. the CF should be executed on 
the f/j common pool).

Sorry, I meant to say "i.e. the CF should *not* be ...".


...

I am not suggesting over specifying, but currently it feels under specified by 
not specifying anything :-) Given that there is only so much the Process class impl 
can do regardless of the JDK vendor it seems we could specify something so 
sub-types can be better informed about whether they should override or not.

Paul.



Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-21 Thread Paul Sandoz

On Apr 21, 2015, at 2:57 PM, Roger Riggs  wrote:

> Hi Paul,
> 
> On 4/21/2015 8:29 AM, Paul Sandoz wrote:
>> There are statements in Process about the specified behavior of Processes 
>> created by ProcessBuilder. That's why I included them in the @implSpec 
>> clause. If @implSpec is only for the specifics of the method itself then 
>> where should the behavior of ProcessBuilder created instances be specified?
 I think the @implNote you have on Process.onExit about Processes from 
 ProcessBuilder being more efficient is fine, but that @implNote also 
 appears to mix stuff that is @implSpec for the method itself.
>>> I don't see anything in that @implNote that is spec; just informative.
>> Process.onExit triggering waitFor to be called asynchronously in another 
>> thread seems more spec-like to me (without defining what the executor of the 
>> CF is, but we can probably specify what it is not i.e. the CF should be 
>> executed on the f/j common pool).

Sorry, I meant to say "i.e. the CF should *not* be ...".


>> How could it be otherwise?
> The ForkJoinPool is particularly unsuitable for onExit.  It has a limited 
> number of threads
> and pretty much assumes that threads do not block indefinitely.

Yes. Even if a managed blocker is used with waitFor + timeouts i would advise 
against using the F/J common pool.
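For reference, a minimal sketch of what such a blocker around waitFor() looks like (illustrative only; not the code in the webrev):

    import java.util.concurrent.ForkJoinPool;

    final class WaitForBlocker implements ForkJoinPool.ManagedBlocker {
        private final Process process;
        private boolean done;

        WaitForBlocker(Process process) { this.process = process; }

        @Override
        public boolean block() throws InterruptedException {
            process.waitFor();              // blocks until the process terminates
            done = true;
            return true;                    // no further blocking needed
        }

        @Override
        public boolean isReleasable() {
            return done || !process.isAlive();
        }
    }

    // usage from a ForkJoinPool worker thread:
    //     ForkJoinPool.managedBlock(new WaitForBlocker(process));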


>  But for this use they block
> until the process exits.  In my testing,  the set of threads gets consumed by 
> the first set of
> onExit calls and then hangs because no more threads are available.
> The current implementation creates its own pool and spawns threads as needed, 
> but that's an implementation detail.
> I think the most that can be said is that onExit does not block.  There's no point 
> in over specifying
> the default implementation of onExit.  

I am not suggesting over specifying, but currently it feels under specified by 
not specifying anything :-) Given that there is only so much the Process class impl 
can do regardless of the JDK vendor it seems we could specify something so 
sub-types can be better informed about whether they should override or not.

Paul.

> Practically, it does not get used because the JDK
> provided subclasses use a lighter-weight mechanism to wait for process 
> exit and in time
> other mechanisms that do not consume a thread are likely.
> 

> Thanks, Roger
> 



Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-21 Thread Roger Riggs

Hi Paul,

On 4/21/2015 8:29 AM, Paul Sandoz wrote:
There are statements in Process about the specified behavior of 
Processes created by ProcessBuilder. That's why I included them in the 
@implSpec clause. If @implSpec is only for the specifics of the method 
itself then where should the behavior of ProcessBuilder created 
instances be specified?

I think the @implNote you have on Process.onExit about Processes from 
ProcessBuilder being more efficient is fine, but that @implNote also appears to 
mix stuff that is @implSpec for the method itself.

I don't see anything in that @implNote that is spec; just informative.

Process.onExit triggering waitFor to be called asynchronously in another thread 
seems more spec-like to me (without defining what the executor of the CF is, 
but we can probably specify what it is not i.e. the CF should be executed on 
the f/j common pool). How could it be otherwise?
The ForkJoinPool is particularly unsuitable for onExit.  It has a 
limited number of threads
and pretty much assumes that threads do not block indefinitely.  But for 
this use they block
until the process exits.  In my testing,  the set of threads gets 
consumed by the first set of

onExit calls and then hangs because no more threads are available.
The current implementation creates its own pool and spawns threads as 
needed, but that's an implementation detail.
I think the most that can be said is that onExit does not block.  There's no 
point in over-specifying
the default implementation of onExit.  Practically, it does not get used 
because the JDK-provided
subclasses use a lighter-weight mechanism to wait for 
process exit, and in time
other mechanisms that do not consume a thread are likely.
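To make the thread-consumption point concrete, here is one way a default onExit() could keep the blocking waitFor() off the common pool (a sketch only; a cached thread pool stands in for the webrev's internal executor):

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.Executor;
    import java.util.concurrent.Executors;

    final class OnExitSketch {
        // Grows a thread per concurrent waiter and reclaims idle threads.
        private static final Executor WAITERS = Executors.newCachedThreadPool(r -> {
            Thread t = new Thread(r, "process-exit-waiter");
            t.setDaemon(true);
            return t;
        });

        static CompletableFuture<Process> onExit(Process p) {
            CompletableFuture<Process> cf = new CompletableFuture<>();
            WAITERS.execute(() -> {
                try {
                    p.waitFor();                     // blocks here, not on the common pool
                    cf.complete(p);
                } catch (InterruptedException e) {
                    cf.completeExceptionally(e);
                }
            });
            return cf;
        }
    }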

Thanks, Roger



Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-21 Thread Paul Sandoz

On Apr 20, 2015, at 8:42 PM, Roger Riggs  wrote:

> Hi Paul,
> 
> On 4/20/2015 2:26 PM, Paul Sandoz wrote:
>> On Apr 20, 2015, at 7:33 PM, Roger Riggs  wrote:
>> 
>>> Hi Paul,
>>> 
>>> There are statements in Process about the specified behavior of Processes
>>> created by ProcessBuilder.  That's why I included them in the @implSpec 
>>> clause.
>>> If @implSpec is only for the specifics of the method itself then where
>>> should the behavior of ProcessBuilder created instances be specified?
>>> 
>> I think the @implNote you have on Process.onExit about Processes from 
>> ProcessBuilder being more efficient is fine, but that @implNote also appears 
>> to mix stuff that is @implSpec for the method itself.
> I don't see anything in that @implNote that is spec; just informative.

Process.onExit triggering waitFor to be called asynchronously in another thread 
seems more spec-like to me (without defining what the executor of the CF is, 
but we can probably specify what it is not i.e. the CF should be executed on 
the f/j common pool). How could it be otherwise? 

Paul.


Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-20 Thread Roger Riggs

Hi Paul,

On 4/20/2015 2:26 PM, Paul Sandoz wrote:

On Apr 20, 2015, at 7:33 PM, Roger Riggs  wrote:


Hi Paul,

There are statements in Process about the specified behavior of Processes
created by ProcessBuilder.  That's why I included them in the @implSpec clause.
If @implSpec is only for the specifics of the method itself then where
should the behavior of ProcessBuilder created instances be specified?


I think the @implNote you have on Process.onExit about Processes from 
ProcessBuilder being more efficient is fine, but that @implNote also appears to 
mix stuff that is @implSpec for the method itself.

I don't see anything in that @implNote that is spec; just informative.


For Process.supportsNormalTermination i would presume the @implSpec should be similar to 
that of toHandle. I don't think there needs to be anything said about Process from 
ProcessBuilder on this method as it should "naturally" conform to what is 
stated.

ok on the first part.
But somewhere, it needs to state that ProcessBuilder-created Processes 
override it to return true/false; otherwise it would be assumed they also throw UOE.



Perhaps you can state something in the class doc of Process and or 
ProcessBuilder about the behaviour of such Process instances? It would seem to 
flow from the general statement you added to Process about overriding.

It may be more a question about consistency of where things are specified.
The current spec for Process includes spec for ProcessBuilder created 
instances.
Most developers will read Process (but not ProcessBuilder) to find the 
details.

Only ProcessBuilder.start() says anything specific about the created Process.

I'll see what makes sense.

Thanks, Roger



Paul.



Thanks, Roger



On 4/20/2015 12:33 PM, Paul Sandoz wrote:

On Apr 20, 2015, at 5:49 PM, Roger Riggs  wrote:


Hi Paul,

On 4/20/2015 9:01 AM, Paul Sandoz wrote:

Hi Roger,

I am not sure you have the @implSpec/@implNote quite correct on the new methods 
of Process.

For example, for Process.toHandle i would expect something like:

   ...

   @implSpec
   This implementation throws an instance of UnsupportedOperationException and
   performs no other action.  Sub-types should override this method, ensuring 
that
   calling methods (getPid etc.) of this class, that are not overridden, 
operate on the
   returned ProcessHandle.

The @implSpec should refer to the implementation in Process itself, and the 
@implNote cannot be used for any normative statements.

Thanks for the reminder and suggested text.  I updated the @implSpec clauses.


I spotted another one here:

  281 /**
  282  * Returns {@code true} if the implementation of {@link #destroy} is 
to
  283  * normally terminate the process,
  284  * Returns {@code false} if the implementation of {@code destroy}
  285  * forcibly and immediately terminates the process.
  286  *
  287  * @implSpec
  288  * Processes returned from ProcessBuilder support this operation to
  289  * return true or false depending on the platform implementation.
  290  *
  291  * @return {@code true} if the implementation of {@link #destroy} is 
to
  292  * normally terminate the process;
  293  * otherwise, {@link #destroy} forcibly terminates the process
  294  * @throws UnsupportedOperationException if the Process implementation
  295  * does not support this operation
  296  * @since 1.9
  297  */
  298 public boolean supportsNormalTermination() {
  299 throw new UnsupportedOperationException("supportsNormalTermination not 
supported for " + this.getClass().toString());
  300 }


The docs of Process.onExit might also require some tweaks. I dunno how much 
wiggle room there is with the current implementation, perhaps very little?


  415 /**
  416  * Returns a ProcessHandle for the Process.
  417  *
  418  * @implSpec
  419  * This implementation throws an instance of 
UnsupportedOperationException
  420  * and performs no other action.
...
  446  * @implSpec
  447  * The implementation of this method returns information about the 
process as:
  448  * {@link #toHandle toHandle().info()}.

Best to be consistent with either "This implementation ..." or "The implementation 
of this method ..." rather than a mix. I prefer the former as it is more concise. Up to you.



Some @implNotes describe the JDK implementation and some developers will
rely on the implementation specifics and to some degree define the expected 
behaviors.

One way of thinking about this is that the developers writing the TCK tests 
will pay close attention to @implSpec and need not do so for @implNote.



The document for Process.getPid (and similarly those methods depending on 
toHandle) could then be:

   ...
   @implSpec
   This implementation returns a process id as follows:

 toHandle().getPid();


In this respect is there a need to say anything about the behaviour of a 
Process created by ProcessBuilder?

Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-20 Thread Paul Sandoz

On Apr 20, 2015, at 7:33 PM, Roger Riggs  wrote:

> Hi Paul,
> 
> There are statements in Process about the specified behavior of Processes
> created by ProcessBuilder.  That's why I included them in the @implSpec 
> clause.
> If @implSpec is only for the specifics of the method itself then where
> should the behavior of ProcessBuilder created instances be specified?
> 

I think the @implNote you have on Process.onExit about Processes from 
ProcessBuilder being more efficient is fine, but that @implNote also appears to 
mix stuff that is @implSpec for the method itself.

For Process.supportsNormalTermination i would presume the @implSpec should be 
similar to that of toHandle. I don't think there needs to be anything said 
about Process from ProcessBuilder on this method as it should "naturally" 
conform to what is stated.

Perhaps you can state something in the class doc of Process and or 
ProcessBuilder about the behaviour of such Process instances? It would seem to 
flow from the general statement you added to Process about overriding.

Paul.


> Thanks, Roger
> 
> 
> 
> On 4/20/2015 12:33 PM, Paul Sandoz wrote:
>> On Apr 20, 2015, at 5:49 PM, Roger Riggs  wrote:
>> 
>>> Hi Paul,
>>> 
>>> On 4/20/2015 9:01 AM, Paul Sandoz wrote:
 Hi Roger,
 
 I am not sure you have the @implSpec/@implNote quite correct on the new 
 methods of Process.
 
 For example, for Process.toHandle i would expect something like:
 
   ...
 
   @implSpec
   This implementation throws an instance of UnsupportedOperationException 
 and
   performs no other action.  Sub-types should override this method, 
 ensuring that
   calling methods (getPid etc.) of this class, that are not overridden, 
 operate on the
   returned ProcessHandle.
 
 The @implSpec should refer to the implementation in Process itself, and 
 the @implNote cannot be used for any normative statements.
>>> Thanks for the reminder and suggested text.  I updated the @implSpec 
>>> clauses.
>>> 
>> I spotted another one here:
>> 
>>  281 /**
>>  282  * Returns {@code true} if the implementation of {@link #destroy} 
>> is to
>>  283  * normally terminate the process,
>>  284  * Returns {@code false} if the implementation of {@code destroy}
>>  285  * forcibly and immediately terminates the process.
>>  286  *
>>  287  * @implSpec
>>  288  * Processes returned from ProcessBuilder support this operation to
>>  289  * return true or false depending on the platform implementation.
>>  290  *
>>  291  * @return {@code true} if the implementation of {@link #destroy} 
>> is to
>>  292  * normally terminate the process;
>>  293  * otherwise, {@link #destroy} forcibly terminates the 
>> process
>>  294  * @throws UnsupportedOperationException if the Process 
>> implementation
>>  295  * does not support this operation
>>  296  * @since 1.9
>>  297  */
>>  298 public boolean supportsNormalTermination() {
>>  299 throw new 
>> UnsupportedOperationException("supportsNormalTermination not supported for " 
>> + this.getClass().toString());
>>  300 }
>> 
>> 
>> The docs of Process.onExit might also require some tweaks. I dunno how much 
>> wiggle room there is with the current implementation, perhaps very little?
>> 
>> 
>>  415 /**
>>  416  * Returns a ProcessHandle for the Process.
>>  417  *
>>  418  * @implSpec
>>  419  * This implementation throws an instance of 
>> UnsupportedOperationException
>>  420  * and performs no other action.
>> ...
>>  446  * @implSpec
>>  447  * The implementation of this method returns information about the 
>> process as:
>>  448  * {@link #toHandle toHandle().info()}.
>> 
>> Best to be consistent with either "This implementation ..." or "The 
>> implementation of this method ..." rather than a mix. I prefer the former as 
>> it is more concise. Up to you.
>> 
>> 
>>> Some @implNotes describe the JDK implementation and some developers will
>>> rely on the implementation specifics and to some degree define the expected 
>>> behaviors.
>> One way of thinking about this is that the developers writing the TCK tests 
>> will pay close attention to @implSpec and need not do so for @implNote.
>> 
>> 
 
 The document for Process.getPid (and similarly those methods depending on 
 toHandle) could then be:
 
   ...
   @implSpec
   This implementation returns a process id as follows:
 
 toHandle().getPid();
 
 
 In this respect is there a need to say anything about the behaviour of a 
 Process created by ProcessBuilder?
>>> The ProcessBuilder produced subclasses of Process implement the spec
>>> so no additional description is needed.
 It might be useful to have some general guidance for sub-types on the 
 class doc of Process e.g. saying they only need to override toHandle but may 
 override some or all dependent methods as appropriate.

Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-20 Thread Roger Riggs

Hi Paul,

There are statements in Process about the specified behavior of Processes
created by ProcessBuilder.  That's why I included them in the @implSpec 
clause.

If @implSpec is only for the specifics of the method itself then where
should the behavior of ProcessBuilder created instances be specified?

Thanks, Roger



On 4/20/2015 12:33 PM, Paul Sandoz wrote:

On Apr 20, 2015, at 5:49 PM, Roger Riggs  wrote:


Hi Paul,

On 4/20/2015 9:01 AM, Paul Sandoz wrote:

Hi Roger,

I am not sure you have the @implSpec/@implNote quite correct on the new methods 
of Process.

For example, for Process.toHandle i would expect something like:

   ...

   @implSpec
   This implementation throws an instance of UnsupportedOperationException and
   performs no other action.  Sub-types should override this method, ensuring 
that
   calling methods (getPid etc.) of this class, that are not overridden, 
operate on the
   returned ProcessHandle.

The @implSpec should refer to the implementation in Process itself, and the 
@implNote cannot be used for any normative statements.

Thanks for the reminder and suggested text.  I updated the @implSpec clauses.


I spotted another one here:

  281 /**
  282  * Returns {@code true} if the implementation of {@link #destroy} is 
to
  283  * normally terminate the process,
  284  * Returns {@code false} if the implementation of {@code destroy}
  285  * forcibly and immediately terminates the process.
  286  *
  287  * @implSpec
  288  * Processes returned from ProcessBuilder support this operation to
  289  * return true or false depending on the platform implementation.
  290  *
  291  * @return {@code true} if the implementation of {@link #destroy} is 
to
  292  * normally terminate the process;
  293  * otherwise, {@link #destroy} forcibly terminates the process
  294  * @throws UnsupportedOperationException if the Process implementation
  295  * does not support this operation
  296  * @since 1.9
  297  */
  298 public boolean supportsNormalTermination() {
  299 throw new UnsupportedOperationException("supportsNormalTermination not 
supported for " + this.getClass().toString());
  300 }


The docs of Process.onExit might also require some tweaks. I dunno how much 
wiggle room there is with the current implementation, perhaps very little?


  415 /**
  416  * Returns a ProcessHandle for the Process.
  417  *
  418  * @implSpec
  419  * This implementation throws an instance of 
UnsupportedOperationException
  420  * and performs no other action.
...
  446  * @implSpec
  447  * The implementation of this method returns information about the 
process as:
  448  * {@link #toHandle toHandle().info()}.

Best to be consistent with either "This implementation ..." or "The implementation 
of this method ..." rather than a mix. I prefer the former as it is more concise. Up to you.



Some @implNotes describe the JDK implementation and some developers will
rely on the implementation specifics and to some degree define the expected 
behaviors.

One way of thinking about this is that the developers writing the TCK tests 
will pay close attention to @implSpec and need not do so for @implNote.




The document for Process.getPid (and similarly those methods depending on 
toHandle) could then be:

   ...
   @implSpec
   This implementation returns a process id as follows:

 toHandle().getPid();


In this respect is there a need to say anything about the behaviour of a 
Process created by ProcessBuilder?

The ProcessBuilder produced subclasses of Process implement the spec
so no additional description is needed.

It might be useful to have some general guidance for sub-types on the class doc 
of Process e.g. saying they only need to override toHandle but may override 
some or all dependent methods as appropriate.

It does not add much but I added a paragraph to the Process class javadoc.
There are not many subclasses of Process outside the JDK.


Ok.

Paul.


The webrev[1] and javadoc[2] have been updated in place.

Thanks, Roger

[1] http://cr.openjdk.java.net/~rriggs/webrev-ph
[2] http://cr.openjdk.java.net/~rriggs/ph-apidraft/




Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-20 Thread Roger Riggs

ok, thanks, roger


On 4/20/2015 11:57 AM, Thomas Stüfe wrote:

Hi Roger,

thanks!

Maybe better:

"When using ProcessHandles avoid assumptions about the state or 
liveness of the underlying process."
-> "When using ProcessHandles avoid assumptions about liveness or 
identity of the underlying process."


Regards, Thomas


On Mon, Apr 20, 2015 at 5:53 PM, Roger Riggs wrote:


Hi Thomas,

I expanded the ProcessHandle class javadoc [1] paragraph to
include the
caution about process id reuse.

Thanks, Roger

[1]
http://cr.openjdk.java.net/~rriggs/ph-apidraft/java/lang/ProcessHandle.html





On 4/18/2015 2:44 PM, Thomas Stüfe wrote:

Hi Roger,

On Fri, Apr 17, 2015 at 8:57 PM, Roger Riggs wrote:

Hi David,

On 4/17/2015 2:44 PM, David M. Lloyd wrote:

On 04/17/2015 11:53 AM, Roger Riggs wrote:

Hi Peter,

On 4/17/2015 4:05 AM, Peter Levart wrote:

Hi Roger,

Retrieving and caching the process start time as
soon as ProcessHandle
is instantiated might be a good idea. "destroy"
native code would then
use pid *and* start time to check the identity of
the process before
killing it.

Yes, though it does raise a question about how to
specify PH.equals().
Either the start time is explicitly
mentioned (and is repeatable and testable) or it is
vague and refers to
some implementation
specific value that uniquely identifies the process;
and is less testable.


Even with timestamps though, you cannot prevent the PID
from being reused in the time window after you verify its
timestamp but before you kill it unless it is a direct
child process (on UNIX anyway; who knows what Windows
does...).  I think that this inevitable race should be
mentioned in the docs regardless of whether the timestamp
thing is implemented (or doc'd) to prevent unrealistic
expectations (the api draft link seems to be dead so I
didn't see if it already includes this kind of language).

I can add a note about race conditions to the existing class
level
javadoc about process information changing asynchronously.

"Race conditions can exist between checking the status of a
process and acting upon it."

But I'm still struck by the oddity of Java needing to provide
a better interface
to processes than the native OS does.  In the native OS, a
pid is a pid and
the only handle given to applications.  So both the app and
the os need to
be conservative about pid re-use.

The problem is the audience. Every UNIX system programmer knows
about the dangers of pid recycling. Java programmers don't, and
your API promises a robustness and simplicity which it
unfortunately cannot deliver. No one will expect a ProcessHandle
returned by allChildren suddenly to refer to a totally different
process. Therefore I also think the dangers should be clearly
mentioned in the java doc.

Regards, Thomas

The link[1] was broken while I replaced it with an update
reflecting the recent comments.

Thanks, Roger

[1] http://cr.openjdk.java.net/~rriggs/phdoc/




At least on Linux (/proc/<pid>/stat) and Solaris
(/proc/<pid>/status)
it is not necessary to open those special files
and read them. Just
doing stat() on them and using the st_mtime will
get you the process
start time. I see AIX shares native code with
Linux (unix), so perhaps
AIX does the same. Mac OSX and Windows have
special calls...

That is less expensive.  But opening /proc/<pid>/stat
is necessary for
allChildren to get
and filter on the parent and can provide the
starttime as well.
stat would be useful for allProcesses which does not
need the parent.


In case OS does not allow retrieving the start
time (not owner or
similar), it should be cached as some "undefined"
value and treated
the same when destroying. If while obtaining the
ProcessHandle *and*
  

Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-20 Thread Paul Sandoz

On Apr 20, 2015, at 5:49 PM, Roger Riggs  wrote:

> Hi Paul,
> 
> On 4/20/2015 9:01 AM, Paul Sandoz wrote:
>> Hi Roger,
>> 
>> I am not sure you have the @implSpec/@implNote quite correct on the new 
>> methods of Process.
>> 
>> For example, for Process.toHandle i would expect something like:
>> 
>>   ...
>> 
>>   @implSpec
>>   This implementation throws an instance of UnsupportedOperationException and
>>   performs no other action.  Sub-types should override this method, ensuring 
>> that
>>   calling methods (getPid etc.) of this class, that are not overridden, 
>> operate on the
>>   returned ProcessHandle.
>> 
>> The @implSpec should refer to the implementation in Process itself, and the 
>> @implNote cannot be used for any normative statements.
> Thanks for the reminder and suggested text.  I updated the @implSpec clauses.
> 

I spotted another one here:

 281 /**
 282  * Returns {@code true} if the implementation of {@link #destroy} is to
 283  * normally terminate the process,
 284  * Returns {@code false} if the implementation of {@code destroy}
 285  * forcibly and immediately terminates the process.
 286  *
 287  * @implSpec
 288  * Processes returned from ProcessBuilder support this operation to
 289  * return true or false depending on the platform implementation.
 290  *
 291  * @return {@code true} if the implementation of {@link #destroy} is to
 292  * normally terminate the process;
 293  * otherwise, {@link #destroy} forcibly terminates the process
 294  * @throws UnsupportedOperationException if the Process implementation
 295  * does not support this operation
 296  * @since 1.9
 297  */
 298 public boolean supportsNormalTermination() {
 299 throw new UnsupportedOperationException("supportsNormalTermination 
not supported for " + this.getClass().toString());
 300 }


The docs of Process.onExit might also require some tweaks. I dunno how much 
wiggle room there is with the current implementation, perhaps very little?


 415 /**
 416  * Returns a ProcessHandle for the Process.
 417  * 
 418  * @implSpec
 419  * This implementation throws an instance of 
UnsupportedOperationException
 420  * and performs no other action.
...
 446  * @implSpec
 447  * The implementation of this method returns information about the 
process as:
 448  * {@link #toHandle toHandle().info()}.

Best to be consistent with either "This implementation ..." or "The 
implementation of this method ..." rather than a mix. I prefer the former as it 
is more concise. Up to you.


> Some @implNotes describe the JDK implementation and some developers will
> rely on the implementation specifics and to some degree define the expected 
> behaviors.

One way of thinking about this is that the developers writing the TCK tests 
will pay close attention to @implSpec and need not do so for @implNote.


>> 
>> 
>> The document for Process.getPid (and similarly those methods depending on 
>> toHandle) could then be:
>> 
>>   ...
>>   @implSpec
>>   This implementation returns a process id as follows:
>> 
>> toHandle().getPid();
>> 
>> 
>> In this respect is there a need to say anything about the behaviour of a 
>> Process created by ProcessBuilder?
> The ProcessBuilder produced subclasses of Process implement the spec
> so no additional description is needed.
>> 
>> It might be useful to have some general guidance for sub-types on the class 
>> doc of Process e.g. saying they only need to override toHandle but may 
>> override some or all dependent methods as appropriate.
> It does not add much but I added a paragraph to the Process class javadoc.
> There are not many subclasses of Process outside the JDK.
> 

Ok.

Paul.

> The webrev[1] and javadoc[2] have been updated in place.
> 
> Thanks, Roger
> 
> [1] http://cr.openjdk.java.net/~rriggs/webrev-ph
> [2] http://cr.openjdk.java.net/~rriggs/ph-apidraft/



Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-20 Thread Roger Riggs

Hi Thomas,

I expanded the ProcessHandle class javadoc [1] paragraph to include the
caution about process id reuse.

Thanks, Roger

[1] 
http://cr.openjdk.java.net/~rriggs/ph-apidraft/java/lang/ProcessHandle.html


On 4/18/2015 2:44 PM, Thomas Stüfe wrote:

Hi Roger,

On Fri, Apr 17, 2015 at 8:57 PM, Roger Riggs wrote:


Hi David,

On 4/17/2015 2:44 PM, David M. Lloyd wrote:

On 04/17/2015 11:53 AM, Roger Riggs wrote:

Hi Peter,

On 4/17/2015 4:05 AM, Peter Levart wrote:

Hi Roger,

Retrieving and caching the process start time as soon
as ProcessHandle
is instantiated might be a good idea. "destroy" native
code would then
use pid *and* start time to check the identity of the
process before
killing it.

Yes, though it does raise a question about how to specify
PH.equals().
Either the start time is explicitly
mentioned (and is repeatable and testable) or it is vague
and refers to
some implementation
specific value that uniquely identifies the process; and
is less testable.


Even with timestamps though, you cannot prevent the PID from
being reused in the time window after you verify its timestamp
but before you kill it unless it is a direct child process (on
UNIX anyway; who knows what Windows does...).  I think that
this inevitable race should be mentioned in the docs
regardless of whether the timestamp thing is implemented (or
doc'd) to prevent unrealistic expectations (the api draft link
seems to be dead so I didn't see if it already includes this
kind of language).

I can add a note about race conditions to the existing class level
javadoc about process information changing asynchronously.

"Race conditions can exist between checking the status of a
process and acting upon it."

But I'm still struck by the oddity of Java needing to provide a
better interface
to processes than the native OS does.  In the native OS, a pid is
a pid and
the only handle given to applications.  So both the app and the os
need to
be conservative about pid re-use.

The problem is the audience. Every UNIX system programmer knows about 
the dangers of pid recycling. Java programmers don't, and your API 
promises a robustness and simplicity which it unfortunately cannot 
deliver. No one will expect a ProcessHandle returned by allChildren 
suddenly to refer to a totally different process. Therefore I also 
think the dangers should be clearly mentioned in the java doc.


Regards, Thomas

The link[1] was broken while I replaced it with an update
reflecting the recent comments.

Thanks, Roger

[1] http://cr.openjdk.java.net/~rriggs/phdoc/




At least on Linux (/proc/<pid>/stat) and Solaris
(/proc/<pid>/status)
it is not necessary to open those special files and
read them. Just
doing stat() on them and using the st_mtime will get
you the process
start time. I see AIX shares native code with Linux
(unix), so perhaps
AIX does the same. Mac OSX and Windows have special
calls...

That is less expensive.  But opening /proc/<pid>/stat is
necessary for
allChildren to get
and filter on the parent and can provide the starttime as
well.
stat would be useful for allProcesses which does not need
the parent.


In case OS does not allow retrieving the start time
(not owner or
similar), it should be cached as some "undefined"
value and treated
the same when destroying. If while obtaining the
ProcessHandle *and*
while destroying the process, the start time of the
process with a
particular pid is "undefined" then there are two options:

1 - don't kill the process
2 - kill the process

They might actually be no different as the reason of
not being able to
retrieve the start time (not owner) might prevent the
process from
being killed too, so option 2 could be used to allow
killing on any
hypothetical platforms that don't support retrieving
start time and it
is no worse than current implementation anyway.

Destroy is a best-effort method;  it is not guaranteed to
result in
termination, though
the docs are a bit weak on this point.

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-20 Thread Thomas Stüfe
Hi Roger,

thanks!

Maybe better:

"When using ProcessHandles avoid assumptions about the state or liveness of
the underlying process."
-> "When using ProcessHandles avoid assumptions about liveness or identity
of the underlying process."

Regards, Thomas


On Mon, Apr 20, 2015 at 5:53 PM, Roger Riggs  wrote:

>  Hi Thomas,
>
> I expanded the ProcessHandle class javadoc [1] paragraph to include the
> caution about process id reuse.
>
> Thanks, Roger
>
> [1]
> http://cr.openjdk.java.net/~rriggs/ph-apidraft/java/lang/ProcessHandle.html
>
>
> On 4/18/2015 2:44 PM, Thomas Stüfe wrote:
>
> Hi Roger,
>
> On Fri, Apr 17, 2015 at 8:57 PM, Roger Riggs wrote:
>
>> Hi David,
>>
>> On 4/17/2015 2:44 PM, David M. Lloyd wrote:
>>
>>> On 04/17/2015 11:53 AM, Roger Riggs wrote:
>>>
 Hi Peter,

 On 4/17/2015 4:05 AM, Peter Levart wrote:

> Hi Roger,
>
> Retrieving and caching the process start time as soon as ProcessHandle
> is instantiated might be a good idea. "destroy" native code would then
> use pid *and* start time to check the identity of the process before
> killing it.
>
 Yes, though it does raise a question about how to specify PH.equals().
 Either the start time is explicitly
 mentioned (and is repeatable and testable) or it is vague and refers to
 some implementation
 specific value that uniquely identifies the process; and is less
 testable.

>>>
>>> Even with timestamps though, you cannot prevent the PID from being
>>> reused in the time window after you verify its timestamp but before you
>>> kill it unless it is a direct child process (on UNIX anyway; who knows what
>>> Windows does...).  I think that this inevitable race should be mentioned in
>>> the docs regardless of whether the timestamp thing is implemented (or
>>> doc'd) to prevent unrealistic expectations (the api draft link seems to be
>>> dead so I didn't see if it already includes this kind of language).
>>>
>>  I can add a note about race conditions to the existing class level
>> javadoc about process information changing asynchronously.
>>
>> "Race conditions can exist between checking the status of a process and
>> acting upon it."
>>
>> But I'm still struck by the oddity of Java needing to provide a better
>> interface
>> to processes than the native OS does.  In the native OS, a pid is a pid
>> and
>> the only handle given to applications.  So both the app and the os need to
>> be conservative about pid re-use.
>>
>>
> The problem is the audience. Every UNIX system programmer knows about the
> dangers of pid recycling. Java programmers don't, and your API promises a
> robustness and simplicity which it unfortunately cannot deliver. No one
> will expect a ProcessHandle returned by allChildren suddenly to refer to a
> totally different process. Therefore I also think the dangers should be
> clearly mentioned in the java doc.
>
>  Regards, Thomas
>
>
>
>> The link[1] was broken while I replaced it with an update reflecting the
>> recent comments.
>>
>> Thanks, Roger
>>
>> [1] http://cr.openjdk.java.net/~rriggs/phdoc/
>>
>>
>>
>>>  At least on Linux (/proc/<pid>/stat) and Solaris (/proc/<pid>/status)
> it is not necessary to open those special files and read them. Just
> doing stat() on them and using the st_mtime will get you the process
> start time. I see AIX shares native code with Linux (unix), so perhaps
> AIX does the same. Mac OSX and Windows have special calls...
>
 That is less expensive.  But opening /proc/<pid>/stat is necessary for
 allChildren to get
 and filter on the parent and can provide the starttime as well.
 stat would be useful for allProcesses which does not need the parent.

>
> In case OS does not allow retrieving the start time (not owner or
> similar), it should be cached as some "undefined" value and treated
> the same when destroying. If while obtaining the ProcessHandle *and*
> while destroying the process, the start time of the process with a
> particular pid is "undefined" then there are two options:
>
> 1 - don't kill the process
> 2 - kill the process
>
> They might actually be no different as the reason of not being able to
> retrieve the start time (not owner) might prevent the process from
> being killed too, so option 2 could be used to allow killing on any
> hypothetical platforms that don't support retrieving start time and it
> is no worse than current implementation anyway.
>
 Destroy is a best-effort method;  it is not guaranteed to result in
 termination, though
 the docs are a bit weak on this point.

 I'd go for option 2, anyone using the API is looking for results on
 processes they
 already know something about (mostly), so there's no reason to be
 particularly
 conservative about.  If the caller is trying to be more conservative,
 they can maintain
 their own extra information about the processes they are managing.

Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-20 Thread Roger Riggs

Hi Paul,

On 4/20/2015 9:01 AM, Paul Sandoz wrote:

Hi Roger,

I am not sure you have the @implSpec/@implNote quite correct on the new methods 
of Process.

For example, for Process.toHandle i would expect something like:

   ...

   @implSpec
   This implementation throws an instance of UnsupportedOperationException and
   performs no other action.  Sub-types should override this method, ensuring 
that
   calling methods (getPid etc.) of this class, that are not overridden, 
operate on the
   returned ProcessHandle.

The @implSpec should refer to the implementation in Process itself, and the 
@implNote cannot be used for any normative statements.
Thanks for the reminder and suggested text.  I updated the @implSpec 
clauses.


Some @implNotes describe the JDK implementation; some developers will
rely on those implementation specifics, which to some degree defines the
expected behaviors.



The document for Process.getPid (and similarly those methods depending on 
toHandle) could then be:

   ...
   @implSpec
   This implementation returns a process id as follows:

 toHandle().getPid();


In this respect is there a need to say anything about the behaviour of a 
Process created by ProcessBuilder?

The ProcessBuilder produced subclasses of Process implement the spec
so no additional description is needed.


It might be useful to have some general guidance for sub-types on the class doc 
of Process e.g. saying they only need to override toHandle but may override 
some or all dependent methods as appropriate.

It does not add much but I added a paragraph to the Process class javadoc.
There are not many subclasses of Process outside the JDK.
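
For such an external subclass, a minimal sketch of that guidance (not taken
from the webrev; the wrapper class and its name are hypothetical) is to
forward toHandle() to the underlying Process, so that the non-abstract
methods specified in terms of the handle (getPid etc. in this draft)
operate on the right process:

    public class DelegatingProcess extends Process {
        private final Process delegate;   // e.g. a Process created by ProcessBuilder

        public DelegatingProcess(Process delegate) { this.delegate = delegate; }

        // Forwarding toHandle() is enough for the handle-based methods of this draft.
        @Override public ProcessHandle toHandle() { return delegate.toHandle(); }

        // The pre-existing abstract methods still need to be forwarded as before.
        @Override public java.io.OutputStream getOutputStream() { return delegate.getOutputStream(); }
        @Override public java.io.InputStream getInputStream()  { return delegate.getInputStream(); }
        @Override public java.io.InputStream getErrorStream()  { return delegate.getErrorStream(); }
        @Override public int waitFor() throws InterruptedException { return delegate.waitFor(); }
        @Override public int exitValue() { return delegate.exitValue(); }
        @Override public void destroy()  { delegate.destroy(); }
    }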

The webrev[1] and javadoc[2] have been updated in place.

Thanks, Roger

[1] http://cr.openjdk.java.net/~rriggs/webrev-ph
[2] http://cr.openjdk.java.net/~rriggs/ph-apidraft/



Paul.

On Apr 17, 2015, at 9:12 PM, Roger Riggs  wrote:


The webrev for ProcessAPI updates has been updated to reflect recent comments.
Please  review and comment by April 23rd.

The updates include:
- Renaming Process/ProcessHandle supportsDestroyForcibly to 
supportsNormalTermination
   and updating related descriptions
- ProcessHandle.destroy/destroyForcibly now have more consistent descriptions
- ProcessHandle.destroy now returns ProcessHandle to enable fluent use.
- Corrected description of default implementation ProcessHandle.onExit

The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/

The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph

Issue: JDK-8077350  Process 
API Updates Implementation

The code is in the jdk9 sandbox on branch JDK-8046092-branch.

Two issues raised have been filed to be fixed after the current commit:
- JDK-8078099  (process) 
ProcessHandle should uniquely identify processes
- JDK-8078108  (process) 
ProcessHandle.isAlive should be robust




Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-20 Thread Paul Sandoz
Hi Roger,

I am not sure you have the @implSpec/@implNote quite correct on the new methods 
of Process.

For example, for Process.toHandle i would expect something like:

  ...

  @implSpec
  This implementation throws an instance of UnsupportedOperationException and 
  performs no other action.  Sub-types should override this method, ensuring 
that 
  calling methods (getPid etc.) of this class, that are not overridden, operate 
on the 
  returned ProcessHandle.

The @implSpec should refer to the implementation in Process itself, and the 
@implNote cannot be used for any normative statements.


The document for Process.getPid (and similarly those methods depending on 
toHandle) could then be:

  ...
  @implSpec
  This implementation returns a process id as follows:

toHandle().getPid();


In this respect is there a need to say anything about the behaviour of a 
Process created by ProcessBuilder?

It might be useful to have some general guidance for sub-types on the class doc 
of Process e.g. saying they only need to override toHandle but may override 
some or all dependent methods as appropriate.

Paul.

On Apr 17, 2015, at 9:12 PM, Roger Riggs  wrote:

> The webrev for ProcessAPI updates has been updated to reflect recent comments.
> Please  review and comment by April 23rd.
> 
> The updates include:
> - Renaming Process/ProcessHandle supportsDestroyForcibly to 
> supportsNormalTermination
>   and updating related descriptions
> - ProcessHandle.destroy/destroyForcibly now have more consistent descriptions
> - ProcessHandle.destroy now returns ProcessHandle to enable fluent use.
> - Corrected description of default implementation ProcessHandle.onExit
> 
> The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/
> 
> The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph
> 
> Issue: JDK-8077350  Process 
> API Updates Implementation
> 
> The code is in the jdk9 sandbox on branch JDK-8046092-branch.
> 
> Two issues raised have been filed to be fixed after the current commit:
> - JDK-8078099  (process) 
> ProcessHandle should uniquely identify processes
> - JDK-8078108  (process) 
> ProcessHandle.isAlive should be robust



Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-18 Thread Thomas Stüfe
Hi Roger,

On Fri, Apr 17, 2015 at 8:57 PM, Roger Riggs  wrote:

> Hi David,
>
> On 4/17/2015 2:44 PM, David M. Lloyd wrote:
>
>> On 04/17/2015 11:53 AM, Roger Riggs wrote:
>>
>>> Hi Peter,
>>>
>>> On 4/17/2015 4:05 AM, Peter Levart wrote:
>>>
 Hi Roger,

 Retrieving and caching the process start time as soon as ProcessHandle
 is instantiated might be a good idea. "destroy" native code would then
 use pid *and* start time to check the identity of the process before
 killing it.

>>> Yes, though it does raise a question about how to specify PH.equals().
>>> Either the start time is explicitly
>>> mentioned (and is repeatable and testable) or it is vague and refers to
>>> some implementation
>>> specific value that uniquely identifies the process; and is less
>>> testable.
>>>
>>
>> Even with timestamps though, you cannot prevent the PID from being reused
>> in the time window after you verify its timestamp but before you kill it
>> unless it is a direct child process (on UNIX anyway; who knows what Windows
>> does...).  I think that this inevitable race should be mentioned in the
>> docs regardless of whether the timestamp thing is implemented (or doc'd) to
>> prevent unrealistic expectations (the api draft link seems to be dead so I
>> didn't see if it already includes this kind of language).
>>
> I can add a note about race conditions to the existing class level
> javadoc about process information changing asynchronously.
>
> "Race conditions can exist between checking the status of a process and
> acting upon it."
>
> But I'm still struck by the oddity of Java needing to provide a better
> interface
> to processes than the native OS does.  In the native OS, a pid is a pid and
> the only handle given to applications.  So both the app and the os need to
> be conservative about pid re-use.
>
>
The problem is the audience. Every UNIX system programmer knows about the
dangers of pid recycling. Java programmers don't, and your API promises a
robustness and simplicity which it unfortunately cannot deliver. No one
will expect a ProcessHandle returned by allChildren suddenly to refer to a
totally different process. Therefore I also think the dangers should be
clearly mentioned in the java doc.

Regards, Thomas



> The link[1] was broken while I replaced it with an update reflecting the
> recent comments.
>
> Thanks, Roger
>
> [1] http://cr.openjdk.java.net/~rriggs/phdoc/
>
>
>
>>  At least on Linux (/proc//stat) and Solaris (/proc//status)
 it is not necessary to open those special files and read them. Just
 doing stat() on them and using the st_mtime will get you the process
 start time. I see AIX shares native code with Linux (unix), so perhaps
 AIX does the same. Mac OSX and Windows have special calls...

>>> That is less expensive.  But opening /proc//stat is necessary for
>>> allChildren to get
>>> and filter on the parent and can provide the starttime as well.
>>> stat would be useful for allProcesses which does not need the parent.
>>>

 In case OS does not allow retrieving the start time (not owner or
 similar), it should be cached as some "undefined" value and treated
 the same when destroying. If while obtaining the ProcessHandle *and*
 while destroying the process, the start time of the process with a
 particular pid is "undefined" then there are two options:

 1 - don't kill the process
 2 - kill the process

 They might actually be no different as the reason of not being able to
 retrieve the start time (not owner) might prevent the process from
 being killed too, so option 2 could be used to allow killing on any
 hypothetical platforms that don't support retrieving start time and it
 is no worse than current implementation anyway.

>>> Destroy is a best-effort method;  it is not guaranteed to result in
>>> termination, though
>>> the docs are a bit weak on this point.
>>>
>>> I'd go for option 2, anyone using the API is looking for results on
>>> processes they
>>> already know something about (mostly), so there's no reason to be
>>> particularly
>>> conservative about.  If the caller is trying to be more conservative,
>>> they can maintain
>>> their own extra information about the processes they are managing.
>>>
>>>
>>>
 What do you think?

>>> I'd like to pick this up as a separate change, and get the current one
>>> in first.
>>>
>>> Roger
>>>
>>>
>>>
>>
>


Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-18 Thread Thomas Stüfe
Hi Roger,

On Fri, Apr 17, 2015 at 7:05 PM, Roger Riggs  wrote:

>  Hi Thomas,
>
> On 4/16/2015 3:01 PM, Thomas Stüfe wrote:
>
> Hi Roger,
>
>  thank you for your answer!
>
>  The reason I take an interest is not just theoretical. We (SAP) use our
> JVM for our test infrastructure and we had exactly the problem
> allChildren() is designed to solve: killing a process tree related to a
> specific tests (similar to jtreg tests) in case of errors or hangs. We have
> test machines running large workloads of tests in parallel and we reach pid
> wraparound - depending on the OS - quite fast.
>
>  We solved this by adding process groups to Process.java and we are very
> happy with this solution. We are able to quickly kill a whole process tree,
> cleanly and completely, without ambiguity or risk to other tests. Of course
> we had to add this support as a "sideways hack" in order to not change the
> official Process.java interface. Therefore I was hoping that with JEP 102,
> we would get official support for process groups. Unfortunately, seems the
> decision is already done and we are too late in the discussion :(
>
> It would be interesting to see a description of what you added to/around
> the API.
>

Very simple really, all we did was to add a flag to Runtime.exec -
ultimately exposed via ProcessBuilder - to make the child process leader of
a new process group. This flag just triggered a setpgid() call between
fork() and exec() in the child process. This caused creation of a new
process group with child process as leader. Now you could kill the whole
tree with kill(-pid). On Windows we implemented it with Jobs.

It was all simple because we did never aim to bring process groups with all
their features to the JDK, we just needed a way to kill a tree of child
processes, which is a rather specific problem.

The reason to avoid them was one of simplicity and non-interference with
> processes
> spawned by native libraries.
>

See, that I don't understand, you still interfere with them by returning
all child pids - be they spawned by java or by native libs. Or do you mean
you offload responsibility to the caller - so he should decide whether to
kill the child pids indiscriminately or be more careful?


>   If that complexity can be understood process groups/jobs
> could fulfill a need in a scalable system.
>
>
I think process groups could be added to the API if they are well
documented (which admittedly will be difficult in a platform-neutral way).
Basically, process groups are a tool like all others, and the caller must
think before using it like with every other tool.


> At this point, I'd like to deal with it as a separate request for
> enhancement.
>

Sure! Thanks for listening.

Kind Regards, Thomas


>
>
>
>  see my other comments inline.
>
> On Sat, Apr 11, 2015 at 8:55 PM, Roger Riggs 
> wrote:
>
>>  Hi Thomas,
>>
>> Thanks for the comments.
>>
>> On 4/11/2015 8:31 AM, Thomas Stüfe wrote:
>>
>> Hi Roger,
>>
>>  I have a question about getChildren() and getAllChildren().
>>
>> I assume the point of those functions is to implement point 4 of JEP 102
>> ("The ability to deal with process trees, in particular some means to
>> destroy a process tree."), by returning a collection of PIDs which are the
>> children of the process and then killing them?
>>
>>  Earlier versions included a killProcess tree method but it was
>> recommended to leave
>> the exact algorithm to kill processes to the caller.
>>
>>
>>  However, I am not sure that this can be implemented in a safe way, at
>> least on UNIX, because - as Martin already pointed out - of PID recycling.
>> I do not see how you can prevent allChildren() from returning PIDs which
>> may be already reaped and recyled when you use them later. How do you
>> prevent that?
>>
>>  Unless there is an extended time between getting the children and
>> destroying them the pids will still be valid.
>>
>
>  Why? Child process may be getting reaped the instant you are done
> reading it from /proc, and pid may have been recycled by the OS right away
> and already pointing to another process when allChildren() returns. If a
> process lives about as long as it takes the system to reach a pid
> wraparound to the same pid value, its pid could be recycled right after it
> is reaped, or? Sure, the longer you wait, the higher the chance of this to
> happen, but it may happen right away.
>
>  As Martin said, we had those races in the kill() code since a long time,
> but children()/allChildren() could make those errors more probable, because
> now more processes are involved. Especially if you use allChildren to kill
> a deep process tree. And there is nothing in the javadoc warning the user
> about this scenario. You would just happen from time to time to kill an
> unrelated process. Those problems are hard to debug.
>
>  The technique of caching the start time can prevent that case; though it
>> has AFAIK not been a problem.
>>
>
>  How would that work? User should, before issuing the kill, compare start
> time of process to kill with cached start time?

Re: RFR 9: 8077350 Process API Updates Implementation Review (Due 4/23)

2015-04-17 Thread Roger Riggs
The webrev for ProcessAPI updates has been updated to reflect recent 
comments.

Please  review and comment by April 23rd.

The updates include:
 - Renaming Process/ProcessHandle supportsDestroyForcibly to 
supportsNormalTermination

   and updating related descriptions
- ProcessHandle.destroy/destroyForcibly now have more consistent descriptions

- ProcessHandle.destroy now returns ProcessHandle to enable fluent use (see the sketch below).
- Corrected description of default implementation ProcessHandle.onExit
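
A minimal sketch of that fluent use, written against the draft API in this
review (destroy() returning the handle, onExit() completing when the process
exits, getPid() for the id); the class and method names below are only
illustrative and may differ from the final API:

    import java.util.concurrent.CompletableFuture;

    class FluentDestroy {
        // Request termination and report the pid once the process has actually exited.
        static CompletableFuture<ProcessHandle> stop(ProcessHandle handle) {
            return handle.destroy()      // draft behavior: returns the same handle
                         .onExit()       // completes when the process exits
                         .thenApply(ph -> {
                             System.out.println("exited: " + ph.getPid());
                             return ph;
                         });
        }
    }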

The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/

The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph

Issue: JDK-8077350  
Process API Updates Implementation


The code is in the jdk9 sandbox on branch JDK-8046092-branch.

Two issues raised have been filed to be fixed after the current commit:
 - JDK-8078099  
(process) ProcessHandle should uniquely identify processes
 - JDK-8078108  
(process) ProcessHandle.isAlive should be robust


Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-17 Thread Roger Riggs

Hi David,

On 4/17/2015 2:44 PM, David M. Lloyd wrote:

On 04/17/2015 11:53 AM, Roger Riggs wrote:

Hi Peter,

On 4/17/2015 4:05 AM, Peter Levart wrote:

Hi Roger,

Retrieving and caching the process start time as soon as ProcessHandle
is instantiated might be a good idea. "destroy" native code would then
use pid *and* start time to check the identity of the process before
killing it.

Yes, though it does raise a question about how to specify PH.equals().
Either the start time is explicitly
mentioned (and is repeatable and testable) or it is vague and refers to
some implementation
specific value that uniquely identifies the process; and is less 
testable.


Even with timestamps though, you cannot prevent the PID from being 
reused in the time window after you verify its timestamp but before 
you kill it unless it is a direct child process (on UNIX anyway; who 
knows what Windows does...).  I think that this inevitable race should 
be mentioned in the docs regardless of whether the timestamp thing is 
implemented (or doc'd) to prevent unrealistic expectations (the api 
draft link seems to be dead so I didn't see if it already includes 
this kind of language).

I can add a note about race conditions to the existing class level
javadoc about process information changing asynchronously.

"Race conditions can exist between checking the status of a process and 
acting upon it."


But I'm still struck by the oddity of Java needing to provide a better 
interface

to processes than the native OS does.  In the native OS, a pid is a pid and
the only handle given to applications.  So both the app and the os need to
be conservative about pid re-use.

The link[1] was broken while I replaced it with an update reflecting the 
recent comments.


Thanks, Roger

[1] http://cr.openjdk.java.net/~rriggs/phdoc/




At least on Linux (/proc/<pid>/stat) and Solaris (/proc/<pid>/status)
it is not necessary to open those special files and read them. Just
doing stat() on them and using the st_mtime will get you the process
start time. I see AIX shares native code with Linux (unix), so perhaps
AIX does the same. Mac OSX and Windows have special calls...

That is less expensive.  But opening /proc/<pid>/stat is necessary for
allChildren to get
and filter on the parent and can provide the starttime as well.
stat would be useful for allProcesses which does not need the parent.


In case OS does not allow retrieving the start time (not owner or
similar), it should be cached as some "undefined" value and treated
the same when destroying. If while obtaining the ProcessHandle *and*
while destroying the process, the start time of the process with a
particular pid is "undefined" then there are two options:

1 - don't kill the process
2 - kill the process

They might actually be no different as the reason of not being able to
retrieve the start time (not owner) might prevent the process from
being killed too, so option 2 could be used to allow killing on any
hypothetical platforms that don't support retrieving start time and it
is no worse than current implementation anyway.

Destroy is a best-effort method;  it is not guaranteed to result in
termination, though
the docs are a bit weak on this point.

I'd go for option 2, anyone using the API is looking for results on
processes they
already know something about (mostly), so there's no reason to be
particularly
conservative about.  If the caller is trying to be more conservative,
they can maintain
their own extra information about the processes they are managing.




What do you think?

I'd like to pick this up as a separate change, and get the current one
in first.

Roger








Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-17 Thread David M. Lloyd

On 04/17/2015 11:53 AM, Roger Riggs wrote:

Hi Peter,

On 4/17/2015 4:05 AM, Peter Levart wrote:

Hi Roger,

Retrieving and caching the process start time as soon as ProcessHandle
is instantiated might be a good idea. "destroy" native code would then
use pid *and* start time to check the identity of the process before
killing it.

Yes, though it does raise a question about how to specify PH.equals().
Either the start time is explicitly
mentioned (and is repeatable and testable) or it is vague and refers to
some implementation
specific value that uniquely identifies the process; and is less testable.


Even with timestamps though, you cannot prevent the PID from being 
reused in the time window after you verify its timestamp but before you 
kill it unless it is a direct child process (on UNIX anyway; who knows 
what Windows does...).  I think that this inevitable race should be 
mentioned in the docs regardless of whether the timestamp thing is 
implemented (or doc'd) to prevent unrealistic expectations (the api 
draft link seems to be dead so I didn't see if it already includes this 
kind of language).



At least on Linux (/proc/<pid>/stat) and Solaris (/proc/<pid>/status)
it is not necessary to open those special files and read them. Just
doing stat() on them and using the st_mtime will get you the process
start time. I see AIX shares native code with Linux (unix), so perhaps
AIX does the same. Mac OSX and Windows have special calls...

That is less expensive.  But opening /proc/<pid>/stat is necessary for
allChildren to get
and filter on the parent and can provide the starttime as well.
stat would be useful for allProcesses which does not need the parent.


In case OS does not allow retrieving the start time (not owner or
similar), it should be cached as some "undefined" value and treated
the same when destroying. If while obtaining the ProcessHandle *and*
while destroying the process, the start time of the process with a
particular pid is "undefined" then there are two options:

1 - don't kill the process
2 - kill the process

They might actually be no different as the reason of not being able to
retrieve the start time (not owner) might prevent the process from
being killed too, so option 2 could be used to allow killing on any
hypothetical platforms that don't support retrieving start time and it
is no worse than current implementation anyway.

Destroy is a best-effort method;  it is not guaranteed to result in
termination, though
the docs are a bit weak on this point.

I'd go for option 2, anyone using the API is looking for results on
processes they
already know something about (mostly), so there's no reason to be
particularly
conservative about.  If the caller is trying to be more conservative,
they can maintain
their own extra information about the processes they are managing.




What do you think?

I'd like to pick this up as a separate change, and get the current one
in first.

Roger




--
- DML


Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-17 Thread Roger Riggs

Hi Thomas,

On 4/16/2015 3:01 PM, Thomas Stüfe wrote:

Hi Roger,

thank you for your answer!

The reason I take an interest is not just theoretical. We (SAP) use 
our JVM for our test infrastructure and we had exactly the problem 
allChildren() is designed to solve: killing a process tree related to 
a specific tests (similar to jtreg tests) in case of errors or hangs. 
We have test machines running large workloads of tests in parallel and 
we reach pid wraparound - depending on the OS - quite fast.


We solved this by adding process groups to Process.java and we are 
very happy with this solution. We are able to quickly kill a whole 
process tree, cleanly and completely, without ambiguity or risk to 
other tests. Of course we had to add this support as a "sideways hack" 
in order to not change the official Process.java interface. Therefore 
I was hoping that with JEP 102, we would get official support for 
process groups. Unfortunately, seems the decision is already done and 
we are too late in the discussion :(
It would be interesting to see a description of what you added to/around 
the API.
The reason to avoid them was one of simplicity and non-interference with 
processes
spawned by native libraries.  If that complexity can be understood 
process groups/jobs

could fulfill a need in a scalable system.

At this point, I'd like to deal with it as a separate request for 
enhancement.




see my other comments inline.

On Sat, Apr 11, 2015 at 8:55 PM, Roger Riggs wrote:


Hi Thomas,

Thanks for the comments.

On 4/11/2015 8:31 AM, Thomas Stüfe wrote:

Hi Roger,

I have a question about getChildren() and getAllChildren().

I assume the point of those functions is to implement point 4 of
JEP 102 ("The ability to deal with process trees, in particular
some means to destroy a process tree."), by returning a
collection of PIDs which are the children of the process and then
killing them?

Earlier versions included a killProcess tree method but it was
recommended to leave
the exact algorithm to kill processes to the caller.
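
A caller-side sketch of such an algorithm, using the method names from this
draft (allChildren() for the transitive children, destroy() for termination);
it is best-effort only and remains subject to the pid-reuse races discussed
in this thread:

    class KillTree {
        // Ask every transitive child to terminate, then the root process itself.
        static void destroyTree(ProcessHandle root) {
            root.allChildren().forEach(ProcessHandle::destroy);
            root.destroy();
        }
    }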


However, I am not sure that this can be implemented in a safe
way, at least on UNIX, because - as Martin already pointed out -
of PID recycling. I do not see how you can prevent allChildren()
from returning PIDs which may be already reaped and recyled when
you use them later. How do you prevent that?

Unless there is an extended time between getting the children and
destroying them the pids will still be valid.


Why? Child process may be getting reaped the instant you are done 
reading it from /proc, and pid may have been recycled by the OS right 
away and already pointing to another process when allChildren() 
returns. If a process lives about as long as it takes the system to 
reach a pid wraparound to the same pid value, its pid could be 
recycled right after it is reaped, or? Sure, the longer you wait, the 
higher the chance of this to happen, but it may happen right away.


As Martin said, we had those races in the kill() code since a long 
time, but children()/allChildren() could make those errors more 
probable, because now more processes are involved. Especially if you 
use allChildren to kill a deep process tree. And there is nothing in 
the javadoc warning the user about this scenario. You would just 
happen from time to time to kill an unrelated process. Those problems 
are hard to debug.


The technique of caching the start time can prevent that case;
though it has AFAIK not been a problem.


How would that work? User should, before issuing the kill, compare 
start time of process to kill with cached start time?
See Peter's email, he described it more thoroughly that I have in 
previous emails.



Note even if your coding is bulletproof, that allChildren() will
also return PIDs of sub processes which are completely unrelated
to you and Process.java - they could have been forked by some
third party native code which just happens to run in parallel in
the same process. There, you have no control about when it gets
reaped. It might already have been reaped by the time
allChildren() returns, and now the same PID got recycled as
another, unrelated process.

Of course, the best case is for an application to spawn and manage
its own processes
and handle their proper termination.
The use cases for children/allChildren are focused on
supervisory/executive functions
that monitor a running system and can cleanup even in the case of
unexpected failures.

All management of processes is subject to OS limitations, if the
PID were from a completely
different process tree, the ordinary destroy/info functions would
not be available
unless the process was running as a privileged os user (same as
any other native application).


Could you explain this please? If both trees run under the same user, why
should I not be able to kill a process from a different tree?

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-17 Thread Roger Riggs

Hi Thomas,

On 4/17/2015 4:22 AM, Thomas Stüfe wrote:

Hi Roger,

aside from the recycle-pid-question, one additional remark:

in ProcessHandleImpl_unix.c, 
Java_java_lang_ProcessHandleImpl_isAlive0, you call kill(pid, 0) for 
the liveness check.


If you have not the necessary permissions to do this call, this may 
fail with EPERM. In this case, isAlive() will return false, but the 
process exists, which strictly spoken is a lie.


Caller may base actions on this, e.g. try to clean up resources for a 
process he thinks is dead.


We actually had this problem, and our version of isAlive() grew to be 
more elaborate over time. Basically, we do:


1) first try kill(0). If EPERM:
2) try read /proc on OSes which have /proc.
3) do a system("ps").

Alternatively, one may report EPERM to the caller - with an exception 
or an extension of the return value - and leave it up to him what to do.
Good to have the experienced input.  I'd been trying to keep the code 
simple but it seems
more complexity is needed to be robust.  I'm not sure I'd go as far as 
invoking ps; it too might
fail for a number of reasons.  That would be a good case to throw an 
exception if the liveness

cannot be determined.

Thanks, Roger



Kind Regards, Thomas




On Thu, Apr 9, 2015 at 10:00 PM, Roger Riggs wrote:


Please review the API and implementation of the Process API Updates
described in JEP 102
. Please  review
and comment by April 23rd.

The recommendation to make ProcessHandle an interface is included
allowing the new functions to be extended by Process subclasses.
The implementation covers all functions on Unix, Windows, Solaris,
and Mac OS X.

The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/


The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph


Issue: JDK-8077350
 Process API
Updates Implementation

The code is in the jdk9 sandbox on branch JDK-8046092-branch.

Please review and comment, Roger






Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-17 Thread Roger Riggs

Hi Peter,

On 4/17/2015 4:05 AM, Peter Levart wrote:

Hi Roger,

Retrieving and caching the process start time as soon as ProcessHandle 
is instantiated might be a good idea. "destroy" native code would then 
use pid *and* start time to check the identity of the process before 
killing it.
Yes, though it does raise a question about how to specify PH.equals().  
Either the start time is explicitly
mentioned (and is repeatable and testable) or it is vague and refers to 
some implementation

specific value that uniquely identifies the process; and is less testable.


At least on Linux (/proc/<pid>/stat) and Solaris (/proc/<pid>/status) 
it is not necessary to open those special files and read them. Just 
doing stat() on them and using the st_mtime will get you the process 
start time. I see AIX shares native code with Linux (unix), so perhaps 
AIX does the same. Mac OSX and Windows have special calls...
That is less expensive.  But opening /proc/<pid>/stat is necessary for 
allChildren to get

and filter on the parent and can provide the starttime as well.
stat would be useful for allProcesses which does not need the parent.


In case OS does not allow retrieving the start time (not owner or 
similar), it should be cached as some "undefined" value and treated 
the same when destroying. If while obtaining the ProcessHandle *and* 
while destroying the process, the start time of the process with a 
particular pid is "undefined" then there are two options:


1 - don't kill the process
2 - kill the process

They might actually be no different as the reason of not being able to 
retrieve the start time (not owner) might prevent the process from 
being killed too, so option 2 could be used to allow killing on any 
hypothetical platforms that don't support retrieving start time and it 
is no worse than current implementation anyway.
Destroy is a best-effort method;  it is not guaranteed to result in 
termination, though

the docs are a bit weak on this point.

I'd go for option 2, anyone using the API is looking for results on 
processes they
already know something about (mostly), so there's no reason to be 
particularly
conservative about.  If the caller is trying to be more conservative, 
they can maintain

their own extra information about the processes they are managing.




What do you think?
I'd like to pick this up as a separate change, and get the current one 
in first.


Roger




Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-17 Thread Thomas Stüfe
Hi Roger,

aside from the recycle-pid-question, one additional remark:

in ProcessHandleImpl_unix.c, Java_java_lang_ProcessHandleImpl_isAlive0, you
call kill(pid, 0) for the liveness check.

If you have not the necessary permissions to do this call, this may fail
with EPERM. In this case, isAlive() will return false, but the process
exists, which strictly spoken is a lie.

Caller may base actions on this, e.g. try to clean up resources for a
process he thinks is dead.

We actually had this problem, and our version of isAlive() grew to be more
elaborate over time. Basically, we do:

1) first try kill(0). If EPERM:
2) try read /proc on OSes which have /proc.
3) do a system("ps").

Alternatively, one may report EPERM to the caller - with an exception or an
extension of the return value - and leave it up to him what to do.

Kind Regards, Thomas




On Thu, Apr 9, 2015 at 10:00 PM, Roger Riggs  wrote:

> Please review the API and implementation of the Process API Updates
> described in JEP 102.
> Please  review and comment by April 23rd.
>
> The recommendation to make ProcessHandle an interface is included
> allowing the new functions to be extended by Process subclasses.
> The implementation covers all functions on Unix, Windows, Solaris, and Mac
> OS X.
>
> The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/
>
> The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph
>
> Issue: JDK-8077350 
> Process API Updates Implementation
>
> The code is in the jdk9 sandbox on branch JDK-8046092-branch.
>
> Please review and comment, Roger
>


Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-17 Thread Thomas Stüfe
On Fri, Apr 17, 2015 at 8:40 AM, Staffan Larsen 
wrote:

>
> On 16 apr 2015, at 21:01, Thomas Stüfe  wrote:
>
> Hi Roger,
>
> thank you for your answer!
>
> The reason I take an interest is not just theoretical. We (SAP) use our JVM
> for our test infrastructure and we had exactly the problem allChildren() is
> designed to solve: killing a process tree related to a specific tests
> (similar to jtreg tests) in case of errors or hangs. We have test machines
> running large workloads of tests in parallel and we reach pid wraparound -
> depending on the OS - quite fast.
>
> We solved this by adding process groups to Process.java and we are very
> happy with this solution. We are able to quickly kill a whole process tree,
> cleanly and completely, without ambiguity or risk to other tests. Of course
> we had to add this support as a "sideways hack" in order to not change the
> official Process.java interface. Therefore I was hoping that with JEP 102,
> we would get official support for process groups. Unfortunately, seems the
> decision is already done and we are too late in the discussion :(
>
>
> Interestingly we are hoping to use allChildren() to kill process trees in
> jtreg - exactly the use case you are describing. I haven’t been testing the
> current approach in allChildren(), but it seems your experience indicates
> that it will not be a perfect fit for the use case. In a previous test
> framework I was involved in we also used process groups for this with good
> results. This does beg the question: if the current approach isn’t useful
> for our own testing purposes, when is it useful?
>
>
Monitoring, I guess. Like writing your own pstree. But not for anything
requiring you to send signals to those pids.

..Thomas


> Thanks,
> /Staffan
>
>
> see my other comments inline.
>
> On Sat, Apr 11, 2015 at 8:55 PM, Roger Riggs 
> wrote:
>
> Hi Thomas,
>
> Thanks for the comments.
>
> On 4/11/2015 8:31 AM, Thomas Stüfe wrote:
>
> Hi Roger,
>
> I have a question about getChildren() and getAllChildren().
>
> I assume the point of those functions is to implement point 4 of JEP 102
> ("The ability to deal with process trees, in particular some means to
> destroy a process tree."), by returning a collection of PIDs which are the
> children of the process and then killing them?
>
> Earlier versions included a killProcess tree method but it was recommended
> to leave
> the exact algorithm to kill processes to the caller.
>
>
> However, I am not sure that this can be implemented in a safe way, at
> least on UNIX, because - as Martin already pointed out - of PID recycling.
> I do not see how you can prevent allChildren() from returning PIDs which
> may be already reaped and recyled when you use them later. How do you
> prevent that?
>
> Unless there is an extended time between getting the children and
> destroying them the pids will still be valid.
>
>
> Why? Child process may be getting reaped the instant you are done reading
> it from /proc, and pid may have been recycled by the OS right away and
> already pointing to another process when allChildren() returns. If a
> process lives about as long as it takes the system to reach a pid
> wraparound to the same pid value, its pid could be recycled right after it
> is reaped, or? Sure, the longer you wait, the higher the chance of this to
> happen, but it may happen right away.
>
> As Martin said, we had those races in the kill() code since a long time,
> but children()/allChildren() could make those errors more probable, because
> now more processes are involved. Especially if you use allChildren to kill
> a deep process tree. And there is nothing in the javadoc warning the user
> about this scenario. You would just happen from time to time to kill an
> unrelated process. Those problems are hard to debug.
>
> The technique of caching the start time can prevent that case; though it
>
> has AFAIK not been a problem.
>
>
> How would that work? User should, before issuing the kill, compare start
> time of process to kill with cached start time?
>
> Note even if your coding is bulletproof, that allChildren() will also
> return PIDs of sub processes which are completely unrelated to you and
> Process.java - they could have been forked by some third party native code
> which just happens to run in parallel in the same process. There, you have
> no control about when it gets reaped. It might already have been reaped by
> the time allChildren() returns, and now the same PID got recycled as
> another, unrelated process.
>
> Of course, the best case is for an application to spawn and manage its own
> processes
> and handle their proper termination.
> The use cases for children/allChildren are focused on
> supervisory/executive functions
> that monitor a running system and can cleanup even in the case of
> unexpected failures.
>
> All management of processes is subject to OS limitations, if the PID were
>
> from a completely
> different process tree, the ordinary destroy/info functions

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-17 Thread Peter Levart

Hi Roger,

Retrieving and caching the process start time as soon as ProcessHandle 
is instantiated might be a good idea. "destroy" native code would then 
use pid *and* start time to check the identity of the process before 
killing it.


At least on Linux (/proc/<pid>/stat) and Solaris (/proc/<pid>/status) it 
is not necessary to open those special files and read them. Just doing 
stat() on them and using the st_mtime will get you the process start 
time. I see AIX shares native code with Linux (unix), so perhaps AIX 
does the same. Mac OSX and Windows have special calls...


In case OS does not allow retrieving the start time (not owner or 
similar), it should be cached as some "undefined" value and treated the 
same when destroying. If while obtaining the ProcessHandle *and* while 
destroying the process, the start time of the process with a particular 
pid is "undefined" then there are two options:


1 - don't kill the process
2 - kill the process

They might actually be no different as the reason of not being able to 
retrieve the start time (not owner) might prevent the process from being 
killed too, so option 2 could be used to allow killing on any 
hypothetical platforms that don't support retrieving start time and it 
is no worse than current implementation anyway.
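
Outside the JDK sources, the same idea can be sketched in a few lines
(assuming Linux/Solaris and taking the mtime of the /proc/<pid> entry as an
approximation of the start time, as described above; the class below is only
illustrative):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.FileTime;

    class StartTimeCheck {
        // Capture the apparent start time when the handle is created.
        static FileTime startTime(long pid) throws IOException {
            Path proc = Paths.get("/proc", Long.toString(pid));
            return Files.getLastModifiedTime(proc);
        }

        // Before signalling, check that the pid still denotes the same process.
        static boolean sameProcess(long pid, FileTime cached) {
            try {
                return startTime(pid).equals(cached);
            } catch (IOException e) {
                return false;   // the process is gone or not visible to us
            }
        }
    }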


What do you think?

Regards, Peter

On 04/11/2015 08:55 PM, Roger Riggs wrote:

Hi Thomas,

Thanks for the comments.

On 4/11/2015 8:31 AM, Thomas Stüfe wrote:

Hi Roger,

I have a question about getChildren() and getAllChildren().

I assume the point of those functions is to implement point 4 of JEP 
102 ("The ability to deal with process trees, in particular some 
means to destroy a process tree."), by returning a collection of PIDs 
which are the children of the process and then killing them?
Earlier versions included a killProcess tree method but it was 
recommended to leave

the exact algorithm to kill processes to the caller.


However, I am not sure that this can be implemented in a safe way, at 
least on UNIX, because - as Martin already pointed out - of PID 
recycling. I do not see how you can prevent allChildren() from 
returning PIDs which may be already reaped and recyled when you use 
them later. How do you prevent that?
Unless there is an extended time between getting the children and 
destroying them the pids will still be valid.
The technique of caching the start time can prevent that case; though 
it has AFAIK not been a problem.


Note even if your coding is bulletproof, that allChildren() will also 
return PIDs of sub processes which are completely unrelated to you 
and Process.java - they could have been forked by some third party 
native code which just happens to run in parallel in the same 
process. There, you have no control about when it gets reaped. It 
might already have been reaped by the time allChildren() returns, and 
now the same PID got recycled as another, unrelated process.
Of course, the best case is for an application to spawn and manage its 
own processes

and handle their proper termination.
The use cases for children/allChildren are focused on 
supervisory/executive functions
that monitor a running system and can cleanup even in the case of 
unexpected failures.
All management of processes is subject to OS limitations, if the PID 
were from a completely
different process tree, the ordinary destroy/info functions would not 
be available
unless the process was running as a privileged os user (same as any 
other native application).




If I am right, it would not be sufficient to state "There is no 
guarantee that a process is alive." - it may be alive but by now be a 
different process altogether. This makes "allChildren()" useless for 
many cases, because the returned information may already be obsolete 
the moment the function returns.

The caching of startTime can remove the ambiguity.


Of course I may be missing something here?

But if I got all that right and the sole purpose of allChildren() is 
to be able to kill them (or otherwise signal them), why not use 
process groups? Process groups would be the traditional way on POSIX 
platforms to handle process trees, and they are also available on 
Windows in the form of Job Objects.


Using process groups to signal sub process trees would be safe, would 
not rely on PID identity, and would be more efficient. Also way less 
coding. Also, it would be an old, established pattern - process 
groups have been around for a long time. Also, using process groups 
it is possible to break away from a group, so a program below you 
which wants to run as a daemon can do so by removing itself from the 
process group and thus escaping your kill.


On Windows we have Job objects, and I think there are enough 
similarities to POSIX process groups to abstract them into something 
platform independent.
Earlier discussions of process termination and exit value reaping 
considered
using process groups but it became evident that the Java runtime 
needed to

be very careful to 

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-16 Thread Staffan Larsen

> On 16 apr 2015, at 21:01, Thomas Stüfe  wrote:
> 
> Hi Roger,
> 
> thank you for your answer!
> 
> The reason I take an interest is not just theoretical. We (SAP) use our JVM
> for our test infrastructure and we had exactly the problem allChildren() is
> designed to solve: killing a process tree related to a specific tests
> (similar to jtreg tests) in case of errors or hangs. We have test machines
> running large workloads of tests in parallel and we reach pid wraparound -
> depending on the OS - quite fast.
> 
> We solved this by adding process groups to Process.java and we are very
> happy with this solution. We are able to quickly kill a whole process tree,
> cleanly and completely, without ambiguity or risk to other tests. Of course
> we had to add this support as a "sideways hack" in order to not change the
> official Process.java interface. Therefore I was hoping that with JEP 102,
> we would get official support for process groups. Unfortunately, seems the
> decision is already done and we are too late in the discussion :(

Interestingly we are hoping to use allChildren() to kill process trees in jtreg 
- exactly the use case you are describing. I haven’t been testing the current 
approach in allChildren(), but it seems your experience indicates that it will 
not be a perfect fit for the use case. In a previous test framework I was 
involved in we also used process groups for this with good results. This does 
beg the question: if the current approach isn’t useful for our own testing 
purposes, when is it useful?

Thanks,
/Staffan

> 
> see my other comments inline.
> 
> On Sat, Apr 11, 2015 at 8:55 PM, Roger Riggs wrote:
> 
>> Hi Thomas,
>> 
>> Thanks for the comments.
>> 
>> On 4/11/2015 8:31 AM, Thomas Stüfe wrote:
>> 
>> Hi Roger,
>> 
>> I have a question about getChildren() and getAllChildren().
>> 
>> I assume the point of those functions is to implement point 4 of JEP 102
>> ("The ability to deal with process trees, in particular some means to
>> destroy a process tree."), by returning a collection of PIDs which are the
>> children of the process and then killing them?
>> 
>> Earlier versions included a killProcess tree method but it was recommended
>> to leave
>> the exact algorithm to kill processes to the caller.
>> 
>> 
>> However, I am not sure that this can be implemented in a safe way, at
>> least on UNIX, because - as Martin already pointed out - of PID recycling.
>> I do not see how you can prevent allChildren() from returning PIDs which
>> may be already reaped and recyled when you use them later. How do you
>> prevent that?
>> 
>> Unless there is an extended time between getting the children and
>> destroying them the pids will still be valid.
>> 
> 
> Why? Child process may be getting reaped the instant you are done reading
> it from /proc, and pid may have been recycled by the OS right away and
> already pointing to another process when allChildren() returns. If a
> process lives about as long as it takes the system to reach a pid
> wraparound to the same pid value, its pid could be recycled right after it
> is reaped, or? Sure, the longer you wait, the higher the chance of this to
> happen, but it may happen right away.
> 
> As Martin said, we had those races in the kill() code since a long time,
> but children()/allChildren() could make those errors more probable, because
> now more processes are involved. Especially if you use allChildren to kill
> a deep process tree. And there is nothing in the javadoc warning the user
> about this scenario. You would just happen from time to time to kill an
> unrelated process. Those problems are hard to debug.
> 
> The technique of caching the start time can prevent that case; though it
>> has AFAIK not been a problem.
>> 
> 
> How would that work? User should, before issuing the kill, compare start
> time of process to kill with cached start time?
> 
>> Note even if your coding is bulletproof, that allChildren() will also
>> return PIDs of sub processes which are completely unrelated to you and
>> Process.java - they could have been forked by some third party native code
>> which just happens to run in parallel in the same process. There, you have
>> no control about when it gets reaped. It might already have been reaped by
>> the time allChildren() returns, and now the same PID got recycled as
>> another, unrelated process.
>> 
>> Of course, the best case is for an application to spawn and manage its own
>> processes
>> and handle their proper termination.
>> The use cases for children/allChildren are focused on
>> supervisory/executive functions
>> that monitor a running system and can cleanup even in the case of
>> unexpected failures.
>> 
> All management of processes is subject to OS limitations, if the PID were
>> from a completely
>> different process tree, the ordinary destroy/info functions would not be
>> available
>> unless the process was running as a privileged os user (same as any other
>> native application).

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-16 Thread Thomas Stüfe
Hi Roger,

thank you for your answer!

The reason I take an interest is not just theoretical. We (SAP) use our JVM
for our test infrastructure and we had exactly the problem allChildren() is
designed to solve: killing a process tree related to a specific tests
(similar to jtreg tests) in case of errors or hangs. We have test machines
running large workloads of tests in parallel and we reach pid wraparound -
depending on the OS - quite fast.

We solved this by adding process groups to Process.java and we are very
happy with this solution. We are able to quickly kill a whole process tree,
cleanly and completely, without ambiguity or risk to other tests. Of course
we had to add this support as a "sideways hack" in order to not change the
official Process.java interface. Therefore I was hoping that with JEP 102,
we would get official support for process groups. Unfortunately, seems the
decision is already done and we are too late in the discussion :(

see my other comments inline.

On Sat, Apr 11, 2015 at 8:55 PM, Roger Riggs  wrote:

>  Hi Thomas,
>
> Thanks for the comments.
>
> On 4/11/2015 8:31 AM, Thomas Stüfe wrote:
>
> Hi Roger,
>
>  I have a question about getChildren() and getAllChildren().
>
> I assume the point of those functions is to implement point 4 of JEP 102
> ("The ability to deal with process trees, in particular some means to
> destroy a process tree."), by returning a collection of PIDs which are the
> children of the process and then killing them?
>
> Earlier versions included a killProcess tree method but it was recommended
> to leave
> the exact algorithm to kill processes to the caller.
>
>
>  However, I am not sure that this can be implemented in a safe way, at
> least on UNIX, because - as Martin already pointed out - of PID recycling.
> I do not see how you can prevent allChildren() from returning PIDs which
> may be already reaped and recyled when you use them later. How do you
> prevent that?
>
> Unless there is an extended time between getting the children and
> destroying them the pids will still be valid.
>

Why? Child process may be getting reaped the instant you are done reading
it from /proc, and pid may have been recycled by the OS right away and
already pointing to another process when allChildren() returns. If a
process lives about as long as it takes the system to reach a pid
wraparound to the same pid value, its pid could be recycled right after it
is reaped, or? Sure, the longer you wait, the higher the chance of this to
happen, but it may happen right away.

As Martin said, we had those races in the kill() code since a long time,
but children()/allChildren() could make those errors more probable, because
now more processes are involved. Especially if you use allChildren to kill
a deep process tree. And there is nothing in the javadoc warning the user
about this scenario. You would just happen from time to time to kill an
unrelated process. Those problems are hard to debug.

The technique of caching the start time can prevent that case; though it
> has AFAIK not been a problem.
>

How would that work? User should, before issuing the kill, compare start
time of process to kill with cached start time?

> Note even if your coding is bulletproof, that allChildren() will also
> return PIDs of sub processes which are completely unrelated to you and
> Process.java - they could have been forked by some third party native code
> which just happens to run in parallel in the same process. There, you have
> no control about when it gets reaped. It might already have been reaped by
> the time allChildren() returns, and now the same PID got recycled as
> another, unrelated process.
>
> Of course, the best case is for an application to spawn and manage its own
> processes
> and handle their proper termination.
> The use cases for children/allChildren are focused on
> supervisory/executive functions
> that monitor a running system and can cleanup even in the case of
> unexpected failures.
>
All management of processes is subject to OS limitations, if the PID were
> from a completely
> different process tree, the ordinary destroy/info functions would not be
> available
> unless the process was running as a privileged os user (same as any other
> native application).
>

Could you explain this please? If both trees run under the same user, why
should I not be able to kill a process from a different tree?

> If I am right, it would not be sufficient to state "There is no guarantee
> that a process is alive." - it may be alive but by now be a different
> process altogether. This makes "allChildren()" useless for many cases,
> because the returned information may already be obsolete the moment the
> function returns.
>
> The caching of startTime can remove the ambiguity.
>

>
>  Of course I may be missing something here?
>
>  But if I got all that right and the sole purpose of allChildren() is to
> be able to kill them (or otherwise signal them), why not use process
> groups? Process groups would be the traditional way on POSIX platforms to
> handle process trees.

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-15 Thread Peter Levart

Hi Roger,

On 04/14/2015 11:33 PM, Roger Riggs wrote:

Hi Peter,

Thanks for the review.

On 4/14/2015 11:47 AM, Peter Levart wrote:


On 04/09/2015 10:00 PM, Roger Riggs wrote:

Please review the API and implementation of the Process API Updates
described in JEP 102
. Please review 
and comment by April 23rd.


The recommendation to make ProcessHandle an interface is included
allowing the new functions to be extended by Process subclasses.
The implementation covers all functions on Unix, Windows, Solaris, 
and Mac OS X.


The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/

The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph

Issue: JDK-8077350 
 Process API 
Updates Implementation


The code is in the jdk9 sandbox on branch JDK-8046092-branch.

Please review and comment, Roger


Hi Roger,

I have a few comments...

The ProcessHandle JNI interface is the same for all platforms and it is 
reused in UNIX ProcessImpl for destroying and waiting. Is there some 
particular reason why it is not reused in Windows ProcessImpl too? 
The Windows impl still uses its own native methods for destroying 
and waiting, but they take "process handles" instead of PIDs. Are 
handles less susceptible to recycling?


I am confused by the method name "supportsDestroyForcibly" too. The 
following would be more intuitive in my opinion:


- destroyIsForcible()
- isDestroyForcible()
We've tried several variations including the word 'destroy' and it 
still seems confusing.

I suggested the supportsNormalTermination name in another thread.


Or:

- supportsGracefulTermination
- supportsGracefulDestroy
- supportsDestroyGracefully


Regards, Peter



Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-14 Thread Roger Riggs

Hi Peter,

Thanks for the review.

On 4/14/2015 11:47 AM, Peter Levart wrote:


On 04/09/2015 10:00 PM, Roger Riggs wrote:

Please review the API and implementation of the Process API Updates
described in JEP 102. Please review and comment by April 23rd.


The recommendation to make ProcessHandle an interface is included
allowing the new functions to be extended by Process subclasses.
The implementation covers all functions on Unix, Windows, Solaris, 
and Mac OS X.


The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/

The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph

Issue: JDK-8077350  
Process API Updates Implementation


The code is in the jdk9 sandbox on branch JDK-8046092-branch.

Please review and comment, Roger


Hi Roger,

I have a few comments...

The ProcessHandle JNI interface is the same for all platforms and it is reused 
in UNIX ProcessImpl for destroying and waiting. Is there some 
particular reason why it is not reused in Windows ProcessImpl too? The 
Windows impl still uses its own native methods for destroying and 
waiting, but they take "process handles" instead of PIDs. Are handles 
less susceptible to recycling?


I am confused by the method name "supportsDestroyForcibly" too. The 
following would be more intuitive in my opinion:


- destroyIsForcible()
- isDestroyForcible()
We've tried several variations including the word 'destroy' and it still 
seems confusing.

I suggested the supportsNormalTermination name in another thread.


The @implNote for Process.onExit() says:

 366  * @implNote
 367  * The default implementation of this method employs a thread to
 368  * {@link #waitFor() wait} for process exit,
 369  * which may consume a lot of memory for thread stacks if a
 370  * large number of processes are waited for concurrently.

That's true for the default implementation in Process but not true for 
internal OpenJDK ProcessImpl(s). Should the note say so explicitly so 
that users won't be afraid of using this method with internal 
implementations?

Yes, it should be more specific about ProcessBuilder created instances.


 371  * 
 372  * External implementations are advised to override this 
method and provide
 373  * more efficient implementation. For example, to delegate to 
the underlying

 374  * process, it can simply do the following:
 375  * {@code
 376  *public CompletableFuture onExit() {
 377  *   return delegate.onExit();
 378  *}
 379  * }
 380  * ...which in case of implementation of the delegate is used.


...this example is not correct as it exposes the "delegate" Process 
instance. It should be something like:


public CompletableFuture<Process> onExit() {
   return delegate.onExit().thenApply(p -> this);
}

Will fix tomorrow.



Otherwise it's nice how this turned out.

Unrelated to your implementation, I have been thinking of another 
small Process API update. Some people find it odd how redirected 
in/out/err streams are exposed:


http://blog.headius.com/2013/06/the-pain-of-broken-subprocess.html

yep, I've read that several times.


They basically don't like:

- that exposed Input/Output streams are buffered
- that underlying streams are File(Input/Output)Streams which, 
although the backing OS implementations are not files but pipes, don't 
expose selectable channels so that non-blocking event-based IO could 
be performed on them.
- that exposed IO streams are automatically "managed" in UNIX variants 
of ProcessImpl which needs subtle "hacks" to do it in a perceptively 
transparent way (delayed close, draining input on exit and making it 
available after the underlying handle is already closed, ...)


So I've been playing with the idea of exposing the "real" pipe 
channels in the last couple of days. Here's the prototype I came up with:


http://cr.openjdk.java.net/~plevart/jdk9-sandbox/JDK-8046092-branch/Process.PipeChannel/webrev.01/ 



This adds a new Redirect type to the API and 3 new methods to Process 
that return Pipe channels when this new Redirect type is used. It's 
interesting that no native code changes were necessary. The behavior 
of pipes on Windows is a little different (perhaps because the Pipe 
NIO API uses sockets under the hood on Windows - why is that? Windows 
does have a pipe equivalent). What bothers me is that file handles 
opened on files (when redirecting to/from File) can be closed as soon 
as the subprocess is started and the subprocess is still able to 
read/write from the files (like with UNIX). It's not the same with 
pipe (i.e. socket) handles on Windows. They must be closed only after 
subprocess exits.


If this subtle difference between file handles and socket handles on 
Windows could be dealt with (perhaps some options exist that affect 
subprocess spawning), then the extra waiting thread would not be 
needed on Windows.


So what do you think of this A

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-14 Thread Alan Bateman

On 14/04/2015 16:47, Peter Levart wrote:


The behavior of pipes on Windows is a little different (perhaps 
because the Pipe NIO API uses sockets under the hood on Windows - why 
is that? Windows does have a pipe equivalent).
Multiplexing is a bit limited on Windows; I don't think you can select 
on both sockets and pipe handles in the one select. It should be rare 
to need to do this of course. So it should be possible to have the 
Windows Selector partition the selectable channels so that they are 
multiplexed via different threads when it arises.


-Alan


Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-14 Thread Peter Levart


On 04/09/2015 10:00 PM, Roger Riggs wrote:

Please review the API and implementation of the Process API Updates
described in JEP 102. Please review and comment by April 23rd.


The recommendation to make ProcessHandle an interface is included
allowing the new functions to be extended by Process subclasses.
The implementation covers all functions on Unix, Windows, Solaris, and 
Mac OS X.


The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/

The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph

Issue: JDK-8077350  
Process API Updates Implementation


The code is in the jdk9 sandbox on branch JDK-8046092-branch.

Please review and comment, Roger


Hi Roger,

I have a few comments...

The ProcessHandle JNI interface is the same for all platforms and it is reused 
in UNIX ProcessImpl for destroying and waiting. Is there some particular 
reason why it is not reused in Windows ProcessImpl too? The Windows impl 
still uses its own native methods for destroying and waiting, but they 
take "process handles" instead of PIDs. Are handles less susceptible to 
recycling?


I am confused by the method name "supportsDestroyForcibly" too. The 
following would be more intuitive in my opinion:


- destroyIsForcible()
- isDestroyForcible()

The @implNote for Process.onExit() says:

 366  * @implNote
 367  * The default implementation of this method employs a thread to
 368  * {@link #waitFor() wait} for process exit,
 369  * which may consume a lot of memory for thread stacks if a
 370  * large number of processes are waited for concurrently.

That's true for the default implementation in Process but not true for 
internal OpenJDK ProcessImpl(s). Should the note say so explicitly so 
that users won't be afraid of using this method with internal 
implementations?


 371  * 
 372  * External implementations are advised to override this 
method and provide
 373  * more efficient implementation. For example, to delegate to 
the underlying

 374  * process, it can simply do the following:
 375  * {@code
 376  *public CompletableFuture onExit() {
 377  *   return delegate.onExit();
 378  *}
 379  * }
 380  * ...which in case of implementation of the delegate is used.


...this example is not correct as it exposes the "delegate" Process 
instance. It should be something like:


public CompletableFuture<Process> onExit() {
   return delegate.onExit().thenApply(p -> this);
}
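
(For completeness, a self-contained sketch of such a delegating subclass; 
the DelegatingProcess name and constructor below are mine, purely for 
illustration, not something from the webrev:)

import java.io.InputStream;
import java.io.OutputStream;
import java.util.concurrent.CompletableFuture;

class DelegatingProcess extends Process {
    private final Process delegate; // e.g. a ProcessBuilder-created Process

    DelegatingProcess(Process delegate) { this.delegate = delegate; }

    @Override
    public CompletableFuture<Process> onExit() {
        // complete with this wrapper, not with the underlying delegate
        return delegate.onExit().thenApply(p -> this);
    }

    // the remaining abstract methods simply forward to the delegate
    @Override public OutputStream getOutputStream() { return delegate.getOutputStream(); }
    @Override public InputStream getInputStream()   { return delegate.getInputStream(); }
    @Override public InputStream getErrorStream()   { return delegate.getErrorStream(); }
    @Override public int waitFor() throws InterruptedException { return delegate.waitFor(); }
    @Override public int exitValue() { return delegate.exitValue(); }
    @Override public void destroy()  { delegate.destroy(); }
}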


Otherwise it's nice how this turned out.

Unrelated to your implementation, I have been thinking of another small 
Process API update. Some people find it odd how redirected in/out/err 
streams are exposed:


http://blog.headius.com/2013/06/the-pain-of-broken-subprocess.html

They basically don't like:

- that exposed Input/Output streams are buffered
- that underlying streams are File(Input/Output)Streams which, although 
the backing OS implementations are not files but pipes, don't expose 
selectable channels so that non-blocking event-based IO could be 
performed on them.
- that exposed IO streams are automatically "managed" in UNIX variants 
of ProcessImpl which needs subtle "hacks" to do it in a perceptively 
transparent way (delayed close, draining input on exit and making it 
available after the underlying handle is already closed, ...)


So I've been playing with the idea of exposing the "real" pipe channels 
in the last couple of days. Here's the prototype I came up with:


http://cr.openjdk.java.net/~plevart/jdk9-sandbox/JDK-8046092-branch/Process.PipeChannel/webrev.01/

This adds a new Redirect type to the API and 3 new methods to Process that 
return Pipe channels when this new Redirect type is used. It's 
interesting that no native code changes were necessary. The behavior of 
pipes on Windows is a little different (perhaps because the Pipe NIO API 
uses sockets under the hood on Windows - why is that? Windows does have 
a pipe equivalent). What bothers me is that file handles opened on files 
(when redirecting to/from File) can be closed as soon as the subprocess 
is started and the subprocess is still able to read/write from the files 
(like with UNIX). It's not the same with pipe (i.e. socket) handles on 
Windows. They must be closed only after subprocess exits.


If this subtle difference between file handles and socket handles on 
Windows could be dealt with (perhaps some options exist that affect 
subprocess spawning), then the extra waiting thread would not be needed 
on Windows.


So what do you think of this API update?

Regards, Peter



Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-13 Thread Martin Buchholz
On Sat, Apr 11, 2015 at 11:13 AM, Roger Riggs 
wrote:

>  Hi Martin,
>
> Thanks for the review.
>
> On 4/11/2015 1:37 AM, Martin Buchholz wrote:
>
> Thanks for the huge effort.  I did a superficial review and it seems
> pretty good.  Of course, changing the Process code is high risk and some
> things will probably need fixup later.
>
>  On Unix, you seem to be identifying ProcessHandles by their pid.  What
> are you going to do about the problem of misidentifying Unix processes
> whose pid got recycled?  If you spin up a thread to call waitpid as soon as
> you create the ProcessHandle, the window is very short (and OSes avoid
> recycling pids immediately) so that's pretty safe, but it wastes a thread
> for each ProcessHandle!  I don't have a good answer.  Maybe record the
> start time of the process, where available?
>
> Getting the basic API and implementation in has taken too long, so I'm
> looking to get general agreement and get it pushed, and then I'd be happy
> to go back and include the start time in
> ProcessHandle.
> In the case of children() which already has to read /proc/pid/stat for the
> parent,
> the start time is inexpensively available also.  The spec for equals(),
> and compareTo is straightforward
> without the start time.  With the start time, either starttime has be
> exposed in the API/spec or
> be left 'implementation' dependent which is a poor spec.
> Only in the case of allProcesses() would the overhead increase
> significantly, it currently
> does not need to open /proc/pid for every process.
>

We have ancient "recycle pid" bugs and unavoidable races (kill(2) is
fundamentally racy).

// There is a risk that pid will be recycled, causing us to
// kill the wrong process!  So we only terminate processes
// that appear to still be running.  Even with this check,
// there is an unavoidable race condition here, but the
window
// is very small, and OSes try hard to not recycle pids too
// soon, so this is quite safe.

but in practice, we can assume that there will be a significant time gap
between reuses of the same pid, and so a check-then-act race will not
result in bad behavior, but you can't allow arbitrary time to elapse and
assume you're still dealing with the same process.


> Siginfo is only used with waitid as an output argument
>
>
Yes, my mistake.


>
> It seems you don't support sending arbitrary signals, which is a little sad.
>
> I had not investigated any potential negative and unknowable effects of
> being able to send arbitrary signals.
> The implementation is easy to generalize, though it does not map to
> Windows well/at all.
> Would you propose it as an 'int' argument with no limitation? or what
> limits?
> I think it would be an additional method, beyond the current
> destroy/destroyForcibly.
>
>
The portable nature of Java has always hindered progress in functionality
in this area.

I haven't investigated in detail, but perl/emacs/python/tcl-expect manage
to achieve cross-platform functionality, perhaps by degraded functionality
on windows.

I very much do __not__ expect it to happen, but my wish would be to be able
to reimplement emacs in 100% java, including all the subprocess and
pseudo-terminal support required for:

http://www.gnu.org/software/emacs/manual/html_node/elisp/Processes.html


Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-11 Thread Anthony Vanelverdinghe
Thanks for your clarifications Roger. I'm very much in favor of your 
suggestion for naming the method "supportsNormalTermination".


Kind regards,
Anthony


On 11/04/2015 20:35, Roger Riggs wrote:

Hi Anthony,

Thanks for the review and comments.

On 4/11/2015 5:00 AM, Anthony Vanelverdinghe wrote:

Hi Roger

In my opinion, the method "supportsDestroyForcibly" is unintuitive, 
for the following 2 reasons:


- it's named "supportsXxx", where Xxx is the name of a method in the 
same class. So as a user of this API, I would intuitively assume that 
"supportsDestroyForcibly" is related to "destroyForcibly", and means 
whether or not "destroyForcibly" actually forcibly terminates the 
process.


- "supports" has a positive connotation. However, if 
"supportsDestroyForcibly" returns true, this means that the 
implementation of "destroy" is forcible. And "destroyForcibly" either 
invokes "destroy" (default implementation) or is overridden with a 
compliant implementation. So in other words: if 
"supportsDestroyForcibly" returns true, it's impossible to allow the 
process to shut down cleanly. This, in my opinion, is a bad thing, so 
the method indicating this behavior should not start with "supports".
The naming of the method has been problematic, as are the ambiguous 
semantics of Process.destroy, which are historical.


Therefore I would like to propose to replace 
"supportsDestroyForcibly" with "supportsDestroy", which returns the 
negation of what's currently returned by "supportsDestroyForcibly".
I'm not sure this is clearer.  Both destroy() and destroyForcibly() 
always support the destruction
of the process, and without reading more closely, supportsDestroy() 
would always be true.


Perhaps supportsNormalTermination() would reflect the more general 
understanding of the behavior

and the spec could provide the details in relation to the destroy method.



Another question I have is: what happens if "destroy" or 
"destroyForcibly" are called on processes other than the current 
process & its children, i.e. a process that is in "allProcesses" but 
isn't "current" &  isn't in "allChildren" (e.g. the parent process of 
the current process)?

The current spec does not guarantee that the process will be destroyed;
processes can protect themselves against the signals (SIGTERM or SIGKILL)
just as it does not make any assertion about the timing of the process 
state change.


The behavior is to attempt to kill the process but it may not work for 
a variety of

reasons, not all of which can be detected or reported to the caller.
For ProcessHandle.destroy, if the OS does not allow the signal to be delivered,
the destroy/destroyForcibly method could return a boolean.
Alternatively, Martin's request to throw arbitrary signals would be a 
new method

with more OS-specific behavior and exceptions.

The decoupling of ProcessHandle and Process opens the possibility
to have different semantics than for Process.destroy/destroyForcibly.

Roger


Kind regards,
Anthony


On 9/04/2015 22:00, Roger Riggs wrote:

Please review the API and implementation of the Process API Updates
described in JEP 102. Please review and comment by April 23rd.


The recommendation to make ProcessHandle an interface is included
allowing the new functions to be extended by Process subclasses.
The implementation covers all functions on Unix, Windows, Solaris, 
and Mac OS X.


The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/

The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph

Issue: JDK-8077350 Process API Updates Implementation


The code is in the jdk9 sandbox on branch JDK-8046092-branch.

Please review and comment, Roger












Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-11 Thread Roger Riggs

Hi Thomas,

Thanks for the comments.

On 4/11/2015 8:31 AM, Thomas Stüfe wrote:

Hi Roger,

I have a question about getChildren() and getAllChildren().

I assume the point of those functions is to implement point 4 of JEP 
102 ("The ability to deal with process trees, in particular some means 
to destroy a process tree."), by returning a collection of PIDs which 
are the children of the process and then killing them?
Earlier versions included a kill-process-tree method, but it was recommended
to leave the exact algorithm for killing processes to the caller.
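
(Just to illustrate what that caller-side algorithm could look like - a 
minimal sketch, assuming the draft's allChildren() returns a 
Stream<ProcessHandle> of live descendants; error handling omitted:)

// Request termination of a process and of everything currently below it.
static void destroyTree(ProcessHandle root) {
    root.allChildren().forEach(ProcessHandle::destroy);  // descendants first
    root.destroy();                                       // then the root itself
}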


However, I am not sure that this can be implemented in a safe way, at 
least on UNIX, because - as Martin already pointed out - of PID 
recycling. I do not see how you can prevent allChildren() from 
returning PIDs which may be already reaped and recycled when you use 
them later. How do you prevent that?
Unless there is an extended time between getting the children and 
destroying them, the pids will still be valid.
The technique of caching the start time can prevent that case, though it 
has AFAIK not been a problem.
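
(A rough sketch of that technique - it assumes the handle exposes its start 
time; the info().startInstant() accessor used below is an assumption for 
illustration, not something in the current webrev:)

import java.time.Instant;
import java.util.Optional;

// 'expectedStart' was cached when the ProcessHandle was first obtained.
static boolean destroyIfSameProcess(ProcessHandle handle, Instant expectedStart) {
    Optional<Instant> start = handle.info().startInstant();
    if (start.isPresent() && start.get().equals(expectedStart)) {
        return handle.destroy();   // pid and start time still match
    }
    return false;                  // pid was reaped/recycled, or start time unknown
}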


Note even if your coding is bulletproof, that allChildren() will also 
return PIDs of sub processes which are completely unrelated to you and 
Process.java - they could have been forked by some third party native 
code which just happens to run in parallel in the same process. There, 
you have no control about when it gets reaped. It might already have 
been reaped by the time allChildren() returns, and now the same PID 
got recycled as another, unrelated process.
Of course, the best case is for an application to spawn and manage its 
own processes

and handle their proper termination.
The use cases for children/allChildren are focused on 
supervisory/executive functions
that monitor a running system and can clean up even in the case of 
unexpected failures.
All management of processes is subject to OS limitations; if the PID 
were from a completely
different process tree, the ordinary destroy/info functions would not be 
available
unless the process was running as a privileged OS user (same as any 
other native application).




If I am right, it would not be sufficient to state "There is no 
guarantee that a process is alive." - it may be alive but by now be a 
different process altogether. This makes "allChildren()" useless for 
many cases, because the returned information may already be obsolete 
the moment the function returns.

The caching of startTime can remove the ambiguity.


Of course I may be missing something here?

But if I got all that right and the sole purpose of allChildren() is 
to be able to kill them (or otherwise signal them), why not use 
process groups? Process groups would be the traditional way on POSIX 
platforms to handle process trees, and they are also available on 
Windows in the form of Job Objects.


Using process groups to signal sub process trees would be safe, would 
not rely on PID identity, and would be more efficient. Also way less 
coding. Also, it would be an old, established pattern - process groups 
have been around for a long time. Also, using process groups it is 
possible to break away from a group, so a program below you which 
wants to run as a demon can do so by removing itself from the process 
group and thus escaping your kill.


On Windows we have Job objects, and I think there are enough 
similarities to POSIX process groups to abstract them into something 
platform independent.
Earlier discussions of process termination and exit value reaping 
considered

using process groups, but it became evident that the Java runtime needed to
be very careful not to interfere with processes that might be spawned and
controlled by native libraries, and that process groups would only increase
the complexity of the interactions.



The only problem I think is that the API would have somehow to be 
changed. Either by directly reflecting the use of process groups, or 
at least by removing allChildren() and replacing it with something 
like "killAllChildren()" or "signalAllChildren()".


Thanks, Roger



Kind Regards, Thomas











On Thu, Apr 9, 2015 at 10:00 PM, Roger Riggs > wrote:


Please review the API and implementation of the Process API Updates
described in JEP 102. Please review and comment by April 23rd.

The recommendation to make ProcessHandle an interface is included
allowing the new functions to be extended by Process subclasses.
The implementation covers all functions on Unix, Windows, Solaris,
and Mac OS X.

The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/


The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph


Issue: JDK-8077350 Process API Updates Implementation

The code is in the jdk

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-11 Thread Roger Riggs

Hi Anthony,

Thanks for the review and comments.

On 4/11/2015 5:00 AM, Anthony Vanelverdinghe wrote:

Hi Roger

In my opinion, the method "supportsDestroyForcibly" is unintuitive, 
for the following 2 reasons:


- it's named "supportsXxx", where Xxx is the name of a method in the 
same class. So as a user of this API, I would intuitively assume that 
"supportsDestroyForcibly" is related to "destroyForcibly", and means 
whether or not "destroyForcibly" actually forcibly terminates the 
process.


- "supports" has a positive connotation. However, if 
"supportsDestroyForcibly" returns true, this means that the 
implementation of "destroy" is forcible. And "destroyForcibly" either 
invokes "destroy" (default implementation) or is overridden with a 
compliant implementation. So in other words: if 
"supportsDestroyForcibly" returns true, it's impossible to allow the 
process to shut down cleanly. This, in my opinion, is a bad thing, so 
the method indicating this behavior should not start with "supports".
The naming of the method has been problematic, as are the ambiguous 
semantics of Process.destroy, which are historical.


Therefore I would like to propose to replace "supportsDestroyForcibly" 
with "supportsDestroy", which returns the negation of what's currently 
returned by "supportsDestroyForcibly".
I'm not sure this is clearer.  Both destroy() and destroyForcibly() 
always support the destruction
of the process, and without reading more closely, supportsDestroy() would 
always be true.


Perhaps supportsNormalTermination() would reflect the more general 
understanding of the behavior

and the spec could provide the details in relation to the destroy method.



Another question I have is: what happens if "destroy" or 
"destroyForcibly" are called on processes other than the current 
process & its children, i.e. a process that is in "allProcesses" but 
isn't "current" &  isn't in "allChildren" (e.g. the parent process of 
the current process)?

The current spec does not guarantee that the process will be destroyed;
processes can protect themselves against the signals (SIGTERM or SIGKILL)
just as it does not make any assertion about the timing of the process 
state change.


The behavior is to attempt to kill the process but it may not work for a 
variety of

reasons, not all of which can be detected or reported to the caller.
For ProcessHandle.destroy, if the OS does not allow the signal to be delivered,
the destroy/destroyForcibly method could return a boolean.
Alternatively, Martin's request to throw arbitrary signals would be a 
new method

with more OS-specific behavior and exceptions.
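
(As a caller-side illustration of the boolean idea only - a sketch assuming 
ProcessHandle.destroy()/destroyForcibly() report the request as a boolean 
and that ProcessHandle exposes an onExit() CompletableFuture, as discussed 
in this thread; none of this is settled yet:)

import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Ask for normal termination first; escalate only if that cannot be
// requested or the process is still around after a grace period.
static void terminate(ProcessHandle ph) throws Exception {
    if (!ph.destroy()) {
        ph.destroyForcibly();
        return;
    }
    try {
        ph.onExit().get(5, TimeUnit.SECONDS);
    } catch (TimeoutException e) {
        ph.destroyForcibly();
    }
}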

The decoupling of ProcessHandle and Process opens the possibility
to have different semantics than for Process.destroy/destroyForcibly.

Roger


Kind regards,
Anthony


On 9/04/2015 22:00, Roger Riggs wrote:

Please review the API and implementation of the Process API Updates
described in JEP 102. Please review and comment by April 23rd.


The recommendation to make ProcessHandle an interface is included
allowing the new functions to be extended by Process subclasses.
The implementation covers all functions on Unix, Windows, Solaris, 
and Mac OS X.


The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/

The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph

Issue: JDK-8077350  
Process API Updates Implementation


The code is in the jdk9 sandbox on branch JDK-8046092-branch.

Please review and comment, Roger








Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-11 Thread Roger Riggs

Hi Martin,

Thanks for the review.

On 4/11/2015 1:37 AM, Martin Buchholz wrote:
Thanks for the huge effort.  I did a superficial review and it seems 
pretty good.  Of course, changing the Process code is high risk and 
some things will probably need fixup later.


On Unix, you seem to be identifying ProcessHandles by their pid.  What 
are you going to do about the problem of misidentifying Unix processes 
whose pid got recycled?  If you spin up a thread to call waitpid as 
soon as you create the ProcessHandle, the window is very short (and 
OSes avoid recycling pids immediately) so that's pretty safe, but it 
wastes a thread for each ProcessHandle!  I don't have a good answer.  
Maybe record the start time of the process, where available?
Getting the basic API and implementation in has taken too long, so I'm 
looking to get general agreement and get it pushed, and then I'd be happy 
to go back and include the start time 
in ProcessHandle.
In the case of children() which already has to read /proc/pid/stat for 
the parent,
the start time is inexpensively available also.  The spec for equals(), 
and compareTo is straightforward
without the start time.  With the start time, either starttime has to be 
exposed in the API/spec or
be left 'implementation' dependent, which is a poor spec.
Only in the case of allProcesses() would the overhead increase 
significantly; it currently
does not need to open /proc/pid for every process.
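
(For reference, a minimal Linux-only sketch of pulling that field out - 
starttime is field 22 of /proc/<pid>/stat, in clock ticks since boot; 
skipping past the closing ')' avoids trouble with command names that 
contain spaces. The helper below is mine, not code from the webrev, and 
the native implementation would do the equivalent in C:)

import java.nio.file.Files;
import java.nio.file.Paths;

// Returns the start time in clock ticks since boot, or -1 if unavailable.
static long startTicks(long pid) {
    try {
        String stat = new String(Files.readAllBytes(Paths.get("/proc/" + pid + "/stat")));
        String[] rest = stat.substring(stat.lastIndexOf(')') + 2).split(" ");
        return Long.parseLong(rest[19]);  // field 22 overall; index 19 after pid and comm
    } catch (Exception e) {
        return -1;
    }
}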



+} else if (siginfo.si_code == CLD_KILLED || siginfo.si_code == 
CLD_DUMPED) {
Using siginfo probably won't work under load because multiple signals 
arriving at the same time are coalesced into one, at least on Linux.

Unix signals are sadly totally broken.

Siginfo is only used with waitid as an output argument.

For the case of reaping the status, it uses waitpid (as Process did 
before); that seemed the least risky.
For the case of waiting for a process without reaping, that option is 
only available
using waitid.  I tried using waitid for both cases but on Mac, waitid 
didn't seem to

be working correctly in all cases.



--

It seems you don't support sending arbitrary signals, which is a little 
sad.
I had not investigated any potential negative and unknowable effects of 
being able to send arbitrary signals.
The implementation is easy to generalize, though it does not map to 
Windows well/at all.
Would you propose it as an 'int' argument with no limitation? or what 
limits?
I think it would be an additional method, beyond the current 
destroy/destroyForcibly.


Thanks, Roger




On Thu, Apr 9, 2015 at 1:00 PM, Roger Riggs > wrote:


Please review the API and implementation of the Process API Updates
described in JEP 102. Please review and comment by April 23rd.

The recommendation to make ProcessHandle an interface is included
allowing the new functions to be extended by Process subclasses.
The implementation covers all functions on Unix, Windows, Solaris,
and Mac OS X.

The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/


The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph


Issue: JDK-8077350 Process API Updates Implementation

The code is in the jdk9 sandbox on branch JDK-8046092-branch.

Please review and comment, Roger






Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-11 Thread Thomas Stüfe
p.s.

Note that using allChildren() to kill process trees has a second problem,
even without PID recycling: the PID list it returns may not be complete
once you come around to use it.

Imagine a process tree with some runaway process forking below you
constantly. You want to kill the complete process tree, and do this fast.
Iterating over /proc to collect PIDs, returning this list to Java to then
start processing it may be way too slow for this case. You'd end up with a
lot of zombies.

Again, process groups were made for problems like this.

On Sat, Apr 11, 2015 at 2:31 PM, Thomas Stüfe 
wrote:

> Hi Roger,
>
> I have a question about getChildren() and getAllChildren().
>
> I assume the point of those functions is to implement point 4 of JEP 102
> ("The ability to deal with process trees, in particular some means to
> destroy a process tree."), by returning a collection of PIDs which are the
> children of the process and then killing them?
>
> However, I am not sure that this can be implemented in a safe way, at
> least on UNIX, because - as Martin already pointed out - of PID recycling.
> I do not see how you can prevent allChildren() from returning PIDs which
> may be already reaped and recycled when you use them later. How do you
> prevent that?
>
> Note even if your coding is bulletproof, that allChildren() will also
> return PIDs of sub processes which are completely unrelated to you and
> Process.java - they could have been forked by some third party native code
> which just happens to run in parallel in the same process. There, you have
> no control about when it gets reaped. It might already have been reaped by
> the time allChildren() returns, and now the same PID got recycled as
> another, unrelated process.
>
> If I am right, it would not be sufficient to state "There is no guarantee
> that a process is alive." - it may be alive but by now be a different
> process altogether. This makes "allChildren()" useless for many cases,
> because the returned information may already be obsolete the moment the
> function returns.
>
>
> Of course I may be missing something here?
>
> But if I got all that right and the sole purpose of allChildren() is to be
> able to kill them (or otherwise signal them), why not use process groups?
> Process groups would be the traditional way on POSIX platforms to handle
> process trees, and they are also available on Windows in the form of Job
> Objects.
>
> Using process groups to signal sub process trees would be safe, would not
> rely on PID identity, and would be more efficient. Also way less coding.
> Also, it would be an old, established pattern - process groups have been
> around for a long time. Also, using process groups it is possible to break
> away from a group, so a program below you which wants to run as a daemon can
> do so by removing itself from the process group and thus escaping your kill.
>
> On Windows we have Job objects, and I think there are enough similarities
> to POSIX process groups to abstract them into something platform
> independent.
>
> The only problem I think is that the API would have somehow to be changed.
> Either by directly reflecting the use of process groups, or at least by
> removing allChildren() and replacing it with something like
> "killAllChildren()" or "signalAllChildren()".
>
> Kind Regards, Thomas
>
>
>
>
>
>
>
>
>
>
>
> On Thu, Apr 9, 2015 at 10:00 PM, Roger Riggs 
>  wrote:
>
>> Please review the API and implementation of the Process API Updates
>> described in JEP 102. Please review and comment by April 23rd.
>>
>> The recommendation to make ProcessHandle an interface is included
>> allowing the new functions to be extended by Process subclasses.
>> The implementation covers all functions on Unix, Windows, Solaris, and
>> Mac OS X.
>>
>> The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/
>>
>> The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph
>>
>> Issue: JDK-8077350 
>> Process API Updates Implementation
>>
>> The code is in the jdk9 sandbox on branch JDK-8046092-branch.
>>
>> Please review and comment, Roger
>>

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-11 Thread Thomas Stüfe
Hi Roger,

I have a question about getChildren() and getAllChildren().

I assume the point of those functions is to implement point 4 of JEP 102
("The ability to deal with process trees, in particular some means to
destroy a process tree."), by returning a collection of PIDs which are the
children of the process and then killing them?

However, I am not sure that this can be implemented in a safe way, at least
on UNIX, because - as Martin already pointed out - of PID recycling. I do
not see how you can prevent allChildren() from returning PIDs which may be
already reaped and recycled when you use them later. How do you prevent
that?

Note even if your coding is bulletproof, that allChildren() will also
return PIDs of sub processes which are completely unrelated to you and
Process.java - they could have been forked by some third party native code
which just happens to run in parallel in the same process. There, you have
no control about when it gets reaped. It might already have been reaped by
the time allChildren() returns, and now the same PID got recycled as
another, unrelated process.

If I am right, it would not be sufficient to state "There is no guarantee
that a process is alive." - it may be alive but by now be a different
process altogether. This makes "allChildren()" useless for many cases,
because the returned information may already be obsolete the moment the
function returns.

Of course I may be missing something here?

But if I got all that right and the sole purpose of allChildren() is to be
able to kill them (or otherwise signal them), why not use process groups?
Process groups would be the traditional way on POSIX platforms to handle
process trees, and they are also available on Windows in the form of Job
Objects.

Using process groups to signal sub process trees would be safe, would not
rely on PID identity, and would be more efficient. Also way less coding.
Also, it would be an old, established pattern - process groups have been
around for a long time. Also, using process groups it is possible to break
away from a group, so a program below you which wants to run as a daemon can
do so by removing itself from the process group and thus escaping your kill.

On Windows we have Job objects, and I think there are enough similarities
to POSIX process groups to abstract them into something platform
independent.

The only problem I think is that the API would have somehow to be changed.
Either by directly reflecting the use of process groups, or at least by
removing allChildren() and replacing it with something like
"killAllChildren()" or "signalAllChildren()".

Kind Regards, Thomas











On Thu, Apr 9, 2015 at 10:00 PM, Roger Riggs  wrote:

> Please review the API and implementation of the Process API Updates
> described in JEP 102. Please review and comment by April 23rd.
>
> The recommendation to make ProcessHandle an interface is included
> allowing the new functions to be extended by Process subclasses.
> The implementation covers all functions on Unix, Windows, Solaris, and Mac
> OS X.
>
> The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/
>
> The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph
>
> Issue: JDK-8077350 
> Process API Updates Implementation
>
> The code is in the jdk9 sandbox on branch JDK-8046092-branch.
>
> Please review and comment, Roger
>




Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-11 Thread Anthony Vanelverdinghe

Hi Roger

In my opinion, the method "supportsDestroyForcibly" is unintuitive, for 
the following 2 reasons:


- it's named "supportsXxx", where Xxx is the name of a method in the 
same class. So as a user of this API, I would intuitively assume that 
"supportsDestroyForcibly" is related to "destroyForcibly", and means 
whether or not "destroyForcibly" actually forcibly terminates the process.


- "supports" has a positive connotation. However, if 
"supportsDestroyForcibly" returns true, this means that the 
implementation of "destroy" is forcible. And "destroyForcibly" either 
invokes "destroy" (default implementation) or is overridden with a 
compliant implementation. So in other words: if 
"supportsDestroyForcibly" returns true, it's impossible to allow the 
process to shut down cleanly. This, in my opinion, is a bad thing, so 
the method indicating this behavior should not start with "supports".


Therefore I would like to propose to replace "supportsDestroyForcibly" 
with "supportsDestroy", which returns the negation of what's currently 
returned by "supportsDestroyForcibly".


Another question I have is: what happens if "destroy" or 
"destroyForcibly" are called on processes other than the current process 
& its children, i.e. a process that is in "allProcesses" but isn't 
"current" &  isn't in "allChildren" (e.g. the parent process of the 
current process)?


Kind regards,
Anthony


On 9/04/2015 22:00, Roger Riggs wrote:

Please review the API and implementation of the Process API Updates
described in JEP 102. Please review and comment by April 23rd.


The recommendation to make ProcessHandle an interface is included
allowing the new functions to be extended by Process subclasses.
The implementation covers all functions on Unix, Windows, Solaris, and 
Mac OS X.


The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/

The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph

Issue: JDK-8077350  
Process API Updates Implementation


The code is in the jdk9 sandbox on branch JDK-8046092-branch.

Please review and comment, Roger






Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-10 Thread Martin Buchholz
On Fri, Apr 10, 2015 at 10:37 PM, Martin Buchholz 
wrote:

>
> +} else if (siginfo.si_code == CLD_KILLED || siginfo.si_code == 
> CLD_DUMPED) {
>
> Using siginfo probably won't work under load because multiple signals
> arriving at the same time are coalesced into one, at least on Linux.
> Unix signals are sadly totally broken.
>

Of course I was wrong - I should have looked more closely at the code -
you're not using signals, despite the use of siginfo!

---

Can you explain why you chose to use waitid instead of waitpid?


Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-04-10 Thread Martin Buchholz
Thanks for the huge effort.  I did a superficial review and it seems pretty
good.  Of course, changing the Process code is high risk and some things
will probably need fixup later.

On Unix, you seem to be identifying ProcessHandles by their pid.  What are
you going to do about the problem of misidentifying Unix processes whose
pid got recycled?  If you spin up a thread to call waitpid as soon as you
create the ProcessHandle, the window is very short (and OSes avoid
recycling pids immediately) so that's pretty safe, but it wastes a thread
for each ProcessHandle!  I don't have a good answer.  Maybe record the
start time of the process, where available?

+} else if (siginfo.si_code == CLD_KILLED || siginfo.si_code
== CLD_DUMPED) {

Using siginfo probably won't work under load because multiple signals
arriving at the same time are coalesced into one, at least on Linux.
Unix signals are sadly totally broken.

--

It seems you don't support sending arbitrary signals, which is a little sad.


On Thu, Apr 9, 2015 at 1:00 PM, Roger Riggs  wrote:

> Please review the API and implementation of the Process API Updates
> described in JEP 102. Please review and comment by April 23rd.
>
> The recommendation to make ProcessHandle an interface is included
> allowing the new functions to be extended by Process subclasses.
> The implementation covers all functions on Unix, Windows, Solaris, and Mac
> OS X.
>
> The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/
>
> The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph
>
> Issue: JDK-8077350 
> Process API Updates Implementation
>
> The code is in the jdk9 sandbox on branch JDK-8046092-branch.
>
> Please review and comment, Roger
>