On 04/09/2015 10:00 PM, Roger Riggs wrote:
Please review the API and implementation of the Process API Updates
described in JEP 102 <https://bugs.openjdk.java.net/browse/JDK-8046092>. Please review and comment by April 23rd.

The recommendation to make ProcessHandle an interface is included,
allowing the new functions to be extended by Process subclasses.
The implementation covers all functions on Unix, Windows, Solaris, and Mac OS X.

The API doc: http://cr.openjdk.java.net/~rriggs/ph-apidraft/

The webrev: http://cr.openjdk.java.net/~rriggs/webrev-ph

Issue: JDK-8077350 <https://bugs.openjdk.java.net/browse/JDK-8077350> Process API Updates Implementation

The code is in the jdk9 sandbox on branch JDK-8046092-branch.

Please review and comment, Roger

Hi Roger,

I have a few comments...

The ProcessHandle JNI interface is the same for all platforms and is reused in the UNIX ProcessImpl for destroying and waiting. Is there some particular reason why it is not reused in the Windows ProcessImpl too? The Windows impl still uses its own native methods for destroying and waiting, but they take "process handles" instead of PIDs. Are handles less susceptible to recycling?

I am confused by the method name "supportsDestroyForcibly" too. The following would be more intuitive in my opinion:

- destroyIsForcible()
- isDestroyForcible()
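
For example, at a call site the suggested name reads as a statement about destroy() itself (assuming I read the intended semantics right, i.e. that the method tells whether destroy() already acts forcibly):

// hypothetical call site using the suggested name
if (handle.destroyIsForcible()) {
    handle.destroy();          // already equivalent to destroyForcibly() on this platform
} else {
    handle.destroyForcibly();  // need the explicit forcible variant
}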

The @implNote for Process.onExit() says:

 366      * @implNote
 367      * The default implementation of this method employs a thread to
 368      * {@link #waitFor() wait} for process exit,
 369      * which may consume a lot of memory for thread stacks if a
 370      * large number of processes are waited for concurrently.

That's true for the default implementation in Process but not true for internal OpenJDK ProcessImpl(s). Should the note say so explicitly so that users won't be afraid of using this method with internal implementations?
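
Just to illustrate the concern, this is roughly what I imagine such a default doing (my own sketch, not the code from the webrev):

// thread-per-process default: each call parks a thread in waitFor()
public CompletableFuture<Process> onExit() {
    CompletableFuture<Process> future = new CompletableFuture<>();
    Thread waiter = new Thread(() -> {
        try {
            waitFor();                     // blocks this thread until the process exits
            future.complete(this);         // 'this' is the Process being waited for
        } catch (InterruptedException e) {
            future.completeExceptionally(e);
        }
    }, "process-exit-waiter");
    waiter.setDaemon(true);
    waiter.start();
    return future;
}

...so waiting for thousands of subprocesses means thousands of blocked threads and their stacks, which is exactly what the note warns about.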

 371      * <p>
 372      * External implementations are advised to override this method and provide
 373      * more efficient implementation. For example, to delegate to the underlying
 374      * process, it can simply do the following:
 375      * <pre>{@code
 376      *    public CompletableFuture<Process> onExit() {
 377      *       return delegate.onExit();
 378      *    }
 379      * }</pre>
 380      * ...which in case of implementation of the delegate is used.


...this example is not correct as it exposes the "delegate" Process instance. It should be something like:

public CompletableFuture<Process> onExit() {
   return delegate.onExit().thenApply(p -> this);
}
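
...i.e. in the context of a wrapping Process implementation it would look roughly like this (just a sketch; "delegate" is an illustrative field name, as in the @implNote):

import java.io.InputStream;
import java.io.OutputStream;
import java.util.concurrent.CompletableFuture;

// sketch of a Process wrapper that delegates to another Process
public class DelegatingProcess extends Process {
    private final Process delegate;

    public DelegatingProcess(Process delegate) {
        this.delegate = delegate;
    }

    @Override
    public CompletableFuture<Process> onExit() {
        // complete with *this* wrapper, not with the wrapped delegate,
        // so the delegate instance never escapes to callers
        return delegate.onExit().thenApply(p -> this);
    }

    // the remaining abstract methods simply forward to the delegate
    @Override public OutputStream getOutputStream() { return delegate.getOutputStream(); }
    @Override public InputStream getInputStream()   { return delegate.getInputStream(); }
    @Override public InputStream getErrorStream()   { return delegate.getErrorStream(); }
    @Override public int waitFor() throws InterruptedException { return delegate.waitFor(); }
    @Override public int exitValue() { return delegate.exitValue(); }
    @Override public void destroy() { delegate.destroy(); }
}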


Otherwise it's nice how this turned out.

Unrelated to your implementation, I have been thinking of another small Process API update. Some people find it odd how redirected in/out/err streams are exposed:

http://blog.headius.com/2013/06/the-pain-of-broken-subprocess.html

They basically don't like:

- that exposed Input/Output streams are buffered
- that the underlying streams are File(Input/Output)Streams which, although the backing OS objects are not files but pipes, don't expose selectable channels, so non-blocking event-based IO can't be performed on them
- that the exposed IO streams are automatically "managed" in the UNIX variants of ProcessImpl, which requires subtle "hacks" to do it in a perceptively transparent way (delayed close, draining input on exit and making it available after the underlying handle is already closed, ...)
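
For illustration, with the current API the best a client can do is a blocking read on the buffered stream; there is no SelectableChannel anywhere to register with a Selector (sketch, UNIX command chosen just for the example):

import java.io.InputStream;

public class BlockingReadDemo {
    public static void main(String[] args) throws Exception {
        // current API: the subprocess' stdout is only reachable as a (buffered) InputStream
        Process p = new ProcessBuilder("cat", "/etc/hosts").start();
        try (InputStream in = p.getInputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {   // blocking read; nothing to select/poll on
                System.out.write(buf, 0, n);
            }
        }
        System.out.flush();
        p.waitFor();
    }
}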

So I've been playing with the idea of exposing the "real" pipe channels in the last couple of days. Here's the prototype I came up with:

http://cr.openjdk.java.net/~plevart/jdk9-sandbox/JDK-8046092-branch/Process.PipeChannel/webrev.01/

This adds a new Redirect type to the API and 3 new methods to Process that return Pipe channels when this new Redirect type is used. It's interesting that no native code changes were necessary. The behavior of pipes on Windows is a little different (perhaps because the Pipe NIO API uses sockets under the hood on Windows - why is that? Windows does have a pipe equivalent). What bothers me is that file handles opened on files (when redirecting to/from File) can be closed as soon as the subprocess is started, and the subprocess is still able to read from/write to the files (as on UNIX). It's not the same with pipe (i.e. socket) handles on Windows: they must be closed only after the subprocess exits.

If this subtle difference between file handles and socket handles on Windows could be dealt with (perhaps some options exist that affect subprocess spawning), then the extra waiting thread would not be needed on Windows.
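
To give a feel for the intended usage, something along these lines (the Redirect constant and the accessor name below are made up for illustration; the real names are in the webrev):

import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class PipeChannelDemo {
    public static void main(String[] args) throws Exception {
        // hypothetical: Redirect.PIPE_CHANNEL selects the new pipe-channel redirect
        ProcessBuilder pb = new ProcessBuilder("cat", "/etc/hosts")
                .redirectOutput(ProcessBuilder.Redirect.PIPE_CHANNEL);
        Process p = pb.start();

        // hypothetical accessor, analogous to getInputStream(): the read end of the
        // subprocess' stdout pipe, exposed as a selectable channel
        Pipe.SourceChannel stdout = p.getInputPipe();
        stdout.configureBlocking(false);

        try (Selector sel = Selector.open()) {
            stdout.register(sel, SelectionKey.OP_READ);
            ByteBuffer buf = ByteBuffer.allocate(8192);
            while (stdout.isOpen()) {
                sel.select();                      // event-based: wake up when data arrives
                sel.selectedKeys().clear();
                int n = stdout.read(buf);
                if (n > 0) {
                    System.out.write(buf.array(), 0, buf.position());
                    buf.clear();
                } else if (n == -1) {              // EOF: subprocess closed its end of the pipe
                    stdout.close();
                }
            }
        }
        System.out.flush();
        p.waitFor();
    }
}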

So what do you think of this API update?

Regards, Peter
