Re: RFR [15] 8236825: Reading output from ... using ProcessBuilder/Process might hang
Hi Paul,

Yes, other potential changes in low-level I/O should probably happen first. Will check.

On 1/22/20 3:31 PM, Paul Sandoz wrote:

My sense is it is fixing a marginal case for the less dominant platform, where this is less likely to arise, whereas for the dominant platform there is still an issue. Waiting for a non-blocking native read is a reasonable approach (IIUC that is required for the desired proper fix). It would be useful to assess any time frame for such support, to plan ahead.

— ProcessImpl —

 665     void processExited() {
 666         synchronized (closeLock) {
 667             try {
 668                 InputStream in = this.in;
 669                 // this stream is closed if and only if: in == null
 670                 if (in != null) {
 671                     boolean noCompete = false;
 672                     try {
 673                         // Briefly, wait for competing read to complete
 674                         noCompete = readLock.tryAcquire(500L, TimeUnit.MILLISECONDS);
 675                         if (noCompete) {
 676                             // no competing read, buffer any pending input
 677                             this.in = drainInputStream(in);
 678                         }
 679                     } catch (InterruptedException ie) {
 680                         // Ignore interrupt and release and close always
 681                     } finally {
 682                         readAborted = !noCompete;
 683                         in.close();  // close the original underlying input stream
 684                         if (noCompete)
 685                             readLock.release();
 686                     }
 687                 }
 688             } catch (IOException ignored) {}
 689         }
 690     }

Do you need to move the code at lines 684-685 to occur before line 683, since in.close() could throw?

Good catch. A try/catch/ignore block around the close would also address it. If the readLock.release() happens after the close(), then a pending read won't be in a race with the close.

Thanks, Roger

Paul.

On Jan 21, 2020, at 12:36 PM, Roger Riggs wrote:

Please review a potential way to address two issues of java.lang.Process deadlocks during process exit. [1] [2] (Linux OS process expertise appreciated).

The deadlock is between a thread reading process output from a process that has exited and the processExited thread that is attempting to buffer any remaining output so the file descriptor for the pipe can be closed.
The methods involved are synchronized on a ProcessPipeInputStream instance.

The hard case arises infrequently, since the pipe streams are normally closed by the OS (or within a few seconds) and the readXXX completes. However, the case causing trouble is when the subprocess has spawned another process and both processes are using the same file descriptor/stream for output. Though the process that exits no longer has the fd open, the extra subprocess does. And if that subprocess does not exit, the read does not return and the deadlock is not resolved.

The approach proposed is to use a semaphore to guard the readXXX and to provide some non-blocking logic in processExited to forcibly close the pipe if it detects that there is a conflicting read in progress (and to remove the synchronized on processExited).

This solution works ok on MacOSX, where one of the issues occurred frequently. Closing the pipe unblocks the reading thread.

On Linux, it appears that the blocking read (in native code) does not unblock unless a signal occurs; so the solution does not fix the problem adequately/completely.

Having a non-blocking native read would be the next step of complexity. The problem has been around for a while, so it may be an option to wait for additional work on non-blocking pipe reads, the direction that Loom is moving.

Suggestions welcome,

Thanks, Roger

Webrev: http://cr.openjdk.java.net/~rriggs/webrev-hdiutil-8236825/

Issues:
[1] https://bugs.openjdk.java.net/browse/JDK-8236825
[2] https://bugs.openjdk.java.net/browse/JDK-8169565
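The adjustment Roger agrees to above can be sketched in isolation. This is a hypothetical standalone demo, not the webrev itself: the readLock and noCompete names mirror the patch, but the throwing InputStream and the surrounding class are invented to show why a try/catch around close() keeps the release reachable even when close() throws.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.Semaphore;

public class CloseOrderDemo {
    public static void main(String[] args) {
        Semaphore readLock = new Semaphore(1);

        // Hypothetical stream whose close() fails, simulating Paul's concern.
        InputStream in = new InputStream() {
            @Override public int read() { return -1; }
            @Override public void close() throws IOException {
                throw new IOException("simulated close failure");
            }
        };

        boolean noCompete = readLock.tryAcquire();  // no competing read in this demo
        try {
            in.close();  // may throw; without the catch, the release would be skipped
        } catch (IOException ignored) {
            // ignore: the release below must still run
        }
        if (noCompete)
            readLock.release();  // released only after the close, so a pending
                                 // read cannot race with the close

        // The permit is restored despite close() throwing.
        System.out.println("permits=" + readLock.availablePermits());
    }
}
```

Running it prints `permits=1`: the guard is intact even on a failed close, which is the property the review comment is after.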
Re: RFR [15] 8236825: Reading output from ... using ProcessBuilder/Process might hang
My sense is it is fixing a marginal case for the less dominant platform, where this is less likely to arise, whereas for the dominant platform there is still an issue. Waiting for a non-blocking native read is a reasonable approach (IIUC that is required for the desired proper fix). It would be useful to assess any time frame for such support, to plan ahead.

— ProcessImpl —

 665     void processExited() {
 666         synchronized (closeLock) {
 667             try {
 668                 InputStream in = this.in;
 669                 // this stream is closed if and only if: in == null
 670                 if (in != null) {
 671                     boolean noCompete = false;
 672                     try {
 673                         // Briefly, wait for competing read to complete
 674                         noCompete = readLock.tryAcquire(500L, TimeUnit.MILLISECONDS);
 675                         if (noCompete) {
 676                             // no competing read, buffer any pending input
 677                             this.in = drainInputStream(in);
 678                         }
 679                     } catch (InterruptedException ie) {
 680                         // Ignore interrupt and release and close always
 681                     } finally {
 682                         readAborted = !noCompete;
 683                         in.close();  // close the original underlying input stream
 684                         if (noCompete)
 685                             readLock.release();
 686                     }
 687                 }
 688             } catch (IOException ignored) {}
 689         }
 690     }

Do you need to move the code at lines 684-685 to occur before line 683, since in.close() could throw?

Paul.

> On Jan 21, 2020, at 12:36 PM, Roger Riggs wrote:
>
> Please review a potential way to address two issues of java.lang.Process
> deadlocks during process exit. [1] [2]
> (Linux OS process expertise appreciated).
>
> The deadlock is between a thread reading process output from a process
> that has exited and the processExited thread that is attempting to buffer
> any remaining output so the file descriptor for the pipe can be closed.
> The methods involved are synchronized on a ProcessPipeInputStream instance.
>
> The hard case arises infrequently, since the pipe streams are normally
> closed by the OS (or within a few seconds) and the readXXX completes.
> However, the case causing trouble is when the subprocess has spawned
> another process and both processes are using the same file
> descriptor/stream for output.
> Though the process that exits no longer has the fd open, the extra
> subprocess does.
> And if that subprocess does not exit, the read does not return and the
> deadlock is not resolved.
>
> The approach proposed is to use a semaphore to guard the readXXX and
> to provide some non-blocking logic in processExited to forcibly close
> the pipe if it detects that there is a conflicting read in progress
> (and to remove the synchronized on processExited).
>
> This solution works ok on MacOSX, where one of the issues occurred
> frequently. Closing the pipe unblocks the reading thread.
>
> On Linux, it appears that the blocking read (in native code) does not
> unblock unless a signal occurs; so the solution does not fix the problem
> adequately/completely.
>
> Having a non-blocking native read would be the next step of complexity.
> The problem has been around for a while, so it may be an option to wait
> for additional work on non-blocking pipe reads, the direction that Loom
> is moving.
>
> Suggestions welcome,
>
> Thanks, Roger
>
> Webrev: http://cr.openjdk.java.net/~rriggs/webrev-hdiutil-8236825/
>
> Issues:
> [1] https://bugs.openjdk.java.net/browse/JDK-8236825
> [2] https://bugs.openjdk.java.net/browse/JDK-8169565
RFR [15] 8236825: Reading output from ... using ProcessBuilder/Process might hang
Please review a potential way to address two issues of java.lang.Process deadlocks during process exit. [1] [2] (Linux OS process expertise appreciated).

The deadlock is between a thread reading process output from a process that has exited and the processExited thread that is attempting to buffer any remaining output so the file descriptor for the pipe can be closed. The methods involved are synchronized on a ProcessPipeInputStream instance.

The hard case arises infrequently, since the pipe streams are normally closed by the OS (or within a few seconds) and the readXXX completes. However, the case causing trouble is when the subprocess has spawned another process and both processes are using the same file descriptor/stream for output. Though the process that exits no longer has the fd open, the extra subprocess does. And if that subprocess does not exit, the read does not return and the deadlock is not resolved.

The approach proposed is to use a semaphore to guard the readXXX and to provide some non-blocking logic in processExited to forcibly close the pipe if it detects that there is a conflicting read in progress (and to remove the synchronized on processExited).

This solution works ok on MacOSX, where one of the issues occurred frequently. Closing the pipe unblocks the reading thread.

On Linux, it appears that the blocking read (in native code) does not unblock unless a signal occurs; so the solution does not fix the problem adequately/completely.

Having a non-blocking native read would be the next step of complexity. The problem has been around for a while, so it may be an option to wait for additional work on non-blocking pipe reads, the direction that Loom is moving.

Suggestions welcome,

Thanks, Roger

Webrev: http://cr.openjdk.java.net/~rriggs/webrev-hdiutil-8236825/

Issues:
[1] https://bugs.openjdk.java.net/browse/JDK-8236825
[2] https://bugs.openjdk.java.net/browse/JDK-8169565
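The semaphore-guard idea in the proposal can be sketched in a minimal standalone form. This is a hedged illustration, not the webrev: readLock and noCompete mirror the patch, but the demo class is hypothetical and it only simulates the two outcomes of the brief wait in processExited, without a real pipe.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class ReadGuardDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore readLock = new Semaphore(1);

        // A reader thread would acquire the guard before blocking in readXXX;
        // here we acquire it directly to simulate a read in progress.
        readLock.acquire();

        // processExited(): briefly wait for any competing read to complete.
        boolean noCompete = readLock.tryAcquire(500L, TimeUnit.MILLISECONDS);
        System.out.println("noCompete=" + noCompete);

        if (noCompete) {
            // No read in progress: safe to drain remaining input and close.
            readLock.release();
        } else {
            // A read is still blocked: the patch records readAborted = true
            // and forcibly closes the pipe, which unblocks the reader on
            // macOS (but, per the message above, not reliably on Linux).
            System.out.println("aborting competing read");
        }
    }
}
```

With the permit held, the 500 ms tryAcquire times out and prints `noCompete=false` followed by `aborting competing read`, which is exactly the path where processExited must force the pipe closed instead of draining it.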