Hi Paul,

Thank you for looking at the webrev!

On 14.10.2013 12:50, Paul Sandoz wrote:
Hi Ivan,

Why do you require closeLock? Is it not sufficient to leave processExited as is
and just add:

public synchronized void close() throws IOException {
     super.close();
}

I want to limit the impact on the code; that's why I synchronize close() only
with processExited() (see the sketch below). If close() were synchronized with
all of the stream's methods, I think it could introduce unpredictable side
effects, e.g. close() blocking behind a long synchronized read().

Plus there was already some logic checking if close was asynchronously called 
(see BufferedInputStream.close):

  372                         if (buf == null) // asynchronous close()?
  373                             this.in = null;

and this would no longer be required

Yes, you're right, and these two lines can be removed.
I wouldn't remove the surrounding try {} catch (IOException) though, since the
stream can still be closed asynchronously.
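
To make the intent concrete, here is a rough sketch of the shape I have in mind
(illustrative only, not the exact webrev; drainInputStream() below is a
simplified stand-in for the existing UNIXProcess helper, which actually uses
available()/FIONREAD):

import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

class ProcessPipeInputStream extends BufferedInputStream {
    private final Object closeLock = new Object();

    ProcessPipeInputStream(InputStream in) {
        super(in);
    }

    // Simplified stand-in for UNIXProcess.drainInputStream().
    private static byte[] drainInputStream(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        for (int n; (n = in.read(buf)) > 0; )
            out.write(buf, 0, n);
        return out.toByteArray();
    }

    // Called by the process reaper thread when the process exits.
    synchronized void processExited() {
        synchronized (closeLock) {
            try {
                InputStream in = this.in;
                if (in != null) {
                    byte[] stragglers = drainInputStream(in);
                    in.close();
                    this.in = new ByteArrayInputStream(stragglers);
                }
            } catch (IOException ignored) {
                // Keep this: the stream can still be closed asynchronously
                // in other ways (e.g. a native close()), which the lock
                // cannot guard against.
            }
        }
    }

    @Override
    public void close() throws IOException {
        // Take only closeLock here, so close() stays unsynchronized with
        // respect to the stream's own monitor (read(), available(), ...).
        synchronized (closeLock) {
            super.close();
        }
    }
}

The point is simply that close() now waits for a draining processExited() (and
vice versa), while the other BufferedInputStream methods are left untouched.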

I am somewhat struggling to understand the test (not too familiar with this 
area). Requires a more expert eye in this area than I.

IIUC you are trying to induce the output process fd of ExecLoop to "flip" 
to another fd opened by OpenLoop.

Yes.
I actually adopted a testcase provided by the reporter of the problem.
I only changed the test in such a way that it can be run in an automated mode.

Below is the exact comment from the original testcase.
It may be a good idea to include it in the regtest.

/* Following code is a test case that demonstrates a close race in ProcessPipeInputStream.
 *
 * A snippet of the offending code within ProcessPipeInputStream:
 *       // Called by the process reaper thread when the process exits.
 *       synchronized void processExited() {
 *           // Most BufferedInputStream methods are synchronized, but close()
 *           // is not, and so we have to handle concurrent racing close().
 *           try {
 *               InputStream in = this.in;
 *               if (in != null) {
 *                   byte[] stragglers = drainInputStream(in);
 *                   in.close();
 *
 * The comment even talks about the race. However, the code is not adequately
 * protecting against it. drainInputStream() (available() in particular) is not
 * threadsafe when a close happens in parallel. What's worse is that a subsequent
 * open() is likely to reassign the closed fd, so drainInputStream ends up draining
 * the wrong fd! This can cause OOMs, data corruption on other fds, and other very
 * bad things.
 *
 * Notes:
 *  - Test creates a large sparse file in /tmp/bigsparsefile
 *  - Test uses "/proc/meminfo" for simple input.
 *  - Invoke using "-Xmx128m" to ensure file size of 1G is greater than heap size
 *  - Test will run forever; if successful there will not be any OOM exceptions
 *    (on my test system they occur about 1 per minute)
 *  - You need multiple cores (I use 2 x Xeon E5620 = 8 cores, 16 threads).
 *
 * The basic premise of the test is to create a 1GB sparse file. Then have lots
 * of threads opening and closing this file. In a loop within a separate thread
 * use UNIXProcess to run a simple command. We read the output of the command,
 * make sure the process has exited and then close the inputstream. When the bug
 * happens, processExited() will use FIONREAD to figure out how much data to
 * drain BUT the fd it is checking will be closed and reopened (now referring to
 * /tmp/bigsparsefile) so it will see the size as 1GB and then attempt to
 * allocate a buffer of that size and then OOM.
 */
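
In terms of structure, the regtest boils down to roughly the following skeleton
(illustrative only, not the actual test from the webrev; names like OpenLoop and
ExecLoop, the 1 GB size, the /tmp path and the thread counts just follow the
description above):

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.RandomAccessFile;

public class CloseRaceSketch {
    static final File BIG = new File("/tmp/bigsparsefile");

    // Keep opening and closing the big sparse file so that recently closed
    // fd numbers are quickly reassigned to it.
    static class OpenLoop implements Runnable {
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try (InputStream in = new FileInputStream(BIG)) {
                    // nothing to read; open/close is the whole point
                } catch (Exception ignored) { }
            }
        }
    }

    // Repeatedly run a short command, read its output, wait for the process
    // to exit, and then close the stream, racing with processExited().
    static class ExecLoop implements Runnable {
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Process p = new ProcessBuilder("cat", "/proc/meminfo").start();
                    InputStream in = p.getInputStream();
                    byte[] buf = new byte[4096];
                    while (in.read(buf) != -1) { }
                    p.waitFor();
                    in.close();    // races with the drain on process exit
                } catch (Exception ignored) { }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        try (RandomAccessFile f = new RandomAccessFile(BIG, "rw")) {
            f.setLength(1L << 30);    // 1 GB sparse file
        }
        for (int i = 0; i < 4; i++)
            new Thread(new OpenLoop(), "open-" + i).start();
        new Thread(new ExecLoop(), "exec").start();
        // With an unpatched JDK the reaper thread eventually hits an
        // OutOfMemoryError while draining the reassigned fd; if nothing
        // happens within the timeout, the test passes.
        Thread.sleep(10 * 60 * 1000L);
        System.exit(0);
    }
}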



  The test concerns me since it is gonna use up loads of resources and proceed 
for 10 minutes until deciding beyond reasonable doubt that there is no issue.
When I ran the test on different machines (with the unpatched JDK) it reported
from one to five races during the 10 minutes. We can surely decrease the timeout,
at the price of a higher probability that the test passes even though the race
is still present.


Sincerely yours,
Ivan

Paul.

On Oct 11, 2013, at 9:45 PM, Ivan Gerasimov <ivan.gerasi...@oracle.com> wrote:

Hello all!

Would you please help review a fix?
http://bugs.sun.com/view_bug.do?bug_id=8024521

As of JDK 1.7, ProcessPipeInputStream tries to drain the InputStream when the 
process exits.
However, it is racing with application code that could be closing the 
inputstream at the same time.
This can lead to processExited() operating on the wrong underlying file.

The test included in the webrev demonstrates exactly that: due to the race,
the draining thread can suddenly start reading a huge file, which causes an
OutOfMemoryError.

Here's the webrev with a simple fix, which synchronizes close() and 
processExited():
http://cr.openjdk.java.net/~igerasim/8024521/0/webrev/

It's important to note that this simple fix isn't complete.
It only protects from the race when the file descriptor is closed by a call to 
Process#getInputStream().close().
The file descriptor can still be closed asynchronously in some other way, for
example by a call to the native close() function.

Sincerely yours,
Ivan Gerasimov

