Eric Sirianni created HDFS-5792:
-----------------------------------

             Summary: DFSOutputStream.close() throws exceptions with unintuitive stacktraces
                 Key: HDFS-5792
                 URL: https://issues.apache.org/jira/browse/HDFS-5792
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs-client
            Reporter: Eric Sirianni
            Priority: Minor


Given the following client code:
{code}
class Foo {
  void test() throws IOException {
    FSDataOutputStream out = ...;
    out.write(...);
    out.close();
  }
}
{code}
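For context, here is a self-contained variant of the snippet above (the {{FileSystem}} setup, path, and payload are illustrative, not part of the original report):
{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class Foo {
  void test() throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    // Illustrative path and payload
    FSDataOutputStream out = fs.create(new Path("/tmp/foo"));
    out.write("hello".getBytes(StandardCharsets.UTF_8));
    // Any pipeline failure surfaces here, but with the
    // DataStreamer thread's stack trace rather than this one
    out.close();
  }
}
{code}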

A programmer would expect an exception thrown from {{out.close()}} to include 
the stack trace of the calling thread:
{noformat}
...
FSDataOutputStream.close()
Foo.test()
...
{noformat}

Instead, it includes the stack trace from the {{DataStreamer}} thread:
{noformat}
java.io.IOException: All datanodes 127.0.0.1:49331 are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1023)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:838)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:483)
{noformat}

This makes it difficult to determine the _client_ call stack that was actually 
unwound when the exception was thrown.

A simple fix seems to be modifying {{DFSOutputStream.close()}} to wrap the 
{{lastException}} from the {{DataStreamer}} thread in a new exception 
(preserving the original as the cause), thereby capturing both stack traces.
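
For illustration, a minimal sketch of the wrapping, assuming {{close()}} rethrows a streamer-side {{lastException}} field as it does today (names and the surrounding logic are simplified and hypothetical, not the actual patch):
{code}
// Sketch only -- simplified; the real close() also flushes buffers,
// completes the file at the NameNode, and shuts down the streamer.
@Override
public synchronized void close() throws IOException {
  IOException streamerException = lastException;  // recorded by the DataStreamer thread
  if (streamerException != null) {
    // Wrapping here means the rethrown exception carries the calling
    // thread's stack trace, while the DataStreamer-side trace is
    // preserved as the cause.
    throw new IOException("Failed to close stream", streamerException);
  }
  // ... normal close path ...
}
{code}
With the streamer failure attached as the cause, a client would see {{Foo.test()}} in the top-level trace and the {{DataStreamer}} frames under {{Caused by:}}.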

I can work on a patch for this.  Can someone confirm that my approach is 
acceptable?


