[ 
https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190438#comment-13190438
 ] 

Zhihong Yu commented on HBASE-5235:
-----------------------------------

{code}
+      // close the log writer streams only if they are not closed
+      // in closeStreams().
+      if (!closeCompleted && !logWritersClosed) {
{code}
Do we need to check closeCompleted here? It is set after logWritersClosed is 
set to true, so checking logWritersClosed alone should be sufficient.

I think closeAndCleanupCompleted would be a better name for hasClosed.
The following line should be in closeLogWriters():
{code}
+      logWritersClosed = true;
{code}

If I were you, I would move the loop from line 789 to 798 into closeLogWriters() 
and let closeStreams() call closeLogWriters(). Then closeLogWriters() would start 
a for loop to iterate over logWriters.values().
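The suggested structure could look roughly like the sketch below. OutputSink, WriterAndPath, and the method bodies are simplified stand-ins for illustration only, not the actual HLogSplitter code; the point is that all writer-closing logic and the logWritersClosed flag live in closeLogWriters(), and closeStreams() just delegates to it:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the proposed structure; not the real HBase classes.
class OutputSink {
  // Stand-in for the per-region writer map values.
  static class WriterAndPath {
    final String path;
    boolean closed = false;
    WriterAndPath(String path) { this.path = path; }
    void close() { closed = true; }
  }

  private final List<WriterAndPath> logWriters = new ArrayList<>();
  private boolean logWritersClosed = false;
  private boolean closeCompleted = false;

  void addWriter(WriterAndPath w) { logWriters.add(w); }

  // closeStreams() delegates all writer closing to closeLogWriters().
  List<String> closeStreams() {
    List<String> paths = closeLogWriters();
    closeCompleted = true;
    return paths;
  }

  // The single place that closes writers and sets logWritersClosed.
  List<String> closeLogWriters() {
    List<String> paths = new ArrayList<>();
    if (!logWritersClosed) {
      for (WriterAndPath wap : logWriters) {
        wap.close();
        paths.add(wap.path);
      }
      logWritersClosed = true;
    }
    return paths;
  }

  boolean writersClosed() { return logWritersClosed; }
}
```

With this shape, a second call to closeLogWriters() is a harmless no-op, so the extra closeCompleted guard in the error path is no longer needed.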
                
> HLogSplitter writer thread's streams not getting closed when any of the 
> writer threads has exceptions.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-5235
>                 URL: https://issues.apache.org/jira/browse/HBASE-5235
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.92.0, 0.90.5
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 0.92.1, 0.90.6
>
>         Attachments: HBASE-5235_0.90.patch
>
>
> Please find the analysis below. Correct me if I am wrong.
> {code}
> 2012-01-15 05:14:02,374 FATAL 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got 
> while writing log entry to log
> java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
>       at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
>       at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
>       at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
> {code}
> Here we have an exception in one of the writer threads. If any exception we 
> try to hold it in an Atomic variable 
> {code}
>   private void writerThreadError(Throwable t) {
>     thrown.compareAndSet(null, t);
>   }
> {code}
> In the finally block of splitLog we try to close the streams.
> {code}
>       for (WriterThread t: writerThreads) {
>         try {
>           t.join();
>         } catch (InterruptedException ie) {
>           throw new IOException(ie);
>         }
>         checkForErrors();
>       }
>       LOG.info("Split writers finished");
>       
>       return closeStreams();
> {code}
> Inside checkForErrors
> {code}
>   private void checkForErrors() throws IOException {
>     Throwable thrown = this.thrown.get();
>     if (thrown == null) return;
>     if (thrown instanceof IOException) {
>       throw (IOException)thrown;
>     } else {
>       throw new RuntimeException(thrown);
>     }
>   }
> {code}
> So once we throw the exception, the DFSStreamer threads are not getting closed.
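The leak described in the quoted analysis can be sketched as follows. SplitSketch and its fields are simplified stand-ins, and the try/finally shape shown as the fix is only an illustrative assumption about how the streams could be closed on error, not necessarily what the attached patch does:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

// Simplified stand-in for the HLogSplitter error path; not the real code.
class SplitSketch {
  private final AtomicReference<Throwable> thrown = new AtomicReference<>();
  boolean streamsClosed = false;

  // First error wins, matching the compareAndSet pattern quoted above.
  void writerThreadError(Throwable t) { thrown.compareAndSet(null, t); }

  private void checkForErrors() throws IOException {
    Throwable t = thrown.get();
    if (t == null) return;
    if (t instanceof IOException) throw (IOException) t;
    throw new RuntimeException(t);
  }

  void closeStreams() { streamsClosed = true; }

  // Buggy shape: checkForErrors() throws before closeStreams() runs,
  // so the writer streams leak whenever a writer thread failed.
  void splitLogBuggy() throws IOException {
    checkForErrors();
    closeStreams(); // never reached when a writer thread failed
  }

  // One possible fix: close the streams even when an error is rethrown.
  void splitLogFixed() throws IOException {
    try {
      checkForErrors();
    } finally {
      closeStreams();
    }
  }
}
```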

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira