[ https://issues.apache.org/jira/browse/HDFS-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066224#comment-15066224 ]
GAO Rui commented on HDFS-9494:
-------------------------------

Additionally, I think we might encounter other thread-related ExecutionExceptions as well, right? So, I suggest keeping the warning while also re-throwing the ExecutionException as an IOException.
{code}
} catch (ExecutionException ee) {
  LOG.warn("Caught ExecutionException while waiting for all streamers to flush", ee);
  throw new IOException(ee);
}
{code}
Do you think the above code is OK?

> Parallel optimization of DFSStripedOutputStream#flushAllInternals( )
> --------------------------------------------------------------------
>
>                 Key: HDFS-9494
>                 URL: https://issues.apache.org/jira/browse/HDFS-9494
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: GAO Rui
>            Assignee: GAO Rui
>            Priority: Minor
>         Attachments: HDFS-9494-origin-trunk.00.patch, HDFS-9494-origin-trunk.01.patch, HDFS-9494-origin-trunk.02.patch, HDFS-9494-origin-trunk.03.patch
>
>
> Currently, in DFSStripedOutputStream#flushAllInternals( ), we trigger and wait for flushInternal( ) in sequence, so the runtime flow is like:
> {code}
> Streamer0#flushInternal( )
> Streamer0#waitForAckedSeqno( )
> Streamer1#flushInternal( )
> Streamer1#waitForAckedSeqno( )
> …
> Streamer8#flushInternal( )
> Streamer8#waitForAckedSeqno( )
> {code}
> It could be better to trigger flushInternal( ) on all the streamers, wait for all of them to return from waitForAckedSeqno( ), and only then return from flushAllInternals( ).

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
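The parallel pattern discussed above, combined with the rethrow-as-IOException handling from the comment, could be sketched roughly as follows. This is a standalone simulation, not the actual DFSStripedOutputStream code: the Flusher interface and the streamer bodies are hypothetical stand-ins for StripedDataStreamer's flushInternal( )/waitForAckedSeqno( ) path.

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelFlushSketch {
  // Hypothetical stand-in for one streamer's flush-and-wait-for-acks path.
  interface Flusher {
    void flushAndWaitForAcks() throws IOException;
  }

  // Submit every streamer's flush concurrently, then wait for all of them,
  // converting any ExecutionException into an IOException as suggested.
  static void flushAllInternals(List<Flusher> streamers, ExecutorService pool)
      throws IOException {
    List<Future<?>> futures = new ArrayList<>();
    for (Flusher s : streamers) {
      futures.add(pool.submit(() -> {
        s.flushAndWaitForAcks();
        return null;
      }));
    }
    for (Future<?> f : futures) {
      try {
        f.get();
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        throw new IOException("Interrupted while waiting for flush", ie);
      } catch (ExecutionException ee) {
        // Real code would also LOG.warn here; we just rethrow.
        throw new IOException(ee);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);

    // Nine healthy streamers, mirroring the Streamer0..Streamer8 example.
    AtomicInteger flushed = new AtomicInteger();
    List<Flusher> ok = new ArrayList<>();
    for (int i = 0; i < 9; i++) {
      ok.add(flushed::incrementAndGet);
    }
    flushAllInternals(ok, pool);
    System.out.println("flushed=" + flushed.get());

    // A failing streamer surfaces as an IOException whose root cause is
    // the exception thrown inside the task.
    List<Flusher> bad = new ArrayList<>(ok);
    bad.add(() -> { throw new IOException("ack timeout"); });
    try {
      flushAllInternals(bad, pool);
    } catch (IOException e) {
      System.out.println("caught=" + e.getCause().getCause().getMessage());
    }
    pool.shutdown();
  }
}
{code}

One design note: waiting on each Future in turn means a failure in a later streamer is reported only after the earlier ones complete, but all flushes still run concurrently, which is the point of the optimization.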