[jira] [Updated] (HDFS-7168) Use excludedNodes consistently in DFSOutputStream
[ https://issues.apache.org/jira/browse/HDFS-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du updated HDFS-7168:
-----------------------------
    Labels:   (was: BB2015-05-TBR)

> Use excludedNodes consistently in DFSOutputStream
> -------------------------------------------------
>
>                 Key: HDFS-7168
>                 URL: https://issues.apache.org/jira/browse/HDFS-7168
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.0
>            Reporter: Colin P. McCabe
>            Assignee: Colin P. McCabe
>         Attachments: HDFS-7168.001.patch
>
> We currently have two separate collections of excluded nodes in
> {{DFSOutputStream#DataStreamer}}: one is {{DFSOutputStream#failed}}, the
> other is {{DFSOutputStream#excludedNodes}}. Both collections deal with
> blacklisting nodes that we have found to be bad, so we should use
> {{excludedNodes}} for both.
> We should also make this a per-DFSOutputStream variable rather than
> per-DataStreamer; there is no need to forget all of this information
> whenever a DataStreamer is torn down. Since
> {{DFSOutputStream#excludedNodes}} is a Guava cache, nodes expire out of it
> once enough time elapses, so they are not permanently blacklisted.
> We should also remove {{DFSOutputStream#setTestFilename}}, since it is no
> longer needed now that we can safely rename streams that are open for
> write. And {{DFSOutputStream#getBlock}} should be synchronized.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
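The expiry behavior the description attributes to the Guava cache can be sketched as follows. This is an illustrative, JDK-only analogue, not Hadoop's actual {{DFSOutputStream}} code: the class and method names here are hypothetical, and the real implementation presumably builds the cache with something like Guava's {{CacheBuilder}} and a time-based eviction policy.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a time-expiring exclusion set with the semantics
// described in the issue: an excluded node lapses out of the set after
// expiryMs, so no node stays blacklisted forever.
class ExcludedNodes {
    private final ConcurrentHashMap<String, Long> addedAt = new ConcurrentHashMap<>();
    private final long expiryMs;

    ExcludedNodes(long expiryMs) {
        this.expiryMs = expiryMs;
    }

    void exclude(String node) {
        addedAt.put(node, System.currentTimeMillis());
    }

    boolean isExcluded(String node) {
        Long added = addedAt.get(node);
        if (added == null) {
            return false;
        }
        if (System.currentTimeMillis() - added > expiryMs) {
            // Lazily evict the expired entry; remove(key, value) only
            // removes if the timestamp is still the one we observed.
            addedAt.remove(node, added);
            return false;
        }
        return true;
    }
}

public class Demo {
    public static void main(String[] args) throws InterruptedException {
        ExcludedNodes excluded = new ExcludedNodes(100); // 100 ms expiry for the demo
        excluded.exclude("dn1:50010");
        System.out.println(excluded.isExcluded("dn1:50010")); // recently excluded
        Thread.sleep(150);
        System.out.println(excluded.isExcluded("dn1:50010")); // entry has expired
    }
}
```

Because the set is keyed and expired per entry rather than tied to any one streamer's lifetime, moving it up to the DFSOutputStream level (as the issue proposes) costs nothing: old entries age out on their own instead of being wiped when a DataStreamer is torn down.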
[jira] [Updated] (HDFS-7168) Use excludedNodes consistently in DFSOutputStream
[ https://issues.apache.org/jira/browse/HDFS-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du updated HDFS-7168:
-----------------------------
    Target Version/s:   (was: 2.8.0)
[jira] [Updated] (HDFS-7168) Use excludedNodes consistently in DFSOutputStream
[ https://issues.apache.org/jira/browse/HDFS-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HDFS-7168:
-----------------------------------
    Labels: BB2015-05-TBR  (was: )
[jira] [Updated] (HDFS-7168) Use excludedNodes consistently in DFSOutputStream
[ https://issues.apache.org/jira/browse/HDFS-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HDFS-7168:
---------------------------------------
    Attachment: HDFS-7168.001.patch
[jira] [Updated] (HDFS-7168) Use excludedNodes consistently in DFSOutputStream
[ https://issues.apache.org/jira/browse/HDFS-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HDFS-7168:
---------------------------------------
    Target Version/s: 2.7.0
    Affects Version/s: 2.7.0
    Status: Patch Available  (was: Open)