[jira] [Commented] (HDFS-10399) DiskBalancer: Add JMX for DiskBalancer
[ https://issues.apache.org/jira/browse/HDFS-10399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15283428#comment-15283428 ] Hadoop QA commented on HDFS-10399: --

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| 0 | mvndep | 0m 22s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 8s | HDFS-1312 passed |
| +1 | compile | 1m 23s | HDFS-1312 passed with JDK v1.8.0_91 |
| +1 | compile | 1m 25s | HDFS-1312 passed with JDK v1.7.0_95 |
| +1 | checkstyle | 0m 38s | HDFS-1312 passed |
| +1 | mvnsite | 1m 28s | HDFS-1312 passed |
| +1 | mvneclipse | 0m 26s | HDFS-1312 passed |
| +1 | findbugs | 3m 42s | HDFS-1312 passed |
| +1 | javadoc | 1m 27s | HDFS-1312 passed with JDK v1.8.0_91 |
| +1 | javadoc | 2m 10s | HDFS-1312 passed with JDK v1.7.0_95 |
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 17s | the patch passed |
| +1 | compile | 1m 16s | the patch passed with JDK v1.8.0_91 |
| +1 | javac | 1m 16s | the patch passed |
| +1 | compile | 1m 21s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 1m 21s | the patch passed |
| +1 | checkstyle | 0m 33s | the patch passed |
| +1 | mvnsite | 1m 25s | the patch passed |
| +1 | mvneclipse | 0m 22s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 4m 9s | the patch passed |
| +1 | javadoc | 1m 22s | the patch passed with JDK v1.8.0_91 |
| +1 | javadoc | 2m 8s | the patch passed with JDK v1.7.0_95 |
| +1 | unit | 0m 51s | hadoop-hdfs-client in the patch passed with JDK v1.8.0_91. |
| -1 | unit | 59m 28s | hadoop-hdfs in the patch failed with JDK v1.8.0_91. |
| +1 | unit | 1m 2s | hadoop-hdfs-client in the patch passed with JDK v1.7.0_95. |
| -1 | unit | 57m 9s | hadoop-hdfs in the patch failed with JDK v1.7.0_95. |
| +1 | asflicense | 0m 24s | Patch does not generate ASF License warnings. |
| | | 155m 42s | |

|| Reason || Tests ||
| JDK
[jira] [Commented] (HDFS-8872) Reporting of missing blocks is different in fsck and namenode ui/metasave
[ https://issues.apache.org/jira/browse/HDFS-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15283384#comment-15283384 ] Ming Ma commented on HDFS-8872: --- Agree that we should try to make fsck and the webUI consistent. Note that this isn't just about the webUI; it is also about the MissingBlocks metric that the webUI is based on. For this scenario, it is debatable whether the block should be marked as missing: it isn't uncommon for admins to decommission multiple nodes across racks, which means all 3 replica nodes will be in the decommissioning state. We don't want to mark the block as missing during this transition window, as it might trigger unnecessary alerts. Actually, after HDFS-7933, fsck includes decommissioning nodes and won't mark such a block as missing anymore. Want to check again? > Reporting of missing blocks is different in fsck and namenode ui/metasave > - > > Key: HDFS-8872 > URL: https://issues.apache.org/jira/browse/HDFS-8872 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > > Namenode ui and metasave will not report a block as missing if the only > replica is on a decommissioning/decommissioned node, while fsck will show it as > MISSING. > Since a decommissioned node can be formatted/removed at any time, we can actually > lose the block. > It's better to alert on the namenode ui if the only copy is on a > decommissioned/decommissioning node. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
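The counting rule under discussion - a block is not counted as missing while any replica is live or sits on a decommissioning/decommissioned node - can be sketched in isolation (illustrative only; this is not BlockManager code, and the enum names are hypothetical):

```java
import java.util.Arrays;
import java.util.List;

public class MissingBlockSketch {
    public enum ReplicaState { LIVE, DECOMMISSIONING, DECOMMISSIONED, DEAD }

    // A block counts as missing only when every replica is gone: no live copy
    // and no copy on a decommissioning/decommissioned node (the post-HDFS-7933
    // fsck view discussed above).
    public static boolean isMissing(List<ReplicaState> replicas) {
        for (ReplicaState r : replicas) {
            if (r != ReplicaState.DEAD) {
                return false;   // a recoverable copy still exists somewhere
            }
        }
        return true;
    }
}
```

Under this rule, the "decommission multiple nodes across racks" window produces no missing-block alert, since the decommissioning replicas still count as recoverable.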
[jira] [Updated] (HDFS-10399) DiskBalancer: Add JMX for DiskBalancer
[ https://issues.apache.org/jira/browse/HDFS-10399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10399: Attachment: HDFS-10399-HDFS-1312.001.patch > DiskBalancer: Add JMX for DiskBalancer > -- > > Key: HDFS-10399 > URL: https://issues.apache.org/jira/browse/HDFS-10399 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-10399-HDFS-1312.001.patch > > > Expose diskbalancer status via JMX -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
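For readers unfamiliar with the mechanism, exposing a status value over JMX follows the standard MBean pattern. A minimal self-contained sketch (the bean, attribute, and ObjectName here are illustrative, not the names the actual HDFS-10399 patch registers):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DiskBalancerJmxSketch {
    // Standard MBean convention: the management interface is named after the
    // implementing class with an "MBean" suffix; each getter becomes an attribute.
    public interface DiskBalancerStatusMBean {
        String getStatus();
    }

    public static class DiskBalancerStatus implements DiskBalancerStatusMBean {
        private volatile String status = "NO_PLAN";
        public void set(String s) { status = s; }
        @Override public String getStatus() { return status; }
    }

    // Registers the bean with the platform MBean server so jconsole/jmx
    // clients can read the "Status" attribute.
    public static ObjectName register(DiskBalancerStatus bean) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name =
            new ObjectName("Hadoop:service=DataNode,name=DiskBalancerStatusSketch");
        mbs.registerMBean(bean, name);
        return name;
    }
}
```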
[jira] [Updated] (HDFS-10399) DiskBalancer: Add JMX for DiskBalancer
[ https://issues.apache.org/jira/browse/HDFS-10399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10399: Status: Patch Available (was: Open) > DiskBalancer: Add JMX for DiskBalancer > -- > > Key: HDFS-10399 > URL: https://issues.apache.org/jira/browse/HDFS-10399 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-10399-HDFS-1312.001.patch > > > Expose diskbalancer status via JMX -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10400) hdfs dfs -put exits with zero on error
[ https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15283309#comment-15283309 ] Jo Desmet commented on HDFS-10400: -- Hadoop version is 2.7.1.2.3.2.0-2950. Apologies, I didn't notice that I used the hadoop 1 link. I doubt that a fix will cause issues, as all other exceptions so far have been returning a valid (non-zero) exit code. Also, filling up a filesystem will hopefully be a rare event. In my case the event went undetected, and a lot of wasteful processing continued. It does look, however, like other error conditions are handled - things like a {{put}} of a non-existing file - as they do not generate a Java exception but a clean error report. I think what needs to happen is to put a generic handler in place for un-trapped exceptions: still do the exception dump, but then exit with a proper exit code. There is precedent for this issue, please see - [HADOOP-4340|https://issues.apache.org/jira/browse/hadoop-4340] - [MAPREDUCE-3179|https://issues.apache.org/jira/browse/MAPREDUCE-3179] > hdfs dfs -put exits with zero on error > -- > > Key: HDFS-10400 > URL: https://issues.apache.org/jira/browse/HDFS-10400 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Jo Desmet > > On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file > that is big enough to go over the limit. As a result, the command fails with > an exception; however, the command terminates normally (exit code 0). > Expectation is that any detectable failure generates an exit code different > than zero. > Documentation on > https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states: > Exit Code: > Returns 0 on success and -1 on error. 
> following is the exception generated: > 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream > java.io.EOFException: Premature EOF: no length prefix available > at > org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464) > 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning > BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114 > 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode > DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
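The generic-handler idea above - keep the exception dump, but report failure to the caller - can be sketched as follows (hypothetical names; the real fix would live in Hadoop's shell entry point, which is not shown here):

```java
import java.io.IOException;

public class ShellExitSketch {
    static final int EXIT_SUCCESS = 0;
    static final int EXIT_FAILURE = -1;

    // Stand-in for the real command body; throws to simulate a failed put
    // against a nearly full filesystem.
    static void runCommand(String[] args) throws IOException {
        if (args.length > 0 && args[0].equals("fail")) {
            throw new IOException("Premature EOF: no length prefix available");
        }
    }

    // Generic handler for un-trapped exceptions: still print the stack trace
    // for diagnostics, but map any failure to a non-zero exit code.
    static int execute(String[] args) {
        try {
            runCommand(args);
            return EXIT_SUCCESS;
        } catch (Exception e) {
            e.printStackTrace();
            return EXIT_FAILURE;
        }
    }

    public static void main(String[] args) {
        System.exit(execute(args));
    }
}
```

With this shape, a script checking `$?` after the command sees -1 (255 after shell truncation) instead of the misleading 0 reported in the issue.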
[jira] [Commented] (HDFS-10400) hdfs dfs -put exits with zero on error
[ https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15283293#comment-15283293 ] Mingliang Liu commented on HDFS-10400: -- Thanks for reporting this. Which version are you working on? I saw you provided a hadoop 1 doc. I'm also concerned that the fix, if needed, may be backwards incompatible, as some existing scripts may rely on this behavior, though it makes less sense than what you propose. > hdfs dfs -put exits with zero on error > -- > > Key: HDFS-10400 > URL: https://issues.apache.org/jira/browse/HDFS-10400 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Jo Desmet > > On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file > that is big enough to go over the limit. As a result, the command fails with > an exception; however, the command terminates normally (exit code 0). > Expectation is that any detectable failure generates an exit code different > than zero. > Documentation on > https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states: > Exit Code: > Returns 0 on success and -1 on error. 
> following is the exception generated: > 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream > java.io.EOFException: Premature EOF: no length prefix available > at > org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464) > 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning > BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114 > 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode > DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-10400) hdfs dfs -put exits with zero on error
Jo Desmet created HDFS-10400: Summary: hdfs dfs -put exits with zero on error Key: HDFS-10400 URL: https://issues.apache.org/jira/browse/HDFS-10400 Project: Hadoop HDFS Issue Type: Bug Reporter: Jo Desmet On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file that is big enough to go over the limit. As a result, the command fails with an exception, however the command terminates normally (exit code 0). Expectation is that any detectable failure generates an exit code different than zero. Documentation on https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states: Exit Code: Returns 0 on success and -1 on error. following is the exception generated: 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no length prefix available at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464) 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-10399) DiskBalancer: Add JMX for DiskBalancer
Anu Engineer created HDFS-10399: --- Summary: DiskBalancer: Add JMX for DiskBalancer Key: HDFS-10399 URL: https://issues.apache.org/jira/browse/HDFS-10399 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Anu Engineer Assignee: Anu Engineer Expose diskbalancer status via JMX -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
[ https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15283242#comment-15283242 ] Hadoop QA commented on HDFS-10390: --

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
| 0 | mvndep | 0m 41s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 52s | trunk passed |
| +1 | compile | 5m 58s | trunk passed with JDK v1.8.0_91 |
| +1 | compile | 6m 53s | trunk passed with JDK v1.7.0_95 |
| +1 | checkstyle | 1m 30s | trunk passed |
| +1 | mvnsite | 2m 26s | trunk passed |
| +1 | mvneclipse | 0m 41s | trunk passed |
| +1 | findbugs | 5m 15s | trunk passed |
| +1 | javadoc | 2m 18s | trunk passed with JDK v1.8.0_91 |
| +1 | javadoc | 3m 22s | trunk passed with JDK v1.7.0_95 |
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 0s | the patch passed |
| +1 | compile | 5m 55s | the patch passed with JDK v1.8.0_91 |
| +1 | javac | 5m 55s | the patch passed |
| +1 | compile | 6m 47s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 6m 47s | the patch passed |
| -1 | checkstyle | 1m 30s | root: patch generated 3 new + 418 unchanged - 0 fixed = 421 total (was 418) |
| +1 | mvnsite | 2m 21s | the patch passed |
| +1 | mvneclipse | 0m 41s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 5m 56s | the patch passed |
| +1 | javadoc | 2m 26s | the patch passed with JDK v1.8.0_91 |
| +1 | javadoc | 3m 28s | the patch passed with JDK v1.7.0_95 |
| +1 | unit | 7m 36s | hadoop-common in the patch passed with JDK v1.8.0_91. |
| +1 | unit | 0m 58s | hadoop-hdfs-client in the patch passed with JDK v1.8.0_91. |
| -1 | unit | 61m 56s | hadoop-hdfs in the patch failed with JDK v1.8.0_91. |
| +1 | unit | 8m 0s | hadoop-common in the patch passed with JDK v1.7.0_95. |
| +1 | unit | 0m 59s | hadoop-hdfs-client in the patch passed with JDK v1.7.0_95. |
| -1 | unit | 57m 59s | hadoop-hdfs in the
[jira] [Commented] (HDFS-10220) A large number of expired leases can make namenode unresponsive and cause failover
[ https://issues.apache.org/jira/browse/HDFS-10220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15283237#comment-15283237 ] Yongjun Zhang commented on HDFS-10220: -- I happened to see this jira just now. Hi [~daryn], are you suggesting that we dynamically adjust the lease check interval, per your "if it broke out early then perhaps it could sleep for less than 2s"? I agree with [~kihwal]'s comment that "I don't think this kind of mass lease recoveries are normal". I wonder if we could just make both MAX_LOCK_HOLD_TO_RELEASE_LEASE_MS and the lease check interval config parameters instead of fixed numbers. With these configs in place, the abnormal situation could be handled. I know we have many configs already, though. What do you guys think? Thanks. > A large number of expired leases can make namenode unresponsive and cause > failover > -- > > Key: HDFS-10220 > URL: https://issues.apache.org/jira/browse/HDFS-10220 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Nicolas Fraison >Assignee: Nicolas Fraison >Priority: Minor > Attachments: HADOOP-10220.001.patch, HADOOP-10220.002.patch, > HADOOP-10220.003.patch, HADOOP-10220.004.patch, HADOOP-10220.005.patch, > HADOOP-10220.006.patch, threaddump_zkfc.txt > > > I have faced a namenode failover due to an unresponsive namenode detected by the > zkfc, with lots of WARN messages (5 million) like this one: > _org.apache.hadoop.hdfs.StateChange: BLOCK* internalReleaseLease: All > existing blocks are COMPLETE, lease removed, file closed._ > On the threaddump taken by the zkfc there are lots of threads blocked due to a > lock. > Looking at the code, there is a lock taken by the LeaseManager.Monitor when > some lease must be released. Due to the really big number of leases to be > released, the namenode took too long to release them, blocking all > other tasks and making the zkfc think that the namenode was not > available/stuck. 
> The idea of this patch is to limit the number of leases released each time we > check for leases, so the lock won't be held for too long a period.
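The batching idea in the description can be sketched with plain collections (MAX_LEASES_PER_CHECK, the queue, and the synchronized block stand in for HDFS's actual LeaseManager and namesystem write lock; none of these are the real names):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class LeaseBatchSketch {
    // Cap on work done per monitor pass; keeps each lock hold short so other
    // namenode operations (and the zkfc health check) are not starved.
    static final int MAX_LEASES_PER_CHECK = 3;

    // Releases at most MAX_LEASES_PER_CHECK expired leases; whatever remains
    // waits for the next check interval instead of extending the lock hold.
    static int releaseBatch(Queue<String> expiredLeases) {
        int released = 0;
        synchronized (expiredLeases) {   // stand-in for the namesystem lock
            while (!expiredLeases.isEmpty() && released < MAX_LEASES_PER_CHECK) {
                expiredLeases.poll();    // release one lease
                released++;
            }
        }
        return released;
    }
}
```

Even with millions of expired leases, each pass now holds the lock only for a bounded batch, trading total recovery time for responsiveness.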
[jira] [Commented] (HDFS-10397) Distcp should ignore -delete option if -diff option is provided instead of exiting
[ https://issues.apache.org/jira/browse/HDFS-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15283184#comment-15283184 ] Yongjun Zhang commented on HDFS-10397: -- Hi [~liuml07], Thanks for the patch. I did a quick browse, and have the following suggestion. Instead of changing the {{validate}} method, I think we can possibly change the {{OptionParser#parse}} method, something like {code} boolean deleteMissing = command.hasOption(DistCpOptionSwitch.DELETE_MISSING.getSwitch()); boolean diff = command.hasOption(DistCpOptionSwitch.DIFF.getSwitch()); boolean ignoreDeleteMissing = deleteMissing && diff; if (ignoreDeleteMissing) { // issue warning message } else if (deleteMissing) { // set deleteMissing } if (diff) { // set diff } {code} Basically, let the parser decide whether to ignore some switches, and let {{DistCpOption#validate}} detect other invalid situations and throw an IllegalArgumentException. This seems cleaner to me. What do you think? Thanks. > Distcp should ignore -delete option if -diff option is provided instead of > exiting > -- > > Key: HDFS-10397 > URL: https://issues.apache.org/jira/browse/HDFS-10397 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-10397.000.patch, HDFS-10397.001.patch > > > In distcp, the {{-delete}} and {{-diff}} options are mutually exclusive. > [HDFS-8828] brought strict checking, which makes existing applications > (or scripts) that previously worked just fine with both {{-delete}} and {{-diff}} > options stop working because of the > {{java.lang.IllegalArgumentException: Diff is valid only with update > options}} exception. > To keep it backward compatible, we can ignore the {{-delete}} option, given the > {{-diff}} option, instead of exiting the program. Along with that, we can > print a warning message saying that _Diff is valid only with update options, > and -delete option is ignored_. 
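The ignore-with-warning behavior proposed for HDFS-10397 can be sketched in isolation (the class and method names here are illustrative, not distcp's actual OptionsParser API):

```java
import java.util.HashSet;
import java.util.Set;

public class DistCpParseSketch {
    // At parse time, if both -delete and -diff are requested, drop -delete
    // with a warning instead of throwing, so existing scripts keep working.
    public static Set<String> effectiveOptions(Set<String> requested) {
        Set<String> effective = new HashSet<>(requested);
        if (effective.contains("-delete") && effective.contains("-diff")) {
            System.err.println(
                "WARNING: -delete and -diff are mutually exclusive; ignoring -delete");
            effective.remove("-delete");
        }
        return effective;
    }
}
```

Genuinely invalid combinations would still be rejected by a later validate step; only this one conflict is downgraded from an error to a warning.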
[jira] [Commented] (HDFS-9546) DiskBalancer : Add Execute command
[ https://issues.apache.org/jira/browse/HDFS-9546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15283090#comment-15283090 ] Hadoop QA commented on HDFS-9546: -

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 10m 34s | HDFS-1312 passed |
| +1 | compile | 0m 42s | HDFS-1312 passed with JDK v1.8.0_91 |
| +1 | compile | 0m 44s | HDFS-1312 passed with JDK v1.7.0_95 |
| +1 | checkstyle | 0m 34s | HDFS-1312 passed |
| +1 | mvnsite | 0m 56s | HDFS-1312 passed |
| +1 | mvneclipse | 0m 23s | HDFS-1312 passed |
| +1 | findbugs | 2m 11s | HDFS-1312 passed |
| +1 | javadoc | 1m 15s | HDFS-1312 passed with JDK v1.8.0_91 |
| +1 | javadoc | 1m 49s | HDFS-1312 passed with JDK v1.7.0_95 |
| +1 | mvninstall | 0m 45s | the patch passed |
| +1 | compile | 0m 36s | the patch passed with JDK v1.8.0_91 |
| +1 | javac | 0m 36s | the patch passed |
| +1 | compile | 0m 40s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 0m 40s | the patch passed |
| +1 | checkstyle | 0m 25s | the patch passed |
| +1 | mvnsite | 0m 50s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 2m 11s | the patch passed |
| +1 | javadoc | 1m 12s | the patch passed with JDK v1.8.0_91 |
| +1 | javadoc | 1m 49s | the patch passed with JDK v1.7.0_95 |
| -1 | unit | 70m 51s | hadoop-hdfs in the patch failed with JDK v1.8.0_91. |
| -1 | unit | 69m 28s | hadoop-hdfs in the patch failed with JDK v1.7.0_95. |
| +1 | asflicense | 0m 23s | Patch does not generate ASF License warnings. |
| | | 171m 11s | |

|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.tools.TestHdfsConfigFields, hadoop.tracing.TestTracing, hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations, hadoop.hdfs.TestAsyncDFSRename |
| JDK v1.7.0_95 Failed junit tests | hadoop.tools.TestHdfsConfigFields, hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL |
[jira] [Commented] (HDFS-10385) LocalFileSystem rename() function should return false when destination file exists
[ https://issues.apache.org/jira/browse/HDFS-10385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15283084#comment-15283084 ] Chris Nauroth commented on HDFS-10385: -- I'm wary of changing this behavior, especially in branch-2, because of backwards-compatibility concerns. Even though the behavior differs from HDFS semantics, applications often have different expectations of the local file system. There is a risk that downstream ecosystem components or user applications will start encountering errors, where previously the same calls to the local file system succeeded. > LocalFileSystem rename() function should return false when destination file > exists > -- > > Key: HDFS-10385 > URL: https://issues.apache.org/jira/browse/HDFS-10385 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: Aihua Xu >Assignee: Xiaobing Zhou > > Currently rename() of LocalFileSystem returns true and renames successfully > when the destination file exists. That seems to have different behavior from > DFSFileSystem. > If they can have the same behavior, then we can use one call to do rename > rather than checking if destination exists and then making rename() call. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
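The caller-side workaround the description alludes to - check whether the destination exists, then rename - can be sketched with java.nio rather than Hadoop's FileSystem API (names are illustrative; note the check-then-move pair can still race with concurrent creators):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class SafeRenameSketch {
    // Emulates HDFS-style rename semantics on a local file system:
    // refuse to overwrite an existing destination and report false.
    public static boolean renameNoOverwrite(Path src, Path dst) {
        try {
            if (Files.exists(dst)) {
                return false;        // destination exists: fail, HDFS-style
            }
            Files.move(src, dst);    // no REPLACE_EXISTING, so a race still
            return true;             // throws rather than silently overwriting
        } catch (Exception e) {
            return false;
        }
    }
}
```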
[jira] [Updated] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
[ https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10390: - Status: Patch Available (was: Open) > Implement asynchronous setAcl/getAclStatus for DistributedFileSystem > > > Key: HDFS-10390 > URL: https://issues.apache.org/jira/browse/HDFS-10390 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10390-HDFS-9924.000.patch > > > This is proposed to implement asynchronous setAcl/getAclStatus. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
[ https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10390: - Attachment: HDFS-10390-HDFS-9924.000.patch > Implement asynchronous setAcl/getAclStatus for DistributedFileSystem > > > Key: HDFS-10390 > URL: https://issues.apache.org/jira/browse/HDFS-10390 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10390-HDFS-9924.000.patch > > > This is proposed to implement asynchronous setAcl/getAclStatus. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
[ https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15283013#comment-15283013 ] Xiaobing Zhou commented on HDFS-10390: -- I posted the v000 patch; please review it. Thanks. > Implement asynchronous setAcl/getAclStatus for DistributedFileSystem > > > Key: HDFS-10390 > URL: https://issues.apache.org/jira/browse/HDFS-10390 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10390-HDFS-9924.000.patch > > > This is proposed to implement asynchronous setAcl/getAclStatus. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
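For context, here is a minimal sketch of the Future-based calling convention such an asynchronous setAcl would expose to callers. This is not the patch's actual implementation (the HDFS-9924 work builds on non-blocking IPC rather than a thread pool), and every name below is hypothetical:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncAclSketch {
    // Hypothetical stand-in for the blocking setAcl RPC; the real
    // DistributedFileSystem call round-trips to the NameNode.
    static void setAclBlocking(String path, String aclSpec) {
        // pretend this blocks on a remote call
    }

    // Future-based wrapper: the caller regains control immediately and
    // collects the result (or exception) later via Future#get().
    static Future<Void> setAclAsync(ExecutorService pool, String path, String aclSpec) {
        return pool.submit(() -> {
            setAclBlocking(path, aclSpec);
            return null;
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Void> f = setAclAsync(pool, "/data", "user:alice:rwx");
        f.get(); // block only at the point where the outcome is actually needed
        System.out.println("acl call completed");
        pool.shutdown();
    }
}
```

The value of the asynchronous variant is that many ACL calls can be issued back-to-back and awaited in bulk, instead of paying one RPC round-trip of latency per call.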
[jira] [Updated] (HDFS-9546) DiskBalancer : Add Execute command
[ https://issues.apache.org/jira/browse/HDFS-9546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9546: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) [~eddyxu] Thanks for your review and comments. I have committed this JIRA to the feature branch. > DiskBalancer : Add Execute command > -- > > Key: HDFS-9546 > URL: https://issues.apache.org/jira/browse/HDFS-9546 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9546-HDFS-1312.001.patch, > HDFS-9546-HDFS-1312.002.patch, plan.json > > > This command allows the user to execute a plan that has already been generated by > the plan command. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10383) Safely close resources in DFSTestUtil
[ https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15282885#comment-15282885 ] Xiaoyu Yao commented on HDFS-10383: --- V2 looks good to me. Do we understand why createStripedFile() needs to hide the exception with IOUtils.cleanup() during close()? It looks like the exception is caused by a double completeFile(). Should we remove the extra completeFile() from createStripedFile() and use try-with-resource? > Safely close resources in DFSTestUtil > - > > Key: HDFS-10383 > URL: https://issues.apache.org/jira/browse/HDFS-10383 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-10383.000.patch, HDFS-10383.001.patch, > HDFS-10383.002.patch > > > There are a few methods in {{DFSTestUtil}} that do not close the resource > safely or elegantly. We can use the try-with-resource statement to address > this problem. > Especially, as {{DFSTestUtil}} is widely used in tests, we need to preserve > any exceptions thrown during the processing of the resource while still > guaranteeing it is eventually closed. Take, for example, the current implementation > of {{DFSTestUtil#createFile()}}: it closes the FSDataOutputStream in the > {{finally}} block, and if, when closing, the internal > {{DFSOutputStream#close()}} throws any exception, which it often does, the > exception thrown during the processing will be lost. See this [test > failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/], > where we have to guess what the root cause was. > Using try-with-resource, we can close the resources safely, and the > exceptions thrown both in processing and closing will be available (the closing > exception will be suppressed). 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
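The suppression behavior referenced in HDFS-10383 is standard try-with-resources semantics: the exception from the try body stays primary, and a close() failure is attached via Throwable#getSuppressed() instead of replacing it. A self-contained sketch with toy classes (not DFSTestUtil itself):

```java
public class SuppressedDemo {
    // Toy resource that fails both during processing and during close(),
    // mimicking an FSDataOutputStream whose close() often throws.
    static class Flaky implements AutoCloseable {
        public void write() { throw new RuntimeException("write failed"); }
        @Override
        public void close() { throw new RuntimeException("close failed"); }
    }

    public static void main(String[] args) {
        try (Flaky out = new Flaky()) {
            out.write();
        } catch (RuntimeException e) {
            // The processing exception is the primary one; the close()
            // failure is preserved as a suppressed exception, not lost.
            System.out.println(e.getMessage());
            System.out.println(e.getSuppressed()[0].getMessage());
        }
    }
}
```

This is exactly the property the JIRA wants: with a plain finally block, the close() exception would have propagated and the "write failed" root cause would have been lost.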
[jira] [Updated] (HDFS-10360) DataNode may format directory and lose blocks if current/VERSION is missing
[ https://issues.apache.org/jira/browse/HDFS-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-10360: --- Attachment: HDFS-10360.004.patch HDFS-10360.004.patch > DataNode may format directory and lose blocks if current/VERSION is missing > --- > > Key: HDFS-10360 > URL: https://issues.apache.org/jira/browse/HDFS-10360 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-10360.001.patch, HDFS-10360.002.patch, > HDFS-10360.003.patch, HDFS-10360.004.patch, HDFS-10360.004.patch > > > Under certain circumstances, if the current/VERSION of a storage directory is > missing, DataNode may format the storage directory even though _block files > are not missing_. > This is very easy to reproduce. Simply launch a HDFS cluster and create some > files. Delete current/VERSION, and restart the data node. > After the restart, the data node will format the directory and remove all > existing block files: > {noformat} > 2016-05-03 12:57:15,387 INFO org.apache.hadoop.hdfs.server.common.Storage: > Lock on /data/dfs/dn/in_use.lock acquired by nodename > 5...@weichiu-dn-2.vpc.cloudera.com > 2016-05-03 12:57:15,389 INFO org.apache.hadoop.hdfs.server.common.Storage: > Storage directory /data/dfs/dn is not formatted for > BP-787466439-172.26.24.43-1462305406642 > 2016-05-03 12:57:15,389 INFO org.apache.hadoop.hdfs.server.common.Storage: > Formatting ... 
> 2016-05-03 12:57:15,464 INFO org.apache.hadoop.hdfs.server.common.Storage: > Analyzing storage directories for bpid BP-787466439-172.26.24.43-1462305406642 > 2016-05-03 12:57:15,464 INFO org.apache.hadoop.hdfs.server.common.Storage: > Locking is disabled for > /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642 > 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: > Block pool storage directory > /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642 is not formatted > for BP-787466439-172 > .26.24.43-1462305406642 > 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: > Formatting ... > 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: > Formatting block pool BP-787466439-172.26.24.43-1462305406642 directory > /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642/current > {noformat} > The bug is: DataNode assumes that if none of {{current/VERSION}}, > {{previous/}}, {{previous.tmp/}}, {{removed.tmp/}}, {{finalized.tmp/}} and > {{lastcheckpoint.tmp/}} exists, the storage directory contains nothing > important to HDFS and decides to format it. > https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java#L526-L545 > However, block files may still exist, and in my opinion, we should do > everything possible to retain the block files. > I have two suggestions: > # check if the {{current/}} directory is empty. If not, throw an > InconsistentFSStateException in {{Storage#analyzeStorage}} instead of > assuming it is not formatted. Or, > # In {{Storage#clearDirectory}}, before it formats the storage directory, > rename or move the {{current/}} directory. Also, log whatever is being > renamed/moved. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
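Suggestion #1 in HDFS-10360 amounts to a guard like the following before formatting. This is an illustrative stand-alone sketch using java.nio.file with hypothetical names, not the actual Storage.java change:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class FormatGuard {
    // Sketch of suggestion #1: refuse to format a storage directory whose
    // current/ subdirectory still holds files (e.g. block files), even when
    // current/VERSION is missing.
    static boolean safeToFormat(Path storageDir) throws IOException {
        Path current = storageDir.resolve("current");
        if (!Files.isDirectory(current)) {
            return true; // nothing there at all: formatting loses no data
        }
        try (Stream<Path> entries = Files.list(current)) {
            // Non-empty current/ -> the real fix would throw
            // InconsistentFSStateException here instead of formatting.
            return !entries.findAny().isPresent();
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("dn-storage");
        System.out.println(safeToFormat(dir)); // no current/ yet -> true
        Files.createDirectories(dir.resolve("current"));
        Files.createFile(dir.resolve("current").resolve("blk_1"));
        System.out.println(safeToFormat(dir)); // block file present -> false
    }
}
```

The key point is that "no VERSION file" is treated as evidence of an unformatted directory, when the presence of anything under current/ should override that inference.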
[jira] [Updated] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file
[ https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HDFS-2173: --- Attachment: HDFS-2173.03.patch > saveNamespace should not throw IOE when only one storage directory fails to > write VERSION file > -- > > Key: HDFS-2173 > URL: https://issues.apache.org/jira/browse/HDFS-2173 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: Edit log branch (HDFS-1073), 0.23.0 >Reporter: Todd Lipcon >Assignee: Andras Bokor > Attachments: HDFS-2173.01.patch, HDFS-2173.02.patch, > HDFS-2173.03.patch > > > This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing > the VERSION files in the storage directories, one of the directories fails, > the entire operation throws IOE. This is unnecessary -- instead, just that > directory should be marked as failed. > This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it > does not ever cause data loss, and would rarely occur in practice (the dir would > have to fail between writing the fsimage file and writing VERSION) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
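The behavior HDFS-2173 tracks, marking only the failing directory instead of aborting the whole saveNamespace, can be sketched as below. All names are hypothetical; the real logic lives in the NameNode's Storage/NNStorage classes:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PerDirFailure {
    // Sketch: attempt to write VERSION to every storage directory, collect
    // per-directory failures, and fail the whole operation only if every
    // directory failed. Directory health is simulated with a boolean.
    static List<String> writeVersionToAll(Map<String, Boolean> dirs) throws IOException {
        List<String> failed = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : dirs.entrySet()) {
            try {
                if (!e.getValue()) {
                    throw new IOException("simulated disk error");
                }
                // a real implementation would write current/VERSION here
            } catch (IOException ex) {
                failed.add(e.getKey()); // mark this directory failed, keep going
            }
        }
        if (failed.size() == dirs.size()) {
            throw new IOException("all storage directories failed");
        }
        return failed;
    }

    public static void main(String[] args) throws IOException {
        Map<String, Boolean> dirs = new LinkedHashMap<>();
        dirs.put("/data1/dfs/name", true);   // healthy
        dirs.put("/data2/dfs/name", false);  // simulated write failure
        System.out.println(writeVersionToAll(dirs)); // only the bad dir is reported
    }
}
```

This mirrors how the NameNode already degrades gracefully when writing edits or fsimage: one bad volume is removed from service rather than failing the operation.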
[jira] [Updated] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file
[ https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HDFS-2173: --- Attachment: HDFS-2173.02.patch > saveNamespace should not throw IOE when only one storage directory fails to > write VERSION file > -- > > Key: HDFS-2173 > URL: https://issues.apache.org/jira/browse/HDFS-2173 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: Edit log branch (HDFS-1073), 0.23.0 >Reporter: Todd Lipcon >Assignee: Andras Bokor > Attachments: HDFS-2173.01.patch, HDFS-2173.02.patch > > > This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing > the VERSION files in the storage directories, one of the directories fails, > the entire operation throws IOE. This is unnecessary -- instead, just that > directory should be marked as failed. > This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it > does not ever cause data loss, and would rarely occur in practice (the dir would > have to fail between writing the fsimage file and writing VERSION) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file
[ https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HDFS-2173: --- Status: Patch Available (was: Open) > saveNamespace should not throw IOE when only one storage directory fails to > write VERSION file > -- > > Key: HDFS-2173 > URL: https://issues.apache.org/jira/browse/HDFS-2173 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 0.23.0, Edit log branch (HDFS-1073) >Reporter: Todd Lipcon >Assignee: Andras Bokor > Attachments: HDFS-2173.01.patch > > > This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing > the VERSION files in the storage directories, one of the directories fails, > the entire operation throws IOE. This is unnecessary -- instead, just that > directory should be marked as failed. > This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it > does not ever cause data loss, and would rarely occur in practice (the dir would > have to fail between writing the fsimage file and writing VERSION) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work stopped] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file
[ https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-2173 stopped by Andras Bokor. -- > saveNamespace should not throw IOE when only one storage directory fails to > write VERSION file > -- > > Key: HDFS-2173 > URL: https://issues.apache.org/jira/browse/HDFS-2173 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: Edit log branch (HDFS-1073), 0.23.0 >Reporter: Todd Lipcon >Assignee: Andras Bokor > Attachments: HDFS-2173.01.patch > > > This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing > the VERSION files in the storage directories, one of the directories fails, > the entire operation throws IOE. This is unnecessary -- instead, just that > directory should be marked as failed. > This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it > does not ever cause data loss, and would rarely occur in practice (the dir would > have to fail between writing the fsimage file and writing VERSION) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org