[jira] [Commented] (HDFS-11516) Admin command line should print message to stderr in failure case
[ https://issues.apache.org/jira/browse/HDFS-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929393#comment-15929393 ] Andrew Wang commented on HDFS-11516: LGTM +1 pending a cleaner Jenkins run, thanks for working on this [~lewuathe]! I think it's okay to change the error code for things like incorrect help invocations, it's really unlikely that anyone is depending on that. > Admin command line should print message to stderr in failure case > - > > Key: HDFS-11516 > URL: https://issues.apache.org/jira/browse/HDFS-11516 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kai Sasaki >Assignee: Kai Sasaki >Priority: Minor > Attachments: HDFS-11516.01.patch, HDFS-11516.02.patch > > > {{AdminHelper}} and {{CryptoAdmin}} print messages to stdout instead of > stderr. Since the other failure cases print to stderr, the behavior should be > consolidated. > e.g. > {code} > if (args.size() != 1) { > System.err.println("You must give exactly one argument to -help."); > return 0; > } > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
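A minimal sketch of the consolidated behavior the +1 approves (class and method names here are illustrative, not the actual HDFS-11516 patch): the failure message goes to stderr, and the failure path returns a non-zero exit code, unlike the quoted snippet's {{return 0}}.

```java
import java.util.List;

// Illustrative sketch, not the actual Hadoop patch: failure messages
// go to stderr, and the failure path returns a non-zero exit code so
// callers and shell scripts can detect it.
public class HelpCommand {
    public static int run(List<String> args) {
        if (args.size() != 1) {
            // stderr, not stdout, to match the other failure cases
            System.err.println("You must give exactly one argument to -help.");
            return 1; // non-zero on failure (the quoted snippet returned 0)
        }
        System.out.println("Usage for: " + args.get(0));
        return 0;
    }
}
```

Scripts wrapping the admin CLI then see the failure in the exit status rather than having to parse stdout.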
[jira] [Commented] (HDFS-9807) Add an optional StorageID to writes
[ https://issues.apache.org/jira/browse/HDFS-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929380#comment-15929380 ] Hadoop QA commented on HDFS-9807: - *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 32s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 18 new or modified test files. |
| 0 | mvndep | 2m 20s | Maven dependency ordering for branch |
| +1 | mvninstall | 13m 39s | trunk passed |
| +1 | compile | 10m 55s | trunk passed |
| +1 | checkstyle | 2m 21s | trunk passed |
| +1 | mvnsite | 3m 0s | trunk passed |
| +1 | mvneclipse | 1m 9s | trunk passed |
| +1 | findbugs | 5m 6s | trunk passed |
| +1 | javadoc | 2m 18s | trunk passed |
| 0 | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 57s | the patch passed |
| +1 | compile | 10m 14s | the patch passed |
| +1 | cc | 10m 14s | the patch passed |
| +1 | javac | 10m 14s | the patch passed |
| -0 | checkstyle | 2m 19s | root: The patch generated 36 new + 1756 unchanged - 26 fixed = 1792 total (was 1782) |
| +1 | mvnsite | 2m 54s | the patch passed |
| +1 | mvneclipse | 1m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 5m 25s | the patch passed |
| -1 | javadoc | 0m 52s | hadoop-hdfs-project_hadoop-hdfs generated 5 new + 9 unchanged - 0 fixed = 14 total (was 9) |
| +1 | unit | 8m 3s | hadoop-common in the patch passed. |
| +1 | unit | 1m 9s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 89m 15s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 46s | The patch does not generate ASF License warnings. |
| | | 169m 19s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-9807 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12859208/HDFS-9807.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc |
| uname | Linux 7cd46f88f1f0 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c04fb35 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle |
[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks
[ https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929317#comment-15929317 ] Daniel Pol commented on HDFS-4015: -- [~anu] Managed to get my space back by triggering a full block report from the bad nodes. > Safemode should count and report orphaned blocks > > > Key: HDFS-4015 > URL: https://issues.apache.org/jira/browse/HDFS-4015 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 3.0.0-alpha1 >Reporter: Todd Lipcon >Assignee: Anu Engineer > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, > HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch, > HDFS-4015.006.patch, HDFS-4015.007.patch > > > The safemode status currently reports the number of unique reported blocks > compared to the total number of blocks referenced by the namespace. However, > it does not report the inverse: blocks which are reported by datanodes but > not referenced by the namespace. > In the case that an admin accidentally starts up from an old image, this can > be confusing: safemode and fsck will show "corrupt files", which are the > files which actually have been deleted but got resurrected by restarting from > the old image. This will convince them that they can safely force leave > safemode and remove these files -- after all, they know that those files > should really have been deleted. However, they're not aware that leaving > safemode will also unrecoverably delete a bunch of other block files which > have been orphaned due to the namespace rollback. > I'd like to consider reporting something like: "90 of expected 100 > blocks have been reported. Additionally, 1 blocks have been reported > which do not correspond to any file in the namespace. 
Forcing exit of > safemode will unrecoverably remove those data blocks" > Whether this statistic is also used for some kind of "inverse safe mode" is > the logical next step, but just reporting it as a warning seems easy enough > to accomplish and worth doing.
[jira] [Commented] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929311#comment-15929311 ] Hudson commented on HDFS-10394: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11419 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11419/]) HDFS-10394. move declaration of okhttp version from hdfs-client to (arp: rev 75368150395901f65a4698e84be4e7bbdcba94fa) * (edit) hadoop-project/pom.xml * (edit) hadoop-hdfs-project/hadoop-hdfs-client/pom.xml > move declaration of okhttp version from hdfs-client to hadoop-project POM > - > > Key: HDFS-10394 > URL: https://issues.apache.org/jira/browse/HDFS-10394 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HDFS-10394.000.patch, HDFS-10394.001.patch, > HDFS-10394.002.patch > > > The POM dependency on okhttp in hadoop-hdfs-client declares its version in > that POM instead. > The root declaration, including version, must go into > hadoop-project/pom.xml so that it's easy to track use and have only one place > to change if this version were ever to be incremented. As it stands, if any other > module picked up the library, it could adopt a different version.
[jira] [Updated] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-10394: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha3 2.9.0 Status: Resolved (was: Patch Available) I've committed this. Thanks for the contribution [~xiaobingo]. > move declaration of okhttp version from hdfs-client to hadoop-project POM > - > > Key: HDFS-10394 > URL: https://issues.apache.org/jira/browse/HDFS-10394 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HDFS-10394.000.patch, HDFS-10394.001.patch, > HDFS-10394.002.patch > > > The POM dependency on okhttp in hadoop-hdfs-client declares its version in > that POM instead. > The root declaration, including version, must go into > hadoop-project/pom.xml so that it's easy to track use and have only one place > to change if this version were ever to be incremented. As it stands, if any other > module picked up the library, it could adopt a different version.
[jira] [Comment Edited] (HDFS-4015) Safemode should count and report orphaned blocks
[ https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929265#comment-15929265 ] Daniel Pol edited comment on HDFS-4015 at 3/17/17 1:08 AM: --- [~anu] I've seen it before in other cases, but here's my current one. I have an issue on my cluster where datanodes start failing when hit with heavy write activity (even to the point where I can't ssh to the system anymore and it needs to be power cycled), mostly during the reduce phase of Terasort. So I start Teragen, and during the run some datanodes crash, and the remaining nodes get more data to store than expected. At that point I stop the whole cluster and reboot all nodes (even some of the nodes that are not bad still take a long time to respond). Once the cluster is up I delete the Teragen folder (with skipTrash, and I don't use snapshots) because some nodes are now close to their space capacity. However, not all space is freed up, and upon investigation I see the bad nodes have orphaned blocks. A few runs like this quickly take up all my available space, at which point I have to manually clean the orphaned blocks or reformat HDFS. Steps to reproduce in your environment: 1. Start Teragen (make sure it's big enough to run for >5 min, for example). 2. While Teragen is running (say halfway in), kill the datanode process on some node (not shutdown). 3. Once Teragen has finished, delete the Teragen folder. 4. Restart the whole HDFS cluster, including the killed nodes. 5. You should now be able to find orphaned blocks on the killed node that are not getting deleted. Fsck will say something like "Block blk_1075307258 does not exist" even in safe mode. Generally speaking, I think it would be better to be able to detect (and delete) all orphaned blocks, regardless of their source. was (Author: danielpol): [~anu] I've seen it before in other cases, but here's my current one. 
I have an issue on my cluster where datanodes start failing when hit with heavy write activity (even to the point where I can't ssh to the system anymore and it needs to be power cycled), mostly during the reduce phase of Terasort. So I start Teragen, and during the run some datanodes crash, and the remaining nodes get more data to store than expected. At that point I stop the whole cluster and reboot all nodes (even some of the nodes that are not bad still take a long time to respond). Once the cluster is up I delete the Teragen folder (with skipTrash, and I don't use snapshots) because some nodes are now close to their space capacity. However, not all space is freed up, and upon investigation I see the bad nodes have orphaned blocks. A few runs like this quickly take up all my available space, at which point I have to manually clean the orphaned blocks or reformat HDFS. Steps to reproduce in your environment: 1. Start Teragen (make sure it's big enough to run for >5 min, for example). 2. While Teragen is running (say halfway in), kill the datanode process on some node (not shutdown). 3. Once Teragen has finished, delete the Teragen folder. 4. Restart the whole HDFS cluster, including the killed nodes. 5. You should now be able to find orphaned blocks on the killed node that are not getting deleted. Fsck will say something like "Block blk_1075307258 does not exist" even in safe mode. 
> Safemode should count and report orphaned blocks > > > Key: HDFS-4015 > URL: https://issues.apache.org/jira/browse/HDFS-4015 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 3.0.0-alpha1 >Reporter: Todd Lipcon >Assignee: Anu Engineer > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, > HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch, > HDFS-4015.006.patch, HDFS-4015.007.patch > > > The safemode status currently reports the number of unique reported blocks > compared to the total number of blocks referenced by the namespace. However, > it does not report the inverse: blocks which are reported by datanodes but > not referenced by the namespace. > In the case that an admin accidentally starts up from an old image, this can > be confusing: safemode and fsck will show "corrupt files", which are the > files which actually have been deleted but got resurrected by restarting from > the old image. This will convince them that they can safely force leave > safemode and remove these files -- after all, they know that those files > should really have been deleted. However, they're not aware that leaving > safemode will also unrecoverably delete a bunch of other block files which > have been orphaned due to the namespace rollback. > I'd like to consider reporting something like: "90 of expected 100 >
[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks
[ https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929265#comment-15929265 ] Daniel Pol commented on HDFS-4015: -- [~anu] I've seen it before in other cases, but here's my current one. I have an issue on my cluster where datanodes start failing when hit with heavy write activity (even to the point where I can't ssh to the system anymore and it needs to be power cycled), mostly during the reduce phase of Terasort. So I start Teragen, and during the run some datanodes crash, and the remaining nodes get more data to store than expected. At that point I stop the whole cluster and reboot all nodes (even some of the nodes that are not bad still take a long time to respond). Once the cluster is up I delete the Teragen folder (with skipTrash, and I don't use snapshots) because some nodes are now close to their space capacity. However, not all space is freed up, and upon investigation I see the bad nodes have orphaned blocks. A few runs like this quickly take up all my available space, at which point I have to manually clean the orphaned blocks or reformat HDFS. Steps to reproduce in your environment: 1. Start Teragen (make sure it's big enough to run for >5 min, for example). 2. While Teragen is running (say halfway in), kill the datanode process on some node (not shutdown). 3. Once Teragen has finished, delete the Teragen folder. 4. Restart the whole HDFS cluster, including the killed nodes. 5. You should now be able to find orphaned blocks on the killed node that are not getting deleted. Fsck will say something like "Block blk_1075307258 does not exist" even in safe mode. 
> Safemode should count and report orphaned blocks > > > Key: HDFS-4015 > URL: https://issues.apache.org/jira/browse/HDFS-4015 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 3.0.0-alpha1 >Reporter: Todd Lipcon >Assignee: Anu Engineer > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, > HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch, > HDFS-4015.006.patch, HDFS-4015.007.patch > > > The safemode status currently reports the number of unique reported blocks > compared to the total number of blocks referenced by the namespace. However, > it does not report the inverse: blocks which are reported by datanodes but > not referenced by the namespace. > In the case that an admin accidentally starts up from an old image, this can > be confusing: safemode and fsck will show "corrupt files", which are the > files which actually have been deleted but got resurrected by restarting from > the old image. This will convince them that they can safely force leave > safemode and remove these files -- after all, they know that those files > should really have been deleted. However, they're not aware that leaving > safemode will also unrecoverably delete a bunch of other block files which > have been orphaned due to the namespace rollback. > I'd like to consider reporting something like: "90 of expected 100 > blocks have been reported. Additionally, 1 blocks have been reported > which do not correspond to any file in the namespace. Forcing exit of > safemode will unrecoverably remove those data blocks" > Whether this statistic is also used for some kind of "inverse safe mode" is > the logical next step, but just reporting it as a warning seems easy enough > to accomplish and worth doing.
[jira] [Comment Edited] (HDFS-4015) Safemode should count and report orphaned blocks
[ https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929244#comment-15929244 ] Anu Engineer edited comment on HDFS-4015 at 3/17/17 12:43 AM: -- [~danielpol] Thanks for reporting this. I will try to repro the case you have described. But just to make sure that we are on the same page: this patch addresses the issue of orphaned blocks when the NN is in safe mode. So in your case, you have a Datanode down, you delete the directory, and then you reboot the datanodes *and* the namenode? Can you please explain the steps to repro this issue? Thanks in advance. was (Author: anu): [~danielpol] Thanks for reporting this. I will try to repro the case you have described. But just to make sure that we are on the same page: this patch addresses the issue of when the NN is in safe mode. So in your case, you have a Datanode down, you delete the directory, and then you reboot the datanodes *and* the namenode? Can you please explain the steps to repro this issue? Thanks in advance. > Safemode should count and report orphaned blocks > > > Key: HDFS-4015 > URL: https://issues.apache.org/jira/browse/HDFS-4015 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 3.0.0-alpha1 >Reporter: Todd Lipcon >Assignee: Anu Engineer > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, > HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch, > HDFS-4015.006.patch, HDFS-4015.007.patch > > > The safemode status currently reports the number of unique reported blocks > compared to the total number of blocks referenced by the namespace. However, > it does not report the inverse: blocks which are reported by datanodes but > not referenced by the namespace. 
> In the case that an admin accidentally starts up from an old image, this can > be confusing: safemode and fsck will show "corrupt files", which are the > files which actually have been deleted but got resurrected by restarting from > the old image. This will convince them that they can safely force leave > safemode and remove these files -- after all, they know that those files > should really have been deleted. However, they're not aware that leaving > safemode will also unrecoverably delete a bunch of other block files which > have been orphaned due to the namespace rollback. > I'd like to consider reporting something like: "90 of expected 100 > blocks have been reported. Additionally, 1 blocks have been reported > which do not correspond to any file in the namespace. Forcing exit of > safemode will unrecoverably remove those data blocks" > Whether this statistic is also used for some kind of "inverse safe mode" is > the logical next step, but just reporting it as a warning seems easy enough > to accomplish and worth doing.
[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks
[ https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929244#comment-15929244 ] Anu Engineer commented on HDFS-4015: [~danielpol] Thanks for reporting this. I will try to repro the case you have described. But just to make sure that we are on the same page: this patch addresses the issue of when the NN is in safe mode. So in your case, you have a Datanode down, you delete the directory, and then you reboot the datanodes *and* the namenode? Can you please explain the steps to repro this issue? Thanks in advance. > Safemode should count and report orphaned blocks > > > Key: HDFS-4015 > URL: https://issues.apache.org/jira/browse/HDFS-4015 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 3.0.0-alpha1 >Reporter: Todd Lipcon >Assignee: Anu Engineer > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, > HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch, > HDFS-4015.006.patch, HDFS-4015.007.patch > > > The safemode status currently reports the number of unique reported blocks > compared to the total number of blocks referenced by the namespace. However, > it does not report the inverse: blocks which are reported by datanodes but > not referenced by the namespace. > In the case that an admin accidentally starts up from an old image, this can > be confusing: safemode and fsck will show "corrupt files", which are the > files which actually have been deleted but got resurrected by restarting from > the old image. This will convince them that they can safely force leave > safemode and remove these files -- after all, they know that those files > should really have been deleted. However, they're not aware that leaving > safemode will also unrecoverably delete a bunch of other block files which > have been orphaned due to the namespace rollback. 
> I'd like to consider reporting something like: "90 of expected 100 > blocks have been reported. Additionally, 1 blocks have been reported > which do not correspond to any file in the namespace. Forcing exit of > safemode will unrecoverably remove those data blocks" > Whether this statistic is also used for some kind of "inverse safe mode" is > the logical next step, but just reporting it as a warning seems easy enough > to accomplish and worth doing.
[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks
[ https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929237#comment-15929237 ] Daniel Pol commented on HDFS-4015: -- RE: "In this patch we track blocks with generation stamp greater than the current highest generation stamp that is known to NN. I have made the assumption that if DN comes back on-line and reports blocks for files that have been deleted, those Generation IDs for those blocks will be lesser than the current Generation Stamp of NN. Please let me know if you think this assumption is not valid or breaks down in special cases, Could this happen with V1 vs V2 generation stamps ?" I'm hitting the case with the same Generation ID quite often during testing. The test scenario is: run Teragen, and for various reasons (mostly Hadoop settings) the datanode service on some nodes dies abruptly (think power failures also). While the bad nodes are down, you delete the Teragen output folder (to free up space on the remaining good nodes, which are now trying to maintain the replication factor with fewer nodes). Once all nodes are up and running again, the bad nodes have orphaned blocks with the same Generation IDs. Right now it's pretty painful to get rid of those manually. > Safemode should count and report orphaned blocks > > > Key: HDFS-4015 > URL: https://issues.apache.org/jira/browse/HDFS-4015 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 3.0.0-alpha1 >Reporter: Todd Lipcon >Assignee: Anu Engineer > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, > HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch, > HDFS-4015.006.patch, HDFS-4015.007.patch > > > The safemode status currently reports the number of unique reported blocks > compared to the total number of blocks referenced by the namespace. However, > it does not report the inverse: blocks which are reported by datanodes but > not referenced by the namespace. 
> In the case that an admin accidentally starts up from an old image, this can > be confusing: safemode and fsck will show "corrupt files", which are the > files which actually have been deleted but got resurrected by restarting from > the old image. This will convince them that they can safely force leave > safemode and remove these files -- after all, they know that those files > should really have been deleted. However, they're not aware that leaving > safemode will also unrecoverably delete a bunch of other block files which > have been orphaned due to the namespace rollback. > I'd like to consider reporting something like: "90 of expected 100 > blocks have been reported. Additionally, 1 blocks have been reported > which do not correspond to any file in the namespace. Forcing exit of > safemode will unrecoverably remove those data blocks" > Whether this statistic is also used for some kind of "inverse safe mode" is > the logical next step, but just reporting it as a warning seems easy enough > to accomplish and worth doing.
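Daniel's same-generation-ID case can be seen against the heuristic he quotes. Reduced to a predicate, it looks like the following (a simplified illustrative sketch, not actual NameNode code):

```java
// Illustrative sketch of the heuristic discussed above (not HDFS
// source): only a replica whose generation stamp is *beyond* the
// highest stamp the NN has issued is flagged as unknown/"future".
// An orphaned replica whose stamp is less than or equal to the NN's
// current stamp passes this check silently.
public class GenStampCheck {
    public static boolean isFutureBlock(long reportedGenStamp,
                                        long nnHighestGenStamp) {
        return reportedGenStamp > nnHighestGenStamp;
    }
}
```

This is exactly why replicas carrying the same (or older) generation stamps as the deleted file are not caught by the assumption in the quoted comment.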
[jira] [Updated] (HDFS-9807) Add an optional StorageID to writes
[ https://issues.apache.org/jira/browse/HDFS-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HDFS-9807: Attachment: HDFS-9807.002.patch bq. I am planning to push the information like storageID and storageType into FsVolumeList (as described in the last suggestion in my previous comment) and probably also into the interface for the VolumeChoosingPolicy, so it can decide to choose the volume based on all the info. +1 for this approach. The configured {{VolumeChoosingPolicy}} can elect to ignore the particular storageid chosen by the NameNode, as it does now. Uploaded a new patch with really minor changes (removing some vestigial code, fixing findbugs, etc.). A couple questions: * The {{BlockTokenIdentifier}} also updates the legacy, {{Writable}} token. This wouldn't work with older clients? * There's an assertion in {{StripedWriter}} that targetStorageIDs is not null; does that always hold? > Add an optional StorageID to writes > --- > > Key: HDFS-9807 > URL: https://issues.apache.org/jira/browse/HDFS-9807 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha2 >Reporter: Chris Douglas >Assignee: Ewan Higgs > Attachments: HDFS-9807.001.patch, HDFS-9807.002.patch > > > The {{BlockPlacementPolicy}} considers specific storages, but when the > replica is written the DN {{VolumeChoosingPolicy}} is unaware of any > preference or constraints from other policies affecting placement. This > limits heterogeneity to the declared storage types, which are treated as > fungible within the target DN. It should be possible to influence or > constrain the DN policy to select a particular storage.
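The direction being +1'd can be sketched as below. This is a hypothetical simplification, not the real {{VolumeChoosingPolicy}} interface: a chooser that honors the NameNode-suggested storage ID when a candidate volume matches, and otherwise falls back (standing in for the existing policies, which may ignore the hint).

```java
import java.util.List;

// Hypothetical sketch, not the actual Hadoop interface: a volume
// chooser that receives the NameNode's suggested storage ID and may
// honor or ignore it, as the comment above describes.
public class StorageIdAwareChooser {
    public static class Volume {
        final String storageId;
        public Volume(String storageId) { this.storageId = storageId; }
    }

    public static Volume choose(List<Volume> candidates, String suggestedId) {
        for (Volume v : candidates) {
            if (v.storageId.equals(suggestedId)) {
                return v; // NameNode preference satisfied
            }
        }
        // No candidate carries the suggested ID: fall back to the
        // first candidate, standing in for the configured policy.
        return candidates.get(0);
    }
}
```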
[jira] [Commented] (HDFS-11221) Have StorageDirectory return Optional instead of File/null
[ https://issues.apache.org/jira/browse/HDFS-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929226#comment-15929226 ] Virajith Jalaparti commented on HDFS-11221: --- Thanks [~jiajia] for taking on this work. Going through the latest patch, it is unclear to me if we need this fix. The current patch essentially replaces calls to {{getRoot()}}, {{getCurrentDir()}} etc. with calls to {{getRoot().get()}}, {{getCurrentDir().get()}} etc. So, as [~ehiggs] mentioned in his earlier comment, instead of a {{NullPointerException}}, we get a {{NoSuchElementException}} when the {{root}} of a {{StorageDirectory}} is {{null}}. This does not add much functionality. Ideally, the fix should be such that we take actions based on whether {{root}} (and/or the other directories) is (are) {{null}} or not. It seems hard to come up with a generic action to take when, for example, {{root}} indeed turns out to be {{null}}. The latest patch for HDFS-10675 deals with this case in the Datanode assuming that {{root}} is {{null}} only for a provided {{StorageDirectory}}. When calls to {{getCurrentDir()}}, {{getRoot()}} etc return {{null}}, it takes the appropriate action (e.g., {{StorageDirectory#analyzeStorage}}, {{BlockPoolSliceStorage#doTransition}}, etc.) that is specific to the function where they are called from. Doing this generally seems hard as it will not be clear why {{root}} is {{null}}. [~jiajia], [~ehiggs] [~andrew.wang] what do you think? > Have StorageDirectory return Optional instead of File/null > > > Key: HDFS-11221 > URL: https://issues.apache.org/jira/browse/HDFS-11221 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Ewan Higgs >Assignee: Jiajia Li >Priority: Minor > Attachments: HDFS-11221-002.patch, HDFS-11221-v1.patch > > > In HDFS-10675, {{StorageDirectory.root}} can be {{null}} because {{PROVIDED}} > storage locations will not have any directories associated with them. 
Hence, > we need to add checks to StorageDirectory to make sure we handle this. This > would also lead to changes in code that call {{StorageDirectory.getRoot}}, > {{StorageDirectory.getCurrentDir}}, {{StorageDirectory.getVersionFile}} etc. > as the return value can be {{null}} (if {{StorageDirectory.root}} is null). > The proposal to handle this is to change the return type of the above > functions to {{Optional}}. According to my preliminary check, this will > result in changes in ~70 places, which is why it's not appropriate to put it > in the patch for HDFS-10675. But it is certainly a valuable fix. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
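Virajith's point above is that replacing {{getRoot()}} with {{getRoot().get()}} merely trades a {{NullPointerException}} for a {{NoSuchElementException}}; an {{Optional}} return type only pays off when call sites handle the empty case explicitly. A minimal, self-contained sketch of that pattern (the class and method names mirror, but are not, the real Hadoop {{StorageDirectory}}):

```java
import java.io.File;
import java.util.Optional;

// Simplified stand-in for HDFS's StorageDirectory; NOT the real class.
class StorageDirectory {
    private final File root; // may be null, e.g. for PROVIDED storage

    StorageDirectory(File root) { this.root = root; }

    // Returning Optional forces callers to consider the absent case.
    Optional<File> getRoot() {
        return Optional.ofNullable(root);
    }

    // A call site that takes a deliberate action instead of calling get():
    String describe() {
        return getRoot()
                .map(File::getPath)
                .orElse("(no local directory: provided storage)");
    }
}
```

Calling {{.get()}} unconditionally would throw for a provided storage directory; the {{map}}/{{orElse}} chain is what makes the {{Optional}} change add actual functionality.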
[jira] [Commented] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy
[ https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929217#comment-15929217 ] Manoj Govindassamy commented on HDFS-10530: --- Thanks for the review and commit help [~andrew.wang]. Sure, will file a jira to track parity block not written out on insufficient DNs. And, will track all other logging-related issues in a separate Jira. > BlockManager reconstruction work scheduling should correctly adhere to EC > block placement policy > > > Key: HDFS-10530 > URL: https://issues.apache.org/jira/browse/HDFS-10530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Rui Gao >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-10530.1.patch, HDFS-10530.2.patch, > HDFS-10530.3.patch, HDFS-10530.4.patch, HDFS-10530.5.patch > > > This issue was found by [~tfukudom]. > Under the RS-DEFAULT-6-3-64k EC policy, > 1. Create an EC file; the file was written to all 5 racks (2 DNs each) > of the cluster. > 2. Reconstruction work would be scheduled if the 6th rack is added. > 3. Adding the 7th or further racks, however, will not trigger reconstruction > work. > Based on the default EC block placement policy defined in > "BlockPlacementPolicyRackFaultTolerant.java", an EC file should be able to be > scheduled to distribute across 9 racks if possible. > In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, > *numReplicas* of striped blocks should probably be *getRealTotalBlockNum()* > instead of *getRealDataBlockNum()*.
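The fix in the quoted description (count all blocks of a striped group, parity included, when judging rack spread) can be illustrated with a tiny self-contained model. This is an assumed simplification, not the real {{BlockManager}} code:

```java
// Toy model of the HDFS-10530 fix: when checking whether an EC block group
// satisfies the rack-fault-tolerant placement policy, the target rack count
// must come from the TOTAL block count (data + parity), not data blocks only.
class EcPlacementCheck {
    // An ideal spread uses min(totalBlocks, racksInCluster) distinct racks.
    static boolean isPlacementSatisfied(int dataBlocks, int parityBlocks,
                                        int racksUsed, int racksInCluster) {
        int totalBlocks = dataBlocks + parityBlocks; // ~ getRealTotalBlockNum()
        int required = Math.min(totalBlocks, racksInCluster);
        return racksUsed >= required;
    }
}
```

For RS-6-3 on a 7-rack cluster with blocks on 6 racks, counting only data blocks (the old behavior, ~ {{getRealDataBlockNum()}}) would report the placement as satisfied and never schedule reconstruction onto the 7th rack; counting all 9 blocks correctly reports it as unsatisfied.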
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929210#comment-15929210 ] Junping Du commented on HDFS-11431: --- bq. I want to target HDFS-11538 at 2.9.0 and 3.0.0-alpha3 Sure. Add back 2.9.0 to HDFS-11538. bq. but if HDFS-11431 stays in branch-2, then committing HDFS-11538 to branch-2 also requires reverting HDFS-11431, and it wouldn't for trunk. It makes tracking what's where more complicated. We want to revert HDFS-11431 from trunk because it causes a build failure. We don't want to revert HDFS-11431 from branch-2 because it works (even if, as you said, in a hacky way). I would like to keep branch-2 in a safe place, even if it adds a bit more effort to tracking differences between branch-2 and trunk. bq. That's not possible here since HDFS-11431 doesn't work for trunk. Which is why I suggested the above course of action. Agree. That's why we should have one patch for branch-2/branch-2.8 and a different patch for trunk later. bq. Like I said before too, since 2.9.0 isn't imminently being released, I'd prefer the default action be "fix HDFS-13715" than "maintain the hack of HDFS-11431". It's also easy to revisit this when 2.9.0 is closer to an RC. I don't see HDFS-13715 getting fixed in the short term either - it doesn't even have an assignee yet. My key points here: 1. HDFS-13715 is still TBD; for branch-2, it's better to have the HDFS-11431 patch than nothing. 2. HDFS-13715 is not a blocker but something nice to have for 2.9. As I mentioned earlier, the whole feature of making the hdfs-client jar thinner is not a must, given that many features for 2.9 are also in the pipeline. 3. If you really think tracking the revert of this patch (when we have HDFS-13715) is a big problem, then we could file a separate JIRA and mark it as a blocker for 2.9, to revisit reverting the patch here when we are in the RC stage. Make sense? 
> hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS.
[jira] [Commented] (HDFS-11538) Move ConfiguredFailoverProxyProvider into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929207#comment-15929207 ] Andrew Wang commented on HDFS-11538: WFM thanks Junping. Multi-release tracking like this is always difficult. > Move ConfiguredFailoverProxyProvider into hadoop-hdfs-client > > > Key: HDFS-11538 > URL: https://issues.apache.org/jira/browse/HDFS-11538 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Priority: Blocker > > Follow-up for HDFS-11431. We should move this missing class over rather than > pulling in the whole hadoop-hdfs dependency.
[jira] [Commented] (HDFS-6984) In Hadoop 3, make FileStatus serialize itself via protobuf
[ https://issues.apache.org/jira/browse/HDFS-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929203#comment-15929203 ] Andrew Wang commented on HDFS-6984: --- Hi Chris, replies inline, lots of agreement with your direction: bq. Should we also try to remove Writable from FsPermission? We could deprecate the Writable API instead of removing it from these classes, in case projects/users depend on it downstream... the serialization/conversion can still live in a library, but be called from the deprecated methods. SGTM bq. Since both FsPermission#getAclBit and FsPermission#getEncryptedBit/FileStatus#isEncrypted are user-facing, should these also be part of FSProtos? ...While we're at it, should we also change HdfsFileStatusProto to stop packing the acl/encryption bits among the permission bits? Also sounds great, I'd love a bitfield rather than stuffing these in FsPermission. bq. is there a reason encryption info is included in HdfsFileStatus, but ACLs are not? Would it be inappropriate to add a FileSystem#getAclStatus(FileStatus), in case an implementation returns this information in its response (potentially avoiding the 2-RPC overhead)? We need the FEInfo to read or write a file, and the ACLs are rarely needed by the client since they're enforced server-side. I don't think there are any plans to include ACLs in HdfsFileStatus because of the bloat, but your suggestion makes sense if there is such an FS implementation. Good follow-on. > In Hadoop 3, make FileStatus serialize itself via protobuf > -- > > Key: HDFS-6984 > URL: https://issues.apache.org/jira/browse/HDFS-6984 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Colin P. McCabe >Assignee: Colin P. McCabe > Labels: BB2015-05-TBR > Attachments: HDFS-6984.001.patch, HDFS-6984.002.patch, > HDFS-6984.003.patch, HDFS-6984.004.patch, HDFS-6984.005.patch, > HDFS-6984.nowritable.patch > > > FileStatus was a Writable in Hadoop 2 and earlier. 
Originally, we used this > to serialize it and send it over the wire. But in Hadoop 2 and later, we > have the protobuf {{HdfsFileStatusProto}} which serves to serialize this > information. The protobuf form is preferable, since it allows us to add new > fields in a backwards-compatible way. Another issue is that a lot of > subclasses of FileStatus already don't override the Writable methods of the > superclass, breaking the interface contract that read(status.write) should be > equal to the original status. > In Hadoop 3, we should just make FileStatus serialize itself via protobuf so > that we don't have to deal with these issues. It's probably too late to do > this in Hadoop 2, since user code may be relying on the existing FileStatus > serialization there.
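The pattern discussed above (deprecate the Writable-style API instead of removing it, and delegate the actual encoding to one central codec) can be sketched self-containedly. This is a simplified illustration, not the real {{FileStatus}}; the real patch delegates to protobuf, while this sketch uses plain {{DataOutput}} to stay dependency-free. It preserves the quoted contract that {{read(status.write)}} equals the original status:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Hypothetical simplified file status; the old serialization entry points
// survive as @Deprecated shims over a single codec.
class SimpleFileStatus {
    long length;
    String path;

    /** @deprecated kept for downstream compatibility; delegates to the codec. */
    @Deprecated
    byte[] write() {
        return StatusCodec.encode(this);
    }

    /** @deprecated kept for downstream compatibility; delegates to the codec. */
    @Deprecated
    static SimpleFileStatus read(byte[] bytes) {
        return StatusCodec.decode(bytes);
    }
}

// One place that knows the wire format (protobuf in the real HDFS-6984 work).
class StatusCodec {
    static byte[] encode(SimpleFileStatus s) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeLong(s.length);
            out.writeUTF(s.path);
            return bos.toByteArray();
        } catch (IOException e) { // cannot occur for in-memory streams
            throw new UncheckedIOException(e);
        }
    }

    static SimpleFileStatus decode(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            SimpleFileStatus s = new SimpleFileStatus();
            s.length = in.readLong();
            s.path = in.readUTF();
            return s;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Because subclasses no longer override the serialization methods themselves, the round-trip contract is enforced in exactly one place.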
[jira] [Commented] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929191#comment-15929191 ] Hadoop QA commented on HDFS-10394: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 58s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 
13m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-10394 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12859191/HDFS-10394.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 19096a4df6a2 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4812518 | | Default Java | 1.8.0_121 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18746/testReport/ | | modules | C: hadoop-project hadoop-hdfs-project/hadoop-hdfs-client U: . | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18746/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > move declaration of okhttp version from hdfs-client to hadoop-project POM > - > > Key: HDFS-10394 > URL: https://issues.apache.org/jira/browse/HDFS-10394 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Xiaobing Zhou >Priority: Minor >
[jira] [Commented] (HDFS-11538) Move ConfiguredFailoverProxyProvider into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929180#comment-15929180 ] Junping Du commented on HDFS-11538: --- Add back 2.9.0 given the discussion in HDFS-11431. However, it is not a blocker for 2.9.0, as the workaround of HDFS-11431 still works for branch-2. If we have a patch here before 2.9.0 goes out, then we should revert HDFS-11431. > Move ConfiguredFailoverProxyProvider into hadoop-hdfs-client > > > Key: HDFS-11538 > URL: https://issues.apache.org/jira/browse/HDFS-11538 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Priority: Blocker > > Follow-up for HDFS-11431. We should move this missing class over rather than > pulling in the whole hadoop-hdfs dependency.
[jira] [Updated] (HDFS-11538) Move ConfiguredFailoverProxyProvider into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HDFS-11538: -- Target Version/s: 2.9.0, 3.0.0-alpha3 (was: 3.0.0-alpha3) > Move ConfiguredFailoverProxyProvider into hadoop-hdfs-client > > > Key: HDFS-11538 > URL: https://issues.apache.org/jira/browse/HDFS-11538 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Priority: Blocker > > Follow-up for HDFS-11431. We should move this missing class over rather than > pulling in the whole hadoop-hdfs dependency.
[jira] [Commented] (HDFS-6708) StorageType should be encoded in the block token
[ https://issues.apache.org/jira/browse/HDFS-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929148#comment-15929148 ] Hadoop QA commented on HDFS-6708: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 8s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 8s{color} | {color:orange} root: The patch generated 3 new + 675 unchanged - 7 fixed = 678 total (was 682) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 33s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 55s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 9 unchanged - 0 fixed = 12 total (was 9) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 56s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 8s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 42s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}146m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | | | hadoop.security.TestKDiag | | | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-6708 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12859166/HDFS-6708.0006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux cc08746fd8d9 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 09ad8ef | | Default
[jira] [Commented] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929092#comment-15929092 ] Arpit Agarwal commented on HDFS-10394: -- +1 pending Jenkins. > move declaration of okhttp version from hdfs-client to hadoop-project POM > - > > Key: HDFS-10394 > URL: https://issues.apache.org/jira/browse/HDFS-10394 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-10394.000.patch, HDFS-10394.001.patch, > HDFS-10394.002.patch > > > The POM dependency on okhttp in hadoop-hdfs-client declares its version in > that POM instead. > The root declaration, including version, must go into the > hadoop-project/pom.xml so that it's easy to track use and have only one place > if this version were ever to be incremented. As it stands, if any other > module picked up the library, they could adopt a different version.
[jira] [Commented] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929082#comment-15929082 ] Xiaobing Zhou commented on HDFS-10394: -- v2 addressed the comments, thanks [~arpitagarwal] > move declaration of okhttp version from hdfs-client to hadoop-project POM > - > > Key: HDFS-10394 > URL: https://issues.apache.org/jira/browse/HDFS-10394 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-10394.000.patch, HDFS-10394.001.patch, > HDFS-10394.002.patch > > > The POM dependency on okhttp in hadoop-hdfs-client declares its version in > that POM instead. > The root declaration, including version, must go into the > hadoop-project/pom.xml so that it's easy to track use and have only one place > if this version were ever to be incremented. As it stands, if any other > module picked up the library, they could adopt a different version.
[jira] [Updated] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10394: - Attachment: HDFS-10394.002.patch > move declaration of okhttp version from hdfs-client to hadoop-project POM > - > > Key: HDFS-10394 > URL: https://issues.apache.org/jira/browse/HDFS-10394 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-10394.000.patch, HDFS-10394.001.patch, > HDFS-10394.002.patch > > > The POM dependency on okhttp in hadoop-hdfs-client declares its version in > that POM instead. > The root declaration, including version, must go into the > hadoop-project/pom.xml so that it's easy to track use and have only one place > if this version were ever to be incremented. As it stands, if any other > module picked up the library, they could adopt a different version.
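The change the HDFS-10394 description calls for amounts to pinning the version once in hadoop-project's {{dependencyManagement}} and dropping the {{<version>}} element from the consuming module. A hedged sketch of the two POM fragments — the {{com.squareup.okhttp}} coordinates match the 2.x-era artifact, but the version number shown is illustrative, not taken from the patch:

```xml
<!-- hadoop-project/pom.xml: declare the version once, under dependencyManagement -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.squareup.okhttp</groupId>
      <artifactId>okhttp</artifactId>
      <version>2.4.0</version> <!-- illustrative version -->
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- hadoop-hdfs-project/hadoop-hdfs-client/pom.xml: no version element here;
     Maven resolves it from the managed declaration above -->
<dependency>
  <groupId>com.squareup.okhttp</groupId>
  <artifactId>okhttp</artifactId>
</dependency>
```

With this arrangement, any other module that later picks up okhttp inherits the same managed version instead of silently adopting a different one.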
[jira] [Commented] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy
[ https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929054#comment-15929054 ] Hudson commented on HDFS-10530: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11417 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11417/]) HDFS-10530. BlockManager reconstruction work scheduling should correctly (wang: rev 4812518b23cac496ab5cdad5258773bcd9728770) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java > BlockManager reconstruction work scheduling should correctly adhere to EC > block placement policy > > > Key: HDFS-10530 > URL: https://issues.apache.org/jira/browse/HDFS-10530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Rui Gao >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-10530.1.patch, HDFS-10530.2.patch, > HDFS-10530.3.patch, HDFS-10530.4.patch, HDFS-10530.5.patch > > > This issue was found by [~tfukudom]. > Under the RS-DEFAULT-6-3-64k EC policy, > 1. Create an EC file; the file was written to all 5 racks (2 DNs each) > of the cluster. > 2. Reconstruction work would be scheduled if the 6th rack is added. > 3. Adding the 7th or further racks, however, will not trigger reconstruction > work. > Based on the default EC block placement policy defined in > "BlockPlacementPolicyRackFaultTolerant.java", an EC file should be able to be > scheduled to distribute across 9 racks if possible. > In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, > *numReplicas* of striped blocks should probably be *getRealTotalBlockNum()* > instead of *getRealDataBlockNum()*. 
[jira] [Commented] (HDFS-5567) CacheAdmin operations not supported with viewfs
[ https://issues.apache.org/jira/browse/HDFS-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929027#comment-15929027 ] Arpit Agarwal commented on HDFS-5567: - [~boky01], correct; if cache admin commands accept generic options, then we can resolve this. > CacheAdmin operations not supported with viewfs > --- > > Key: HDFS-5567 > URL: https://issues.apache.org/jira/browse/HDFS-5567 > Project: Hadoop HDFS > Issue Type: Bug > Components: caching >Affects Versions: 3.0.0-alpha1 >Reporter: Stephen Chu >Assignee: Colin P. McCabe > > On a federated cluster with viewfs configured, we'll run into the following > error when using CacheAdmin commands: > {code} > bash-4.1$ hdfs cacheadmin -listPools > Exception in thread "main" java.lang.IllegalArgumentException: FileSystem > viewfs://cluster3/ is not an HDFS file system > at org.apache.hadoop.hdfs.tools.CacheAdmin.getDFS(CacheAdmin.java:96) > at > org.apache.hadoop.hdfs.tools.CacheAdmin.access$100(CacheAdmin.java:50) > at > org.apache.hadoop.hdfs.tools.CacheAdmin$ListCachePoolsCommand.run(CacheAdmin.java:748) > at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:84) > at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:89) > bash-4.1$ > {code}
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929021#comment-15929021 ] Andrew Wang commented on HDFS-11431: bq. I don't understand your point. In our current practice, all 2.8.x patches should be in branch-2 first. I think that's easier for track. Sorry, I meant "target version" rather than "fix version" here. I want to target HDFS-11538 at 2.9.0 and 3.0.0-alpha3, but if HDFS-11431 stays in branch-2, then committing HDFS-11538 to branch-2 also requires reverting HDFS-11431, and it wouldn't for trunk. It makes tracking what's where more complicated. Our current practice tries to make "newer" branches supersets of each other, which also includes trunk. That's not possible here since HDFS-11431 doesn't work for trunk. Which is why I suggested the above course of action. Like I said before too, since 2.9.0 isn't imminently being released, I'd prefer the default action be "fix HDFS-13715" than "maintain the hack of HDFS-11431". It's also easy to revisit this when 2.9.0 is closer to an RC. > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS.
[jira] [Commented] (HDFS-5713) ViewFS doesn't work with -lsr command
[ https://issues.apache.org/jira/browse/HDFS-5713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929018#comment-15929018 ] Arpit Agarwal commented on HDFS-5713: - Resolving as a dup of HDFS-8413. Though that was created later, it has a patch attached. > ViewFS doesn't work with -lsr command > - > > Key: HDFS-5713 > URL: https://issues.apache.org/jira/browse/HDFS-5713 > Project: Hadoop HDFS > Issue Type: Bug > Components: federation >Affects Versions: 3.0.0-alpha1 >Reporter: Brandon Li > > -lsr doesn't show the namespace subtree but only shows the top level > directory/file objects.
[jira] [Resolved] (HDFS-5713) ViewFS doesn't work with -lsr command
[ https://issues.apache.org/jira/browse/HDFS-5713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal resolved HDFS-5713. - Resolution: Duplicate > ViewFS doesn't work with -lsr command > - > > Key: HDFS-5713 > URL: https://issues.apache.org/jira/browse/HDFS-5713 > Project: Hadoop HDFS > Issue Type: Bug > Components: federation >Affects Versions: 3.0.0-alpha1 >Reporter: Brandon Li > > -lsr doesn't show the namespace subtree but only shows the top level > directory/file objects. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-2921) HA: HA docs need to cover decomissioning
[ https://issues.apache.org/jira/browse/HDFS-2921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou reassigned HDFS-2921: --- Assignee: Xiaobing Zhou > HA: HA docs need to cover decomissioning > > > Key: HDFS-2921 > URL: https://issues.apache.org/jira/browse/HDFS-2921 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation, ha >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Xiaobing Zhou > > We need to cover decomissioning in the HA docs as is done in the [federation > decomissioning > docs|http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Decommissioning]. > The same process should apply, we need to refresh all the namenodes (same > commands should work). -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-1956) HDFS federation configuration should be documented
[ https://issues.apache.org/jira/browse/HDFS-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal resolved HDFS-1956. - Resolution: Not A Problem Our documentation covers Federation now, resolving. https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/Federation.html > HDFS federation configuration should be documented > -- > > Key: HDFS-1956 > URL: https://issues.apache.org/jira/browse/HDFS-1956 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 0.23.0 >Reporter: Ari Rabkin >Assignee: Jitendra Nath Pandey > > HDFS-1689 didn't document any of the new configuration options it introduced. > These should be in a "Federation user guide", or at the very least in Javadoc. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy
[ https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10530: --- Resolution: Fixed Fix Version/s: 3.0.0-alpha3 Status: Resolved (was: Patch Available) Committed to trunk. Thanks to Rui Gao for the initial patch and to Manoj for taking this over; I credited you both in the commit message. > BlockManager reconstruction work scheduling should correctly adhere to EC > block placement policy > > > Key: HDFS-10530 > URL: https://issues.apache.org/jira/browse/HDFS-10530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Rui Gao >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-10530.1.patch, HDFS-10530.2.patch, > HDFS-10530.3.patch, HDFS-10530.4.patch, HDFS-10530.5.patch > > > This issue was found by [~tfukudom]. > Under the RS-DEFAULT-6-3-64k EC policy: > 1. Create an EC file; the file was written to all 5 racks (2 DNs each) > of the cluster. > 2. Reconstruction work would be scheduled if a 6th rack is added. > 3. Adding a 7th or more racks, however, will not trigger reconstruction > work. > Based on the default EC block placement policy defined in > "BlockPlacementPolicyRackFaultTolerant.java", an EC file should be > scheduled to distribute to 9 racks if possible. > In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, > *numReplicas* of striped blocks should perhaps be *getRealTotalBlockNum()*, > instead of *getRealDataBlockNum()*. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
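The gist of the fix in this issue — counting all internal blocks of a striped group, not only the data blocks, when checking rack spread — can be sketched as follows. This is a simplified, hypothetical Python model of the logic described for {{BlockManager#isPlacementPolicySatisfied}}, not the actual patch; parameter names are illustrative.

```python
def placement_satisfied(racks_used, cluster_racks, data_blocks, parity_blocks,
                        count_total_blocks):
    """Rack-fault-tolerant policy model: a block group's internal blocks
    should spread over as many racks as possible, capped by the number of
    racks in the cluster."""
    blocks = data_blocks + parity_blocks if count_total_blocks else data_blocks
    required_racks = min(cluster_racks, blocks)
    return racks_used >= required_racks

# RS(6,3) file currently spread over 6 racks; a 7th rack joins the cluster.
# Counting only the 6 data blocks (the bug), 6 racks already look
# "satisfied", so no reconstruction work is scheduled for the new rack:
print(placement_satisfied(6, 7, 6, 3, count_total_blocks=False))  # True
# Counting all 9 internal blocks, the policy is unsatisfied, and the
# BlockManager would schedule work to use the 7th rack:
print(placement_satisfied(6, 7, 6, 3, count_total_blocks=True))   # False
```

With enough racks (9 or more for RS-6-3), the check is satisfied only once every internal block sits on its own rack.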
[jira] [Commented] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy
[ https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928998#comment-15928998 ] Andrew Wang commented on HDFS-10530: This looks good to me. I can fix up the checkstyle whitespace issue at commit time. +1, will commit shortly. I raised a bunch of other questions in my previous comment, which we can address in follow-on JIRAs. > BlockManager reconstruction work scheduling should correctly adhere to EC > block placement policy > > > Key: HDFS-10530 > URL: https://issues.apache.org/jira/browse/HDFS-10530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Rui Gao >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-10530.1.patch, HDFS-10530.2.patch, > HDFS-10530.3.patch, HDFS-10530.4.patch, HDFS-10530.5.patch > > > This issue was found by [~tfukudom]. > Under the RS-DEFAULT-6-3-64k EC policy: > 1. Create an EC file; the file was written to all 5 racks (2 DNs each) > of the cluster. > 2. Reconstruction work would be scheduled if a 6th rack is added. > 3. Adding a 7th or more racks, however, will not trigger reconstruction > work. > Based on the default EC block placement policy defined in > "BlockPlacementPolicyRackFaultTolerant.java", an EC file should be > scheduled to distribute to 9 racks if possible. > In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, > *numReplicas* of striped blocks should perhaps be *getRealTotalBlockNum()*, > instead of *getRealDataBlockNum()*. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy
[ https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928996#comment-15928996 ] Andrew Wang commented on HDFS-10530: Thanks for digging in Manoj! A few follow-up Q's: bq. DFSStripedOutputStream verifies if the allocated block locations length is at least equals numDataBlocks, otherwise it throws IOException and the client halts. So, the relaxation is only for the parity blocks. Ran the test myself, looking through the output. It looks like with 6 DNs, we don't allocate any locations for the parity blocks (only 6 replicas): {noformat} 2017-03-16 13:57:38,902 [IPC Server handler 0 on 45189] INFO hdfs.StateChange (FSDirWriteFileOp.java:logAllocatedBlock(777)) - BLOCK* allocate blk_-9223372036854775792_1001, replicas=127.0.0.1:37655, 127.0.0.1:33575, 127.0.0.1:38319, 127.0.0.1:46751, 127.0.0.1:44029, 127.0.0.1:37065 for /ec/test1 {noformat} Could you file a JIRA to dig into this? It looks like we can't write blocks from the same EC group to the same DN. It's still better to write the parities than not at all, though. bq. WARN hdfs.DFSOutputStream (DFSStripedOutputStream.java:logCorruptBlocks(1117)) - Block group <1> has 3 corrupt blocks. It's at high risk of losing data. Agree that this log is not accurate; mind filing a JIRA to correct this message? "corrupt" means we have data loss. Here, we haven't lost data yet, but are suffering extremely lowered durability. I'd prefer we also quantify the risk in the message, e.g. "loss of any block" or "loss of two blocks will result in data loss". 
{noformat} 2017-03-16 13:57:40,898 [DataNode: [[[DISK]file:/home/andrew/dev/hadoop/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data17, [DISK]file:/home/andrew/dev/hadoop/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data18]] heartbeating to localhost/127.0.0.1:45189] INFO datanode.DataNode (BPOfferService.java:processCommandFromActive(738)) - DatanodeCommand action: DNA_ERASURE_CODING_RECOVERY 2017-03-16 13:57:40,943 [DataXceiver for client at /127.0.0.1:47340 [Receiving block BP-1145201309-127.0.1.1-1489697856256:blk_-9223372036854775786_1002]] INFO datanode.DataNode (DataXceiver.java:writeBlock(717)) - Receiving BP-1145201309-127.0.1.1-1489697856256:blk_-9223372036854775786_1002 src: /127.0.0.1:47340 dest: /127.0.0.1:44841 2017-03-16 13:57:40,944 [DataXceiver for client at /127.0.0.1:54478 [Receiving block BP-1145201309-127.0.1.1-1489697856256:blk_-9223372036854775785_1002]] INFO datanode.DataNode (DataXceiver.java:writeBlock(717)) - Receiving BP-1145201309-127.0.1.1-1489697856256:blk_-9223372036854775785_1002 src: /127.0.0.1:54478 dest: /127.0.0.1:38977 2017-03-16 13:57:40,945 [DataXceiver for client at /127.0.0.1:51622 [Receiving block BP-1145201309-127.0.1.1-1489697856256:blk_-9223372036854775784_1002]] INFO datanode.DataNode (DataXceiver.java:writeBlock(717)) - Receiving BP-1145201309-127.0.1.1-1489697856256:blk_-9223372036854775784_1002 src: /127.0.0.1:51622 dest: /127.0.0.1:41895 {noformat} Based on this, I think there's one DN doing reconstruction work to make three parity blocks, which get written to the three new nodes. The above logs are all from the receiving DNs. Seems like we've got a serious lack of logging though in ECWorker / StripedBlockReconstructor / etc, since I determined the above via code inspection. I'd like to see logs for what blocks are being read in, for decoding, and also for writing the blocks out. Another JIRA? 
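The risk quantification suggested above follows directly from Reed-Solomon arithmetic. A minimal sketch (illustrative Python, not Hadoop code; the function name is made up) of how many further losses a block group tolerates:

```python
def losses_tolerated(k, m, live_blocks):
    """A Reed-Solomon(k, m) block group is recoverable from any k of its
    k + m internal blocks, so it tolerates (live_blocks - k) more losses
    before the data becomes unrecoverable."""
    assert 0 <= live_blocks <= k + m
    return live_blocks - k

# Healthy RS(6,3) group: all 9 internal blocks live, tolerates 3 losses.
print(losses_tolerated(6, 3, 9))  # 3
# The case in this test: the 3 parity blocks were never written, so only
# the 6 data blocks are live. Zero remaining redundancy -- the warning
# could accurately say "loss of any block will result in data loss".
print(losses_tolerated(6, 3, 6))  # 0
```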
> BlockManager reconstruction work scheduling should correctly adhere to EC > block placement policy > > > Key: HDFS-10530 > URL: https://issues.apache.org/jira/browse/HDFS-10530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Rui Gao >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-10530.1.patch, HDFS-10530.2.patch, > HDFS-10530.3.patch, HDFS-10530.4.patch, HDFS-10530.5.patch > > > This issue was found by [~tfukudom]. > Under the RS-DEFAULT-6-3-64k EC policy: > 1. Create an EC file; the file was written to all 5 racks (2 DNs each) > of the cluster. > 2. Reconstruction work would be scheduled if a 6th rack is added. > 3. Adding a 7th or more racks, however, will not trigger reconstruction > work. > Based on the default EC block placement policy defined in > "BlockPlacementPolicyRackFaultTolerant.java", an EC file should be > scheduled to distribute to 9 racks if possible. > In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, > *numReplicas* of striped blocks should perhaps be *getRealTotalBlockNum()*, > instead of *getRealDataBlockNum()*. -- This message was sent by
[jira] [Commented] (HDFS-10675) Datanode support to read from external stores.
[ https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928941#comment-15928941 ] Hadoop QA commented on HDFS-10675: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 11 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 33s{color} | {color:green} HDFS-9806 passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 9s{color} | {color:red} root in HDFS-9806 failed. 
{color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 14s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 54s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 13s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 2s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s{color} | {color:green} HDFS-9806 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 6m 30s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 6m 30s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 30s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 13s{color} | {color:orange} root: The patch generated 47 new + 1075 unchanged - 6 fixed = 1122 total (was 1081) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 22s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 51s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 41s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 6s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 10s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 57s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}146m 25s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSRollback | | | hadoop.hdfs.TestDFSUpgrade | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | | | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.TestDFSStartupVersions | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-10675 | | JIRA Patch URL |
[jira] [Updated] (HDFS-2980) HA: Balancer should have a test with the combination of Federation and HA enabled.
[ https://issues.apache.org/jira/browse/HDFS-2980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-2980: Component/s: federation > HA: Balancer should have a test with the combination of Federation and HA > enabled. > -- > > Key: HDFS-2980 > URL: https://issues.apache.org/jira/browse/HDFS-2980 > Project: Hadoop HDFS > Issue Type: Test > Components: balancer & mover, federation, ha >Affects Versions: 2.0.0-alpha >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G > > Balancer should have test with Federation and HA enabled. > One scenario: > Enable federation with 2 namespace ids and configure HA only for one > namespace id and check the behaviors as expected. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928931#comment-15928931 ] Hadoop QA commented on HDFS-10394: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 52s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 7s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-10394 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12859165/HDFS-10394.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 985ba723fea8 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 09ad8ef | | Default Java | 1.8.0_121 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18744/testReport/ | | modules | C: hadoop-project hadoop-hdfs-project/hadoop-hdfs-client U: . | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18744/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > move declaration of okhttp version from hdfs-client to hadoop-project POM > - > > Key: HDFS-10394 > URL: https://issues.apache.org/jira/browse/HDFS-10394 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Xiaobing Zhou >Priority: Minor >
[jira] [Updated] (HDFS-11537) Block Storage : add cache layer
[ https://issues.apache.org/jira/browse/HDFS-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11537: -- Attachment: HDFS-11537-HDSF-7240.002.patch Thanks [~xyao] for the review and comments! All except the following are addressed in the v002 patch. bq. CBlockConfigKeys.java I realized that a number of keys in {{CBlockConfigKeys.java}} are currently not, and will probably never be, used. So I've removed all the currently unused keys in this file, and will add keys later when needed. bq. Line 612: can we track the TODO with an Apache JIRA? Filed HDFS-11539. bq. Line 97: should we ensure shutdown of xceiverClientManager? I think this should be fine, as there is nothing in particular to clean up in this class; it does not have a shutdown method or anything similar anyway. And existing unit tests already use this class like this. > Block Storage : add cache layer > --- > > Key: HDFS-11537 > URL: https://issues.apache.org/jira/browse/HDFS-11537 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11537-HDSF-7240.001.patch, > HDFS-11537-HDSF-7240.002.patch > > > This JIRA adds the cache layer. Specifically, this JIRA implements the cache > interface in HDFS-11361 and adds the code that actually talks to containers. > The upper layer can simply view the storage as a cache with a simple put and > get interface, while in the backend the get and put are actually talking to > containers. This is a critical part of cblock performance. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
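The design being reviewed — an upper layer that sees a plain put/get cache while the backend talks to containers — might look roughly like this. A hypothetical Python sketch only: the real CBlock code is Java, and the class and method names here (`ContainerBackedCache`, `container_client`) are invented for illustration.

```python
class ContainerBackedCache:
    """Hypothetical model of a block cache fronting container storage."""

    def __init__(self, container_client):
        self.container = container_client  # backend that talks to containers
        self.local = {}                    # in-memory cache of hot blocks

    def put(self, block_id, data):
        self.local[block_id] = data            # cache for fast re-reads
        self.container.write(block_id, data)   # persist to a container

    def get(self, block_id):
        if block_id in self.local:             # cache hit: serve locally
            return self.local[block_id]
        data = self.container.read(block_id)   # cache miss: fetch from
        self.local[block_id] = data            # the container and fill
        return data
```

The caller only ever sees `put`/`get`; whether a read is served from memory or a remote container is invisible to it, which is what makes the cache layer performance-critical.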
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928927#comment-15928927 ] Junping Du commented on HDFS-11431: --- bq. Why not try and fix it properly for 2.9? +1 on fixing it more properly for 2.9. However, we shouldn't take the risk that the missing class is not moved by then, or that other classes turn out to be missing. I checked the code with some of the HDFS folks, and it seems ConfiguredFailoverProxyProvider is not very clean to move, as it includes some server-side logic. So, I think keeping this patch on branch-2 is benign, which is a different case from trunk, where the build would be broken by the patch. bq. I think it also makes the tracking easier, since otherwise the fix versions don't reflect where the code is. I don't understand your point. In our current practice, all 2.8.x patches should be in branch-2 first. I think that's easier to track. bq. The current fix thus falls in the "hack" category, and I'd rather we not default to carrying it forward to future branch-2 releases. If we have elegant fixes, I am OK with getting fixes in. Otherwise, HDFS-6200 doesn't achieve its goal. However, I would rather exclude one feature that could cause a regression than stop the whole branch-2 release train. In this sense, the patch here is still benign for branch-2. > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. 
This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
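For context on why the missing class is a blocker: HA clients reference {{ConfiguredFailoverProxyProvider}} by fully-qualified name in their configuration, along the lines of the standard HDFS HA setup below (the nameservice name "mycluster" is a placeholder):

```xml
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

If the class is absent from the client JAR on the classpath, instantiating the proxy provider fails at client startup, so the application cannot locate the active NameNode at all.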
[jira] [Commented] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy
[ https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928919#comment-15928919 ] Hadoop QA commented on HDFS-10530: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 373 unchanged - 0 fixed = 374 total (was 373) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 94m 15s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-10530 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12859151/HDFS-10530.5.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 0df06d50311b 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 09ad8ef | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/18743/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18743/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18743/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18743/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > BlockManager reconstruction work scheduling should correctly adhere to EC > block placement policy >
[jira] [Updated] (HDFS-6708) StorageType should be encoded in the block token
[ https://issues.apache.org/jira/browse/HDFS-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-6708: - Attachment: HDFS-6708.0006.patch Attaching patch with test function parameters inverted based on [~virajith]'s feedback. It was indeed a silly inconsistency. > StorageType should be encoded in the block token > > > Key: HDFS-6708 > URL: https://issues.apache.org/jira/browse/HDFS-6708 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: 2.4.1 >Reporter: Arpit Agarwal >Assignee: Ewan Higgs > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-6708.0001.patch, HDFS-6708.0002.patch, > HDFS-6708.0003.patch, HDFS-6708.0004.patch, HDFS-6708.0005.patch, > HDFS-6708.0006.patch > > > HDFS-6702 is adding support for file creation based on StorageType. > The block token is used as a tamper-proof channel for communicating block > parameters from the NN to the DN during block creation. The StorageType > should be included in this block token. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-6708) StorageType should be encoded in the block token
[ https://issues.apache.org/jira/browse/HDFS-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928886#comment-15928886 ] Virajith Jalaparti commented on HDFS-6708: -- Thanks for the quick response [~ehiggs]. One minor comment: could you please swap the names of the parameters ({{allowed}} and {{requested}}) in {{TestBlockStoragePolicy#testStorageTypeCheckAccessResult}}, to be consistent with the names in {{BlockTokenSecretManager#checkAccess}}? It is a bit confusing. Other than that, lgtm +1
[jira] [Commented] (HDFS-6984) In Hadoop 3, make FileStatus serialize itself via protobuf
[ https://issues.apache.org/jira/browse/HDFS-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928873#comment-15928873 ] Chris Douglas commented on HDFS-6984: - Thanks for taking a look, [~andrew.wang]. Sorry, I had forgotten your earlier comment. Should we also try to remove {{Writable}} from {{FsPermission}}? We could deprecate the {{Writable}} API instead of removing it from these classes, in case projects/users depend on it downstream... the serialization/conversion can still live in a library, but be called from the deprecated methods. Since both {{FsPermission#getAclBit}} and {{FsPermission#getEncryptedBit}}/{{FileStatus#isEncrypted}} are user-facing, should these also be part of FSProtos? The payload for {{FileEncryptionInfoProto}} is likely HDFS-specific, or I'd suggest populating these using the presence of the fields. While we're at it, should we also change {{HdfsFileStatusProto}} to stop packing the acl/encryption bits among the permission bits? Kind of a sidebar: is there a reason encryption info is included in HdfsFileStatus, but ACLs are not? Would it be inappropriate to add a {{FileSystem#getAclStatus(FileStatus)}}, in case an implementation returns this information in its response (potentially avoiding the 2-RPC overhead)? I'll add some additional tests. > In Hadoop 3, make FileStatus serialize itself via protobuf > -- > > Key: HDFS-6984 > URL: https://issues.apache.org/jira/browse/HDFS-6984 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Colin P. McCabe >Assignee: Colin P. McCabe > Labels: BB2015-05-TBR > Attachments: HDFS-6984.001.patch, HDFS-6984.002.patch, > HDFS-6984.003.patch, HDFS-6984.004.patch, HDFS-6984.005.patch, > HDFS-6984.nowritable.patch > > > FileStatus was a Writable in Hadoop 2 and earlier. Originally, we used this > to serialize it and send it over the wire. 
But in Hadoop 2 and later, we > have the protobuf {{HdfsFileStatusProto}} which serves to serialize this > information. The protobuf form is preferable, since it allows us to add new > fields in a backwards-compatible way. Another issue is that many subclasses > of FileStatus already don't override the Writable methods of the > superclass, breaking the interface contract that read(status.write) should be > equal to the original status. > In Hadoop 3, we should just make FileStatus serialize itself via protobuf so > that we don't have to deal with these issues. It's probably too late to do > this in Hadoop 2, since user code may be relying on the existing FileStatus > serialization there.
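The round-trip contract discussed above can be sketched with a minimal self-contained stand-in class. Note {{SimpleStatus}} and its fields are illustrative inventions for this sketch, not Hadoop's actual {{FileStatus}} API; the point is only that readFields(write(status)) must reproduce the original, which subclasses break when they add fields but don't override the Writable methods.

```java
import java.io.*;

// Hypothetical minimal Writable-style class (NOT Hadoop's FileStatus).
class SimpleStatus {
    String path;
    long length;

    // Serialize fields in a fixed order.
    void write(DataOutput out) throws IOException {
        out.writeUTF(path);
        out.writeLong(length);
    }

    // Deserialize in the same order; subclasses that add fields but
    // don't override both methods break the round-trip contract.
    void readFields(DataInput in) throws IOException {
        path = in.readUTF();
        length = in.readLong();
    }
}

public class RoundTrip {
    // The contract under discussion: read(status.write) == original status.
    static SimpleStatus roundTrip(SimpleStatus s) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        s.write(new DataOutputStream(buf));
        SimpleStatus copy = new SimpleStatus();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        return copy;
    }

    public static void main(String[] args) throws IOException {
        SimpleStatus s = new SimpleStatus();
        s.path = "/ec/test1";
        s.length = 65536L;
        SimpleStatus copy = roundTrip(s);
        if (!copy.path.equals(s.path) || copy.length != s.length) {
            throw new AssertionError("round trip broke the contract");
        }
        System.out.println("round trip ok");
    }
}
```

A protobuf-based serializer avoids this trap because unknown fields are skipped or preserved by the wire format itself, rather than depending on every subclass overriding two methods consistently.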
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928859#comment-15928859 ] Andrew Wang commented on HDFS-11431: The other note is that adding a dependency on hadoop-hdfs, even with deps excluded, means that we fail to achieve the very purpose of the hadoop-hdfs-client refactor. The current fix thus falls in the "hack" category, and I'd rather we not default to carrying it forward to future branch-2 releases. > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS.
[jira] [Commented] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928851#comment-15928851 ] Xiaobing Zhou commented on HDFS-10394: -- Don't know what happened to the previous build. I posted the v1 patch to trigger a new build. > move declaration of okhttp version from hdfs-client to hadoop-project POM > - > > Key: HDFS-10394 > URL: https://issues.apache.org/jira/browse/HDFS-10394 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-10394.000.patch, HDFS-10394.001.patch > > > The POM dependency on okhttp in hadoop-hdfs-client declares its version in > that POM instead. > The root declaration, including version, must go into the > hadoop-project/pom.xml so that it's easy to track usage and there is only one > place to change if this version were ever to be incremented. As it stands, if > any other module picked up the library, they could adopt a different version.
[jira] [Commented] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928847#comment-15928847 ] Arpit Agarwal commented on HDFS-10394: -- You don't need to declare the okhttp version in hadoop-hdfs-project/hadoop-hdfs-client/pom.xml (just leave out the version field). Also since okhttp.version is used in just one place you can eliminate the variable and use the version string directly in the hadoop-project/pom.xml dependency section.
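The suggestion above is the standard Maven dependencyManagement pattern: the parent POM pins the version once, and child modules declare the dependency without a version element. A minimal sketch, assuming okhttp 2.x coordinates (the groupId and version shown are illustrative, not necessarily what the patch uses):

```xml
<!-- hadoop-project/pom.xml: single authoritative version (value illustrative) -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.squareup.okhttp</groupId>
      <artifactId>okhttp</artifactId>
      <version>2.4.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

```xml
<!-- hadoop-hdfs-project/hadoop-hdfs-client/pom.xml: no version element,
     so the version is inherited from the parent's dependencyManagement -->
<dependency>
  <groupId>com.squareup.okhttp</groupId>
  <artifactId>okhttp</artifactId>
</dependency>
```

With this layout, any other module that later picks up okhttp automatically inherits the same version, which is exactly the drift the issue description warns about.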
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928831#comment-15928831 ] Andrew Wang commented on HDFS-11431: Why not try and fix it properly for 2.9? It's marked as a blocker for 3.0.0-alpha3, which is likely coming out before 2.9.0. I think it also makes the tracking easier, since otherwise the fix versions don't reflect where the code is.
[jira] [Updated] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10394: - Attachment: HDFS-10394.001.patch
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928808#comment-15928808 ] Junping Du commented on HDFS-11431: --- Reverted.
[jira] [Commented] (HDFS-11502) dn.js set datanode UI to window.location.hostname, it should use jmx bean property to setup hostname
[ https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928772#comment-15928772 ] Jeffrey E Rodriguez commented on HDFS-11502: - checking the TestHASafeMode failure > dn.js set datanode UI to window.location.hostname, it should use jmx bean > property to setup hostname > > > Key: HDFS-11502 > URL: https://issues.apache.org/jira/browse/HDFS-11502 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.7.2, 2.7.3 > Environment: all >Reporter: Jeffrey E Rodriguez >Assignee: Jeffrey E Rodriguez > Attachments: HDFS-11502.001.patch, HDFS-11502.002.patch, > HDFS-11502.003.patch > > > Datanode UI calls "dn.js" which loads properties for the datanode. "dn.js" > sets "data.dn.HostName" in the datanode UI to "window.location.hostname"; it > should use a datanode property from the jmx beans or an appropriate property. > The issue is that if we use a proxy to access the datanode UI we would show > the proxy hostname instead of the actual datanode hostname. > I am proposing using the "Hadoop:service=DataNode,name=JvmMetrics" > tag.Hostname field to do that.
[jira] [Commented] (HDFS-11517) Expose slow disks via DataNode JMX
[ https://issues.apache.org/jira/browse/HDFS-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928739#comment-15928739 ] Arpit Agarwal commented on HDFS-11517: -- The v2 patch looks good. One minor suggestion, DataNode#getSlowDisks should be package private. +1 otherwise. > Expose slow disks via DataNode JMX > -- > > Key: HDFS-11517 > URL: https://issues.apache.org/jira/browse/HDFS-11517 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11517.000.patch, HDFS-11517.001.patch > > > HDFS-11461 introduces slow disk detection. We can expose these findings > through JMX for visibility. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
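Exposing a value like slow-disk stats over JMX follows the standard MXBean pattern: implement an interface whose name ends in {{MXBean}}, register it with the platform MBean server, and clients read the attribute by name. The sketch below is a self-contained toy, not Hadoop's actual {{DataNodeMXBean}} interface; the class names, ObjectName, and the JSON payload are all hypothetical.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class SlowDiskJmxDemo {
    // MXBean naming convention: interface name = implementation name + "MXBean".
    public interface DiskStatsMXBean {
        String getSlowDisks();
    }

    public static class DiskStats implements DiskStatsMXBean {
        private volatile String slowDisks = "{}";
        public void setSlowDisks(String json) { slowDisks = json; }
        @Override public String getSlowDisks() { return slowDisks; }
    }

    // Register the bean, update it, and read the attribute back as a
    // monitoring client (e.g. jconsole or a JMX scraper) would.
    public static String registerAndRead() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("Demo:service=DataNode,name=DiskStats");
        DiskStats stats = new DiskStats();
        server.registerMBean(stats, name);
        stats.setSlowDisks("{\"disk1\":12.5}");
        try {
            // Attribute name "SlowDisks" is derived from getSlowDisks().
            return (String) server.getAttribute(name, "SlowDisks");
        } finally {
            server.unregisterMBean(name);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(registerAndRead());
    }
}
```

Only the getter is part of the management interface here, so the attribute is read-only to JMX clients while the component itself can update it; that matches the "expose findings for visibility" intent of the issue.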
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928731#comment-15928731 ] Hudson commented on HDFS-11431: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11416 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11416/]) Revert "HDFS-11431. hadoop-hdfs-client JAR does not include (stevel: rev 79ede403eed49f77e3f0e4b103fc8619cac67168) * (edit) hadoop-client-modules/hadoop-client/pom.xml
[jira] [Commented] (HDFS-11533) reuseAddress option should be used for child channels in Portmap and SimpleTcpServer
[ https://issues.apache.org/jira/browse/HDFS-11533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928732#comment-15928732 ] Hudson commented on HDFS-11533: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11416 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11416/]) HDFS-11533. reuseAddress option should be used for child channels in (jitendra: rev 09ad8effb825eddbf0ee2ef591a0d16a58468f56) * (edit) hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleTcpServer.java * (edit) hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/Portmap.java > reuseAddress option should be used for child channels in Portmap and > SimpleTcpServer > > > Key: HDFS-11533 > URL: https://issues.apache.org/jira/browse/HDFS-11533 > Project: Hadoop HDFS > Issue Type: Bug > Components: nfs >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HDFS-11533.001.patch > > > Bind can fail in SimpleTcpServer because the reuseAddress option is not used > for child channels. Binding to the port for child channels can fail because > this option is currently only set for the parent channel. > This option is not needed in SimpleUdpServer and Portmap(udpServer) > as they use ConnectionlessBootstrap, where the child option is not needed: > https://docs.jboss.org/netty/3.2/api/org/jboss/netty/bootstrap/ConnectionlessBootstrap.html > However, Portmap(tcpServer) and SimpleTcpServer > use ServerBootstrap, where the child option is needed: > https://docs.jboss.org/netty/3.2/api/org/jboss/netty/bootstrap/ServerBootstrap.html
{code:java}
Failed to start the TCP server.
org.jboss.netty.channel.ChannelException: Failed to bind to: 0.0.0.0/0.0.0.0:4242
	at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
	at org.apache.hadoop.oncrpc.SimpleTcpServer.run(SimpleTcpServer.java:87)
	at org.apache.hadoop.mount.MountdBase.startTCPServer(MountdBase.java:83)
	at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:98)
	at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
	at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
	at org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:71)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
Caused by: java.net.BindException: Address already in use
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:433)
	at sun.nio.ch.Net.bind(Net.java:425)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
	at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
{code}
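For background on the option being discussed: SO_REUSEADDR is the socket-level setting behind Netty 3's reuseAddress/child.reuseAddress bootstrap options. The JDK-level sketch below only illustrates what the flag does on a plain ServerSocket (it must be set before bind, and it permits rebinding while old connections linger in TIME_WAIT); it is not the Netty ServerBootstrap code the patch actually touches.

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class ReuseAddressDemo {
    // Bind a server socket with SO_REUSEADDR enabled; returns the option's
    // effective value. Port 0 picks an ephemeral port.
    public static boolean bindWithReuse(int port) throws Exception {
        ServerSocket socket = new ServerSocket(); // unbound
        // Must be set BEFORE bind(); after binding it has no effect.
        socket.setReuseAddress(true);
        socket.bind(new InetSocketAddress(port));
        boolean reuse = socket.getReuseAddress();
        socket.close();
        return reuse;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(bindWithReuse(0) ? "SO_REUSEADDR set" : "not set");
    }
}
```

In Netty 3 terms, setting the option only on the parent bootstrap (e.g. "reuseAddress") configures the accepting socket; the "child." prefix is what propagates it to accepted channels, which is the gap this issue fixes for ServerBootstrap-based servers.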
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928720#comment-15928720 ] Junping Du commented on HDFS-11431: --- Hi [~andrew.wang], do you have any more concerns about the patch here landing on branch-2? If not, I will revert the previous revert on branch-2.
[jira] [Created] (HDFS-11539) Block Storage : configurable max cache size
Chen Liang created HDFS-11539: - Summary: Block Storage : configurable max cache size Key: HDFS-11539 URL: https://issues.apache.org/jira/browse/HDFS-11539 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Chen Liang Currently, there is no max size limit for CBlock's local cache. In theory, this means the cache can grow without bound. We should make the max size configurable.
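A size-bounded cache of the kind this issue asks for can be sketched with a plain access-ordered LinkedHashMap that evicts its least-recently-used entry once a configured maximum is exceeded. The class name and any config wiring (e.g. reading the limit from a configuration key) are illustrative, not CBlock's actual implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical bounded LRU cache: once size() exceeds maxEntries,
// the least-recently-accessed entry is evicted on the next put().
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        // accessOrder=true: iteration order is least- to most-recently used.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

In practice maxEntries would come from configuration, e.g. a hypothetical key like "dfs.cblock.cache.max.entries", so the bound is tunable per deployment rather than hard-coded.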
[jira] [Commented] (HDFS-11538) Move ConfiguredFailoverProxyProvider into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928708#comment-15928708 ] Junping Du commented on HDFS-11538: --- Drop 2.9 as HDFS-11431 works well for branch-2. > Move ConfiguredFailoverProxyProvider into hadoop-hdfs-client > > > Key: HDFS-11538 > URL: https://issues.apache.org/jira/browse/HDFS-11538 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Priority: Blocker > > Follow-up for HDFS-11431. We should move this missing class over rather than > pulling in the whole hadoop-hdfs dependency. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11533) reuseAddress option should be used for child channels in Portmap and SimpleTcpServer
[ https://issues.apache.org/jira/browse/HDFS-11533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928710#comment-15928710 ] Jitendra Nath Pandey commented on HDFS-11533: - I have committed this to trunk and branch-2. Thanks to [~msingh].
[jira] [Updated] (HDFS-11533) reuseAddress option should be used for child channels in Portmap and SimpleTcpServer
[ https://issues.apache.org/jira/browse/HDFS-11533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HDFS-11533: Resolution: Fixed Fix Version/s: 3.0.0-alpha3 2.9.0 Status: Resolved (was: Patch Available)
[jira] [Updated] (HDFS-11538) Move ConfiguredFailoverProxyProvider into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HDFS-11538: -- Target Version/s: 3.0.0-alpha3 (was: 2.9.0, 3.0.0-alpha3) > Move ConfiguredFailoverProxyProvider into hadoop-hdfs-client > > > Key: HDFS-11538 > URL: https://issues.apache.org/jira/browse/HDFS-11538 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Priority: Blocker > > Follow-up for HDFS-11431. We should move this missing class over rather than > pulling in the whole hadoop-hdfs dependency. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928707#comment-15928707 ] Junping Du commented on HDFS-11431: --- Branch-2 works well. HDFS-11538 should only be for 3.0.
[jira] [Updated] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy
[ https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-10530: -- Attachment: HDFS-10530.5.patch Thanks for the detailed review comments [~tasanuma0829] and [~andrew.wang]. Much appreciated. Attaching v5 patch with comments addressed. Please take a look. bq. It would be more readable if the names of the additional DNs are different from the first DNs. Sure, modified the same rack hosts as per your suggestion. bq. We can use DFSTestUtil.waitForReplication instead of the GenericTestUtils.waitFor. Good idea. Replaced GenericTestUtils.waitFor() with DFSTestUtil.waitForReplication. bq. Are these necessarily the parity blocks, or could they be any of the blocks that are co-located on the first 6 racks? DFSStripedOutputStream verifies that the number of allocated block locations is at least numDataBlocks; otherwise it throws an IOException and the client halts. So, the relaxation is only for the parity blocks. {code} [Thread-5] WARN hdfs.DFSOutputStream (DFSStripedOutputStream.java:allocateNewBlock(497)) - Failed to get block location for parity block, index=6 [Thread-5] WARN hdfs.DFSOutputStream (DFSStripedOutputStream.java:allocateNewBlock(497)) - Failed to get block location for parity block, index=7 [Thread-5] WARN hdfs.DFSOutputStream (DFSStripedOutputStream.java:allocateNewBlock(497)) - Failed to get block location for parity block, index=8 {code} So, upon file stream close we get the following warning message (though not accurate) when the parity blocks are not yet written out.
{code} INFO namenode.FSNamesystem (FSNamesystem.java:checkBlocksComplete(2726)) - BLOCK* blk_-9223372036854775792_1002 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 6) in file /ec/test1 INFO hdfs.StateChange (FSNamesystem.java:completeFile(2679)) - DIR* completeFile: /ec/test1 is closed by DFSClient_NONMAPREDUCE_-1900076771_17 WARN hdfs.DFSOutputStream (DFSStripedOutputStream.java:logCorruptBlocks(1117)) - Block group <1> has 3 corrupt blocks. It's at high risk of losing data. {code} bq. Also, does this happen via EC reconstruction, or do we simply copy the blocks over to the new racks? Upon addition of 3 new hosts to the existing racks, and after the heartbeat, we get a follow-up command {{DNA_ERASURE_CODING_RECOVERY}}, and I see the following, which looks like a copy of blocks from existing datanodes. {code} INFO datanode.DataNode (DataXceiver.java:writeBlock(717)) - Receiving BP-1357293931-172.16.3.66-1489688993295:blk_-9223372036854775786_1002 src: /127.0.0.1:63711 dest: /127.0.0.1:63701 INFO datanode.DataNode (DataXceiver.java:writeBlock(717)) - Receiving BP-1357293931-172.16.3.66-1489688993295:blk_-9223372036854775785_1002 src: /127.0.0.1:63712 dest: /127.0.0.1:63697 INFO datanode.DataNode (DataXceiver.java:writeBlock(717)) - Receiving BP-1357293931-172.16.3.66-1489688993295:blk_-9223372036854775784_1002 src: /127.0.0.1:63713 dest: /127.0.0.1:63693 INFO datanode.DataNode (DataXceiver.java:writeBlock(893)) - Received BP-1357293931-172.16.3.66-1489688993295:blk_-9223372036854775786_1002 src: /127.0.0.1:63711 dest: /127.0.0.1:63701 of size 65536 INFO datanode.DataNode (DataXceiver.java:writeBlock(893)) - Received BP-1357293931-172.16.3.66-1489688993295:blk_-9223372036854775785_1002 src: /127.0.0.1:63712 dest: /127.0.0.1:63697 of size 65536 {code} bq. Is the BPP violated before entering the waitFor? If so we should assert that. This may require pausing reconstruction work and resuming later.
BPP is not violated before or after the addition of 3 new hosts in the existing racks, as there are only 6 racks, which is fewer than the optimal 9 racks. One more assert after waitFor() is added now. bq. Do you think TestBPPRackFaultTolerant needs any additional unit tests along these lines? Sure, will discuss with you on this. bq. Looks like these have the same names as the initial DNs as Takanobu noted. Might be nice to specify the racks too to be explicit. Done. bq. If we later enhance the NN to automatically fix up misplaced EC blocks, this assert will be flaky. Maybe add a comment? That's right; my intention is to verify the other proposed fix of automatic correction for misplaced EC blocks via this test. Sure, added a comment on this verification and a TODO. > BlockManager reconstruction work scheduling should correctly adhere to EC > block placement policy > > > Key: HDFS-10530 > URL: https://issues.apache.org/jira/browse/HDFS-10530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Rui Gao >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-nice-to-have > Attachments:
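The relaxation discussed above (a striped write must obtain locations for all of its data blocks, while missing parity locations only produce a warning) can be sketched roughly as follows. The class, method, and parameter names are illustrative stand-ins, not the actual DFSStripedOutputStream code:

```java
public class StripedAllocationSketch {
    // Illustrative stand-in for the allocation check discussed above: a
    // striped write must get locations for every data block; parity blocks
    // that cannot be placed are skipped with a warning instead of failing
    // the whole write. Returns the number of locations actually placed.
    static int allocate(int availableLocations, int numDataBlocks,
                        int numParityBlocks) {
        if (availableLocations < numDataBlocks) {
            // mirrors the exception that halts the client
            throw new IllegalStateException("Not enough locations for data blocks");
        }
        int total = numDataBlocks + numParityBlocks;
        int placed = Math.min(availableLocations, total);
        for (int i = placed; i < total; i++) {
            System.err.println("Failed to get block location for parity block, index=" + i);
        }
        return placed;
    }

    public static void main(String[] args) {
        // RS-6-3 layout on a 6-rack cluster: the 6 data blocks fit,
        // the 3 parity blocks are relaxed with warnings
        System.out.println(allocate(6, 6, 3)); // 6
    }
}
```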
[jira] [Resolved] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HDFS-11431. Resolution: Fixed Let's close this as fixed only in branch-2.8.0 / branch-2.8. I also reverted this from branch-2. Filed HDFS-11538 to do the real fix for 2.9 and 3.0. > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11538) Move ConfiguredFailoverProxyProvider into hadoop-hdfs-client
Andrew Wang created HDFS-11538: -- Summary: Move ConfiguredFailoverProxyProvider into hadoop-hdfs-client Key: HDFS-11538 URL: https://issues.apache.org/jira/browse/HDFS-11538 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 3.0.0-alpha1, 2.8.0 Reporter: Andrew Wang Priority: Blocker Follow-up for HDFS-11431. We should move this missing class over rather than pulling in the whole hadoop-hdfs dependency. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11431: --- Fix Version/s: (was: 3.0.0-alpha3) > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928684#comment-15928684 ] Steve Loughran commented on HDFS-11431: --- Rolled back from trunk, re-opened. > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10675) Datanode support to read from external stores.
[ https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-10675: -- Status: Patch Available (was: Open) > Datanode support to read from external stores. > --- > > Key: HDFS-10675 > URL: https://issues.apache.org/jira/browse/HDFS-10675 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti > Attachments: HDFS-10675-HDFS-9806.001.patch, > HDFS-10675-HDFS-9806.002.patch, HDFS-10675-HDFS-9806.003.patch, > HDFS-10675-HDFS-9806.004.patch, HDFS-10675-HDFS-9806.005.patch > > > This JIRA introduces a new {{PROVIDED}} {{StorageType}} to represent external > stores, along with enabling the Datanode to read from such stores using a > {{ProvidedReplica}} and a {{ProvidedVolume}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Reopened] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reopened HDFS-11431: --- > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928658#comment-15928658 ] Steve Loughran commented on HDFS-11431: --- Uh, I only did this on branch-2.x, and I run trunk with -DskipShading as I value my time. How about I revert from trunk for now? I am not seeing problems with Maven builds on 2.8. > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-10675) Datanode support to read from external stores.
[ https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti reassigned HDFS-10675: - Assignee: Virajith Jalaparti > Datanode support to read from external stores. > --- > > Key: HDFS-10675 > URL: https://issues.apache.org/jira/browse/HDFS-10675 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-10675-HDFS-9806.001.patch, > HDFS-10675-HDFS-9806.002.patch, HDFS-10675-HDFS-9806.003.patch, > HDFS-10675-HDFS-9806.004.patch, HDFS-10675-HDFS-9806.005.patch > > > This JIRA introduces a new {{PROVIDED}} {{StorageType}} to represent external > stores, along with enabling the Datanode to read from such stores using a > {{ProvidedReplica}} and a {{ProvidedVolume}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10675) Datanode support to read from external stores.
[ https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-10675: -- Attachment: HDFS-10675-HDFS-9806.005.patch Updated patch that fixes a failing check ({{StorageLocation#check}}) for PROVIDED locations. > Datanode support to read from external stores. > --- > > Key: HDFS-10675 > URL: https://issues.apache.org/jira/browse/HDFS-10675 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti > Attachments: HDFS-10675-HDFS-9806.001.patch, > HDFS-10675-HDFS-9806.002.patch, HDFS-10675-HDFS-9806.003.patch, > HDFS-10675-HDFS-9806.004.patch, HDFS-10675-HDFS-9806.005.patch > > > This JIRA introduces a new {{PROVIDED}} {{StorageType}} to represent external > stores, along with enabling the Datanode to read from such stores using a > {{ProvidedReplica}} and a {{ProvidedVolume}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11502) dn.js set datanode UI to window.location.hostname, it should use jmx bean property to setup hostname
[ https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928640#comment-15928640 ] Hadoop QA commented on HDFS-11502: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 13m 46s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} 
the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 9 new + 175 unchanged - 0 fixed = 184 total (was 175) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 3s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 95m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHASafeMode | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11502 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12859133/HDFS-11502.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux a70707a89f63 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ba62b50 | | Default Java | 1.8.0_121 | | mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/18740/artifact/patchprocess/branch-mvninstall-root.txt | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/18740/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18740/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18740/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18740/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928613#comment-15928613 ] Junping Du commented on HDFS-11431: --- bq. Did this run against trunk precommit? I don't think so. The patch should only have run against branch-2.8, given its name. I verified that branch-2 and branch-2.8 are running well. Maybe we should revert the patch from trunk and file a separate JIRA to track the trunk effort, given that the fixes for trunk and branch-2 should be significantly different. > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928603#comment-15928603 ] Anu Engineer commented on HDFS-11431: - [~kshukla] I ran into this problem while doing a merge with the Ozone branch. [~busbey] was kind enough to explain the issue to me. I still haven't fixed it though. Here is the JIRA tracking that issue: https://issues.apache.org/jira/browse/HDFS-11496 > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928561#comment-15928561 ] Kuhu Shukla edited comment on HDFS-11431 at 3/16/17 6:22 PM: - mvn install is breaking for me with the error that duplicate classes were found while installing "Apache Hadoop Client Packaging Invariants for Test" after this check-in. Let me know if I am missing something here. Thanks a lot! {code} [INFO] Compiling 1 source file to /home/jenkins/jenkins-slave/workspace/Hadoop-trunk-Commit/source/hadoop-client-modules/hadoop-client-integration-tests/target/test-classes [WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses failed with message: Duplicate classes found: {code} https://builds.apache.org/job/Hadoop-trunk-Commit/11414/console was (Author: kshukla): mvn install is breaking for me with the error that duplicate classes were found while installing "Apache Hadoop Client Packaging Invariants for Test" after this check-in. Let me know if I am missing something here. Thanks a lot! {code} [INFO] Compiling 1 source file to /home/jenkins/jenkins-slave/workspace/Hadoop-trunk-Commit/source/hadoop-client-modules/hadoop-client-integration-tests/target/test-classes [WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses failed with message: Duplicate classes found: {code} > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. 
This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11533) reuseAddress option should be used for child channels in Portmap and SimpleTcpServer
[ https://issues.apache.org/jira/browse/HDFS-11533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928573#comment-15928573 ] Jitendra Nath Pandey commented on HDFS-11533: - +1 > reuseAddress option should be used for child channels in Portmap and > SimpleTcpServer > > > Key: HDFS-11533 > URL: https://issues.apache.org/jira/browse/HDFS-11533 > Project: Hadoop HDFS > Issue Type: Bug > Components: nfs >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Attachments: HDFS-11533.001.patch > > > Bind can fail in SimpleTcpServer because the reuseAddress option is not used for > child channels. Binding to the port for child channels can fail because this > option is currently only set for the parent channel. > This option is not needed in SimpleUdpServer, Portmap(udpServer) > as they use ConnectionlessBootstrap, where the child option is not needed > https://docs.jboss.org/netty/3.2/api/org/jboss/netty/bootstrap/ConnectionlessBootstrap.html > However Portmap(tcpServer) and SimpleTcpServer > use ServerBootstrap, where the child option is needed > https://docs.jboss.org/netty/3.2/api/org/jboss/netty/bootstrap/ServerBootstrap.html > {code:java} > Failed to start the TCP server.\norg.jboss.netty.channel.ChannelException: > Failed to bind to: 0.0.0.0/0.0.0.0:4242\n\tat > org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)\n\tat > org.apache.hadoop.oncrpc.SimpleTcpServer.run(SimpleTcpServer.java:87)\n\tat > org.apache.hadoop.mount.MountdBase.startTCPServer(MountdBase.java:83)\n\tat > org.apache.hadoop.mount.MountdBase.start(MountdBase.java:98)\n\tat > org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)\n\tat > org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)\n\tat > org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:71)\n\tat > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat > 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat > > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat > java.lang.reflect.Method.invoke(Method.java:498)\n\tat > org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)\nCaused > by: java.net.BindException: Address already in use\n\tat > sun.nio.ch.Net.bind0(Native Method)\n\tat > sun.nio.ch.Net.bind(Net.java:433)\n\tat > sun.nio.ch.Net.bind(Net.java:425)\n\tat > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)\n\tat > sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)\n\tat > org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)\n\tat > > org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)\n\tat > > org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)\n\tat > > org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)\n\tat > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat > > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat > java.lang.Thread.run(Thread.java:745)\n > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
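For background on the option discussed in this issue, the effect of SO_REUSEADDR can be demonstrated with plain java.net sockets. This is only an illustration of the socket option, not the Netty-based server code from the patch; in Netty 3's ServerBootstrap, options for accepted child channels are set with the "child." prefix, e.g. setOption("child.reuseAddress", true), while the parent listening channel uses setOption("reuseAddress", true).

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class ReuseAddressDemo {
    // Bind with SO_REUSEADDR, close, and immediately rebind the same port.
    // Without the option, a rebind can fail with "Address already in use"
    // while an old socket on that port still lingers (e.g. in TIME_WAIT).
    static boolean rebindWorks() {
        try {
            int port;
            try (ServerSocket first = new ServerSocket()) {
                first.setReuseAddress(true);          // must be set before bind()
                first.bind(new InetSocketAddress(0)); // pick a free ephemeral port
                port = first.getLocalPort();
            }
            try (ServerSocket second = new ServerSocket()) {
                second.setReuseAddress(true);
                second.bind(new InetSocketAddress(port));
                return second.getLocalPort() == port;
            }
        } catch (IOException e) {
            return false; // bind failed, e.g. address already in use
        }
    }

    public static void main(String[] args) {
        System.out.println(rebindWorks()); // true
    }
}
```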
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928569#comment-15928569 ] Andrew Wang commented on HDFS-11431: Did this run against trunk precommit? Sounds like this broke the shaded client. > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-6200) Create a separate jar for hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-6200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928568#comment-15928568 ] Junping Du commented on HDFS-6200: -- Due to HDFS-11431, I just updated the release note here by adding "Please note that hadoop-hdfs-client module could miss class like ConfiguredFailoverProxyProvider. So if a cluster is in HA deployment, we should still use hadoop-hdfs instead." HDFS folks, please check whether this note is appropriate. Thanks! > Create a separate jar for hdfs-client > - > > Key: HDFS-6200 > URL: https://issues.apache.org/jira/browse/HDFS-6200 > Project: Hadoop HDFS > Issue Type: Improvement > Components: build >Reporter: Haohui Mai >Assignee: Haohui Mai > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-6200.000.patch, HDFS-6200.001.patch, > HDFS-6200.002.patch, HDFS-6200.003.patch, HDFS-6200.004.patch, > HDFS-6200.005.patch, HDFS-6200.006.patch, HDFS-6200.007.patch > > > Currently the hadoop-hdfs jar contains both the hdfs server and the hdfs > client. As discussed in the hdfs-dev mailing list > (http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201404.mbox/browser), > downstream projects are forced to bring in an additional dependency in order to > access hdfs. The additional dependency sometimes can be difficult to manage > for projects like Apache Falcon and Apache Oozie. > This jira proposes to create a new project, hadoop-hdfs-client, which > contains the client side of the hdfs code. Downstream projects can use this > jar instead of the hadoop-hdfs to avoid unnecessary dependency. > Note that it does not break the compatibility of downstream projects. This is > because old downstream projects implicitly depend on hadoop-hdfs-client > through the hadoop-hdfs jar. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11535) Performance analysis of new DFSNetworkTopology#chooseRandom
[ https://issues.apache.org/jira/browse/HDFS-11535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928566#comment-15928566 ] Chen Liang commented on HDFS-11535: --- Thanks [~arpitagarwal] and [~linyiqun] for the comments! Just to make sure we are on the same page: bq. If there are 99% storage type X and only 1% for storage type Y, actually here we should use the old method. If we only search for X, this is true, but if we were searching for Y, the old method would be exceptionally slow. Based on this, I think your point of searching based on percentage is actually a very good proposal. bq. In some special cases, one node will not just contain one storage type. Maybe it will have two or more different storage types. Based on this, the old method will also be better than the new method no matter how many of the target storage type the cluster has. As long as one node contains the target storage type, it can be quickly chosen. I'm not sure I understood this scenario. Also, the information on the inner nodes has nothing to do with the actual number of storages; it is the number of datanodes with that storage type. Additionally, an alternative approach I thought about was to do this "use old or new method?" check on every inner node: simply replacing "...check root node to..." with "...check current inner node to..." in my original proposal. For example, in your X and Y example, say we look for X and we decide to use the new method at the root, because there are two types, X and Y. Then we pick a random child node, check again, and find that this child node only has X. Then we simply call the old method and return. I think this is probably closest to optimal, but it adds more complexity to the already fairly complex code logic. I personally think your threshold-based proposal is good enough. Will address the other comments about the unit test later on. 
(I will probably remove the writeToDisk calls because the data files themselves are barely useful without additional parsing). > Performance analysis of new DFSNetworkTopology#chooseRandom > --- > > Key: HDFS-11535 > URL: https://issues.apache.org/jira/browse/HDFS-11535 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11535.001.patch, PerfTest.pdf > > > This JIRA is created to post the results of some performance experiments we > did. For those who are interested, please see the attached .pdf file for more > detail. The attached patch file includes the experiment code we ran. > The key insight we got from these tests is that although *the new method > outperforms the current one in most cases*, there is still *one case where the current one is better*: when there is only one storage type in > the cluster, and we also always look for this storage type. In this case, it > is simply a waste of time to perform storage-type-based pruning; blindly > picking a random node (the current method) would suffice. > Therefore, based on the analysis, we propose to use a *combination of both > the old and the new methods*: > say we search for a node of type X. Since inner nodes now all keep storage > type info, we can *just check the root node to see if X is the only type it has*. > If yes, blindly picking a random leaf will work, so we simply call the old > method; otherwise we call the new method. > There is still at least one missing piece in this performance test, which is > garbage collection. The new method does a few more object creations when doing > the search, which adds overhead to GC. I'm still thinking of potential > optimizations, but this seems tricky, and I'm also not sure whether this > optimization is worth doing at all. Please feel free to leave any > comments/suggestions. > Thanks [~arpitagarwal] and [~szetszwo] for the offline discussion. 
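The hybrid selection proposed above (check the root's storage-type info first, and fall back to a blind uniform pick when only the wanted type exists) can be sketched as follows. This is a simplified, hypothetical model with a flat node list and one storage type per node, not the actual DFSNetworkTopology code:

```java
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.stream.Collectors;

/**
 * Illustrative sketch of the proposed hybrid chooseRandom; the class and
 * method names here are hypothetical, not the real DFSNetworkTopology API.
 */
class HybridChooser {
    private static final Random RAND = new Random();

    /**
     * typeCounts: per-storage-type datanode counts kept on the root inner
     * node; nodeType: each leaf's (single, for simplicity) storage type.
     */
    static String chooseRandom(Map<String, Integer> typeCounts,
                               List<String> nodes,
                               Map<String, String> nodeType,
                               String wantedType) {
        // Old method suffices when the wanted type is the only type in the
        // subtree: every leaf qualifies, so pick blindly and uniformly.
        if (typeCounts.size() == 1 && typeCounts.containsKey(wantedType)) {
            return nodes.get(RAND.nextInt(nodes.size()));
        }
        // New method: prune by storage type first, then pick uniformly
        // among the leaves that actually carry the wanted type.
        List<String> matching = nodes.stream()
                .filter(n -> wantedType.equals(nodeType.get(n)))
                .collect(Collectors.toList());
        return matching.isEmpty() ? null
                : matching.get(RAND.nextInt(matching.size()));
    }
}
```

The per-inner-node variant discussed in the comment would amount to repeating the single-type check at each subtree while descending, trading extra code complexity for a choice that is closer to optimal at every level.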
[jira] [Updated] (HDFS-6200) Create a separate jar for hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-6200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HDFS-6200: - Release Note: Projects that access HDFS can depend on the hadoop-hdfs-client module instead of the hadoop-hdfs module to avoid pulling in unnecessary dependencies. Please note that the hadoop-hdfs-client module may be missing classes such as ConfiguredFailoverProxyProvider. So if a cluster is in an HA deployment, we should still use hadoop-hdfs instead. was:Projects that access HDFS can depend on the hadoop-hdfs-client module instead of the hadoop-hdfs module to avoid pulling in unnecessary dependency. > Create a separate jar for hdfs-client > - > > Key: HDFS-6200 > URL: https://issues.apache.org/jira/browse/HDFS-6200 > Project: Hadoop HDFS > Issue Type: Improvement > Components: build >Reporter: Haohui Mai >Assignee: Haohui Mai > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-6200.000.patch, HDFS-6200.001.patch, > HDFS-6200.002.patch, HDFS-6200.003.patch, HDFS-6200.004.patch, > HDFS-6200.005.patch, HDFS-6200.006.patch, HDFS-6200.007.patch > > > Currently the hadoop-hdfs jar contains both the hdfs server and the hdfs > client. As discussed in the hdfs-dev mailing list > (http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201404.mbox/browser), > downstream projects are forced to bring in additional dependencies in order to > access hdfs. The additional dependencies can sometimes be difficult to manage > for projects like Apache Falcon and Apache Oozie. > This jira proposes to create a new project, hadoop-hdfs-client, which > contains the client side of the hdfs code. Downstream projects can use this > jar instead of hadoop-hdfs to avoid unnecessary dependencies. > Note that this does not break the compatibility of downstream projects, because > old downstream projects implicitly depend on hadoop-hdfs-client > through the hadoop-hdfs jar. 
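For downstream projects, the switch described in this release note is a one-line Maven change. The snippet below is a sketch; the version shown is illustrative, and, per the caveat above, HA deployments that need ConfiguredFailoverProxyProvider should keep depending on hadoop-hdfs:

```xml
<!-- Lighter client-only dependency; sufficient for non-HA HDFS access. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs-client</artifactId>
  <version>3.0.0-alpha1</version>
</dependency>
```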
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928561#comment-15928561 ] Kuhu Shukla commented on HDFS-11431: mvn install is breaking for me with the error that duplicate classes were found while installing "Apache Hadoop Client Packaging Invariants for Test" after this check-in. Let me know if I am missing something here. Thanks a lot! {code} [INFO] Compiling 1 source file to /home/jenkins/jenkins-slave/workspace/Hadoop-trunk-Commit/source/hadoop-client-modules/hadoop-client-integration-tests/target/test-classes [WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses failed with message: Duplicate classes found: {code} > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11531) Expose hedged read metrics via libHDFS API
[ https://issues.apache.org/jira/browse/HDFS-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928557#comment-15928557 ] Hadoop QA commented on HDFS-11531: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 14m 51s{color} | {color:red} root in trunk failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 42s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 17 unchanged - 0 fixed = 18 total (was 17) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11531 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12858998/HDFS-11531.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 01ffa7430d96 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool |
[jira] [Commented] (HDFS-11516) Admin command line should print message to stderr in failure case
[ https://issues.apache.org/jira/browse/HDFS-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928502#comment-15928502 ] Hadoop QA commented on HDFS-11516: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 55s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 13m 42s{color} | {color:red} root in trunk failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 59s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 6s{color} | {color:orange} root: The patch generated 2 new + 72 unchanged - 1 fixed = 74 total (was 73) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 6s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 51s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}144m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | Timed out junit tests | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11516 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12859116/HDFS-11516.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 16e16461f4d4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7114bad | | Default Java | 1.8.0_121 | | mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/18739/artifact/patchprocess/branch-mvninstall-root.txt | | findbugs | v3.0.0 | | checkstyle |
[jira] [Assigned] (HDFS-11529) libHDFS still does not return appropriate error information in many cases
[ https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sailesh Mukil reassigned HDFS-11529: Assignee: Sailesh Mukil > libHDFS still does not return appropriate error information in many cases > - > > Key: HDFS-11529 > URL: https://issues.apache.org/jira/browse/HDFS-11529 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs >Affects Versions: 2.6.0 >Reporter: Sailesh Mukil >Assignee: Sailesh Mukil >Priority: Critical > Labels: errorhandling, libhdfs > > libHDFS compares exceptions against a table and returns a > corresponding error code to the application in case of an error. > However, this table is populated manually and is often forgotten > when new exceptions are added. > This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever > these exceptions are hit. These are some examples of exceptions that have > been observed with Error(255): > org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not > supported in state standby) > java.io.EOFException: Cannot seek after EOF > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt) > It is of course not possible to have an error code for each and every type of > exception, so one way to address this is to add a call such as > hdfsGetLastException() that would return the last exception that a > libHDFS thread encountered. This way, an application may choose to call > hdfsGetLastException() if it receives EINTERNAL. > We can make use of thread-local storage to store this information. This also > ensures that the current functionality is preserved. > This is a follow-up from HDFS-4997. 
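The hdfsGetLastException() proposal above hinges on per-thread storage of the last error. libhdfs itself is C over JNI, but the mechanism can be sketched in Java with ThreadLocal; the class and method names here are hypothetical, not part of any real libhdfs API:

```java
/** Illustrative sketch of a per-thread "last exception" record. */
class LastError {
    // Each thread sees its own value, so concurrent libHDFS-style calls
    // on different threads cannot clobber each other's error text.
    private static final ThreadLocal<String> LAST =
            ThreadLocal.withInitial(() -> "");

    /** Called from the error path: remember the full exception text. */
    static void set(String exceptionText) {
        LAST.set(exceptionText);
    }

    /**
     * Equivalent of the proposed hdfsGetLastException(): the caller
     * consults this after seeing a generic EINTERNAL-style error code.
     */
    static String get() {
        return LAST.get();
    }
}
```

An application that receives the catch-all error code would then call the getter on the same thread to recover the full exception text, preserving the existing error-code behavior.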
[jira] [Commented] (HDFS-6708) StorageType should be encoded in the block token
[ https://issues.apache.org/jira/browse/HDFS-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928485#comment-15928485 ] Hadoop QA commented on HDFS-6708: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 12m 59s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 22s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 8s{color} | {color:orange} root: The patch generated 1 new + 675 unchanged - 7 fixed = 676 total (was 682) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 26s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 51s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 9 unchanged - 0 fixed = 12 total (was 9) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 14s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 8s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 5s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}147m 56s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-6708 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12859114/HDFS-6708.0005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 840aabe33d29 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7114bad | | Default Java | 1.8.0_121 | | mvninstall |
[jira] [Updated] (HDFS-11531) Expose hedged read metrics via libHDFS API
[ https://issues.apache.org/jira/browse/HDFS-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers updated HDFS-11531: -- Target Version/s: 3.0.0-alpha3 Status: Patch Available (was: Open) > Expose hedged read metrics via libHDFS API > -- > > Key: HDFS-11531 > URL: https://issues.apache.org/jira/browse/HDFS-11531 > Project: Hadoop HDFS > Issue Type: Improvement > Components: libhdfs >Affects Versions: 2.6.0 >Reporter: Sailesh Mukil >Assignee: Sailesh Mukil > Attachments: HDFS-11531.000.patch > > > It would be good to expose the DFSHedgedReadMetrics via a libHDFS API for > applications to retrieve. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-11531) Expose hedged read metrics via libHDFS API
[ https://issues.apache.org/jira/browse/HDFS-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers reassigned HDFS-11531: - Assignee: Sailesh Mukil > Expose hedged read metrics via libHDFS API > -- > > Key: HDFS-11531 > URL: https://issues.apache.org/jira/browse/HDFS-11531 > Project: Hadoop HDFS > Issue Type: Improvement > Components: libhdfs >Affects Versions: 2.6.0 >Reporter: Sailesh Mukil >Assignee: Sailesh Mukil > Attachments: HDFS-11531.000.patch > > > It would be good to expose the DFSHedgedReadMetrics via a libHDFS API for > applications to retrieve. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928457#comment-15928457 ] Allen Wittenauer edited comment on HDFS-11431 at 3/16/17 5:28 PM: -- bq. leveldbjni-all Wait, what? Why does this require leveldbjni? (Never mind all the problems that jar causes.) EDIT: NM, I misread that. was (Author: aw): bq. leveldbjni-all Wait, what? Why does this require leveldbjni? (Never mind all the problems that jar causes.) > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928457#comment-15928457 ] Allen Wittenauer commented on HDFS-11431: - bq. leveldbjni-all Wait, what? Why does this require leveldbjni? (Never mind all the problems that jar causes.) > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11502) dn.js set datanode UI to window.location.hostname, it should use jmx bean property to setup hostname
[ https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeffrey E Rodriguez updated HDFS-11502: Status: Patch Available (was: In Progress) > dn.js set datanode UI to window.location.hostname, it should use jmx bean > property to setup hostname > > > Key: HDFS-11502 > URL: https://issues.apache.org/jira/browse/HDFS-11502 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.7.3, 2.7.2 > Environment: all >Reporter: Jeffrey E Rodriguez >Assignee: Jeffrey E Rodriguez > Attachments: HDFS-11502.001.patch, HDFS-11502.002.patch, > HDFS-11502.003.patch > > > The datanode UI calls "dn.js", which loads properties for the datanode. "dn.js" sets > the datanode UI's "data.dn.HostName" to "window.location.hostname"; it should instead use a > datanode property from the jmx beans or another appropriate property. The issue is > that if we use a proxy to access the datanode UI, we would show the proxy hostname > instead of the actual datanode hostname. > I am proposing using the "Hadoop:service=DataNode,name=JvmMetrics" tag.Hostname > field to do that. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11502) dn.js set datanode UI to window.location.hostname, it should use jmx bean property to setup hostname
[ https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeffrey E Rodriguez updated HDFS-11502: Status: In Progress (was: Patch Available) > dn.js set datanode UI to window.location.hostname, it should use jmx bean > property to setup hostname > > > Key: HDFS-11502 > URL: https://issues.apache.org/jira/browse/HDFS-11502 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.7.3, 2.7.2 > Environment: all >Reporter: Jeffrey E Rodriguez >Assignee: Jeffrey E Rodriguez > Attachments: HDFS-11502.001.patch, HDFS-11502.002.patch, > HDFS-11502.003.patch > > > The datanode UI calls "dn.js", which loads properties for the datanode. "dn.js" sets > the datanode UI's "data.dn.HostName" to "window.location.hostname"; it should instead use a > datanode property from the jmx beans or another appropriate property. The issue is > that if we use a proxy to access the datanode UI, we would show the proxy hostname > instead of the actual datanode hostname. > I am proposing using the "Hadoop:service=DataNode,name=JvmMetrics" tag.Hostname > field to do that. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11502) dn.js set datanode UI to window.location.hostname, it should use jmx bean property to setup hostname
[ https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeffrey E Rodriguez updated HDFS-11502: Attachment: HDFS-11502.003.patch The following is a rework of the fix: added getDataNodeHostname to the datanode bean, and changed dn.js to use DataNodeHostname.
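The direction proposed above — preferring a hostname reported by the DataNode's JMX bean over the browser-supplied window.location.hostname — can be sketched in Java. This is illustrative only, not the actual patch; the method name `resolve` and the fallback behavior are assumptions:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DnHostname {
    // Prefer the hostname reported by the DataNode's JvmMetrics bean;
    // fall back to the browser-supplied value (window.location.hostname)
    // only when the bean or its tag.Hostname attribute is unavailable.
    static String resolve(String windowLocationHostname) {
        try {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            ObjectName bean = new ObjectName("Hadoop:service=DataNode,name=JvmMetrics");
            Object host = mbs.getAttribute(bean, "tag.Hostname");
            if (host != null) {
                return host.toString();
            }
        } catch (Exception e) {
            // Bean not registered in this JVM (e.g. outside a DataNode); fall through.
        }
        return windowLocationHostname;
    }

    public static void main(String[] args) {
        // Outside a real DataNode JVM the bean is absent, so the fallback wins.
        System.out.println(resolve("proxy.example.com"));
    }
}
```

Inside a real DataNode process the JvmMetrics bean is registered, so the JMX-reported hostname would win even when the UI is reached through a proxy.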
[jira] [Updated] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HDFS-11431: -- Fix Version/s: 3.0.0-alpha3 > hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider > --- > > Key: HDFS-11431 > URL: https://issues.apache.org/jira/browse/HDFS-11431 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha3 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Blocker > Labels: maven > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HDFS-11431-branch-2.8.0.001.patch, > HDFS-11431-branch-2.8.0.002.patch > > > The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the > {{ConfiguredFailoverProxyProvider}} class. This breaks client applications > that use this class to communicate with the active NameNode in an HA > deployment of HDFS.
[jira] [Commented] (HDFS-10601) Improve log message to include hostname when the NameNode is in safemode
[ https://issues.apache.org/jira/browse/HDFS-10601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928371#comment-15928371 ] Hudson commented on HDFS-10601: --- FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11415 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11415/]) HDFS-10601. Improve log message to include hostname when the NameNode is (kihwal: rev ba62b50ebacd33b55eafc9db55a2fe5b4c80207a) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java > Improve log message to include hostname when the NameNode is in safemode > > > Key: HDFS-10601 > URL: https://issues.apache.org/jira/browse/HDFS-10601 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kuhu Shukla >Assignee: Kuhu Shukla >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HDFS-10601.001.patch, HDFS-10601.002.patch, > HDFS-10601.003.patch, HDFS-10601.004.patch > > > When remote NN operations are involved, it would be nice to have the NameNode > hostname in the safemode notification log.
[jira] [Updated] (HDFS-10601) Improve log message to include hostname when the NameNode is in safemode
[ https://issues.apache.org/jira/browse/HDFS-10601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-10601: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha3 2.8.0 Status: Resolved (was: Patch Available) Committed from trunk down to branch-2.8. Thanks for working on this, Kuhu.
[jira] [Commented] (HDFS-10601) Improve log message to include hostname when the NameNode is in safemode
[ https://issues.apache.org/jira/browse/HDFS-10601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928313#comment-15928313 ] Kihwal Lee commented on HDFS-10601: --- +1 looks good.
[jira] [Updated] (HDFS-11516) Admin command line should print message to stderr in failure case
[ https://issues.apache.org/jira/browse/HDFS-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HDFS-11516: -- Attachment: (was: HDFS-11516.02.patch) > Admin command line should print message to stderr in failure case > - > > Key: HDFS-11516 > URL: https://issues.apache.org/jira/browse/HDFS-11516 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kai Sasaki >Assignee: Kai Sasaki >Priority: Minor > Attachments: HDFS-11516.01.patch, HDFS-11516.02.patch > > > {{AdminHelper}} and {{CryptoAdmin}} print messages to stdout instead of > stderr in some failure cases. Since the other failure cases print to stderr, this > behavior should be made consistent. > e.g. > {code} > if (args.size() != 1) { > System.err.println("You must give exactly one argument to -help."); > return 0; > } > {code}
[jira] [Updated] (HDFS-11516) Admin command line should print message to stderr in failure case
[ https://issues.apache.org/jira/browse/HDFS-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HDFS-11516: -- Attachment: HDFS-11516.02.patch
[jira] [Updated] (HDFS-11516) Admin command line should print message to stderr in failure case
[ https://issues.apache.org/jira/browse/HDFS-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HDFS-11516: -- Attachment: HDFS-11516.02.patch
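The consolidation described in HDFS-11516 — error text to stderr plus a nonzero exit code on failure, which the reviewer noted is safe to change for help-invocation errors — can be sketched as follows. The class and method names here are illustrative, not the actual patch:

```java
import java.util.List;

public class HelpArgCheck {
    // Sketch of the consolidated failure path: the error message goes to
    // stderr, and the command returns a nonzero code so scripted callers
    // can detect the failure. (Illustrative; not the HDFS-11516 patch.)
    static int checkHelpArgs(List<String> args) {
        if (args.size() != 1) {
            System.err.println("You must give exactly one argument to -help.");
            return 1;  // was 0 in the original snippet; nonzero signals failure
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(checkHelpArgs(List.of()));      // wrong arity -> 1
        System.out.println(checkHelpArgs(List.of("-ls"))); // exactly one -> 0
    }
}
```

Keeping stdout clean also means `hdfs crypto ... | grep ...` pipelines never see error text mixed into their input.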
[jira] [Updated] (HDFS-6708) StorageType should be encoded in the block token
[ https://issues.apache.org/jira/browse/HDFS-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-6708: - Attachment: HDFS-6708.0005.patch Attaching patch that reflects [~virajith]'s comments. > StorageType should be encoded in the block token > > > Key: HDFS-6708 > URL: https://issues.apache.org/jira/browse/HDFS-6708 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: 2.4.1 >Reporter: Arpit Agarwal >Assignee: Ewan Higgs > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-6708.0001.patch, HDFS-6708.0002.patch, > HDFS-6708.0003.patch, HDFS-6708.0004.patch, HDFS-6708.0005.patch > > > HDFS-6702 is adding support for file creation based on StorageType. > The block token is used as a tamper-proof channel for communicating block > parameters from the NN to the DN during block creation. The StorageType > should be included in this block token.
[jira] [Updated] (HDFS-11358) DiskBalancer: Report command supports reading nodes from host file
[ https://issues.apache.org/jira/browse/HDFS-11358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11358: - Target Version/s: 3.0.0-alpha3 > DiskBalancer: Report command supports reading nodes from host file > -- > > Key: HDFS-11358 > URL: https://issues.apache.org/jira/browse/HDFS-11358 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: diskbalancer >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-11358.001.patch > > > This is an improvement over HDFS-10821. HDFS-10821 allowed the > {{diskbalancer report}} command to be executed against multiple nodes, but the > nodes are read from a nodes string, which is unwieldy when the > cluster is large. A better approach is to read them from a host file, as is > already done in several places in HDFS (e.g. > decommission, balancer). This JIRA makes that > possible for the diskbalancer.
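Host-file parsing of the kind referenced above (decommission/balancer-style include files) can be sketched in Java. This is a minimal illustration under the common convention of one host per line with blank lines and `#` comments ignored; it is not the actual HDFS-11358 patch:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class HostFileReader {
    // Parse a host file: one host per line; blank lines and '#' comment
    // lines are skipped; surrounding whitespace is trimmed.
    // (Illustrative sketch, not the HDFS-11358 implementation.)
    static List<String> parseHosts(BufferedReader reader) throws IOException {
        List<String> hosts = new ArrayList<>();
        String line;
        while ((line = reader.readLine()) != null) {
            String trimmed = line.trim();
            if (trimmed.isEmpty() || trimmed.startsWith("#")) {
                continue;
            }
            hosts.add(trimmed);
        }
        return hosts;
    }

    public static void main(String[] args) throws IOException {
        String file = "# cluster nodes\ndn1.example.com\n\ndn2.example.com\n";
        System.out.println(parseHosts(new BufferedReader(new StringReader(file))));
    }
}
```

In the report command the resulting list would replace the comma-separated nodes string, so large clusters can maintain one shared host file instead of a long command-line argument.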
[jira] [Commented] (HDFS-11358) DiskBalancer: Report command supports reading nodes from host file
[ https://issues.apache.org/jira/browse/HDFS-11358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928181#comment-15928181 ] Yiqun Lin commented on HDFS-11358: -- Hi [~anu], could you please take a quick look at this DiskBalancer improvement I worked on earlier? I think we can include it in 3.0.0-alpha3. Thanks.