[ https://issues.apache.org/jira/browse/HDFS-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134794#comment-16134794 ]
Hadoop QA commented on HDFS-8693:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 51 unchanged - 1 fixed = 51 total (was 52) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 20s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.server.namenode.ha.TestHAAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-8693 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882813/HDFS-8693.03.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux bb48ff496ea1 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7a82d7b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/20773/artifact/patchprocess/whitespace-eol.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20773/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20773/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20773/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.


> refreshNamenodes does not support adding a new standby to a running DN
> ----------------------------------------------------------------------
>
>                 Key: HDFS-8693
>                 URL: https://issues.apache.org/jira/browse/HDFS-8693
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, ha
>    Affects Versions: 2.6.0
>            Reporter: Jian Fang
>            Assignee: Ajith S
>            Priority: Critical
>         Attachments: HDFS-8693.02.patch, HDFS-8693.03.patch, HDFS-8693.1.patch
>
>
> I tried to run the following command on a Hadoop 2.6.0 cluster with HA support:
>
>   $ hdfs dfsadmin -refreshNamenodes datanode-host:port
>
> to refresh the name nodes on a data node after I replaced one name node with a
> new one, so that I would not need to restart the data nodes. However, I got the
> following error:
>
>   refreshNamenodes: HA does not currently support adding a new standby to a
>   running DN. Please do a rolling restart of DNs to reconfigure the list of NNs.
>
> I checked the 2.6.0 code, and the error is thrown by the following code
> snippet, which led me to this JIRA:
>
>   void refreshNNList(ArrayList<InetSocketAddress> addrs) throws IOException {
>     Set<InetSocketAddress> oldAddrs = Sets.newHashSet();
>     for (BPServiceActor actor : bpServices) {
>       oldAddrs.add(actor.getNNSocketAddress());
>     }
>     Set<InetSocketAddress> newAddrs = Sets.newHashSet(addrs);
>     if (!Sets.symmetricDifference(oldAddrs, newAddrs).isEmpty()) {
>       // Keep things simple for now -- we can implement this at a later date.
>       throw new IOException(
>           "HA does not currently support adding a new standby to a running DN. "
>           + "Please do a rolling restart of DNs to reconfigure the list of NNs.");
>     }
>   }
>
> It looks like the refreshNamenodes command is an incomplete feature.
> Unfortunately, picking up the new name node on a replacement instance is
> critical for auto-provisioning a Hadoop cluster with HDFS HA support.
> Without this support, the HA feature cannot really be used. I also observed
> that the new standby name node on the replacement instance can get stuck in
> safe mode because no data nodes check in with it. Even with a rolling restart,
> it may take quite some time to restart all data nodes on a big cluster, for
> example one with 4000 data nodes. Moreover, restarting DNs is far too intrusive
> and is not a preferable operation in production. It also increases the chance
> of a double failure, because the standby name node is not really ready for a
> failover if the current active name node fails.
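
For context on what lifting the restriction in the quoted refreshNNList snippet involves, here is a minimal, self-contained sketch of the set-difference approach that snippet stops short of: compare the DataNode's current NameNode addresses with the refreshed list, start a worker for each added NN, and stop the worker for each removed NN, rather than throwing when the symmetric difference is non-empty. The RefreshSketch and NNActor names are hypothetical stand-ins for illustration only; this is not the BPOfferService/BPServiceActor code, nor the logic of the attached HDFS-8693 patches.

{code:java}
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical sketch (not the HDFS-8693 patch): reconcile the set of
 * NameNode addresses a DataNode currently tracks with a refreshed list by
 * starting a worker for each added NN and stopping the worker for each
 * removed NN, instead of rejecting any change to the NN set.
 */
public class RefreshSketch {

  /** Stand-in for a per-NameNode worker such as BPServiceActor. */
  static final class NNActor {
    final InetSocketAddress nnAddr;
    NNActor(InetSocketAddress nnAddr) { this.nnAddr = nnAddr; }
    void start() { System.out.println("starting actor for " + nnAddr); }
    void stop()  { System.out.println("stopping actor for " + nnAddr); }
  }

  // One worker per NameNode address currently being serviced.
  private final Map<InetSocketAddress, NNActor> actors = new HashMap<>();

  void refreshNNList(List<InetSocketAddress> newAddrs) {
    Set<InetSocketAddress> oldSet = new HashSet<>(actors.keySet());
    Set<InetSocketAddress> newSet = new HashSet<>(newAddrs);

    // NNs present in the refreshed list but not yet tracked.
    Set<InetSocketAddress> added = new HashSet<>(newSet);
    added.removeAll(oldSet);

    // NNs that disappeared from the refreshed list.
    Set<InetSocketAddress> removed = new HashSet<>(oldSet);
    removed.removeAll(newSet);

    for (InetSocketAddress addr : added) {
      NNActor actor = new NNActor(addr);
      actors.put(addr, actor);
      actor.start();
    }
    for (InetSocketAddress addr : removed) {
      NNActor actor = actors.remove(addr);
      if (actor != null) {
        actor.stop();
      }
    }
  }

  public static void main(String[] args) {
    RefreshSketch sketch = new RefreshSketch();
    List<InetSocketAddress> initial = new ArrayList<>();
    initial.add(InetSocketAddress.createUnresolved("nn1.example.com", 8020));
    sketch.refreshNNList(initial);

    // Simulate replacing the standby: nn2 appears without restarting the DN.
    List<InetSocketAddress> refreshed = new ArrayList<>(initial);
    refreshed.add(InetSocketAddress.createUnresolved("nn2.example.com", 8020));
    sketch.refreshNNList(refreshed);
  }
}
{code}

A real DataNode-side change would presumably also need to register the new actor with its block pool, report blocks to the newly added NameNode, and cope with concurrent heartbeats; the sketch deliberately leaves all of that out and only illustrates the add/remove reconciliation itself.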