[ https://issues.apache.org/jira/browse/HDFS-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14768907#comment-14768907 ]
Hadoop QA commented on HDFS-9046:
---------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 17m 54s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test file. |
| {color:green}+1{color} | javac | 8m 4s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 10m 2s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 26s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 1m 27s | The applied patch generated 1 new checkstyle issue (total was 26, now 27). |
| {color:red}-1{color} | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install | 1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 36s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 2m 28s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native | 3m 6s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 162m 43s | Tests passed in hadoop-hdfs. |
| | | 208m 26s | |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12756197/HDFS-9046_2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / bf2f2b4 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/12476/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/12476/artifact/patchprocess/whitespace.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12476/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12476/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12476/console |

This message was automatically generated.

> Any error during BPOfferService run can lead to a missing DN.
> -------------------------------------------------------------
>
>          Key: HDFS-9046
>          URL: https://issues.apache.org/jira/browse/HDFS-9046
>      Project: Hadoop HDFS
>   Issue Type: Bug
>     Reporter: nijel
>     Assignee: nijel
>  Attachments: HDFS-9046_1.patch, HDFS-9046_2.patch
>
> The cluster is in HA mode, and each DN has only one block pool.
> The issue: after a failover, one DN is missing from the current active NN.
> Upon analysis I found that there is an exception in BPOfferService.run():
> {noformat}
> 2015-08-21 09:02:11,190 | WARN | DataNode: [[[DISK]file:/srv/BigData/hadoop/data5/dn/ [DISK]file:/srv/BigData/hadoop/data4/dn/]] heartbeating to 160-149-0-114/160.149.0.114:25000 | Unexpected exception in block pool Block pool BP-284203724-160.149.0.114-1438774011693 (Datanode Uuid 15ce1dd7-227f-4fd2-9682-091aa6bc2b89) service to 160-149-0-114/160.149.0.114:25000 | BPServiceActor.java:830
> java.lang.OutOfMemoryError: unable to create new native thread
>         at java.lang.Thread.start0(Native Method)
>         at java.lang.Thread.start(Thread.java:714)
>         at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
>         at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.execute(FsDatasetAsyncDiskService.java:172)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.deleteAsync(FsDatasetAsyncDiskService.java:221)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:1887)
>         at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:669)
>         at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:616)
>         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:856)
>         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
>         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
>         at java.lang.Thread.run(Thread.java:745)
> {noformat}
> After this exception, the BPOfferService stays down for the rest of the DN's runtime.
> As a result, this particular NN will no longer have the details of this DN.
> Similar issues are discussed in the following JIRAs:
> https://issues.apache.org/jira/browse/HDFS-2882
> https://issues.apache.org/jira/browse/HDFS-7714
> Can we retry in this case as well, with a larger interval, instead of shutting down this BPOfferService?
> I think that since these exceptions can occur randomly in a DN, it is not good to keep the DN running while some NN does not have its info.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
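The retry-with-a-larger-interval behavior proposed in the description could be sketched as a standalone loop. This is a minimal illustration, not the actual BPServiceActor code: the class name, the Task interface, and the backoff constants are all assumptions made for the example. A real fix would likely also need to catch Throwable rather than Exception, since OutOfMemoryError is an Error.

```java
// Standalone sketch of "retry with backoff instead of exiting" for a
// service loop. All names and constants here are illustrative, not Hadoop's.
public class RetryingServiceLoop {
    static final long BASE_RETRY_MS = 10;  // hypothetical base backoff
    static final long MAX_RETRY_MS = 80;   // hypothetical backoff cap

    interface Task {
        void run() throws Exception;
    }

    /**
     * Runs the task until it succeeds, backing off after each failure
     * instead of letting the loop (and the NN's view of this DN) die.
     * Returns the number of attempts that were needed.
     */
    static int runWithRetry(Task task) throws InterruptedException {
        int attempts = 0;
        long backoff = BASE_RETRY_MS;
        while (true) {
            attempts++;
            try {
                task.run();
                return attempts;  // success
            } catch (Exception e) {
                // Previously a fatal error here would end the service loop;
                // the proposal is to log, wait, and retry with a larger interval.
                Thread.sleep(backoff);
                backoff = Math.min(backoff * 2, MAX_RETRY_MS);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        final int[] failures = {2};  // simulate two transient failures
        int attempts = runWithRetry(() -> {
            if (failures[0]-- > 0) {
                throw new RuntimeException("simulated transient error");
            }
        });
        System.out.println("succeeded after " + attempts + " attempts");
    }
}
```

With two simulated transient failures, the loop recovers on the third attempt rather than terminating, which is the behavior the description asks about for BPOfferService.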