[ https://issues.apache.org/jira/browse/HDFS-14074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Wei-Chiu Chuang updated HDFS-14074:
-----------------------------------
       Resolution: Fixed
    Fix Version/s:     (was: 3.0.0)
                       (was: 2.7.3)
                       (was: 2.8.0)
                   3.1.3
                   3.2.1
                   3.3.0
 Target Version/s: 2.7.3, 2.8.0  (was: 2.8.0, 2.7.3)
           Status: Resolved  (was: Patch Available)

+1. [~luguangyi], I added you to the Hadoop contributor list and assigned the jira to you. Pushed the last patch to trunk, branch-3.2 and branch-3.1. Thanks [~luguangyi] for the patch and [~arp] for the review.

> DataNode's async disk checks may throw a NullPointerException, and the
> DataNode then fails to register with the namespace.
> ------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-14074
>                 URL: https://issues.apache.org/jira/browse/HDFS-14074
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 2.8.0, 3.0.0
>         Environment: hadoop-2.7.3, hadoop-2.8.0
>            Reporter: guangyi lu
>            Assignee: guangyi lu
>            Priority: Major
>              Labels: HDFS, HDFS-11114
>             Fix For: 3.3.0, 3.2.1, 3.1.3
>
>         Attachments: HDFS-14074-latest.patch, HDFS-14074.patch, WechatIMG83.jpeg
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> In the ThrottledAsyncChecker class, the completedChecks member is a WeakHashMap, defined as follows:
>
>     this.completedChecks = new WeakHashMap<>();
>
> One of its uses, in the schedule method, is as follows:
>
>     if (completedChecks.containsKey(target)) {
>       // garbage collection may happen here, and result may be null.
>       final LastCheckResult<V> result = completedChecks.get(target);
>       final long msSinceLastCheck = timer.monotonicNow() - result.completedAt;
>       ...
>     }
>
> After completedChecks.containsKey(target) returns true, garbage collection may clear the weak reference to target, so the subsequent get returns null and dereferencing result throws a NullPointerException.
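The check-then-act race described above can be sketched with a plain WeakHashMap outside HDFS. This is a minimal illustration, not the Hadoop code: the class and method names below are hypothetical, and the null-check variant is one way to close the window, shown alongside the map-replacement approach the issue proposes.

```java
import java.util.Map;
import java.util.WeakHashMap;

// Illustrative sketch of the containsKey/get race on a WeakHashMap.
// Names here are hypothetical, not from ThrottledAsyncChecker.
public class WeakMapRace {
    static final Map<Object, String> completedChecks = new WeakHashMap<>();

    // Unsafe pattern: the weak key can be collected between containsKey
    // and get, so result may be null and result.length() throws an NPE.
    static int unsafeLookup(Object target) {
        if (completedChecks.containsKey(target)) {
            String result = completedChecks.get(target); // may be null after GC
            return result.length();                      // potential NPE
        }
        return -1;
    }

    // Safer pattern: a single get plus a null check closes the window
    // (an alternative to swapping in ReferenceMap or HashMap).
    static int safeLookup(Object target) {
        String result = completedChecks.get(target);
        return (result != null) ? result.length() : -1;
    }

    public static void main(String[] args) {
        Object key = new Object();
        completedChecks.put(key, "checked");
        System.out.println(safeLookup(key));          // 7
        System.out.println(safeLookup(new Object())); // -1
    }
}
```

Note that while the caller holds a strong reference to the key, as in main above, the race cannot fire; the NPE only appears when the last strong reference to target is dropped between the two map calls, which is why the bug is intermittent in production.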
> The solution is either:
>
>     this.completedChecks = new ReferenceMap(1, 1);
>
> or:
>
>     this.completedChecks = new HashMap<>();

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org