[ https://issues.apache.org/jira/browse/HDFS-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13864224#comment-13864224 ]

Hudson commented on HDFS-5667:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #1637 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1637/])
HDFS-5667. Add test missed in previous checkin (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1555956)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStorageReport.java
HDFS-5667. Include DatanodeStorage in StorageReport. (Arpit Agarwal) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1555929)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/StorageReport.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSClusterWithNodeGroup.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java


> Include DatanodeStorage in StorageReport
> ----------------------------------------
>
>                 Key: HDFS-5667
>                 URL: https://issues.apache.org/jira/browse/HDFS-5667
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>    Affects Versions: 3.0.0
>            Reporter: Eric Sirianni
>            Assignee: Arpit Agarwal
>             Fix For: 3.0.0, 2.4.0
>
>         Attachments: h5667.02.patch, h5667.03.patch, h5667.04.patch, 
> h5667.05.patch
>
>
> The fix for HDFS-5484 was accidentally regressed by the following change 
> made via HDFS-5542:
> {code}
> +  DatanodeStorageInfo updateStorage(DatanodeStorage s) {
>      synchronized (storageMap) {
>        DatanodeStorageInfo storage = storageMap.get(s.getStorageID());
>        if (storage == null) {
> @@ -670,8 +658,6 @@
>                   " for DN " + getXferAddr());
>          storage = new DatanodeStorageInfo(this, s);
>          storageMap.put(s.getStorageID(), storage);
> -      } else {
> -        storage.setState(s.getState());
>        }
>        return storage;
>      }
> {code}
> By removing the 'else', the block report processing path no longer updates 
> the state, so the storage is effectively left with the bogus state & type 
> set by the first heartbeat (see the fix for HDFS-5455):
> {code}
> +      if (storage == null) {
> +        // This is seen during cluster initialization when the heartbeat
> +        // is received before the initial block reports from each storage.
> +        storage = updateStorage(new DatanodeStorage(report.getStorageID()));
> {code}
> Even reverting the change and reintroducing the 'else' leaves the state & 
> type temporarily inaccurate until the first block report. 
> As discussed with [~arpitagarwal], a better fix would be to simply include 
> the full {{DatanodeStorage}} object in the {{StorageReport}} (as opposed to 
> only the Storage ID).  This requires adding the {{DatanodeStorage}} object to 
> {{StorageReportProto}}. It needs to be a new optional field and we cannot 
> remove the existing {{StorageUuid}} for protocol compatibility.
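> A rough sketch of the protobuf change (field numbers and exact message names 
> below are illustrative, not necessarily what the final patch uses):
> {code}
> // DatanodeProtocol.proto (sketch): StorageReportProto gains an optional
> // DatanodeStorage while the existing storageUuid field is retained so
> // that old and new peers still interoperate.
> message StorageReportProto {
>   required string storageUuid = 1;              // kept for compatibility
>   optional bool failed = 2 [ default = false ];
>   optional uint64 capacity = 3 [ default = 0 ];
>   optional uint64 dfsUsed = 4 [ default = 0 ];
>   optional uint64 remaining = 5 [ default = 0 ];
>   optional uint64 blockPoolUsed = 6 [ default = 0 ];
>   optional DatanodeStorageProto storage = 7;    // new: full state & type
> }
> {code}
> On the Java side, {{StorageReport}} would then expose the {{DatanodeStorage}} 
> itself, so heartbeat processing can pick up the correct state & type without 
> waiting for the first block report.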



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
