[jira] [Updated] (HDFS-12302) FSVolume's getVolumeMap actually does nothing when instantiating a FsDatasetImpl object

2017-08-15 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-12302:
--
Attachment: HDFS-12302.002.patch

> FSVolume's getVolumeMap actually does nothing when instantiating a FsDatasetImpl
> object
> ---
>
> Key: HDFS-12302
> URL: https://issues.apache.org/jira/browse/HDFS-12302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
> Attachments: HDFS-12302.001.patch, HDFS-12302.002.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
>   While reading the code that instantiates the FsDatasetImpl object during the
> DataNode startup process, I found that the getVolumeMap function cannot actually
> collect ReplicaMap info for each fsVolume, because the fsVolume's bpSlices has
> not been initialized yet at that point. The relevant code is as follows:
> {code:title=FsVolumeImpl.java}
> void getVolumeMap(ReplicaMap volumeMap,
>     final RamDiskReplicaTracker ramDiskReplicaMap)
>     throws IOException {
>   LOG.info("Added volume - getVolumeMap bpSlices:" + bpSlices.values().size());
>   for (BlockPoolSlice s : bpSlices.values()) {
>     s.getVolumeMap(volumeMap, ramDiskReplicaMap);
>   }
> }
> {code}
> Then I added some INFO logging and started the DataNode; the log output is
> consistent with the code above. The detailed log is as follows:
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: DISK, getVolumeMap begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: DISK, getVolumeMap end
> INFO: Added new volume: DS-48ac6ef9-fd6f-49b7-a5fb-77b82cadc973
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: DISK
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap end
> INFO: Added new volume: DS-159b615c-144c-4d99-8b63-5f37247fb8ed
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK
> In conclusion, I think the getVolumeMap step for each fsVolume is unnecessary
> when instantiating the FsDatasetImpl object.
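To make the ordering issue concrete, here is a minimal, self-contained sketch. The classes below (Volume, BlockPoolSlice, addBlockPool) are hypothetical simplifications for illustration only, not the real FsVolumeImpl/FsDatasetImpl code; they model just the claim above that bpSlices is still empty while the dataset object is being constructed, so a getVolumeMap() call at that point iterates over nothing and only does real work once a block pool has been registered.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical, simplified model of the ordering described in the issue
 * report (not the real Hadoop classes).
 */
public class VolumeMapOrderingSketch {

  /** Stand-in for one block-pool slice on a volume. */
  static class BlockPoolSlice {
    void getVolumeMap(Map<String, String> volumeMap) {
      // Pretend to load replica info for this slice into the map.
      volumeMap.put("replica-of-" + hashCode(), "loaded");
    }
  }

  /** Stand-in for a volume: its bpSlices map starts out empty. */
  static class Volume {
    final Map<String, BlockPoolSlice> bpSlices = new ConcurrentHashMap<>();

    void getVolumeMap(Map<String, String> volumeMap) {
      System.out.println("getVolumeMap bpSlices:" + bpSlices.size());
      // With an empty bpSlices, this loop body never executes.
      for (BlockPoolSlice s : bpSlices.values()) {
        s.getVolumeMap(volumeMap);
      }
    }

    /** In this model, block pools are registered only after construction. */
    void addBlockPool(String bpid) {
      bpSlices.put(bpid, new BlockPoolSlice());
    }
  }

  public static void main(String[] args) {
    Volume volume = new Volume();
    Map<String, String> replicaMap = new ConcurrentHashMap<>();

    // "Construction time": bpSlices is still empty, so this call is a no-op.
    volume.getVolumeMap(replicaMap);   // prints bpSlices:0, replicaMap stays empty

    // Later, once a block pool has been registered, the same call does work.
    volume.addBlockPool("BP-1");
    volume.getVolumeMap(replicaMap);   // prints bpSlices:1, replicaMap gains an entry
    System.out.println("replicaMap size: " + replicaMap.size());
  }
}
{code}

Under that assumption, the per-volume getVolumeMap call made during construction is a no-op, which matches the bpSlices:0 lines in the log above.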



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12302) FSVolume's getVolumeMap actually does nothing when instantiating a FsDatasetImpl object

2017-08-15 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-12302:
--
Status: Patch Available  (was: Open)




[jira] [Updated] (HDFS-12302) FSVolume's getVolumeMap actually does nothing when instantiating a FsDatasetImpl object

2017-08-15 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-12302:
--
Status: Open  (was: Patch Available)




[jira] [Updated] (HDFS-12302) FSVolume's getVolumeMap actually does nothing when instantiating a FsDatasetImpl object

2017-08-14 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-12302:
--
Status: Patch Available  (was: Open)




[jira] [Updated] (HDFS-12302) FSVolume's getVolumeMap actually does nothing when instantiating a FsDatasetImpl object

2017-08-14 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-12302:
--
Attachment: HDFS-12302.001.patch
