[ https://issues.apache.org/jira/browse/HDFS-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15810584#comment-15810584 ]

Yuanbo Liu edited comment on HDFS-11293 at 1/9/17 3:51 AM:
-----------------------------------------------------------

[~umamaheswararao] Thanks for your patch. Most of your code looks good to me.
Please add {{iterator.remove();}} after
{{existing.remove(datanodeStorageInfo.getStorageType());}} on line 307;
otherwise the source node will be chosen twice.
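To make the failure mode concrete, here is a minimal, self-contained sketch (hypothetical names; not the actual SPS code) of how a selection loop that skips {{iterator.remove();}} can hand out the same source twice:
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class SourcePickSketch {
  // Returns the first candidate storage of the wanted type; drops the
  // chosen entry from the list only when removeChosen is true.
  static String pickSource(List<String> candidates, String wantedType,
      boolean removeChosen) {
    for (Iterator<String> iterator = candidates.iterator();
        iterator.hasNext();) {
      String candidate = iterator.next();
      if (candidate.endsWith(wantedType)) {
        if (removeChosen) {
          iterator.remove(); // the fix: consume the chosen source
        }
        return candidate;
      }
    }
    return null;
  }

  public static void main(String[] args) {
    List<String> candidates =
        new ArrayList<>(Arrays.asList("dn1[DISK]", "dn2[DISK]"));
    // Without the removal, both picks return the same node:
    System.out.println(pickSource(candidates, "[DISK]", false)); // dn1[DISK]
    System.out.println(pickSource(candidates, "[DISK]", false)); // dn1[DISK]
    // With iterator.remove() in place, each pick yields a fresh source:
    System.out.println(pickSource(candidates, "[DISK]", true));  // dn1[DISK]
    System.out.println(pickSource(candidates, "[DISK]", true));  // dn2[DISK]
  }
}
{code}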
Here is the test case:
{code}
  @Test(timeout = 300000)
  public void testBlockMoveInSameDatanodeWithWARM() throws Exception {
    StorageType[][] diskTypes =
        new StorageType[][]{{StorageType.DISK, StorageType.ARCHIVE},
            {StorageType.ARCHIVE, StorageType.SSD},
            {StorageType.DISK, StorageType.DISK},
            {StorageType.DISK, StorageType.DISK}};

    config.setLong("dfs.block.size", DEFAULT_BLOCK_SIZE);
    hdfsCluster = startCluster(config, diskTypes, diskTypes.length,
        storagesPerDatanode, capacity);
    dfs = hdfsCluster.getFileSystem();
    writeContent(file);

    // Change policy to WARM
    dfs.setStoragePolicy(new Path(file), "WARM");
    FSNamesystem namesystem = hdfsCluster.getNamesystem();
    INode inode = namesystem.getFSDirectory().getINode(file);

    namesystem.getBlockManager().satisfyStoragePolicy(inode.getId());
    hdfsCluster.triggerHeartbeats();

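    // WARM keeps one replica on DISK and the rest on ARCHIVE; the waits
    // below assume a replication factor of 3, hence 1 DISK + 2 ARCHIVE.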
    waitExpectedStorageType(file, StorageType.DISK, 1, 30000);
    waitExpectedStorageType(file, StorageType.ARCHIVE, 2, 30000);
  }
{code}


was (Author: yuanbo):
[~umamaheswararao] Thanks for your patch. Most of your code looks good to me.
Please add
{code}
iterator.remove();
{code}
after {{existing.remove(datanodeStorageInfo.getStorageType());}} on line 307;
otherwise the source node will be chosen twice.
Here is the test case:
{code}
  @Test(timeout = 300000)
  public void testBlockMoveInSameDatanodeWithWARM() throws Exception {
    StorageType[][] diskTypes =
        new StorageType[][]{{StorageType.DISK, StorageType.ARCHIVE},
            {StorageType.ARCHIVE, StorageType.SSD},
            {StorageType.DISK, StorageType.DISK},
            {StorageType.DISK, StorageType.DISK}};

    config.setLong("dfs.block.size", DEFAULT_BLOCK_SIZE);
    hdfsCluster = startCluster(config, diskTypes, diskTypes.length,
        storagesPerDatanode, capacity);
    dfs = hdfsCluster.getFileSystem();
    writeContent(file);

    // Change policy to WARM
    dfs.setStoragePolicy(new Path(file), "WARM");
    FSNamesystem namesystem = hdfsCluster.getNamesystem();
    INode inode = namesystem.getFSDirectory().getINode(file);

    namesystem.getBlockManager().satisfyStoragePolicy(inode.getId());
    hdfsCluster.triggerHeartbeats();

    waitExpectedStorageType(file, StorageType.DISK, 1, 30000);
    waitExpectedStorageType(file, StorageType.ARCHIVE, 2, 30000);
  }
{code}

> [SPS]: Local DN should be given preference as source node, when target 
> available in same node
> --------------------------------------------------------------------------------------------
>
>                 Key: HDFS-11293
>                 URL: https://issues.apache.org/jira/browse/HDFS-11293
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Yuanbo Liu
>            Assignee: Uma Maheswara Rao G
>            Priority: Critical
>         Attachments: HDFS-11293-HDFS-10285-00.patch
>
>
> In {{FsDatasetImpl#createTemporary}}, we use {{volumeMap}} to get replica 
> info by block pool id. But consider this situation:
> {code}
> datanode A => {DISK, SSD}, datanode B => {DISK, ARCHIVE}.
> 1. the same block replica exists in A[DISK] and B[DISK].
> 2. the block pool id of datanode A and datanode B are the same.
> {code}
> Then we start to change the file's storage policy and move the block replica 
> in the cluster. Very likely we have to move the block from B[DISK] to A[SSD]; 
> at this time, datanode A throws ReplicaAlreadyExistsException.
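
For illustration, a minimal, self-contained sketch of the collision described above (hypothetical names and a deliberately simplified {{volumeMap}}; not the actual {{FsDatasetImpl}} code). The replica lookup keys only on block pool id and block id, so the incoming move to A[SSD] collides with the replica already on A[DISK]:
{code}
import java.util.HashMap;
import java.util.Map;

public class CreateTemporarySketch {
  static class ReplicaAlreadyExistsException extends RuntimeException {
    ReplicaAlreadyExistsException(String msg) { super(msg); }
  }

  // Simplified volumeMap: (block pool id + block id) -> storage holding
  // the replica. The key deliberately ignores the storage type.
  static Map<String, String> volumeMap = new HashMap<>();

  static void createTemporary(String bpid, long blockId, String target) {
    String key = bpid + "/" + blockId;
    if (volumeMap.containsKey(key)) {
      // A replica of this block already exists somewhere on this
      // datanode, so the incoming transfer is rejected outright.
      throw new ReplicaAlreadyExistsException(
          "block " + key + " already exists on " + volumeMap.get(key));
    }
    volumeMap.put(key, target);
  }

  public static void main(String[] args) {
    volumeMap.put("BP-1/1001", "A[DISK]");   // replica already on local DISK
    try {
      createTemporary("BP-1", 1001, "A[SSD]"); // move B[DISK] -> A[SSD]
    } catch (ReplicaAlreadyExistsException e) {
      System.out.println("move rejected: " + e.getMessage());
    }
  }
}
{code}
Preferring the local datanode as the source side-steps this: the move becomes a local DISK-to-SSD transfer instead of a second copy of a block the node already holds.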



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
