[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203240#comment-17203240
 ] 

YaYun Wang edited comment on HDFS-15025 at 9/29/20, 1:08 AM:
-------------------------------------------------------------

[~liuml07], [~ayushtkn], we have tried to make sure that {{NVDIMM}} and 
{{isRAM}} will not affect the functionality of Hadoop or existing data, in the 
following ways:
 # Running test cases: we ran the test cases covering 
{{FsEditLogOp#SetQuotaByStorageTypeOp}}, such as {{TestWebHDFS}}, and they 
pass without failures or errors.
 # Reviewing the code around {{FsEditLogOp#SetQuotaByStorageTypeOp}}, including 
the {{DistributedFileSystem}}, {{DFSClient}}, {{NamenodeRPCServer}}, 
{{FSNamesystem}}, {{FSDirAttrOp}}, {{FSEditLog}} and {{FSEditLogOp}} classes, 
etc. We conclude that {{ordinal()}} of the {{StorageType}} enum is flexible 
and compatible there: {{ordinal()}} simply varies with the declaration order of 
the {{StorageType}} constants. That differs from 
{{TestRouterQuota.testStorageTypeQuota}}, where {{ordinal()}} is used as the 
index of an array.
{code:java}
public void testStorageTypeQuota() throws Exception {
  ...
 
  // The first parameter is an array of length 5, one entry per storage type.
  verifyTypeQuotaAndConsume(new long[] {-1, -1, ssQuota * 2, -1, -1}, null, 
usage);

  ...
}

private void verifyTypeQuotaAndConsume(long[] quota, long[] consume,
    QuotaUsage usage) {
  for (StorageType t : StorageType.values()) {
    if (quota != null) {
      assertEquals(quota[t.ordinal()], usage.getTypeQuota(t));  // ordinal() 
is used as the index into quota
    }
    if (consume != null) {
      assertEquals(consume[t.ordinal()], usage.getTypeConsumed(t));
    }
  }
}{code}

 # Verifying the functionality of Hadoop with {{NVDIMM}} and {{isRAM}}: the 
built-in DFSIO, wordcount and put/get functions work normally. The old 
storage types and storage policies also behave normally after the upgrade.
 # Verifying behavior after an upgrade with existing data: first, we write 
data to Hadoop without the patch using DFSIO, wordcount and put/get. Then we 
upgrade Hadoop to the patched version. After that, the existing data written 
by the old version can still be accessed and used normally.
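To make the ordinal-shift concern in item 2 concrete, here is a minimal, self-contained sketch. The {{DemoStorageType}} enum below is a simplified, hypothetical stand-in for {{org.apache.hadoop.fs.StorageType}} (its declaration order is illustrative only, not the order committed by the patch): inserting a new constant into the middle of an enum shifts the ordinals of every constant declared after it, so code that iterates {{values()}} keeps working unchanged, while an array indexed by {{ordinal()}} (as in {{testStorageTypeQuota}}) must be resized and re-aligned.

```java
// Hypothetical stand-in for org.apache.hadoop.fs.StorageType.
// Suppose the enum used to be: RAM_DISK, SSD, DISK, ARCHIVE, PROVIDED,
// and a patch inserts NVDIMM in the middle, shifting later ordinals.
enum DemoStorageType {
    RAM_DISK,   // ordinal 0 (unchanged)
    NVDIMM,     // ordinal 1 (new constant)
    SSD,        // ordinal 2 (was 1 before the insertion)
    DISK,       // ordinal 3 (was 2)
    ARCHIVE,    // ordinal 4 (was 3)
    PROVIDED    // ordinal 5 (was 4)
}

public class OrdinalShiftDemo {
    public static void main(String[] args) {
        // Iterating values() is order-agnostic: this loop stays correct
        // no matter where NVDIMM was inserted.
        for (DemoStorageType t : DemoStorageType.values()) {
            System.out.println(t + " -> " + t.ordinal());
        }

        // An ordinal-indexed array must grow from 5 to 6 entries and its
        // values must be re-aligned; deriving the index from ordinal()
        // (rather than hard-coding positions) keeps reads consistent.
        long[] quota = new long[DemoStorageType.values().length];
        quota[DemoStorageType.SSD.ordinal()] = 8192;
        System.out.println("SSD quota = "
            + quota[DemoStorageType.SSD.ordinal()]);
    }
}
```

This is why a literal like {{new long[] \{-1, -1, ssQuota * 2, -1, -1\}}} in a test encodes the enum's declaration order implicitly and has to be updated whenever a constant is inserted before the position it targets.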



> Applying NVDIMM storage media to HDFS
> -------------------------------------
>
>                 Key: HDFS-15025
>                 URL: https://issues.apache.org/jira/browse/HDFS-15025
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: datanode, hdfs
>            Reporter: YaYun Wang
>            Assignee: YaYun Wang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>         Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>          Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> simultaneously with RAM, DISK and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
