[jira] [Comment Edited] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-10-10 Thread YaYun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211600#comment-17211600
 ] 

YaYun Wang edited comment on HDFS-15025 at 10/10/20, 8:12 AM:
--

[~ayushtkn], [~liuml07], we read the relevant code and tested 
{{SetQuotaByStorageTypeOp}} using {{hdfs dfsadmin -setSpaceQuota}}. The result 
is that setting a quota by storage type will indeed be affected after an 
upgrade, because the {{ordinal()}} of {{StorageType}} is persisted when the 
quota of a directory is set.
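A minimal, self-contained sketch of the failure mode (hypothetical enums, not 
the actual HDFS classes): an ordinal persisted before the upgrade resolves to 
a different constant once a new value is inserted in the middle of the enum.
{code:java}
// Illustrative only: the enum layout before the upgrade ...
enum OldStorageType { RAM_DISK, SSD, DISK, ARCHIVE, PROVIDED }
// ... and after a new constant is inserted in the middle.
enum NewStorageType { RAM_DISK, NVDIMM, SSD, DISK, ARCHIVE, PROVIDED }

public class OrdinalDrift {
  public static void main(String[] args) {
    // An entry written by the old software stores the raw ordinal.
    int persisted = OldStorageType.SSD.ordinal();                 // 1
    // Replaying it against the new enum resolves to a different type.
    NewStorageType decoded = NewStorageType.values()[persisted];  // NVDIMM
    System.out.println(persisted + " -> " + decoded);
  }
}{code}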

We have updated the code and comments at 
[https://github.com/apache/hadoop/pull/2377|https://github.com/apache/hadoop/pull/2377]; please have a look.

 

 

 


was (Author: wangyayun):
[~ayushtkn], we read the relevant code and tested {{SetQuotaByStorageTypeOp}} 
using {{hdfs dfsadmin -setSpaceQuota}}. The result is that setting a quota by 
storage type will indeed be affected after an upgrade, because the 
{{ordinal()}} of {{StorageType}} is persisted when the quota of a directory is 
set.

We have updated the code and comments at 
[https://github.com/apache/hadoop/pull/2377|https://github.com/apache/hadoop/pull/2377]; please have a look.

 

 

 

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response speed of HDFS but also ensures the reliability of the 
> data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-10-10 Thread YaYun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211600#comment-17211600
 ] 

YaYun Wang commented on HDFS-15025:
---

[~ayushtkn], we read the relevant code and tested {{SetQuotaByStorageTypeOp}} 
using {{hdfs dfsadmin -setSpaceQuota}}. The result is that setting a quota by 
storage type will indeed be affected after an upgrade, because the 
{{ordinal()}} of {{StorageType}} is persisted when the quota of a directory is 
set.

We have updated the code and comments at 
[https://github.com/apache/hadoop/pull/2377|https://github.com/apache/hadoop/pull/2377]; please have a look.

 

 

 

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response speed of HDFS but also ensures the reliability of the 
> data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop

2020-10-09 Thread YaYun Wang (Jira)
YaYun Wang created HDFS-15624:
-

 Summary:  Fix the SetQuotaByStorageTypeOp problem after updating 
hadoop 
 Key: HDFS-15624
 URL: https://issues.apache.org/jira/browse/HDFS-15624
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: YaYun Wang


HDFS-15025 adds a new storage type, NVDIMM, which changes the ordinal() values 
of the StorageType enum. Setting a quota by storage type depends on ordinal(), 
so existing quota settings may become invalid after an upgrade.
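One upgrade-safe direction (an illustration only, not necessarily the approach 
taken in the actual patch) is to persist the storage type by name rather than 
by position, since Enum.valueOf() is unaffected by reordering:
{code:java}
import org.apache.hadoop.fs.StorageType;

// Hypothetical helper for illustration: serialize the type by name, not ordinal.
public class QuotaTypeCodec {
  // Writing: the name survives any reordering of the enum constants.
  static String encode(StorageType type) {
    return type.name();
  }

  // Reading: valueOf() resolves by name, so inserting NVDIMM does not change
  // what a previously written "SSD" decodes to.
  static StorageType decode(String name) {
    return StorageType.valueOf(name);
  }

  public static void main(String[] args) {
    System.out.println(decode(encode(StorageType.SSD))); // SSD
  }
}{code}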



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-28 Thread YaYun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17203240#comment-17203240
 ] 

YaYun Wang edited comment on HDFS-15025 at 9/29/20, 1:13 AM:
-

[~liuml07], [~ayushtkn], we have tried to make sure that {{NVDIMM}} and 
{{isRAM}} will not affect the behavior or the existing data of Hadoop, in the 
following ways:
 * *Running test cases*: we ran the test cases that exercise 
{{FsEditLogOp#SetQuotaByStorageTypeOp}}, such as {{TestWebHDFS}}, and they pass 
without failures or errors.
 * *Reviewing the classes around {{FsEditLogOp#SetQuotaByStorageTypeOp}}*, 
including {{DistributedFileSystem}}, {{DFSClient}}, {{NamenodeRPCServer}}, 
{{FSNamesystem}}, {{FSDirAttrOp}}, {{FSEditLog}}, {{FSEditLogOp}}, etc. We 
conclude that {{ordinal()}} of the {{StorageType}} enum is used flexibly and 
compatibly there, i.e. it simply follows the current {{StorageType}} values. 
That is different from {{TestRouterQuota.testStorageTypeQuota}}, where 
{{ordinal()}} is used as the index into a fixed-length array (see the excerpt 
below and the sketch after this list).

 
{code:java}
public void testStorageTypeQuota() throws Exception {
  ...

  // The first argument is an array of length 5, one slot per storage type.
  verifyTypeQuotaAndConsume(new long[] {-1, -1, ssQuota * 2, -1, -1}, null,
      usage);

  ...
}

private void verifyTypeQuotaAndConsume(long[] quota, long[] consume,
    QuotaUsage usage) {
  for (StorageType t : StorageType.values()) {
    if (quota != null) {
      // ordinal() is used as the index into the quota array.
      assertEquals(quota[t.ordinal()], usage.getTypeQuota(t));
    }
    if (consume != null) {
      assertEquals(consume[t.ordinal()], usage.getTypeConsumed(t));
    }
  }
}{code}
 * *Verifying Hadoop with {{NVDIMM}} and {{isRAM}}*: the built-in DFSIO, 
wordcount and put/get workloads all work normally, and the old storage types 
and storage policies still behave normally after the upgrade.
 * *Verifying existing data after the upgrade*: first we wrote data to Hadoop 
without the patch using DFSIO, wordcount and put/get, then upgraded Hadoop to 
the patched version. The data written by the old version can still be accessed 
and used normally.
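To make the array-index issue concrete, here is a small standalone sketch (not 
the Hadoop test itself, and the six-constant enum is hypothetical): once the 
enum has six values, a positional quota array sized for five either lines up 
with the wrong type or runs off the end.
{code:java}
public class QuotaArrayDemo {
  // Hypothetical enum after inserting NVDIMM in speed order.
  enum TypeV2 { RAM_DISK, NVDIMM, SSD, DISK, ARCHIVE, PROVIDED }

  public static void main(String[] args) {
    long[] quota = {-1, -1, 1024, -1, -1}; // still sized for the old 5-type enum
    for (TypeV2 t : TypeV2.values()) {
      try {
        // SSD's ordinal moved from 1 to 2 and DISK's from 2 to 3, so the 1024
        // entry no longer lines up with the type it was written for.
        System.out.println(t + " -> " + quota[t.ordinal()]);
      } catch (ArrayIndexOutOfBoundsException e) {
        // PROVIDED (ordinal 5) falls off the end of the 5-slot array.
        System.out.println(t + " -> out of bounds, a sixth slot is needed");
      }
    }
  }
}{code}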


was (Author: wangyayun):
[~liuml07], [~ayushtkn], we have tried to make sure that {{NVDIMM}} and 
{{isRAM}} will not affect the behavior or the existing data of Hadoop, in the 
following ways:
 # Running test cases: we ran the test cases that exercise 
{{FsEditLogOp#SetQuotaByStorageTypeOp}}, such as {{TestWebHDFS}}, and they pass 
without failures or errors.
 # Reviewing the code around {{FsEditLogOp#SetQuotaByStorageTypeOp}}, including 
the classes {{DistributedFileSystem}}, {{DFSClient}}, {{NamenodeRPCServer}}, 
{{FSNamesystem}}, {{FSDirAttrOp}}, {{FSEditLog}}, {{FSEditLogOp}}, etc. We 
conclude that {{ordinal()}} of the {{StorageType}} enum is used flexibly and 
compatibly there, i.e. it simply follows the current {{StorageType}} values. 
That is different from {{TestRouterQuota.testStorageTypeQuota}}, where 
{{ordinal()}} is used as the index into a fixed-length array.
{code:java}
public void testStorageTypeQuota() throws Exception {
  ...

  // The first argument is an array of length 5, one slot per storage type.
  verifyTypeQuotaAndConsume(new long[] {-1, -1, ssQuota * 2, -1, -1}, null,
      usage);

  ...
}

private void verifyTypeQuotaAndConsume(long[] quota, long[] consume,
    QuotaUsage usage) {
  for (StorageType t : StorageType.values()) {
    if (quota != null) {
      // ordinal() is used as the index into the quota array.
      assertEquals(quota[t.ordinal()], usage.getTypeQuota(t));
    }
    if (consume != null) {
      assertEquals(consume[t.ordinal()], usage.getTypeConsumed(t));
    }
  }
}{code}

 # Verifying Hadoop with {{NVDIMM}} and {{isRAM}}: the built-in DFSIO, 
wordcount and put/get workloads all work normally, and the old storage types 
and storage policies still behave normally after the upgrade.
 # Verifying existing data after the upgrade: first we wrote data to Hadoop 
without the patch using DFSIO, wordcount and put/get, then upgraded Hadoop to 
the patched version. The data written by the old version can still be accessed 
and used normally.

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile 

[jira] [Comment Edited] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-28 Thread YaYun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17203240#comment-17203240
 ] 

YaYun Wang edited comment on HDFS-15025 at 9/29/20, 1:08 AM:
-

[~liuml07], [~ayushtkn], we have tried to make sure that {{NVDIMM}} and 
{{isRAM}} will not affect the behavior or the existing data of Hadoop, in the 
following ways:
 # Running test cases: we ran the test cases that exercise 
{{FsEditLogOp#SetQuotaByStorageTypeOp}}, such as {{TestWebHDFS}}, and they pass 
without failures or errors.
 # Reviewing the code around {{FsEditLogOp#SetQuotaByStorageTypeOp}}, including 
the classes {{DistributedFileSystem}}, {{DFSClient}}, {{NamenodeRPCServer}}, 
{{FSNamesystem}}, {{FSDirAttrOp}}, {{FSEditLog}}, {{FSEditLogOp}}, etc. We 
conclude that {{ordinal()}} of the {{StorageType}} enum is used flexibly and 
compatibly there, i.e. it simply follows the current {{StorageType}} values. 
That is different from {{TestRouterQuota.testStorageTypeQuota}}, where 
{{ordinal()}} is used as the index into a fixed-length array.
{code:java}
public void testStorageTypeQuota() throws Exception {
  ...

  // The first argument is an array of length 5, one slot per storage type.
  verifyTypeQuotaAndConsume(new long[] {-1, -1, ssQuota * 2, -1, -1}, null,
      usage);

  ...
}

private void verifyTypeQuotaAndConsume(long[] quota, long[] consume,
    QuotaUsage usage) {
  for (StorageType t : StorageType.values()) {
    if (quota != null) {
      // ordinal() is used as the index into the quota array.
      assertEquals(quota[t.ordinal()], usage.getTypeQuota(t));
    }
    if (consume != null) {
      assertEquals(consume[t.ordinal()], usage.getTypeConsumed(t));
    }
  }
}{code}

 # Verifying Hadoop with {{NVDIMM}} and {{isRAM}}: the built-in DFSIO, 
wordcount and put/get workloads all work normally, and the old storage types 
and storage policies still behave normally after the upgrade.
 # Verifying existing data after the upgrade: first we wrote data to Hadoop 
without the patch using DFSIO, wordcount and put/get, then upgraded Hadoop to 
the patched version. The data written by the old version can still be accessed 
and used normally.


was (Author: wangyayun):
[~liuml07], [~ayushtkn], we have tried to make sure that {{NVDIMM}} and 
{{isRAM}} will not affect the behavior or the existing data of Hadoop, in the 
following ways:
 # Running test cases: we ran the test cases that exercise 
{{FsEditLogOp#SetQuotaByStorageTypeOp}}, such as {{TestWebHDFS}}, and they pass 
without failures or errors.
 # Reviewing the code around {{FsEditLogOp#SetQuotaByStorageTypeOp}}, including 
{{DistributedFileSystem}}, {{DFSClient}}, {{NamenodeRPCServer}}, 
{{FSNamesystem}}, {{FSDirAttrOp}}, {{FSEditLog}}, {{FSEditLogOp}}, etc. We 
conclude that {{ordinal()}} of the {{StorageType}} enum is used flexibly and 
compatibly there, i.e. it simply follows the current {{StorageType}} values. 
That is different from {{TestRouterQuota.testStorageTypeQuota}}, where 
{{ordinal()}} is used as the index into a fixed-length array.

{code:java}
public void testStorageTypeQuota() throws Exception {
  ...

  // The first argument is an array of length 5, one slot per storage type.
  verifyTypeQuotaAndConsume(new long[] {-1, -1, ssQuota * 2, -1, -1}, null,
      usage);

  ...
}

private void verifyTypeQuotaAndConsume(long[] quota, long[] consume,
    QuotaUsage usage) {
  for (StorageType t : StorageType.values()) {
    if (quota != null) {
      // ordinal() is used as the index into the quota array.
      assertEquals(quota[t.ordinal()], usage.getTypeQuota(t));
    }
    if (consume != null) {
      assertEquals(consume[t.ordinal()], usage.getTypeConsumed(t));
    }
  }
}{code}
 # Verifying Hadoop with {{NVDIMM}} and {{isRAM}}: the built-in DFSIO, 
wordcount and put/get workloads all work normally, and the old storage types 
and storage policies still behave normally after the upgrade.
 # Verifying existing data after the upgrade: first we wrote data to Hadoop 
without the patch using DFSIO, wordcount and put/get, then upgraded Hadoop to 
the patched version. The data written by the old version can still be accessed 
and used normally.

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is 

[jira] [Comment Edited] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-28 Thread YaYun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17203240#comment-17203240
 ] 

YaYun Wang edited comment on HDFS-15025 at 9/29/20, 1:04 AM:
-

[~liuml07], [~ayushtkn], we have tried to make sure that {{NVDIMM}} and 
{{isRAM}} will not affect the behavior or the existing data of Hadoop, in the 
following ways:
 # Running test cases: we ran the test cases that exercise 
{{FsEditLogOp#SetQuotaByStorageTypeOp}}, such as {{TestWebHDFS}}, and they pass 
without failures or errors.
 # Reviewing the code around {{FsEditLogOp#SetQuotaByStorageTypeOp}}, including 
{{DistributedFileSystem}}, {{DFSClient}}, {{NamenodeRPCServer}}, 
{{FSNamesystem}}, {{FSDirAttrOp}}, {{FSEditLog}}, {{FSEditLogOp}}, etc. We 
conclude that {{ordinal()}} of the {{StorageType}} enum is used flexibly and 
compatibly there, i.e. it simply follows the current {{StorageType}} values. 
That is different from {{TestRouterQuota.testStorageTypeQuota}}, where 
{{ordinal()}} is used as the index into a fixed-length array.

{code:java}
public void testStorageTypeQuota() throws Exception {
  ...

  // The first argument is an array of length 5, one slot per storage type.
  verifyTypeQuotaAndConsume(new long[] {-1, -1, ssQuota * 2, -1, -1}, null,
      usage);

  ...
}

private void verifyTypeQuotaAndConsume(long[] quota, long[] consume,
    QuotaUsage usage) {
  for (StorageType t : StorageType.values()) {
    if (quota != null) {
      // ordinal() is used as the index into the quota array.
      assertEquals(quota[t.ordinal()], usage.getTypeQuota(t));
    }
    if (consume != null) {
      assertEquals(consume[t.ordinal()], usage.getTypeConsumed(t));
    }
  }
}{code}
 # Verifying Hadoop with {{NVDIMM}} and {{isRAM}}: the built-in DFSIO, 
wordcount and put/get workloads all work normally, and the old storage types 
and storage policies still behave normally after the upgrade.
 # Verifying existing data after the upgrade: first we wrote data to Hadoop 
without the patch using DFSIO, wordcount and put/get, then upgraded Hadoop to 
the patched version. The data written by the old version can still be accessed 
and used normally.


was (Author: wangyayun):
[~liuml07], [~ayushtkn], we have tried to make sure that {{NVDIMM}} and 
{{isRAM}} will not affect the behavior or the existing data of Hadoop, in the 
following ways:
 # Running test cases: we ran the test cases that exercise 
{{FsEditLogOp#SetQuotaByStorageTypeOp}}, such as {{TestWebHDFS}}, and they pass 
without failures or errors.
 # Reviewing the code around {{FsEditLogOp#SetQuotaByStorageTypeOp}}, including 
{{DistributedFileSystem}}, {{DFSClient}}, {{NamenodeRPCServer}}, 
{{FSNamesystem}}, {{FSDirAttrOp}}, {{FSEditLog}}, {{FSEditLogOp}}, etc. We 
conclude that {{ordinal()}} of the {{StorageType}} enum is used flexibly there, 
i.e. it simply follows the current {{StorageType}} values. That is different 
from {{TestRouterQuota.testStorageTypeQuota}}, where {{ordinal()}} is used as 
the index into a fixed-length array.
 # Verifying Hadoop with {{NVDIMM}} and {{isRAM}}: the built-in DFSIO, 
wordcount and put/get workloads all work normally, and the old storage types 
and storage policies still behave normally.
 # Verifying existing data after the upgrade: first we wrote data to Hadoop 
without the patch using DFSIO, wordcount and put/get, then upgraded Hadoop to 
the patched version; the data written by the old version can still be accessed 
and used normally after the upgrade.

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response speed of HDFS but also ensures the reliability of the 
> data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-28 Thread YaYun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17203240#comment-17203240
 ] 

YaYun Wang commented on HDFS-15025:
---

[~liuml07], [~ayushtkn], we have tried to make sure that {{NVDIMM}} and 
{{isRAM}} will not affect the behavior or the existing data of Hadoop, in the 
following ways:
 # Running test cases: we ran the test cases that exercise 
{{FsEditLogOp#SetQuotaByStorageTypeOp}}, such as {{TestWebHDFS}}, and they pass 
without failures or errors.
 # Reviewing the code around {{FsEditLogOp#SetQuotaByStorageTypeOp}}, including 
{{DistributedFileSystem}}, {{DFSClient}}, {{NamenodeRPCServer}}, 
{{FSNamesystem}}, {{FSDirAttrOp}}, {{FSEditLog}}, {{FSEditLogOp}}, etc. We 
conclude that {{ordinal()}} of the {{StorageType}} enum is used flexibly there, 
i.e. it simply follows the current {{StorageType}} values. That is different 
from {{TestRouterQuota.testStorageTypeQuota}}, where {{ordinal()}} is used as 
the index into a fixed-length array.
 # Verifying Hadoop with {{NVDIMM}} and {{isRAM}}: the built-in DFSIO, 
wordcount and put/get workloads all work normally, and the old storage types 
and storage policies still behave normally.
 # Verifying existing data after the upgrade: first we wrote data to Hadoop 
without the patch using DFSIO, wordcount and put/get, then upgraded Hadoop to 
the patched version; the data written by the old version can still be accessed 
and used normally after the upgrade.

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response speed of HDFS but also ensures the reliability of the 
> data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-27 Thread YaYun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202755#comment-17202755
 ] 

YaYun Wang edited comment on HDFS-15025 at 9/27/20, 7:19 AM:
-

[~ayushtkn], [~liuml07], sorry, I didn't notice that before. I have checked the 
relevant code: the test failure reported in HDFS-15600 is indeed caused by the 
newly added NVDIMM, which changes the ordinals of the "StorageType" enum.

I see two possible solutions: the first is to put NVDIMM at the end of the 
storage types and adjust the comment in "StorageType"; the second is to keep 
the storage types sorted by speed. Of course, both solutions require modifying 
"TestRouterQuota", because "TestRouterQuota.testStorageTypeQuota" sets only 
five quotas, for RAM_DISK, SSD, DISK, ARCHIVE and PROVIDED, without NVDIMM. I 
prefer the second solution, i.e. keep StorageType "sorted by the speed of the 
storage types, from fast to slow" and add a quota slot for NVDIMM, such as:
{code:java}
verifyTypeQuotaAndConsume(new long[] {-1, -1, -1, ssQuota * 2, -1, -1}, null,
    usage);{code}
So, which solution do you think is better?
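For reference, a rough sketch of the two orderings (illustrative only, not the 
exact Hadoop source; the constructor arguments of the real enum are omitted):
{code:java}
// Option 1: append NVDIMM at the end, so existing ordinals stay unchanged.
enum StorageTypeOption1 { RAM_DISK, SSD, DISK, ARCHIVE, PROVIDED, NVDIMM }

// Option 2 (preferred above): keep the enum sorted by speed, which shifts the
// ordinals of SSD, DISK, ARCHIVE and PROVIDED by one and is why
// testStorageTypeQuota needs the extra -1 slot shown in the call above.
enum StorageTypeOption2 { RAM_DISK, NVDIMM, SSD, DISK, ARCHIVE, PROVIDED }{code}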

 


was (Author: wangyayun):
[~ayushtkn], [~liuml07], sorry, I didn't notice that before. I have checked the 
relevant code: the test failure reported in HDFS-15600 is indeed caused by the 
newly added NVDIMM, which changes the ordinals of the "StorageType" enum.

I see two possible solutions: the first is to put NVDIMM at the end of the 
storage types and adjust the comment in "StorageType"; the second is to keep 
the storage types sorted by speed. Of course, both solutions require modifying 
"TestRouterQuota", because "TestRouterQuota.testStorageTypeQuota" sets only 
five quotas, for RAM_DISK, SSD, DISK, ARCHIVE and PROVIDED, without NVDIMM. I 
prefer the second solution, i.e. keep StorageType "sorted by the speed of the 
storage types, from fast to slow" and add a quota slot for NVDIMM, such as:
 verifyTypeQuotaAndConsume(new long[] \{-1, -1, -1, ssQuota * 2, -1, -1}, null, 
usage);
 verifyTypeQuotaAndConsume(new long[] \{-1, -1, -1, ssQuota * 2, -1, -1}, 
null, usage);

So, which solution do you think is better?

 

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response speed of HDFS but also ensures the reliability of the 
> data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-27 Thread YaYun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202755#comment-17202755
 ] 

YaYun Wang edited comment on HDFS-15025 at 9/27/20, 7:18 AM:
-

[~ayushtkn], [~liuml07], sorry, I didn't notice that before. I have checked the 
relevant code: the test failure reported in HDFS-15600 is indeed caused by the 
newly added NVDIMM, which changes the ordinals of the "StorageType" enum.

I see two possible solutions: the first is to put NVDIMM at the end of the 
storage types and adjust the comment in "StorageType"; the second is to keep 
the storage types sorted by speed. Of course, both solutions require modifying 
"TestRouterQuota", because "TestRouterQuota.testStorageTypeQuota" sets only 
five quotas, for RAM_DISK, SSD, DISK, ARCHIVE and PROVIDED, without NVDIMM. I 
prefer the second solution, i.e. keep StorageType "sorted by the speed of the 
storage types, from fast to slow" and add a quota slot for NVDIMM, such as:
 verifyTypeQuotaAndConsume(new long[] \{-1, -1, -1, ssQuota * 2, -1, -1}, null, 
usage);
 verifyTypeQuotaAndConsume(new long[] \{-1, -1, -1, ssQuota * 2, -1, -1}, 
null, usage);

So, which solution do you think is better?

 


was (Author: wangyayun):
[~ayushtkn], [~liuml07], sorry, I didn't notice that before. I have checked the 
relevant code: the test failure reported in HDFS-15600 is indeed caused by the 
newly added NVDIMM, which changes the ordinals of the "StorageType" enum.

I see two possible solutions: the first is to put NVDIMM at the end of the 
storage types and adjust the comment in "StorageType"; the second is to keep 
the storage types sorted by speed. Of course, both solutions require modifying 
"TestRouterQuota", because "TestRouterQuota.testStorageTypeQuota" sets only 
five quotas, for RAM_DISK, SSD, DISK, ARCHIVE and PROVIDED, without NVDIMM. I 
prefer the second solution, i.e. keep StorageType "sorted by the speed of the 
storage types, from fast to slow" and add a quota slot for NVDIMM, such as:
 ??verifyTypeQuotaAndConsume(new long[] \{-1, -1, -1, ssQuota * 2, -1, -1}, 
null, usage);??
So, which solution do you think is better?

 

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response speed of HDFS but also ensures the reliability of the 
> data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-27 Thread YaYun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202755#comment-17202755
 ] 

YaYun Wang edited comment on HDFS-15025 at 9/27/20, 7:17 AM:
-

[~ayushtkn], [~liuml07], sorry, I didn't notice that before. I have checked the 
relevant code: the test failure reported in HDFS-15600 is indeed caused by the 
newly added NVDIMM, which changes the ordinals of the "StorageType" enum.

I see two possible solutions: the first is to put NVDIMM at the end of the 
storage types and adjust the comment in "StorageType"; the second is to keep 
the storage types sorted by speed. Of course, both solutions require modifying 
"TestRouterQuota", because "TestRouterQuota.testStorageTypeQuota" sets only 
five quotas, for RAM_DISK, SSD, DISK, ARCHIVE and PROVIDED, without NVDIMM. I 
prefer the second solution, i.e. keep StorageType "sorted by the speed of the 
storage types, from fast to slow" and add a quota slot for NVDIMM, such as:
 ??verifyTypeQuotaAndConsume(new long[] \{-1, -1, -1, ssQuota * 2, -1, -1}, 
null, usage);??
So, which solution do you think is better?

 


was (Author: wangyayun):
[~ayushtkn], [~liuml07], sorry, I didn't notice that before. I have checked the 
relevant code: the test failure reported in HDFS-15600 is indeed caused by the 
newly added NVDIMM, which changes the ordinals of the "StorageType" enum.

I see two possible solutions: the first is to put NVDIMM at the end of the 
storage types and adjust the comment in "StorageType"; the second is to keep 
the storage types sorted by speed. Of course, both solutions require modifying 
"TestRouterQuota", because "TestRouterQuota.testStorageTypeQuota" sets only 
five quotas, for RAM_DISK, SSD, DISK, ARCHIVE and PROVIDED, without NVDIMM. I 
prefer the second solution, i.e. keep StorageType "sorted by the speed of the 
storage types, from fast to slow" and add a quota slot for NVDIMM, such as:
 verifyTypeQuotaAndConsume(new long[] \{-1, -1, -1, ssQuota * 2, -1, -1}, 
null, usage);
So, which solution do you think is better?

 

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response speed of HDFS but also ensures the reliability of the 
> data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-27 Thread YaYun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202756#comment-17202756
 ] 

YaYun Wang commented on HDFS-15025:
---

[~liuml07], [~ayushtkn], should I fix this under HDFS-15600, or is it OK to 
reopen the current issue, HDFS-15025?

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response speed of HDFS but also ensures the reliability of the 
> data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-27 Thread YaYun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202755#comment-17202755
 ] 

YaYun Wang commented on HDFS-15025:
---

[~ayushtkn], [~liuml07], sorry, I didn't notice that before. I have checked the 
relevant code: the test failure reported in HDFS-15600 is indeed caused by the 
newly added NVDIMM, which changes the ordinals of the "StorageType" enum.

I see two possible solutions: the first is to put NVDIMM at the end of the 
storage types and adjust the comment in "StorageType"; the second is to keep 
the storage types sorted by speed. Of course, both solutions require modifying 
"TestRouterQuota", because "TestRouterQuota.testStorageTypeQuota" sets only 
five quotas, for RAM_DISK, SSD, DISK, ARCHIVE and PROVIDED, without NVDIMM. I 
prefer the second solution, i.e. keep StorageType "sorted by the speed of the 
storage types, from fast to slow" and add a quota slot for NVDIMM, such as:
 verifyTypeQuotaAndConsume(new long[] \{-1, -1, -1, ssQuota * 2, -1, -1}, 
null, usage);
So, which solution do you think is better?

 

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response speed of HDFS but also ensures the reliability of the 
> data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org