[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16803862#comment-16803862
 ] 

Feilong He commented on HDFS-14355:
-----------------------------------

Thanks [~rakeshr] so much for giving me these valuable comments.

 
{quote}How about adding an API to interface 
{{MappableBlockLoader#isTransientCache()}} to avoid checks specific to PMem. It 
can return specific flag value to differentiate NVMe/DRAM based cache.
{quote}
It's a good suggestion. In the new patch, I will add a similar method 
#isNonVolatileCache to the interface. Based on its boolean return value, we can 
differentiate the two kinds of cache.
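A minimal sketch of what such a capability flag could look like (the interface and class names follow the discussion above, but the exact signatures are my assumption, not the patch code):

```java
// Hypothetical sketch of the proposed flag; the real MappableBlockLoader
// interface in HDFS has more methods than shown here.
interface MappableBlockLoader {
  /**
   * @return true if this loader caches replicas on non-volatile
   *         memory (pmem), false for a volatile DRAM-backed cache.
   */
  boolean isNonVolatileCache();
}

// DRAM-backed loader: cached contents do not survive a restart.
class MemoryMappableBlockLoader implements MappableBlockLoader {
  @Override
  public boolean isNonVolatileCache() { return false; }
}

// Pmem-backed loader: cached contents are persistent.
class PmemMappableBlockLoader implements MappableBlockLoader {
  @Override
  public boolean isNonVolatileCache() { return true; }
}
```

Callers can then branch on the flag instead of checking the loader's concrete type.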
{quote}I'd like to avoid type casting. It won't work with another Pmem 
implementation, right?
{quote}
Indeed, using a type cast, as the #getReplicaCachePath method currently does, is 
not a good approach. In the new patch, I will make #getReplicaCachePath an API in 
the MappableBlockLoader interface, so the type cast can be avoided.
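As a rough illustration of the cast-avoidance (all names here are hypothetical, not the actual patch code):

```java
// Hypothetical sketch: once getReplicaCachePath is part of the loader
// interface, callers no longer need an instanceof check plus a cast.
interface BlockCacheLoader {
  /**
   * @return the path of the cache file backing the replica, or null
   *         when the replica is cached in DRAM and has no file path.
   */
  String getReplicaCachePath(long blockId);
}

class DramCacheLoader implements BlockCacheLoader {
  @Override
  public String getReplicaCachePath(long blockId) { return null; }
}

class PmemCacheLoader implements BlockCacheLoader {
  @Override
  public String getReplicaCachePath(long blockId) {
    return "/mnt/pmem0/" + blockId; // illustrative layout only
  }
}
```

Instead of `((PmemCacheLoader) loader).getReplicaCachePath(id)`, any `BlockCacheLoader` can be asked directly and a null result means a DRAM cache.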
{quote}Below type casting can be replaced with HDFS-14393 interface.
{quote}
Lazy Persist Write also uses the same code for cache byte management. For the 
read cache, a block can be cached either to DRAM or to pmem, while for the Lazy 
Persist write cache, only DRAM is supported. So in our current implementation, 
the DRAM cache and the pmem cache actually coexist; more precisely, the DRAM 
write cache and the pmem read cache can be enabled simultaneously. Therefore I 
lean toward keeping getPmemCacheUsed & getPmemCacheCapacity alongside 
getCacheUsed and getCacheCapacity. To avoid the issue you pointed out, I am 
considering instantiating PmemVolumeManager in FsDatasetCache, so that 
FsDatasetCache keeps a reference to PmemVolumeManager.
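The ownership change could look roughly like this (a sketch under my assumptions about field and method names, not the actual patch):

```java
// Hypothetical sketch: the dataset cache owns the pmem volume manager,
// keeping the DRAM counters and the pmem counters side by side.
class PmemVolumeManagerSketch {
  private long usedBytes;
  private final long capacityBytes;

  PmemVolumeManagerSketch(long capacityBytes) {
    this.capacityBytes = capacityBytes;
  }

  void reserve(long bytes) { usedBytes += bytes; }
  long getUsedBytes() { return usedBytes; }
  long getCapacityBytes() { return capacityBytes; }
}

class FsDatasetCacheSketch {
  private long dramUsedBytes; // lazy-persist writes and DRAM reads
  private final PmemVolumeManagerSketch pmemVolumeManager;

  FsDatasetCacheSketch(long pmemCapacity) {
    this.pmemVolumeManager = new PmemVolumeManagerSketch(pmemCapacity);
  }

  long getCacheUsed() { return dramUsedBytes; }
  long getPmemCacheUsed() { return pmemVolumeManager.getUsedBytes(); }
  long getPmemCacheCapacity() { return pmemVolumeManager.getCapacityBytes(); }

  void cacheToDram(long bytes) { dramUsedBytes += bytes; }
  void cacheToPmem(long bytes) { pmemVolumeManager.reserve(bytes); }
}
```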
{quote}{{FsDatasetUtil#deleteMappedFile}} - try-catch is not required, can we 
do like,
{quote}
I just noticed that, and I will update the patch.
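The reviewer's snippet is not quoted here, but the general shape, letting the IOException from java.nio.file propagate rather than catching it locally, might look like this (the method body and message wording are my assumptions):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hedged sketch of deleteMappedFile without a local try-catch: the
// IOException thrown by Files.deleteIfExists simply propagates to
// the caller.
class MappedFileUtil {
  static void deleteMappedFile(String filePath) throws IOException {
    if (filePath == null) {
      throw new IOException("The filePath should not be null!");
    }
    boolean deleted = Files.deleteIfExists(Paths.get(filePath));
    if (!deleted) {
      throw new IOException("Failed to delete the mapped file: " + filePath);
    }
  }
}
```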
{quote}Why can't we avoid {{LocalReplica}} changes and read directly from Util 
like below,
{quote}
Great insight. It is unnecessary to change LocalReplica then. I will update this 
piece of code in our new patch.
{quote}As the class {{PmemVolumeManager}} itself represents {{Pmem}} so its 
good to remove this extra keyword from the methods and entities from this class 
- PmemUsedBytesCount, getPmemCacheUsed, getPmemCacheCapacity etc..
{quote}
Another good suggestion. It is indeed unnecessary to repeat the Pmem keyword in 
these class/method names. I will update the code accordingly.
{quote}Please avoid unchecked conversion and we can do like,
{quote}
Yes, the return type should be checked, as you pointed out.
{quote}Add exception message in PmemVolumeManager#verifyIfValidPmemVolume
{quote}
I will do that so that people get more details for debugging.
{quote}Here {{IOException}} clause is not required, please remove it. We can 
add it later, if needed.
{quote}
Yes, it is not required in current implementation. I will remove it.
{quote}Can we include block id into the log message, that would improve 
debugging.
{quote}
Yes, including the block id in the log message will help debugging. I will 
update the code.
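For example, a cache-failure message carrying the block id could be built like this (the message wording and helper name are my assumptions):

```java
// Hedged sketch: building a log message that carries the block id, so a
// cache failure can be traced back to a specific replica.
class CacheLogging {
  static String cacheFailureMessage(long blockId, String reason) {
    return "Failed to cache block [id=" + blockId + "]: " + reason;
  }
}
```

With slf4j this would typically be written as `LOG.warn("Failed to cache block [id={}]: {}", blockId, reason)`.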

 

Thanks again!

 

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -----------------------------------------------------------------
>
>                 Key: HDFS-14355
>                 URL: https://issues.apache.org/jira/browse/HDFS-14355
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: caching, datanode
>            Reporter: Feilong He
>            Assignee: Feilong He
>            Priority: Major
>         Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch, HDFS-14355.004.patch, 
> HDFS-14355.005.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 
> isn't available or convenient in some environments or platforms.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
