[ https://issues.apache.org/jira/browse/HADOOP-18740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17728181#comment-17728181 ]

ASF GitHub Bot commented on HADOOP-18740:
-----------------------------------------

virajjasani commented on code in PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#discussion_r1212472590


##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##########
@@ -268,12 +310,15 @@ public void close() throws IOException {
     int numFilesDeleted = 0;
 
     for (Entry entry : blocks.values()) {
+      entry.takeLock(Entry.LockType.WRITE);

Review Comment:
   > also, L303: should closed be atomic?
   
   +1 to this suggestion; let me create a separate patch with HADOOP-18756 to 
better track it.
   
   > good: no race condition in close
   > bad: the usual
   
   Sounds reasonable, let me try setting a timeout.
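
   As a rough sketch of the atomic `closed` guard suggested above (to be 
tracked separately under HADOOP-18756) — class and method names here are 
illustrative, not the actual SingleFilePerBlockCache API:

   ```java
   import java.util.concurrent.atomic.AtomicBoolean;

   // Hypothetical sketch: an AtomicBoolean makes close() idempotent and
   // safe under concurrent callers. Names are illustrative only.
   public class AtomicCloseSketch {
     private final AtomicBoolean closed = new AtomicBoolean(false);

     // compareAndSet guarantees exactly one thread wins the transition
     // from open to closed, even if close() is invoked concurrently.
     public boolean close() {
       if (!closed.compareAndSet(false, true)) {
         return false; // already closed by another thread
       }
       // ... release resources exactly once ...
       return true;
     }

     public static void main(String[] args) {
       AtomicCloseSketch c = new AtomicCloseSketch();
       System.out.println(c.close());
       System.out.println(c.close());
     }
   }
   ```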





> s3a prefetch cache blocks should be accessed by RW locks
> --------------------------------------------------------
>
>                 Key: HADOOP-18740
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18740
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> In order to implement LRU- or LFU-based cache removal policies for s3a 
> prefetched cache blocks, every cache reader thread must acquire the read 
> lock, and the cache file removal mechanism (fs close or cache eviction) 
> must likewise acquire the write lock, before accessing the files.
> Since we maintain the block entries in an in-memory map, we can introduce 
> a read-write lock per cache file entry; we don't need a coarse-grained 
> lock shared by all entries.
>  
> This is a prerequisite to HADOOP-18291.
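
The per-entry locking described above could look roughly like the following 
sketch. This is illustrative only: the class and method names (CacheEntry, 
readBlock, closeAll) are assumptions, not the actual SingleFilePerBlockCache 
code, which instead exposes takeLock(Entry.LockType.WRITE) as seen in the diff.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: one ReentrantReadWriteLock per cached block entry,
// so readers of different blocks never contend with each other.
public class PerEntryLockSketch {

  static final class CacheEntry {
    final int blockNumber;
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    CacheEntry(int blockNumber) {
      this.blockNumber = blockNumber;
    }
  }

  private final Map<Integer, CacheEntry> blocks = new ConcurrentHashMap<>();

  CacheEntry getOrCreate(int blockNumber) {
    return blocks.computeIfAbsent(blockNumber, CacheEntry::new);
  }

  // Readers hold only their entry's read lock; concurrent reads of the
  // same block also proceed in parallel.
  int readBlock(int blockNumber) {
    CacheEntry entry = getOrCreate(blockNumber);
    entry.lock.readLock().lock();
    try {
      return entry.blockNumber; // stand-in for reading the cache file
    } finally {
      entry.lock.readLock().unlock();
    }
  }

  // Removal (fs close or cache eviction) takes each entry's write lock,
  // waiting out in-flight readers of that one block only.
  int closeAll() {
    int deleted = 0;
    for (CacheEntry entry : blocks.values()) {
      entry.lock.writeLock().lock();
      try {
        deleted++; // stand-in for deleting the cache file
      } finally {
        entry.lock.writeLock().unlock();
      }
    }
    blocks.clear();
    return deleted;
  }

  public static void main(String[] args) {
    PerEntryLockSketch cache = new PerEntryLockSketch();
    cache.readBlock(0);
    cache.readBlock(1);
    System.out.println(cache.closeAll());
  }
}
```

The fine-grained locks matter for the eviction policies mentioned above: an 
LRU/LFU evictor can write-lock and delete one cold block while readers keep 
streaming the remaining blocks unblocked.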



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
