[GitHub] [hadoop] virajjasani commented on a diff in pull request #5675: HADOOP-18740. S3A prefetch cache blocks should be accessed by RW locks

2023-06-01 Thread via GitHub


virajjasani commented on code in PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#discussion_r1213340095


##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##########
@@ -127,11 +128,33 @@ void takeLock(LockType lockType) {
  */
 void releaseLock(LockType lockType) {
   if (LockType.READ == lockType) {
-    this.lock.readLock().unlock();
+    lock.readLock().unlock();
   } else if (LockType.WRITE == lockType) {
-    this.lock.writeLock().unlock();
+    lock.writeLock().unlock();
   }
 }
+
+/**
+ * Try to take the read or write lock within the given timeout.
+ *
+ * @param lockType type of the lock.
+ * @param timeout the time to wait for the given lock.
+ * @param unit the time unit of the timeout argument.
+ * @return true if the lock of the given lock type was acquired.
+ */
+boolean takeLock(LockType lockType, long timeout, TimeUnit unit) {

Review Comment:
   sounds good, done
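
   A minimal sketch of what the timed variant could look like, assuming
   Entry wraps a ReentrantReadWriteLock in a `lock` field; the interrupt
   handling below is an illustration, not necessarily what the patch does:

   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.locks.ReentrantReadWriteLock;

   final class Entry {
     enum LockType { READ, WRITE }

     // Assumption: the per-block lock guarding the cached block file.
     private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

     /**
      * Try to take the read or write lock within the given timeout.
      * Sketch only: returns false on timeout or interruption.
      */
     boolean takeLock(LockType lockType, long timeout, TimeUnit unit) {
       try {
         if (LockType.READ == lockType) {
           // tryLock waits at most the given timeout for the lock.
           return lock.readLock().tryLock(timeout, unit);
         } else if (LockType.WRITE == lockType) {
           return lock.writeLock().tryLock(timeout, unit);
         }
       } catch (InterruptedException e) {
         // Restore the interrupt flag and report failure to the caller.
         Thread.currentThread().interrupt();
       }
       return false;
     }
   }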




[GitHub] [hadoop] virajjasani commented on a diff in pull request #5675: HADOOP-18740. S3A prefetch cache blocks should be accessed by RW locks

2023-06-01 Thread via GitHub


virajjasani commented on code in PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#discussion_r1213323650


##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##########
@@ -310,7 +333,12 @@ public void close() throws IOException {
 int numFilesDeleted = 0;
 
 for (Entry entry : blocks.values()) {
-  entry.takeLock(Entry.LockType.WRITE);
+  boolean lockAcquired = entry.takeLock(Entry.LockType.WRITE, 5, TimeUnit.SECONDS);

Review Comment:
   my bad, let me fix this real quick
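
   For reference, a hedged sketch of close() consuming the boolean result
   instead of ignoring it; the `blocks` map, the `path` field on Entry, and
   the skip-on-timeout behaviour are assumptions for illustration:

   // Bound the wait for each block's write lock so close() cannot hang
   // behind a stuck reader; skip blocks whose lock cannot be taken in time.
   public void close() throws IOException {
     int numFilesDeleted = 0;
     for (Entry entry : blocks.values()) { // assumption: blockNumber -> Entry map
       boolean lockAcquired =
           entry.takeLock(Entry.LockType.WRITE, 5, TimeUnit.SECONDS);
       if (!lockAcquired) {
         continue; // assumption: a real patch would likely log the timeout
       }
       try {
         Files.deleteIfExists(entry.path); // assumption: Entry exposes its file path
         numFilesDeleted++;
       } finally {
         entry.releaseLock(Entry.LockType.WRITE);
       }
     }
   }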




[GitHub] [hadoop] virajjasani commented on a diff in pull request #5675: HADOOP-18740. S3A prefetch cache blocks should be accessed by RW locks

2023-05-31 Thread via GitHub


virajjasani commented on code in PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#discussion_r1212472590


##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##########
@@ -268,12 +310,15 @@ public void close() throws IOException {
 int numFilesDeleted = 0;
 
 for (Entry entry : blocks.values()) {
+  entry.takeLock(Entry.LockType.WRITE);

Review Comment:
   > also, L303: should closed be atomic?
   
   +1 to this suggestion, let me create a separate patch with HADOOP-18756 to better track it.
   
   > good: no race condition in close
   > bad: the usual
   
   sounds reasonable, let me try setting timeout
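
   As an aside on the atomic `closed` idea deferred to HADOOP-18756, a
   minimal sketch; the CacheSketch class name and the deleteCacheFiles()
   helper are hypothetical stand-ins:

   import java.io.Closeable;
   import java.io.IOException;
   import java.util.concurrent.atomic.AtomicBoolean;

   final class CacheSketch implements Closeable {
     // An AtomicBoolean makes close() idempotent without taking a lock
     // just to check the flag.
     private final AtomicBoolean closed = new AtomicBoolean(false);

     @Override
     public void close() throws IOException {
       // compareAndSet succeeds for exactly one caller; later callers
       // observe the cache as already closed and return immediately.
       if (!closed.compareAndSet(false, true)) {
         return;
       }
       deleteCacheFiles();
     }

     private void deleteCacheFiles() throws IOException {
       // hypothetical helper wrapping the per-block cleanup loop shown above
     }
   }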


