pranavsaxena-microsoft commented on code in PR #5117:
URL: https://github.com/apache/hadoop/pull/5117#discussion_r1033198624


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java:
##########
@@ -454,31 +597,86 @@ ReadBuffer getNextBlockToRead() throws InterruptedException {
    */
   void doneReading(final ReadBuffer buffer, final ReadBufferStatus result, final int bytesActuallyRead) {
     if (LOGGER.isTraceEnabled()) {
-      LOGGER.trace("ReadBufferWorker completed read file {} for offset {} 
outcome {} bytes {}",
-          buffer.getStream().getPath(),  buffer.getOffset(), result, 
bytesActuallyRead);
-    }
-    synchronized (this) {
-      // If this buffer has already been purged during
-      // close of InputStream then we don't update the lists.
-      if (inProgressList.contains(buffer)) {
-        inProgressList.remove(buffer);
-        if (result == ReadBufferStatus.AVAILABLE && bytesActuallyRead > 0) {
-          buffer.setStatus(ReadBufferStatus.AVAILABLE);
-          buffer.setLength(bytesActuallyRead);
-        } else {
-          freeList.push(buffer.getBufferindex());
-          // buffer will be deleted as per the eviction policy.
+      LOGGER.trace("ReadBufferWorker completed file {} for offset {} bytes {}; 
{}",
+          buffer.getStream().getPath(),  buffer.getOffset(), 
bytesActuallyRead, buffer);
+    }
+    // decrement counter.
+    buffer.prefetchFinished();
+
+    try {
+      synchronized (this) {
+        // remove from the list
+        if (!inProgressList.remove(buffer)) {
+          // this is a sign of inconsistent state, so a major problem, such as
+          // double invocation of this method with the same buffer.
+          String message =
+              String.format("Read completed from an operation not declared as 
in progress %s",
+                  buffer);
+          LOGGER.warn(message);
+          if (buffer.hasIndexedBuffer()) {
+            // release the buffer (which may raise an exception)
+            placeBufferOnFreeList("read not in progress", buffer);
+          }
+          // report the failure
+          throw new IllegalStateException(message);
         }
-        // completed list also contains FAILED read buffers
-        // for sending exception message to clients.
+
+        // should the read buffer be added to the completed list?
+        boolean addCompleted = true;
+        // flag to indicate buffer should be freed
+        boolean shouldFreeBuffer = false;
+        // and the reason (for logging)
+        String freeBufferReason = "";
+
         buffer.setStatus(result);
         buffer.setTimeStamp(currentTimeMillis());
-        completedReadList.add(buffer);
+        // did the read return any data?
+        if (result == ReadBufferStatus.AVAILABLE
+            && bytesActuallyRead > 0) {
+          // successful read of data; update buffer state.
+          buffer.setLength(bytesActuallyRead);
+        } else {
+          // read failed or there was no data; the buffer can be returned to the free list.
+          shouldFreeBuffer = true;
+          freeBufferReason = "failed read";
+          // completed list also contains FAILED read buffers
+          // for sending exception message to clients.
+          // NOTE: checks for closed state may update this.
+          addCompleted = result == ReadBufferStatus.READ_FAILED;

Review Comment:
   Should we avoid adding the buffer to completedList? It will always be removed from that list via the oldFailedBuffers removal anyway. Kindly suggest if there is a reason for adding it. Thanks.
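
   To make the question concrete, here is a minimal, self-contained sketch (not the PR's code; the class and field names below are hypothetical) of the alternative being asked about: only successful reads enter the completed list, while failed or empty reads return their buffer straight to the free pool and are never tracked.

   ```java
   import java.util.ArrayDeque;
   import java.util.ArrayList;
   import java.util.Deque;
   import java.util.List;

   // Illustrative sketch only. It models skipping the completed-list entry for
   // failed/empty reads, so no later oldFailedBuffers sweep is needed for them.
   public class SketchBufferTracker {
     enum Status { AVAILABLE, READ_FAILED }

     private final List<String> completed = new ArrayList<>();
     private final Deque<Integer> freeIndexes = new ArrayDeque<>();

     // Only successful reads with data are tracked as completed; anything else
     // pushes its buffer index back onto the free pool immediately.
     synchronized void doneReading(String bufferName, int bufferIndex,
                                   Status result, int bytesRead) {
       if (result == Status.AVAILABLE && bytesRead > 0) {
         completed.add(bufferName);
       } else {
         freeIndexes.push(bufferIndex);
       }
     }

     public static void main(String[] args) {
       SketchBufferTracker tracker = new SketchBufferTracker();
       tracker.doneReading("buf-ok", 0, Status.AVAILABLE, 4096);
       tracker.doneReading("buf-failed", 1, Status.READ_FAILED, 0);
       System.out.println("completed=" + tracker.completed
           + " free=" + tracker.freeIndexes);
     }
   }
   ```

   The trade-off noted in the PR's inline comment is that keeping READ_FAILED buffers in completedReadList lets the stream surface the exception to clients, which this sketch would not do.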



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
