mukund-thakur commented on code in PR #999:
URL: https://github.com/apache/parquet-mr/pull/999#discussion_r993792933


##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##########
@@ -1093,10 +1099,38 @@ private ColumnChunkPageReadStore internalReadFilteredRowGroup(BlockMetaData bloc
         }
       }
     }
-    // actually read all the chunks
+    // Vectored IO up.
+
+    List<FileRange> ranges = new ArrayList<>();
     for (ConsecutivePartList consecutiveChunks : allParts) {
-      consecutiveChunks.readAll(f, builder);
+      ranges.add(FileRange.createFileRange(consecutiveChunks.offset, (int) consecutiveChunks.length));
+    }
+    LOG.warn("Doing vectored IO for ranges {}", ranges);
+    f.readVectored(ranges, ByteBuffer::allocate);

Review Comment:
   Well, I just went through the code of ConsecutivePartList#readAll() again. Yes, it breaks the big range into smaller buffers, but it allocates all of them in one go, so won't the memory issue still persist?
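   
   To make the allocation pattern concrete, here is a simplified sketch of what I mean (illustrative only, not the exact readAll() code; `maxAllocationSize` stands in for the configured max allocation size):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Simplified sketch: the consecutive range is split into buffers of at most
// maxAllocationSize, but every buffer is allocated up front before any bytes
// are read, so the peak memory is still the full length of the range.
final class ReadAllAllocationSketch {

  static List<ByteBuffer> allocateAllAtOnce(long length, int maxAllocationSize) {
    int fullAllocations = Math.toIntExact(length / maxAllocationSize);
    int lastAllocationSize = Math.toIntExact(length % maxAllocationSize);
    List<ByteBuffer> buffers = new ArrayList<>(fullAllocations + (lastAllocationSize > 0 ? 1 : 0));
    for (int i = 0; i < fullAllocations; i++) {
      buffers.add(ByteBuffer.allocate(maxAllocationSize)); // all buffers live at the same time
    }
    if (lastAllocationSize > 0) {
      buffers.add(ByteBuffer.allocate(lastAllocationSize));
    }
    return buffers; // total footprint == length, regardless of the chunking
  }

  public static void main(String[] args) {
    // A 128 MiB consecutive part chunked into 8 MiB buffers: 16 buffers, but all
    // 128 MiB are resident before the first read; a multi-GiB row group behaves the same.
    List<ByteBuffer> buffers = allocateAllAtOnce(128L << 20, 8 << 20);
    long total = buffers.stream().mapToLong(ByteBuffer::capacity).sum();
    System.out.println(buffers.size() + " buffers, " + (total >> 20) + " MiB allocated up front");
  }
}
```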
   
   Also, if I make the change inside readAll(), as in the commented-out readAllVectored() I have already added, we won't really be reducing the number of seek operations, so we won't get the real benefit of vectored IO. It would just mean: there is a big range to be fetched, we break it into smaller ranges and fetch them in parallel. (This is similar to PARQUET-2149, which you proposed and for which you have already uploaded a PR :) ).
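   
   And roughly what I mean by doing it at the readAll() level, as a sketch (my own illustrative code, not the commented readAllVectored() from the PR; it assumes Hadoop's FileRange/readVectored vectored IO API, and the helper name and splitting logic are mine):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;

// Per-part vectored read: each consecutive part issues its own readVectored()
// over its sub-ranges, so the fetches inside one part run in parallel, but the
// stream only ever sees one part at a time and cannot coalesce seeks across parts.
final class PerPartVectoredReadSketch {

  static List<ByteBuffer> readPartVectored(FSDataInputStream in, long offset, long length,
                                           int maxAllocationSize)
      throws IOException, InterruptedException, ExecutionException {
    // Split one consecutive part into sub-ranges of at most maxAllocationSize.
    List<FileRange> subRanges = new ArrayList<>();
    for (long pos = offset; pos < offset + length; pos += maxAllocationSize) {
      int rangeLength = (int) Math.min(maxAllocationSize, offset + length - pos);
      subRanges.add(FileRange.createFileRange(pos, rangeLength));
    }
    // One vectored call per part: parallel fetches within the part only.
    in.readVectored(subRanges, ByteBuffer::allocate);
    List<ByteBuffer> buffers = new ArrayList<>(subRanges.size());
    for (FileRange range : subRanges) {
      buffers.add(range.getData().get()); // wait for each sub-range to complete
    }
    return buffers;
  }
}
```

   With the top-level readVectored() over all parts (as in the diff above), the stream implementation sees the whole set of ranges at once and can merge nearby ones, which is where the real seek reduction comes from.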
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
