gudladona opened a new pull request, #18241:
URL: https://github.com/apache/hudi/pull/18241
### Summary and Changelog
This commit introduces significant performance optimizations to the
`HoodieParquetFileBinaryCopier` and `HoodieParquetBinaryCopyBase` classes,
addressing critical bottlenecks in the binary copy clustering strategy.
Key Improvements:
1. **Lazy File Opening**: Refactored `HoodieParquetFileBinaryCopier` to
open input files lazily one by one instead of opening all readers upfront. This
eliminates the initial latency spike and reduces memory pressure when
clustering a large number of files.
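The lazy-opening idea can be sketched as follows. This is a minimal illustration, not the actual `HoodieParquetFileBinaryCopier` code: the `FileReader` interface, `open` helper, and event log are hypothetical stand-ins that make the one-reader-at-a-time behavior observable.

```java
import java.util.ArrayList;
import java.util.List;

public class LazyOpenSketch {
    // Records open/close events so the lazy behavior is observable.
    static List<String> events = new ArrayList<>();

    interface FileReader extends AutoCloseable {
        String read();
        void close();
    }

    static FileReader open(String path) {
        events.add("open:" + path);
        return new FileReader() {
            public String read() { return "data-of-" + path; }
            public void close() { events.add("close:" + path); }
        };
    }

    // An eager version would open every reader upfront; the refactored loop
    // keeps at most one reader live at any moment.
    static int copyAll(List<String> paths) {
        int copied = 0;
        for (String p : paths) {
            try (FileReader r = open(p)) { // opened only when its turn comes
                r.read();
                copied++;
            }
        }
        return copied;
    }
}
```

Because each reader is closed before the next is opened, both the startup latency spike and the peak number of open handles drop from O(files) to O(1).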
2. **Whole-File In-Memory Processing**: Implemented a "Read Whole File"
strategy for files smaller than 2GB. Instead of performing thousands of small
S3 GET requests for the footer, bloom filters, column indexes, and row groups,
the entire file is read into memory once. This drastically reduces I/O latency.
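A rough sketch of the whole-file strategy, under illustrative assumptions (the method name, the streaming fallback, and the exact threshold handling are hypothetical, though the 2GB limit comes from this PR):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class WholeFileReadSketch {
    static final long IN_MEMORY_LIMIT = 2L * 1024 * 1024 * 1024; // 2 GB threshold

    // Drains the source in one sequential pass so later footer, bloom filter,
    // column index, and row group reads hit the in-memory copy instead of
    // issuing many small remote GETs.
    static byte[] readFully(InputStream in, long length) throws IOException {
        if (length >= IN_MEMORY_LIMIT) {
            throw new IllegalArgumentException("too large: use the streaming path");
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream((int) length);
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            out.write(chunk, 0, n);
        }
        return out.toByteArray();
    }
}
```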
3. **Double Buffering & Prefetching**: Introduced a double-buffering
mechanism with asynchronous prefetching. While the main thread processes the
current file (CPU-bound), a background thread fetches the next file (I/O-bound)
into a second buffer. This overlaps computation and I/O, maximizing throughput.
- Buffers are reused across files to eliminate large object allocation
churn and reduce GC pressure.
- Buffers dynamically resize with padding to accommodate varying file
sizes.
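The overlap of I/O and CPU described above can be sketched with a single prefetch thread. This is a simplified model, not the PR's buffer-reuse implementation: the `pipeline` method and its function parameters are hypothetical, and real buffers would be reused rather than reallocated.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public class DoubleBufferSketch {
    // While the caller processes file i (CPU-bound), a background task
    // fetches file i + 1 (I/O-bound), so the two phases overlap.
    static <T> int pipeline(List<String> files,
                            Function<String, T> fetch,
                            Function<T, Integer> process) throws Exception {
        if (files.isEmpty()) {
            return 0;
        }
        ExecutorService io = Executors.newSingleThreadExecutor();
        try {
            int total = 0;
            Future<T> next = io.submit(() -> fetch.apply(files.get(0)));
            for (int i = 0; i < files.size(); i++) {
                T current = next.get(); // wait for the prefetched buffer
                if (i + 1 < files.size()) {
                    final String f = files.get(i + 1);
                    next = io.submit(() -> fetch.apply(f)); // overlaps with process
                }
                total += process.apply(current);
            }
            return total;
        } finally {
            io.shutdown();
        }
    }
}
```

With two buffers the steady-state cost per file is roughly max(fetch, process) instead of fetch + process.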
4. **Bulk Row Group Reading**: Updated `HoodieParquetBinaryCopyBase` to
read entire row groups into memory with a single contiguous read operation,
replacing per-column chunk reads. This further minimizes S3 calls for larger
files that exceed the in-memory processing limit.
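The bulk read boils down to computing one contiguous byte span that covers every column chunk in the row group. A minimal sketch, with a hypothetical `Chunk` descriptor standing in for Parquet's column chunk metadata:

```java
public class BulkRowGroupSketch {
    // Hypothetical chunk descriptor: offset of the first page and total size.
    static final class Chunk {
        final long start;
        final long size;
        Chunk(long start, long size) { this.start = start; this.size = size; }
    }

    // Instead of one remote read per column chunk, compute the contiguous
    // span covering the whole row group and issue a single read for it.
    static long[] rowGroupSpan(Chunk[] chunks) {
        long start = Long.MAX_VALUE;
        long end = Long.MIN_VALUE;
        for (Chunk c : chunks) {
            start = Math.min(start, c.start);
            end = Math.max(end, c.start + c.size);
        }
        return new long[]{start, end - start}; // offset and length of one bulk read
    }
}
```

Per-column slices are then served out of that one buffer, so a row group with N columns costs one S3 call instead of N.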
5. **Optimized Seekable Stream**: Introduced
`ByteArraySeekableInputStream`, a specialized in-memory implementation of
`SeekableInputStream` that supports efficient zero-copy windowing and bulk byte
transfer (`read(ByteBuffer)`, `read(byte[], int, int)`).
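The shape of such a stream can be sketched as below. The class and its window handling are illustrative, not the PR's actual `ByteArraySeekableInputStream`; the key points are that windowing only adjusts an offset (no array copy) and that bulk reads transfer whole ranges.

```java
import java.nio.ByteBuffer;

// Minimal in-memory seekable stream: a (data, offset, length) window over a
// shared byte array, so sub-views require no copying.
public class InMemorySeekableSketch {
    private final byte[] data;
    private final int offset; // window start, enabling zero-copy windowing
    private final int length;
    private int pos;

    InMemorySeekableSketch(byte[] data, int offset, int length) {
        this.data = data;
        this.offset = offset;
        this.length = length;
    }

    long getPos() { return pos; }

    void seek(long newPos) { pos = (int) newPos; }

    // Bulk transfer into a ByteBuffer without per-byte reads.
    int read(ByteBuffer buf) {
        int n = Math.min(buf.remaining(), length - pos);
        if (n <= 0) return -1;
        buf.put(data, offset + pos, n);
        pos += n;
        return n;
    }

    int read(byte[] b, int off, int len) {
        int n = Math.min(len, length - pos);
        if (n <= 0) return -1;
        System.arraycopy(data, offset + pos, b, off, n);
        pos += n;
        return n;
    }
}
```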
6. **Correctness Fixes**:
- Fixed row group start position calculation to correctly account for
dictionary pages that precede data pages.
- Ensured proper resource cleanup and buffer release in `close()`.
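The dictionary-page fix amounts to the following rule. This is a schematic restatement (parameter names are illustrative, mirroring the fields Parquet column chunk metadata exposes), not the patched Hudi code:

```java
public class RowGroupStartSketch {
    // In Parquet metadata a column chunk records a first-data-page offset and,
    // when a dictionary is used, a dictionary-page offset that precedes it.
    static long chunkStart(long firstDataPageOffset,
                           long dictionaryPageOffset,
                           boolean hasDictionary) {
        // The fix: when a dictionary page exists it comes before the data
        // pages, so the chunk (and thus the row group) starts there, not at
        // the first data page.
        return hasDictionary
                ? Math.min(dictionaryPageOffset, firstDataPageOffset)
                : firstDataPageOffset;
    }
}
```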
### Impact
Somewhat higher memory usage when clustering large files, in exchange for
significantly faster clustering performance when running in this mode.
### Risk Level (medium)
### Contributor's checklist
- [ ] Read through [contributor's
guide](https://hudi.apache.org/contribute/how-to-contribute)
- [ ] Enough context is provided in the sections above
- [ ] Adequate tests were added if applicable
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]