mapleFU commented on code in PR #36967:
URL: https://github.com/apache/arrow/pull/36967#discussion_r1280347546
##########
cpp/src/arrow/dataset/file_parquet.cc:
##########
@@ -779,6 +787,28 @@ Result<std::vector<int>> ParquetFileFragment::FilterRowGroups(
   return row_groups;
 }
+Result<std::vector<int>> ParquetFileFragment::FilterRangeRowGroups(
+    int64_t start_offset, int64_t length) {
+  std::vector<int> row_groups;
+  for (int row_group : *row_groups_) {
+    auto rg_metadata = metadata_->RowGroup(row_group);
+    std::shared_ptr<parquet::ColumnChunkMetaData> cc0 = rg_metadata->ColumnChunk(0);
+    int64_t r_start = cc0->data_page_offset();
+    if (cc0->has_dictionary_page() && r_start > cc0->dictionary_page_offset()) {
+      r_start = cc0->dictionary_page_offset();
+    }
+    int64_t r_bytes = 0L;
+    for (int col_id = 0; col_id < rg_metadata->num_columns(); col_id++) {
+      r_bytes += rg_metadata->ColumnChunk(col_id)->total_compressed_size();
Review Comment:
`ColumnChunkMetaData::total_compressed_size` doesn't include the column chunk's
thrift metadata, which may sit at the end of the ColumnChunk. That can make the
computed `length` slightly smaller than expected. Personally, I'd suggest
computing this a different way.
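For illustration, here is one possible alternative (a rough sketch, not
necessarily what this PR should adopt): pick row groups by the midpoint of
their byte range, the way Spark-style file splits assign blocks. With that
rule, a small undercount from the trailing thrift metadata can't push a row
group out of (or into) a neighboring range, because each midpoint lands in
exactly one `[start_offset, start_offset + length)` window. The free-function
shape and the name `RowGroupInRange` are hypothetical:

```cpp
// Sketch only: decide row-group membership by midpoint rather than exact length.
// `RowGroupInRange` is a hypothetical helper, not part of this PR.
#include <memory>

#include "parquet/metadata.h"

bool RowGroupInRange(const parquet::RowGroupMetaData& rg_metadata,
                     int64_t start_offset, int64_t length) {
  // Start of the row group: the first column's first page (the dictionary
  // page if it precedes the data page).
  std::unique_ptr<parquet::ColumnChunkMetaData> cc0 = rg_metadata.ColumnChunk(0);
  int64_t rg_start = cc0->data_page_offset();
  if (cc0->has_dictionary_page() && cc0->dictionary_page_offset() < rg_start) {
    rg_start = cc0->dictionary_page_offset();
  }
  // Approximate size: sum of compressed column chunk sizes. This still
  // undercounts the trailing thrift metadata, but with the midpoint rule a
  // small error can no longer move a row group into a neighboring range.
  int64_t rg_bytes = 0;
  for (int i = 0; i < rg_metadata.num_columns(); ++i) {
    rg_bytes += rg_metadata.ColumnChunk(i)->total_compressed_size();
  }
  const int64_t midpoint = rg_start + rg_bytes / 2;
  return start_offset <= midpoint && midpoint < start_offset + length;
}
```

Callers scanning disjoint ranges of the same file would then neither duplicate
nor drop a row group, even though `rg_bytes` is still only approximate.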
##########
cpp/src/arrow/dataset/file_parquet.cc:
##########
@@ -523,6 +523,10 @@ Result<RecordBatchGenerator> ParquetFileFormat::ScanBatchesAsync(
       ARROW_ASSIGN_OR_RAISE(row_groups, parquet_fragment->FilterRowGroups(options->filter));
       pre_filtered = true;
       if (row_groups.empty()) return MakeEmptyGenerator<std::shared_ptr<RecordBatch>>();
+      if (options->start_offset != kDefaultStartOffset) {
+        ARROW_ASSIGN_OR_RAISE(row_groups,
Review Comment:
This looks good to me now!