mapleFU commented on code in PR #36967:
URL: https://github.com/apache/arrow/pull/36967#discussion_r1280197113


##########
cpp/src/arrow/dataset/file_parquet.cc:
##########
@@ -779,6 +787,28 @@ Result<std::vector<int>> ParquetFileFragment::FilterRowGroups(
   return row_groups;
 }
 
+Result<std::vector<int>> ParquetFileFragment::FilterRangeRowGroups(
+    int64_t start_offset, int64_t length) {
+  std::vector<int> row_groups;
+  for (int row_group : *row_groups_) {
+    auto rg_metadata = metadata_->RowGroup(row_group);
+  std::shared_ptr<parquet::ColumnChunkMetaData> cc0 = rg_metadata->ColumnChunk(0);
+    int64_t r_start = cc0->data_page_offset();
+    if (cc0->has_dictionary_page() && r_start > cc0->dictionary_page_offset()) {
+      r_start = cc0->dictionary_page_offset();
+    }
+    int64_t r_bytes = 0L;
+    for (int col_id = 0; col_id < rg_metadata->num_columns(); col_id++) {
+      r_bytes += rg_metadata->ColumnChunk(col_id)->total_compressed_size();

Review Comment:
   Nice, but this is commented as "Visible for testing", so I'm not sure it can handle all cases...
   @wgtmac Do you have any idea how other systems split row groups by `[offset, length)`?
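
   For comparison, one common convention in other readers (e.g. parquet-mr's file-split logic, which Spark follows) is to assign each row group to the split `[offset, offset + length)` that contains its byte midpoint, so every row group lands in exactly one split. A minimal sketch of that convention, with a hypothetical `RowGroupRange` struct standing in for the Parquet metadata:

   ```cpp
   #include <cassert>
   #include <cstdint>
   #include <vector>

   // Hypothetical per-row-group byte range, derived from the metadata as in the
   // patch above: `start` is min(dictionary_page_offset, data_page_offset) of
   // the first column chunk; `bytes` is the sum of total_compressed_size over
   // all column chunks.
   struct RowGroupRange {
     int64_t start;
     int64_t bytes;
   };

   // Select the row groups whose byte midpoint falls inside
   // [offset, offset + length). Because each midpoint lies in exactly one
   // split, non-overlapping splits partition the row groups cleanly.
   std::vector<int> FilterRangeRowGroupsByMidpoint(
       const std::vector<RowGroupRange>& row_groups, int64_t offset,
       int64_t length) {
     std::vector<int> selected;
     for (int i = 0; i < static_cast<int>(row_groups.size()); ++i) {
       int64_t midpoint = row_groups[i].start + row_groups[i].bytes / 2;
       if (midpoint >= offset && midpoint < offset + length) {
         selected.push_back(i);
       }
     }
     return selected;
   }
   ```

   This is only a sketch of the midpoint heuristic, not the behavior of the patch itself; the patch's exact boundary rule for partially overlapping row groups is what the question above is about.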



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
