tustvold commented on code in PR #2115:
URL: https://github.com/apache/arrow-rs/pull/2115#discussion_r926089131
##########
parquet/src/arrow/async_reader.rs:
##########
@@ -375,16 +404,27 @@ where
let column = row_group_metadata.column(idx);
let (start, length) = column.byte_range();
-            let data = input
-                .get_bytes(start as usize..(start + length) as usize)
-                .await?;
+            fetch_ranges
+                .push(start as usize..(start + length) as usize);
- *chunk = Some(InMemoryColumnChunk {
- num_values: column.num_values(),
- compression: column.compression(),
- physical_type: column.column_type(),
- data,
- });
+ update_chunks.push((chunk, column));
+ }
+
+ for (idx, data) in input
+ .get_byte_ranges(fetch_ranges)
+ .await?
+ .into_iter()
+ .enumerate()
Review Comment:
`.zip(update_chunks.iter_mut())` might be cleaner?
##########
parquet/src/arrow/async_reader.rs:
##########
@@ -366,6 +388,13 @@ where
        let mut column_chunks = vec![None; row_group_metadata.columns().len()];
+        let mut fetch_ranges = Vec::with_capacity(column_chunks.len());
+
+ let mut update_chunks: Vec<(
Review Comment:
My gut says that it would be cleaner to just iterate through the
`column_chunks` and use `filter_map` to extract the ranges, pass this to
`AsyncFileReader`. Convert the result to an iterator and then iterate the
`column_chunks` again, popping the next element from the iterator for each
included column.
Not a big deal though
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]