Ted-Jiang commented on code in PR #4105:
URL: https://github.com/apache/arrow-datafusion/pull/4105#discussion_r1013849132
##########
datafusion/core/src/physical_plan/file_format/parquet.rs:
##########
@@ -1695,6 +1709,35 @@ mod tests {
Ok(())
}
+ #[tokio::test]
+ async fn parquet_page_index_exec_metrics() {
+ let c1: ArrayRef = Arc::new(Int32Array::from(vec![Some(1), None, Some(2)]));
+ let c2: ArrayRef = Arc::new(Int32Array::from(vec![Some(3), Some(4), Some(5)]));
+ let batch1 = create_batch(vec![("int", c1.clone())]);
+ let batch2 = create_batch(vec![("int", c2.clone())]);
+
+ let filter = col("int").eq(lit(4_i32));
+
+ let rt = round_trip(vec![batch1, batch2], None, None, Some(filter), false, true).await;
+
+ let metrics = rt.parquet_exec.metrics().unwrap();
+
+ // todo fix this https://github.com/apache/arrow-rs/issues/2941 release change to row limit.
Review Comment:
https://github.com/apache/arrow-rs/pull/2942/files#r1013838557
I think there is a bug in `should_add_data_page`:
`self.encoder.num_values()` is always zero, so no matter how
`data_pagesize_limit` and `write_batch_size` are set, the writer always
produces one page per column chunk.
🤔 I think someone mentioned this before.
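The suspected failure mode can be sketched in plain Rust. All names below
(`Encoder`, `ColumnWriter`, the field names) are hypothetical stand-ins for
illustration, not the actual parquet-rs internals: the point is only that if
the encoder's value count is stuck at zero, the limit checks in a
`should_add_data_page`-style function never fire, so the whole column chunk
stays a single page.

```rust
// Hypothetical sketch of the page-splitting decision described above.
// Not the real arrow-rs code; names are illustrative only.

struct Encoder {
    num_values: usize,      // suspected to always report 0 in the bug
    estimated_bytes: usize, // bytes buffered so far
}

impl Encoder {
    fn num_values(&self) -> usize {
        self.num_values
    }
}

struct ColumnWriter {
    encoder: Encoder,
    write_batch_size: usize,
    data_pagesize_limit: usize,
}

impl ColumnWriter {
    // Mirrors the shape of the check: flush a page once either the
    // buffered value count or the buffered byte size exceeds its limit.
    // If `num_values()` is stuck at 0, the first condition can never
    // trigger, regardless of how `write_batch_size` is configured.
    fn should_add_data_page(&self) -> bool {
        self.encoder.num_values() >= self.write_batch_size
            || self.encoder.estimated_bytes >= self.data_pagesize_limit
    }
}

fn main() {
    // Encoder reporting 0 values and staying under the byte limit:
    // the page is never split, matching the "1 page per column chunk"
    // symptom in the comment.
    let stuck = ColumnWriter {
        encoder: Encoder { num_values: 0, estimated_bytes: 10_000 },
        write_batch_size: 1024,
        data_pagesize_limit: 1_000_000,
    };
    println!("{}", stuck.should_add_data_page()); // prints "false"
}
```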
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]