mbutrovich opened a new pull request, #4128:
URL: https://github.com/apache/datafusion-comet/pull/4128

   ## Which issue does this PR close?
   
   Closes #4002.
   
   ## Rationale for this change
   
   Iceberg native scans report zero for the task-level input metrics
(bytesRead, recordsRead) in the Spark UI because iceberg-rust reads files
entirely in Rust, bypassing Hadoop's Java I/O counters. Upstream iceberg-rust
PR apache/iceberg-rust#2349 added `ScanMetrics` with a live `bytes_read`
counter. This PR plumbs that counter through to Spark's task-level input
metrics.
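   
   For context, a rough sketch of the shape this PR consumes. The field and
type layout below is an assumption based on the description here, not
iceberg-rust's verbatim API: the PR text confirms only that `ScanResult`
bundles the stream with a cloneable `ScanMetrics` handle backed by an
`AtomicU64`.
   
   ```rust
   use std::sync::{atomic::AtomicU64, Arc};
   
   // Hypothetical mirror of the upstream types: the reader increments
   // `bytes_read` as it fetches file data, and any clone of the metrics
   // handle can observe the running total while the scan is still live.
   struct ScanMetrics {
       bytes_read: AtomicU64,
   }
   
   struct ScanResult<S> {
       stream: S,                 // the Arrow record-batch stream
       metrics: Arc<ScanMetrics>, // cloneable handle that outlives each poll
   }
   ```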
   
   ## What changes are included in this PR?
   
   - Bump iceberg-rust dep from `a2f067d` to `1ad4bfd` (adds 
`ScanResult`/`ScanMetrics`)
   - `ArrowReader::read()` now returns `ScanResult`: extract stream and clone 
metrics handle
   - Add a `bytes_scanned` Count metric to `IcebergScanMetrics` and bridge
iceberg-rust's live `AtomicU64` counter into the DataFusion metric tree on
each `poll_next` via delta tracking (see the sketch after this list)
   - Add `bytes_scanned` SQLMetric to `CometIcebergNativeScanExec`
   - Override `CometExecRDD.compute()` to call `reportScanInputMetrics` (same 
pattern as the Parquet path)
   - Remove stale `configured_scheme` field from `OpenDalStorageFactory::S3` 
(upstream API change)
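   
   A minimal sketch of the delta-tracking bridge, assuming a stream wrapper
around the reader's output. `Count` is DataFusion's monotonic counter metric;
the names `MeteredStream`, `live_bytes`, and `last_reported` are hypothetical,
not the identifiers used in this PR.
   
   ```rust
   use std::pin::Pin;
   use std::sync::atomic::{AtomicU64, Ordering};
   use std::sync::Arc;
   use std::task::{Context, Poll};
   
   use datafusion::physical_plan::metrics::Count;
   use futures::Stream;
   
   /// Wraps the scan's batch stream and mirrors the reader's live byte
   /// counter into a DataFusion `Count` metric on every poll.
   struct MeteredStream<S> {
       inner: S,
       live_bytes: Arc<AtomicU64>, // counter the reader increments as it fetches data
       bytes_scanned: Count,       // metric surfaced to Spark as `bytes_scanned`
       last_reported: u64,         // portion of `live_bytes` already added to the metric
   }
   
   impl<S: Stream + Unpin> Stream for MeteredStream<S> {
       type Item = S::Item;
   
       fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
           let this = self.get_mut();
           let poll = Pin::new(&mut this.inner).poll_next(cx);
   
           // Delta tracking: the upstream counter only grows, so add just the
           // growth since the previous poll rather than the absolute total.
           let current = this.live_bytes.load(Ordering::Relaxed);
           let delta = current.saturating_sub(this.last_reported);
           if delta > 0 {
               this.bytes_scanned.add(delta as usize);
               this.last_reported = current;
           }
           poll
       }
   }
   ```
   
   Reporting deltas rather than the absolute value keeps `Count` correct even
though it can only be incremented, and reading the atomic on each poll means
the metric stays close to live without extra synchronization.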
   
   ## How are these changes tested?
   
   - Added a `bytes_scanned > 0` assertion to the existing "verify all Iceberg
planning metrics" test
   - New test "task-level inputMetrics.bytesRead is populated for Iceberg
native scan" uses a `SparkListener` to verify `bytesRead > 0` and
`recordsRead == 10000`, and cross-checks that the SQL-level `bytes_scanned`
matches the task-level `bytesRead`

