mbutrovich commented on code in PR #3349:
URL: https://github.com/apache/datafusion-comet/pull/3349#discussion_r2772130071


##########
spark/src/main/scala/org/apache/spark/sql/comet/CometIcebergNativeScanExec.scala:
##########
@@ -95,17 +216,26 @@ case class CometIcebergNativeScanExec(
     }
   }
 
-  private val capturedMetricValues: Seq[MetricValue] = {
-    originalPlan.metrics
-      .filterNot { case (name, _) =>
-        // Filter out metrics that are now runtime metrics incremented on the native side
-        name == "numOutputRows" || name == "numDeletes" || name == "numSplits"
-      }
-      .map { case (name, metric) =>
-        val mappedType = mapMetricType(name, metric.metricType)
-        MetricValue(name, metric.value, mappedType)
-      }
-      .toSeq
+  @transient private lazy val capturedMetricValues: Seq[MetricValue] = {
+    // Guard against null originalPlan (from doCanonicalize)
+    if (originalPlan == null) {
+      Seq.empty
+    } else {
+    // Force serializedPartitionData evaluation first - this triggers serializePartitions, which
+    // accesses inputRDD, which triggers Iceberg planning and populates metrics
+      val _ = serializedPartitionData

Review Comment:
   I added some comments to explain why things are ordered the way they are in terms of control flow. LMK if you still think it should be adjusted.
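
   The ordering concern can be illustrated with a small, self-contained sketch (hypothetical names, not Comet's actual API): one `lazy val` populates a metrics map as a side effect of "planning", and a second `lazy val` forces the first before reading the map, mirroring the `val _ = serializedPartitionData` line in the diff.

   ```scala
   object LazyOrderingSketch {
     import scala.collection.mutable

     // Stands in for SQLMetrics that get populated during planning.
     val metrics: mutable.Map[String, Long] = mutable.Map.empty

     // Stands in for serializedPartitionData: evaluating it "plans" the scan
     // and populates metrics as a side effect.
     lazy val serializedPartitionData: Array[Byte] = {
       metrics("numSplits") = 4L // side effect of planning
       Array.fill(8)(0.toByte)
     }

     // Stands in for capturedMetricValues: it must force the lazy val above
     // first, otherwise it would observe an empty metrics map.
     lazy val capturedMetricValues: Seq[(String, Long)] = {
       val _ = serializedPartitionData // force planning before reading metrics
       metrics.toSeq
     }

     def main(args: Array[String]): Unit =
       println(capturedMetricValues)
   }
   ```

   The `val _ = ...` discard is just a way to trigger the other `lazy val`'s initializer for its side effect; making both vals `lazy` keeps the work off the canonicalization path while still guaranteeing the ordering at first access.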



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

