cloud-fan commented on a change in pull request #31451:
URL: https://github.com/apache/spark/pull/31451#discussion_r604572473



##########
File path: 
sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/PartitionReader.java
##########
@@ -51,7 +51,8 @@
   T get();
 
   /**
-   * Returns an array of custom task metrics. By default it returns empty array.
+   * Returns an array of custom task metrics. By default it returns empty array. Note that it is
+   * not recommended to put heavy logic in this method as it may affect reading performance.

Review comment:
       Normally, this method should be very cheap, as it just returns the current metric values that are already being tracked. The implementation may track the metrics per-row or per-batch, but that's out of our control.
   
   That said, even if we call this method only at the end of the partition read, the perf overhead can still be big if the data source tracks the metrics per-row under the hood.
   
   @wypoon does that answer your concern?
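   To illustrate the "cheap getter, per-row tracking" split being discussed, here is a minimal sketch. `TaskMetric` and `CountingReader` are simplified hypothetical stand-ins, not Spark's actual `CustomTaskMetric` and `PartitionReader` interfaces; the point is only that the per-row bookkeeping lives in `next()`, so the metrics getter just snapshots an already-maintained counter.
   
   ```java
   // Simplified stand-in for Spark's CustomTaskMetric (hypothetical, not the real API).
   interface TaskMetric {
       String name();
       long value();
   }
   
   // Sketch of a partition reader that does its per-row metric bookkeeping in
   // next(), so currentMetricsValues() stays O(1): it only snapshots a counter.
   class CountingReader {
       private final int totalRows;
       private int pos = -1;
       private long rowsRead = 0;   // tracked incrementally, one add per row
   
       CountingReader(int totalRows) { this.totalRows = totalRows; }
   
       // Advance to the next row; the cheap per-row tracking happens here.
       boolean next() {
           if (pos + 1 >= totalRows) return false;
           pos++;
           rowsRead++;
           return true;
       }
   
       int get() { return pos; }
   
       // Cheap by construction: no scanning or aggregation, just a snapshot
       // of the value that next() has already been maintaining.
       TaskMetric[] currentMetricsValues() {
           final long snapshot = rowsRead;
           return new TaskMetric[] {
               new TaskMetric() {
                   public String name() { return "rowsRead"; }
                   public long value() { return snapshot; }
               }
           };
       }
   }
   ```
   
   Here a call to `currentMetricsValues()` costs one small allocation regardless of how many rows were read; the per-row cost (`rowsRead++`) is the part the data source controls, which is the overhead discussed above.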




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
