PHILO-HE commented on code in PR #10900:
URL: https://github.com/apache/incubator-gluten/pull/10900#discussion_r2447034497


##########
gluten-iceberg/src/main/scala/org/apache/iceberg/spark/source/IcebergWriteUtil.scala:
##########
@@ -27,8 +28,8 @@ import org.apache.iceberg.types.Types.{ListType, MapType}
 
 object IcebergWriteUtil {
 
-  def isDataWrite(write: Write): Boolean = {
-    write.isInstanceOf[SparkWrite]
+  def supportsWrite(write: Write): Boolean = {
+    SparkReflectionUtil.isInstanceOfClassName(write, "org.apache.iceberg.spark.source.SparkWrite")

Review Comment:
   I see. The Iceberg dependency is in provided scope. Thanks for the clarification.
   
   Here are some quick thoughts. Please check if they make sense.
   
   1. I'm assuming these Iceberg/Paimon rules can only work when the related dependency is available at runtime. If so, can we check for that before injecting the corresponding rules? E.g., if some Iceberg class is not available at runtime, we do not inject the related rules. Then a ClassNotFoundException cannot occur in the rules' internal code, since their execution is always skipped. This would also help improve plan transformation efficiency.
   2. If we continue with the proposed changes, then, considering the reflection overhead, can we just compare the fully qualified class name string, as the Paimon scan validation does?
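   A minimal sketch of both suggestions, assuming the class names and object names below (the rule-injection guard and its helper are hypothetical placeholders, not the PR's actual code):
   
   ```scala
   // Suggestion 1: probe for an Iceberg class before injecting rules, so the
   // rules are simply never registered when the dependency is absent at runtime.
   object IcebergRuleInjection {
     // Returns true if the named class can be loaded; `initialize = false`
     // avoids running static initializers during the probe.
     def classAvailable(className: String): Boolean =
       try {
         Class.forName(className, false, Thread.currentThread().getContextClassLoader)
         true
       } catch {
         case _: ClassNotFoundException => false
       }

     // Hypothetical guard a rule injector could consult before registration.
     def shouldInjectIcebergRules: Boolean =
       classAvailable("org.apache.iceberg.spark.source.SparkWrite")
   }

   // Suggestion 2: compare the fully qualified class name string directly,
   // avoiding per-call reflection overhead (similar in spirit to the Paimon
   // scan validation mentioned above).
   object IcebergWriteUtilSketch {
     def supportsWrite(write: AnyRef): Boolean =
       write.getClass.getName == "org.apache.iceberg.spark.source.SparkWrite"
   }
   ```
   
   Note the string comparison only matches the exact class, not subclasses; if subclass matching is needed, the availability probe plus an `isInstance` check against the loaded `Class` would be closer to the reflection-based variant.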



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

