Github user icexelloss commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21650#discussion_r205859891
  
    --- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFs.scala
 ---
    @@ -94,36 +95,61 @@ object ExtractPythonUDFFromAggregate extends 
Rule[LogicalPlan] {
      */
     object ExtractPythonUDFs extends Rule[SparkPlan] with PredicateHelper {
     
    -  private def hasPythonUDF(e: Expression): Boolean = {
    +  private case class EvalTypeHolder(private var evalType: Int = -1) {
    --- End diff ---
    
    I see... You use a var and a nested function definition to remove the need 
for a holder object. 
    
    IMHO I usually find nested function definitions, and functions that refer to 
variables outside their definition scope, hard to read, but that could be my 
personal preference. 
    
    Another thing I like about the current impl is that the `EvalTypeHolder` 
class ensures its value is never changed once it's set, so I think that's more 
robust.
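    For reference, a minimal sketch of what such a set-once holder might look 
like (this is illustrative, not the actual PR code; the `set`/`get` method 
names and the sentinel `-1` are assumptions based on the signature shown in 
the diff):
    
    ```scala
    // Hypothetical sketch: a holder whose eval type can be set at most once.
    // A second set with a different value fails fast instead of silently
    // overwriting, which is the robustness property discussed above.
    case class EvalTypeHolder(private var evalType: Int = -1) {
      def set(newEvalType: Int): Unit = {
        require(evalType == -1 || evalType == newEvalType,
          s"Eval type already set to $evalType, cannot change to $newEvalType")
        evalType = newEvalType
      }
    
      def get: Int = {
        require(evalType != -1, "Eval type is not set")
        evalType
      }
    }
    ```
    
    With this shape, repeated `set` calls with the same value are harmless, 
while a conflicting value throws immediately.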
    
    That being said, I am ok with your suggestion too if you insist or if 
@BryanCutler also prefers it.
    



---
