GitHub user zhzhan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16068#discussion_r91026433
  
    --- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveUDFSuite.scala ---
    @@ -487,6 +488,52 @@ class HiveUDFSuite extends QueryTest with TestHiveSingleton with SQLTestUtils {
         assert(count4 == 1)
         sql("DROP TABLE parquet_tmp")
       }
    +
    +  test("Hive Stateful UDF") {
    +    withUserDefinedFunction("statefulUDF" -> true, "statelessUDF" -> true) 
{
    +      sql(s"CREATE TEMPORARY FUNCTION statefulUDF AS 
'${classOf[StatefulUDF].getName}'")
    +      sql(s"CREATE TEMPORARY FUNCTION statelessUDF AS 
'${classOf[StatelessUDF].getName}'")
    +      withTempView("inputTable") {
    +        val testData = spark.sparkContext.parallelize(
    +          (0 until 10) map (x => IntegerCaseClass(1)), 2).toDF()
    +        testData.createOrReplaceTempView("inputTable")
    +        // Distribute all rows to one partition (all rows have the same 
content),
    +        // and expected Max(s) is 10 as statefulUDF returns the sequence 
number starting from 1.
    +        checkAnswer(
    +          sql(
    +            """
    +            |SELECT MAX(s) FROM
    +            |  (SELECT statefulUDF() as s FROM
    +            |    (SELECT i from inputTable DISTRIBUTE by i) a
    +            |    ) b
    +          """.stripMargin),
    +          Row(10))
    +
    +        // Expected Max(s) is 5, as there are 2 partitions with 5 rows 
each, and statefulUDF
    +        // returns the sequence number of the rows in the partition 
starting from 1.
    +        checkAnswer(
    +          sql(
    +            """
    +              |SELECT MAX(s) FROM
    +              |  (SELECT statefulUDF() as s FROM
    +              |    (SELECT i from inputTable) a
    +              |    ) b
    +            """.stripMargin),
    +          Row(5))
    +
    +        // Expected Max(s) is 1, as stateless UDF is deterministic and 
replaced by constant 1.
    --- End diff --
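
    For context, the two UDFs this test exercises would look roughly like the sketch below. It follows the standard Hive row-sequence UDF pattern (UDF, @UDFType, and LongWritable are the Hive/Hadoop APIs); the actual class definitions live elsewhere in this PR and may differ in detail:

        import org.apache.hadoop.hive.ql.exec.UDF
        import org.apache.hadoop.hive.ql.udf.UDFType
        import org.apache.hadoop.io.LongWritable

        // Stateful: Hive evaluates a stateful UDF once per row, so the counter
        // yields 1, 2, 3, ... within each partition.
        @UDFType(stateful = true)
        class StatefulUDF extends UDF {
          private val result = new LongWritable(0)
          def evaluate(): LongWritable = {
            result.set(result.get() + 1)
            result
          }
        }

        // Stateless: same body, but without stateful = true the UDF counts as
        // deterministic, so the optimizer is free to constant-fold it.
        class StatelessUDF extends UDF {
          private val result = new LongWritable(0)
          def evaluate(): LongWritable = {
            result.set(result.get() + 1)
            result
          }
        }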
    
    StatelessUDF is foldable:

        override def foldable: Boolean = isUDFDeterministic && children.forall(_.foldable)
    
    The ConstantFolding optimizer rule then replaces it with a constant:

        case e if e.foldable => Literal.create(e.eval(EmptyRow), e.dataType)
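
    (isUDFDeterministic in HiveSimpleUDF is derived from the Hive @UDFType annotation. The sketch below approximates the change this PR makes, so that a stateful UDF is treated as non-deterministic and therefore never foldable; the exact body is an approximation, not a verbatim quote:)

        private[this] def isUDFDeterministic = {
          val udfType = function.getClass.getAnnotation(classOf[UDFType])
          // A stateful UDF must be evaluated once per row, so it only counts
          // as deterministic (and thus foldable) when it is also stateless.
          udfType != null && udfType.deterministic() && !udfType.stateful()
        }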
    
    Here is the output of explain(true):
    == Parsed Logical Plan ==
    'Project [unresolvedalias('MAX('s), None)]
    +- 'SubqueryAlias b
       +- 'Project ['statelessUDF() AS s#39]
          +- 'SubqueryAlias a
             +- 'RepartitionByExpression ['i]
                +- 'Project ['i]
                   +- 'UnresolvedRelation `inputTable`
    
    == Analyzed Logical Plan ==
    max(s): bigint
    Aggregate [max(s#39L) AS max(s)#46L]
    +- SubqueryAlias b
       +- Project [HiveSimpleUDF#org.apache.spark.sql.hive.execution.StatelessUDF() AS s#39L]
          +- SubqueryAlias a
             +- RepartitionByExpression [i#4]
                +- Project [i#4]
                   +- SubqueryAlias inputtable
                      +- SerializeFromObject [assertnotnull(assertnotnull(input[0, org.apache.spark.sql.hive.execution.IntegerCaseClass, true], top level Product input object), - root class: "org.apache.spark.sql.hive.execution.IntegerCaseClass").i AS i#4]
                         +- ExternalRDD [obj#3]
    
    == Optimized Logical Plan ==
    Aggregate [max(s#39L) AS max(s)#46L]
    +- Project [1 AS s#39L]
       +- RepartitionByExpression [i#4]
          +- SerializeFromObject [assertnotnull(assertnotnull(input[0, org.apache.spark.sql.hive.execution.IntegerCaseClass, true], top level Product input object), - root class: "org.apache.spark.sql.hive.execution.IntegerCaseClass").i AS i#4]
             +- ExternalRDD [obj#3]
    
    == Physical Plan ==
    *HashAggregate(keys=[], functions=[max(s#39L)], output=[max(s)#46L])
    +- Exchange SinglePartition
       +- *HashAggregate(keys=[], functions=[partial_max(s#39L)], output=[max#48L])
          +- *Project [1 AS s#39L]
             +- Exchange hashpartitioning(i#4, 5)
                +- *SerializeFromObject [assertnotnull(assertnotnull(input[0, org.apache.spark.sql.hive.execution.IntegerCaseClass, true], top level Product input object), - root class: "org.apache.spark.sql.hive.execution.IntegerCaseClass").i AS i#4]
                   +- Scan ExternalRDDScan[obj#3]
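
    One way to confirm the folding programmatically, using standard Catalyst/Spark APIs (the table name matches the test above):

        import org.apache.spark.sql.catalyst.expressions.Literal

        // After optimization, a Literal should appear where statelessUDF was.
        val optimized = sql("SELECT statelessUDF() AS s FROM inputTable").queryExecution.optimizedPlan
        assert(optimized.expressions.exists(_.collectFirst { case l: Literal => l }.isDefined))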


