Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/22144
  
    @tgravescs please quote my full comment instead of part of it.
    
    > After all, this is a bug and a regression from previous releases, like 
other 1000 we've fixed before.
    
    The point I was making there is that this issue is not one of those that HAVE TO 
block a release, like a correctness issue. I immediately listed the reasons 
afterward for why I don't think it's a blocker.
    
    > hive compatibility is not that important to Spark at this point
    
    I'm sorry if this worries you. It's true that recent development has focused more 
on Spark itself than on Hive compatibility, but that does not apply to the existing 
Hive compatibility features in Spark; we should still maintain them.
    
    BTW, I removed the `supportPartial` flag because no aggregate functions in 
Spark need it (including the adapted Hive UDAF). The problem lies in how the Hive 
UDAF is adapted, which was introduced by 
https://issues.apache.org/jira/browse/SPARK-18186
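
    For context, "partial aggregation" means an aggregate can combine intermediate 
buffers produced independently on different partitions, instead of needing all input 
rows in one place. Below is a minimal, hypothetical sketch of that contract; it is 
not Spark's actual aggregate-function or Hive UDAF adapter interfaces, just an 
illustration of what supporting the partial step requires:
    
    ```scala
    // Hypothetical, simplified contract (not Spark's real API): each task builds an
    // intermediate buffer, buffers from different tasks must be mergeable, and the
    // final result is produced from the merged buffer.
    trait PartialAggregate[BUF, OUT] {
      def createBuffer(): BUF                  // per-partition state
      def update(buffer: BUF, value: Int): BUF // consume one input value
      def merge(left: BUF, right: BUF): BUF    // combine partial results (the "partial" step)
      def eval(buffer: BUF): OUT               // produce the final result
    }

    // Trivial example: average, whose buffer is (sum, count).
    object Avg extends PartialAggregate[(Long, Long), Double] {
      def createBuffer(): (Long, Long) = (0L, 0L)
      def update(b: (Long, Long), v: Int): (Long, Long) = (b._1 + v, b._2 + 1)
      def merge(l: (Long, Long), r: (Long, Long)): (Long, Long) = (l._1 + r._1, l._2 + r._2)
      def eval(b: (Long, Long)): Double = b._1.toDouble / b._2
    }

    object Demo extends App {
      // Simulate two partitions aggregated independently, then merged.
      val p1 = Seq(1, 2, 3).foldLeft(Avg.createBuffer())(Avg.update)
      val p2 = Seq(4, 5).foldLeft(Avg.createBuffer())(Avg.update)
      println(Avg.eval(Avg.merge(p1, p2))) // 3.0
    }
    ```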
