linhongliu-db commented on a change in pull request #31286:
URL: https://github.com/apache/spark/pull/31286#discussion_r565819800



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
##########
@@ -1763,6 +1763,21 @@ class Analyzer(override val catalogManager: CatalogManager)
     def expandStarExpression(expr: Expression, child: LogicalPlan): Expression = {
       expr.transformUp {
         case f1: UnresolvedFunction if containsStar(f1.arguments) =>
+          // SPECIAL CASE: We want to block count(table.*) because in Spark, count(table.*) will
+          // be expanded while count(*) will be converted to count(1). They will produce different
+          // results and confuse users if there are any null values. For count(t1.*, t2.*), it is

Review comment:
   Thanks @maropu for reviewing.
   
   `select count(t.*, t.*) from values (1, null) t(a, b)` will output 0.
   
   I'm fine with blocking `count(t1.*, t2.*)` as well, but since Spark already allows other similar cases that other databases don't support (and that don't follow ANSI), it isn't harmful to keep `count(t1.*, t2.*)` as one more such case. After all, introducing an unnecessary behavior change (blocking `count(t1.*, t2.*)`) doesn't benefit users.
   Similar usages:
   `count(col_a, col_b)` - counting multiple columns is not supported by PostgreSQL or Oracle; MySQL only supports it with DISTINCT.
   `count(struct_col.*)` - expanding the columns of a struct-typed column is not supported by any of the databases mentioned above.
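   
   To make the count(*) vs. count(table.*) distinction concrete, here is a minimal spark-shell sketch (assuming a SparkSession bound to `spark`, and a build where count(table.*) is still expanded rather than blocked as this PR proposes):
   
   ```scala
   // count(*) is converted to count(1), so it counts every row, nulls included.
   spark.sql("select count(*) from values (1, null) t(a, b)").show()
   // expected result: 1
   
   // count(t.*) is expanded to count(a, b), which only counts rows where every
   // listed column is non-null, so the (1, null) row is skipped.
   spark.sql("select count(t.*) from values (1, null) t(a, b)").show()
   // expected result: 0 -- the same reasoning gives 0 for count(t.*, t.*) above
   ```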

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
