[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-25 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r716009611



##
File path: sql/core/src/test/resources/sql-tests/results/postgreSQL/union.sql.out
##
@@ -684,8 +684,8 @@ SELECT cast('3.4' as decimal(38, 18)) UNION SELECT 'foo'
 -- !query schema
 struct<>
 -- !query output
-org.apache.spark.SparkException
-Failed to merge incompatible data types decimal(38,18) and string
+org.apache.spark.sql.AnalysisException
+Union can only be performed on tables with the compatible column types. string <> decimal(38,18) at the first column of the second table

Review comment:
   Updated the error message. Please let me know if it looks good to you.
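   
   For reference, a sketch of how the failing query from this test would now behave (hypothetical spark-shell session; assumes the ANSI settings the postgreSQL test suite runs with):
   
   ```scala
   import org.apache.spark.sql.AnalysisException
   
   try {
     spark.sql("SELECT cast('3.4' as decimal(38, 18)) UNION SELECT 'foo'").collect()
   } catch {
     case e: AnalysisException =>
       // "Union can only be performed on tables with the compatible column types.
       //  string <> decimal(38,18) at the first column of the second table"
       println(e.getMessage)
   }
   ```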







[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-24 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r715750877



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
##
@@ -273,29 +274,37 @@ abstract class TypeCoercionBase {
 @tailrec private def getWidestTypes(
 children: Seq[LogicalPlan],
 attrIndex: Int,
-castedTypes: mutable.Queue[DataType]): Seq[DataType] = {
+castedTypes: mutable.Queue[Option[DataType]]): Seq[Option[DataType]] = {
   // Return the result after the widen data types have been found for all the children
   if (attrIndex >= children.head.output.length) return castedTypes.toSeq
 
   // For the attrIndex-th attribute, find the widest type
   findWiderCommonType(children.map(_.output(attrIndex).dataType)) match {
 // If unable to find an appropriate widen type for this column, return an empty Seq
-case None => Seq.empty[DataType]
+case None =>
+  castedTypes.enqueue(None)
+  getWidestTypes(children, attrIndex + 1, castedTypes)
 // Otherwise, record the result in the queue and find the type for the next column
 case Some(widenType) =>
-  castedTypes.enqueue(widenType)
+  castedTypes.enqueue(Some(widenType))
   getWidestTypes(children, attrIndex + 1, castedTypes)

Review comment:
   Just simplified to
   
   ```scala
   val widenTypeOpt = findWiderCommonType(children.map(_.output(attrIndex).dataType))
   castedTypes.enqueue(widenTypeOpt)
   getWidestTypes(children, attrIndex + 1, castedTypes)
   ```
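   
   Piecing the diff hunk together with that simplification, the whole helper would read roughly as below (a sketch only; `findWiderCommonType`, `LogicalPlan`, and the enclosing class come from TypeCoercion.scala):
   
   ```scala
   @tailrec private def getWidestTypes(
       children: Seq[LogicalPlan],
       attrIndex: Int,
       castedTypes: mutable.Queue[Option[DataType]]): Seq[Option[DataType]] = {
     // Return once an entry has been recorded for every column
     if (attrIndex >= children.head.output.length) return castedTypes.toSeq
   
     // Some(widenType) if the attrIndex-th column has a common wider type
     // across all children, None otherwise
     val widenTypeOpt = findWiderCommonType(children.map(_.output(attrIndex).dataType))
     castedTypes.enqueue(widenTypeOpt)
     getWidestTypes(children, attrIndex + 1, castedTypes)
   }
   ```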







[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-24 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r715749405



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
##
@@ -273,29 +274,37 @@ abstract class TypeCoercionBase {
 @tailrec private def getWidestTypes(
 children: Seq[LogicalPlan],
 attrIndex: Int,
-castedTypes: mutable.Queue[DataType]): Seq[DataType] = {
+castedTypes: mutable.Queue[Option[DataType]]): Seq[Option[DataType]] = {
   // Return the result after the widen data types have been found for all the children
   if (attrIndex >= children.head.output.length) return castedTypes.toSeq
 
   // For the attrIndex-th attribute, find the widest type
   findWiderCommonType(children.map(_.output(attrIndex).dataType)) match {
 // If unable to find an appropriate widen type for this column, return an empty Seq
-case None => Seq.empty[DataType]
+case None =>
+  castedTypes.enqueue(None)
+  getWidestTypes(children, attrIndex + 1, castedTypes)
 // Otherwise, record the result in the queue and find the type for the next column
 case Some(widenType) =>
-  castedTypes.enqueue(widenType)
+  castedTypes.enqueue(Some(widenType))
   getWidestTypes(children, attrIndex + 1, castedTypes)

Review comment:
   `findWiderCommonType` returns `Option[DataType]`. `map` can iterate over the `DataType` if there is one, but we still need to enqueue the `None`.
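   
   A tiny standalone illustration of that point (hypothetical, with `Option[String]` standing in for `Option[DataType]`):
   
   ```scala
   import scala.collection.mutable
   
   val castedTypes = mutable.Queue.empty[Option[String]]
   val widened: Option[String] = None // no common wider type was found
   
   // `map` only runs in the Some case, so an incompatible column would
   // leave no entry in the queue at all:
   widened.map(t => castedTypes.enqueue(Some(t)))
   
   // Enqueuing the Option itself keeps exactly one entry per column:
   castedTypes.enqueue(widened)
   ```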







[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-24 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r715745567



##
File path: sql/core/src/test/resources/sql-tests/results/postgreSQL/union.sql.out
##
@@ -684,8 +684,8 @@ SELECT cast('3.4' as decimal(38, 18)) UNION SELECT 'foo'
 -- !query schema
 struct<>
 -- !query output
-org.apache.spark.SparkException
-Failed to merge incompatible data types decimal(38,18) and string
+org.apache.spark.sql.AnalysisException
+Union can only be performed on tables with the compatible column types. string <> decimal(38,18) at the first column of the second table

Review comment:
   Ha, this comes from `CheckAnalysis`'s original error message. We can improve it, although there are some more tests relying on the error message.







[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-23 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r715229676



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
##
@@ -401,15 +401,30 @@ trait CheckAnalysis extends PredicateHelper with LookupCatalog {
 |the ${ordinalNumber(ti + 1)} table has ${child.output.length} columns
   """.stripMargin.replace("\n", " ").trim())
   }
+  val isUnion = operator.isInstanceOf[Union]
+  val dataTypesAreCompatibleFn = if (isUnion) {
+// `TypeCoercion` takes care of type coercion already. If any columns or nested
+// columns are not compatible, we detect it here and throw analysis exception.
+val typeChecker = (dt1: DataType, dt2: DataType) => {
+  !TypeCoercion.findWiderTypeForTwo(dt1.asNullable, dt2.asNullable).isEmpty

Review comment:
   Ok, it works. I changed `TypeCoercion` to follow the discussed behavior.







[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-23 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r714527623



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
##
@@ -401,15 +401,30 @@ trait CheckAnalysis extends PredicateHelper with LookupCatalog {
 |the ${ordinalNumber(ti + 1)} table has ${child.output.length} columns
   """.stripMargin.replace("\n", " ").trim())
   }
+  val isUnion = operator.isInstanceOf[Union]
+  val dataTypesAreCompatibleFn = if (isUnion) {
+// `TypeCoercion` takes care of type coercion already. If any columns or nested
+// columns are not compatible, we detect it here and throw analysis exception.
+val typeChecker = (dt1: DataType, dt2: DataType) => {
+  !TypeCoercion.findWiderTypeForTwo(dt1.asNullable, dt2.asNullable).isEmpty

Review comment:
   You mean to add casts for the compatible columns and ignore those that cannot be cast? Let me try to see if that's possible.







[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-22 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r714490029



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
##
@@ -401,15 +401,30 @@ trait CheckAnalysis extends PredicateHelper with LookupCatalog {
 |the ${ordinalNumber(ti + 1)} table has ${child.output.length} columns
   """.stripMargin.replace("\n", " ").trim())
   }
+  val isUnion = operator.isInstanceOf[Union]
+  val dataTypesAreCompatibleFn = if (isUnion) {
+// `TypeCoercion` takes care of type coercion already. If any columns or nested
+// columns are not compatible, we detect it here and throw analysis exception.
+val typeChecker = (dt1: DataType, dt2: DataType) => {
+  !TypeCoercion.findWiderTypeForTwo(dt1.asNullable, dt2.asNullable).isEmpty

Review comment:
   Oh, it took me a little while to recall why I kept the original check logic.
   
   It is because if `TypeCoercion` fails to find a compatible type for any column, it won't add casts for any of them. It is all-or-nothing logic here.
   
   So if we only checked `dt1 == dt2` here, we would compare the original data types even though some of them are compatible.
   
   `AnalysisErrorSuite` has one example. One relation has `short, string, double, decimal`, the other has `string, string, string, map`.
   
   The first three columns are compatible; only the fourth isn't. So `TypeCoercion` doesn't add casts for any column.
   
   If we compared `dt1 == dt2`, the error would read like "short is not compatible with string", but currently we get "decimal is not compatible with map".
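   
   For context, a standalone sketch of the per-column compatibility check over that example (the field types are reconstructed from the description above; `findWiderTypeForTwo` as used in the diff):
   
   ```scala
   import org.apache.spark.sql.catalyst.analysis.TypeCoercion
   import org.apache.spark.sql.types._
   
   val left  = Seq(ShortType, StringType, DoubleType, DecimalType(38, 18))
   val right = Seq(StringType, StringType, StringType, MapType(StringType, StringType))
   
   left.zip(right).zipWithIndex.foreach { case ((dt1, dt2), ci) =>
     // SPARK-18058: nullability is irrelevant to compatibility, hence asNullable
     if (TypeCoercion.findWiderTypeForTwo(dt1.asNullable, dt2.asNullable).isEmpty) {
       println(s"column $ci: ${dt1.catalogString} <> ${dt2.catalogString}")
     }
   }
   // Only the fourth column (decimal(38,18) <> map<string,string>) is printed,
   // matching the error the check currently reports.
   ```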

[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-22 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r714479944



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
##
@@ -401,15 +401,30 @@ trait CheckAnalysis extends PredicateHelper with LookupCatalog {
 |the ${ordinalNumber(ti + 1)} table has ${child.output.length} columns
   """.stripMargin.replace("\n", " ").trim())
   }
+  val isUnion = operator.isInstanceOf[Union]
+  val dataTypesAreCompatibleFn = if (isUnion) {
+// `TypeCoercion` takes care of type coercion already. If any columns or nested
+// columns are not compatible, we detect it here and throw analysis exception.
+val typeChecker = (dt1: DataType, dt2: DataType) => {
+  !TypeCoercion.findWiderTypeForTwo(dt1.asNullable, dt2.asNullable).isEmpty

Review comment:
   It is not always possible to cast the types between the children of a union. For incompatible types, we need to detect them and throw an analysis error here. Do I misunderstand it?







[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-22 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r714204628



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
##
@@ -401,16 +401,30 @@ trait CheckAnalysis extends PredicateHelper with LookupCatalog {
 |the ${ordinalNumber(ti + 1)} table has ${child.output.length} columns
   """.stripMargin.replace("\n", " ").trim())
   }
+  val isUnion = operator.isInstanceOf[Union]
   // Check if the data types match.
-  dataTypes(child).zip(ref).zipWithIndex.foreach { case ((dt1, dt2), ci) =>
-// SPARK-18058: we shall not care about the nullability of columns
-if (TypeCoercion.findWiderTypeForTwo(dt1.asNullable, dt2.asNullable).isEmpty) {
-  failAnalysis(
-s"""
-  |${operator.nodeName} can only be performed on tables with the compatible
-  |column types. ${dt1.catalogString} <> ${dt2.catalogString} at the
-  |${ordinalNumber(ci)} column of the ${ordinalNumber(ti + 1)} table
-""".stripMargin.replace("\n", " ").trim())
+  if (!isUnion) {

Review comment:
   BTW, I will make the change for the other set operations in another PR (JIRA). It might require more changes (docs, tests, etc.).







[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-22 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r714203517



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
##
@@ -401,16 +401,30 @@ trait CheckAnalysis extends PredicateHelper with LookupCatalog {
 |the ${ordinalNumber(ti + 1)} table has ${child.output.length} columns
   """.stripMargin.replace("\n", " ").trim())
   }
+  val isUnion = operator.isInstanceOf[Union]
   // Check if the data types match.
-  dataTypes(child).zip(ref).zipWithIndex.foreach { case ((dt1, dt2), ci) =>
-// SPARK-18058: we shall not care about the nullability of columns
-if (TypeCoercion.findWiderTypeForTwo(dt1.asNullable, dt2.asNullable).isEmpty) {
-  failAnalysis(
-s"""
-  |${operator.nodeName} can only be performed on tables with the compatible
-  |column types. ${dt1.catalogString} <> ${dt2.catalogString} at the
-  |${ordinalNumber(ci)} column of the ${ordinalNumber(ti + 1)} table
-""".stripMargin.replace("\n", " ").trim())
+  if (!isUnion) {

Review comment:
   OK, I think that makes more sense. I will make the other set operations resolve by position at the nested column level too.







[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-22 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r714202853



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
##
@@ -327,7 +327,7 @@ case class Union(
 child.output.length == children.head.output.length &&
 // compare the data types with the first child
 child.output.zip(children.head.output).forall {
-  case (l, r) => l.dataType.sameType(r.dataType)
+  case (l, r) => DataType.equalsStructurally(l.dataType, r.dataType, true)

Review comment:
   `DataType.equalsStructurally` still checks data type equality; it just ignores field name differences.
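   
   A quick illustration of the difference (hypothetical spark-shell session):
   
   ```scala
   import org.apache.spark.sql.types._
   
   val a = new StructType().add("x", IntegerType).add("y", StringType)
   val b = new StructType().add("p", IntegerType).add("q", StringType)
   val c = new StructType().add("p", LongType).add("q", StringType)
   
   a.sameType(b)                           // false: field names differ
   DataType.equalsStructurally(a, b, true) // true: same shape and field types
   DataType.equalsStructurally(a, c, true) // false: int vs bigint
   ```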







[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-20 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r712593090



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
##
@@ -401,16 +401,30 @@ trait CheckAnalysis extends PredicateHelper with LookupCatalog {
 |the ${ordinalNumber(ti + 1)} table has ${child.output.length} columns
   """.stripMargin.replace("\n", " ").trim())
   }
+  val isUnion = operator.isInstanceOf[Union]
   // Check if the data types match.
-  dataTypes(child).zip(ref).zipWithIndex.foreach { case ((dt1, dt2), ci) =>
-// SPARK-18058: we shall not care about the nullability of columns
-if (TypeCoercion.findWiderTypeForTwo(dt1.asNullable, dt2.asNullable).isEmpty) {
-  failAnalysis(
-s"""
-  |${operator.nodeName} can only be performed on tables with the compatible
-  |column types. ${dt1.catalogString} <> ${dt2.catalogString} at the
-  |${ordinalNumber(ci)} column of the ${ordinalNumber(ti + 1)} table
-""".stripMargin.replace("\n", " ").trim())
+  if (!isUnion) {

Review comment:
   I think these set operations work basically the same. But in the API docs, we don't document this behavior for any set operation except union. The by-position resolution for union, I think, follows SQL: it only requires that the columns being unioned have the same data types in the same order, not the same column names.
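   
   A quick illustration of the by-position behavior (hypothetical spark-shell session):
   
   ```scala
   // The two sides have different column names but matching types per
   // position, so the union resolves by position and keeps the first
   // child's column names.
   val df1 = spark.sql("SELECT 1 AS id, 'x' AS tag")
   val df2 = spark.sql("SELECT 2 AS num, 'y' AS label")
   
   df1.union(df2).columns // Array(id, tag)
   ```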







[GitHub] [spark] viirya commented on a change in pull request #34038: [SPARK-36797][SQL] Union should resolve nested columns as top-level columns

2021-09-18 Thread GitBox


viirya commented on a change in pull request #34038:
URL: https://github.com/apache/spark/pull/34038#discussion_r711537729



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
##
@@ -401,16 +401,30 @@ trait CheckAnalysis extends PredicateHelper with LookupCatalog {
 |the ${ordinalNumber(ti + 1)} table has ${child.output.length} columns
   """.stripMargin.replace("\n", " ").trim())
   }
+  val isUnion = operator.isInstanceOf[Union]
   // Check if the data types match.
-  dataTypes(child).zip(ref).zipWithIndex.foreach { case ((dt1, dt2), ci) =>
-// SPARK-18058: we shall not care about the nullability of columns
-if (TypeCoercion.findWiderTypeForTwo(dt1.asNullable, dt2.asNullable).isEmpty) {
-  failAnalysis(
-s"""
-  |${operator.nodeName} can only be performed on tables with the compatible
-  |column types. ${dt1.catalogString} <> ${dt2.catalogString} at the
-  |${ordinalNumber(ci)} column of the ${ordinalNumber(ti + 1)} table
-""".stripMargin.replace("\n", " ").trim())
+  if (!isUnion) {

Review comment:
   Not sure if we should also generalize this to all set operations? Although it looks reasonable, by their API definitions it seems they don't have the same by-position semantics as Union.



