[GitHub] [spark] cloud-fan commented on a change in pull request #28996: [SPARK-29358][SQL] Make unionByName optionally fill missing columns with nulls

2020-07-13 Thread GitBox


cloud-fan commented on a change in pull request #28996:
URL: https://github.com/apache/spark/pull/28996#discussion_r453777434



##
File path: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
##
@@ -2048,19 +2088,34 @@ class Dataset[T] private[sql](
     // Builds a project list for `other` based on `logicalPlan` output names
     val rightProjectList = leftOutputAttrs.map { lattr =>
       rightOutputAttrs.find { rattr => resolver(lattr.name, rattr.name) }.getOrElse {
-        throw new AnalysisException(
-          s"""Cannot resolve column name "${lattr.name}" among """ +
-            s"""(${rightOutputAttrs.map(_.name).mkString(", ")})""")
+        if (allowMissingColumns) {

Review comment:
   Yea it's better to have a new JIRA.








[GitHub] [spark] cloud-fan commented on a change in pull request #28996: [SPARK-29358][SQL] Make unionByName optionally fill missing columns with nulls

2020-07-12 Thread GitBox


cloud-fan commented on a change in pull request #28996:
URL: https://github.com/apache/spark/pull/28996#discussion_r453460284



##
File path: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
##
@@ -2048,19 +2088,34 @@ class Dataset[T] private[sql](
     // Builds a project list for `other` based on `logicalPlan` output names
     val rightProjectList = leftOutputAttrs.map { lattr =>
       rightOutputAttrs.find { rattr => resolver(lattr.name, rattr.name) }.getOrElse {
-        throw new AnalysisException(
-          s"""Cannot resolve column name "${lattr.name}" among """ +
-            s"""(${rightOutputAttrs.map(_.name).mkString(", ")})""")
+        if (allowMissingColumns) {

Review comment:
   I think the major problem here is that we put the by-name logic in the API
method, not in the `Analyzer`. Shall we add 2 boolean parameters (`byName` and
`allowMissingCol`) to `Union` and move the by-name logic to the type coercion
rules?
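
For illustration only, a standalone sketch of the by-name reconciliation that would move
out of `Dataset.unionByName` and into an analyzer/type-coercion rule under this
suggestion. It is not Spark code: plain column names stand in for Catalyst attributes,
and the flag name `allowMissingCol` simply mirrors the one proposed above.

    object UnionByNameSketch {
      // For every left-side column, pick the matching right-side column by
      // (case-insensitive) name, or fall back to a null placeholder when
      // allowMissingCol is set; otherwise fail, as unionByName does today.
      def rightProjection(
          leftCols: Seq[String],
          rightCols: Seq[String],
          allowMissingCol: Boolean): Seq[String] = {
        leftCols.map { l =>
          rightCols.find(_.equalsIgnoreCase(l)).getOrElse {
            if (allowMissingCol) s"NULL AS $l"
            else throw new IllegalArgumentException(s"Cannot resolve column name $l")
          }
        }
      }

      def main(args: Array[String]): Unit = {
        // The right side is missing column "c", so it is padded with a null.
        println(rightProjection(Seq("a", "b", "c"), Seq("b", "a"), allowMissingCol = true))
        // => List(a, b, NULL AS c)
      }
    }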








[GitHub] [spark] cloud-fan commented on a change in pull request #28996: [SPARK-29358][SQL] Make unionByName optionally fill missing columns with nulls

2020-07-12 Thread GitBox


cloud-fan commented on a change in pull request #28996:
URL: https://github.com/apache/spark/pull/28996#discussion_r453460284



##
File path: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
##
@@ -2048,19 +2088,34 @@ class Dataset[T] private[sql](
     // Builds a project list for `other` based on `logicalPlan` output names
     val rightProjectList = leftOutputAttrs.map { lattr =>
       rightOutputAttrs.find { rattr => resolver(lattr.name, rattr.name) }.getOrElse {
-        throw new AnalysisException(
-          s"""Cannot resolve column name "${lattr.name}" among """ +
-            s"""(${rightOutputAttrs.map(_.name).mkString(", ")})""")
+        if (allowMissingColumns) {

Review comment:
   I think the major problem here is that we put the by-name logic in the API
method, not in the `Analyzer`. Shall we add a boolean parameter to `Union` and
move the by-name logic to the type coercion rules?








[GitHub] [spark] cloud-fan commented on a change in pull request #28996: [SPARK-29358][SQL] Make unionByName optionally fill missing columns with nulls

2020-07-12 Thread GitBox


cloud-fan commented on a change in pull request #28996:
URL: https://github.com/apache/spark/pull/28996#discussion_r453459502



##
File path: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
##
@@ -2030,7 +2030,25 @@ class Dataset[T] private[sql](
    * @group typedrel
    * @since 2.3.0
    */
-  def unionByName(other: Dataset[T]): Dataset[T] = withSetOperator {
+  def unionByName(other: Dataset[T]): Dataset[T] = unionByName(other, false)
+
+  /**
+   * Returns a new Dataset containing union of rows in this Dataset and another Dataset.
+   *
+   * This is different from both `UNION ALL` and `UNION DISTINCT` in SQL. To do a SQL-style set
+   * union (that does deduplication of elements), use this function followed by a [[distinct]].

Review comment:
   Seems like we mistakenly copied the doc from `union` to `unionByName`.
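
For context, a small example of the behavioral difference the doc should describe
(assuming an active SparkSession `spark`; values are made up):

    import spark.implicits._

    val df1 = Seq((1, 2)).toDF("a", "b")
    val df2 = Seq((3, 4)).toDF("b", "a")

    // union matches columns by position, so df2's "b" value lands under "a":
    df1.union(df2).show()
    // +---+---+
    // |  a|  b|
    // +---+---+
    // |  1|  2|
    // |  3|  4|
    // +---+---+

    // unionByName matches columns by name instead:
    df1.unionByName(df2).show()
    // +---+---+
    // |  a|  b|
    // +---+---+
    // |  1|  2|
    // |  4|  3|
    // +---+---+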








[GitHub] [spark] cloud-fan commented on a change in pull request #28996: [SPARK-29358][SQL] Make unionByName optionally fill missing columns with nulls

2020-07-12 Thread GitBox


cloud-fan commented on a change in pull request #28996:
URL: https://github.com/apache/spark/pull/28996#discussion_r453434564



##
File path: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
##
@@ -2048,19 +2088,34 @@ class Dataset[T] private[sql](
     // Builds a project list for `other` based on `logicalPlan` output names
     val rightProjectList = leftOutputAttrs.map { lattr =>
       rightOutputAttrs.find { rattr => resolver(lattr.name, rattr.name) }.getOrElse {
-        throw new AnalysisException(
-          s"""Cannot resolve column name "${lattr.name}" among """ +
-            s"""(${rightOutputAttrs.map(_.name).mkString(", ")})""")
+        if (allowMissingColumns) {

Review comment:
   Does it work with nested columns?
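
For reference, an example of what a nested-column mismatch looks like (assuming an
active SparkSession `spark`); whether the proposed padding also applies to a field
missing inside a struct is exactly the open question here:

    import spark.implicits._
    import org.apache.spark.sql.functions.struct

    val left = Seq((1, 2, 3)).toDF("a", "b", "c")
      .select(struct("a", "b", "c").as("s"))   // s: struct<a, b, c>
    val right = Seq((4, 5)).toDF("a", "b")
      .select(struct("a", "b").as("s"))        // s: struct<a, b> -- no nested "c"

    // left.unionByName(right, ...): does the missing nested field s.c get filled with null?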








[GitHub] [spark] cloud-fan commented on a change in pull request #28996: [SPARK-29358][SQL] Make unionByName optionally fill missing columns with nulls

2020-07-09 Thread GitBox


cloud-fan commented on a change in pull request #28996:
URL: https://github.com/apache/spark/pull/28996#discussion_r452061943



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##
@@ -2656,6 +2656,14 @@ object SQLConf {
     .checkValue(_ > 0, "The difference must be positive.")
     .createWithDefault(4)
 
+  val ALLOW_MISSING_COLUMNS_IN_UNION_BY_NAME =
+    buildConf("spark.sql.allowMissingColumnsInUnionByName")
+      .doc("If this config is enabled, `Dataset.unionByName` allows different set of column names " +
+        "between two Datasets. Missing columns at each side, will be filled with null values.")

Review comment:
   We can add an overloaded method instead of using a default parameter value.
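
A sketch (signatures only, not the actual Dataset code) of the overload shape being
suggested, in place of a single method with a default parameter value; the flag name
`allowMissingColumns` follows this PR's diff:

    import org.apache.spark.sql.Dataset

    trait UnionByNameApi[T] {
      // Existing behavior: a missing column is still an error.
      def unionByName(other: Dataset[T]): Dataset[T] =
        unionByName(other, allowMissingColumns = false)

      // New overload: opt in to padding missing columns with nulls.
      def unionByName(other: Dataset[T], allowMissingColumns: Boolean): Dataset[T]
    }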








[GitHub] [spark] cloud-fan commented on a change in pull request #28996: [SPARK-29358][SQL] Make unionByName optionally fill missing columns with nulls

2020-07-08 Thread GitBox


cloud-fan commented on a change in pull request #28996:
URL: https://github.com/apache/spark/pull/28996#discussion_r451359616



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##
@@ -2656,6 +2656,14 @@ object SQLConf {
     .checkValue(_ > 0, "The difference must be positive.")
     .createWithDefault(4)
 
+  val ALLOW_MISSING_COLUMNS_IN_UNION_BY_NAME =
+    buildConf("spark.sql.allowMissingColumnsInUnionByName")
+      .doc("If this config is enabled, `Dataset.unionByName` allows different set of column names " +
+        "between two Datasets. Missing columns at each side, will be filled with null values.")

Review comment:
   Seems like `Dataset` already has many APIs taking a boolean parameter.
I'm OK with adding an `allowMissingColumns` parameter to `union`.
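
For illustration, the behavior being opted into (assuming this PR's two-argument
`unionByName` with an `allowMissingColumns` flag and an active SparkSession `spark`):

    import spark.implicits._

    val df1 = Seq((1, 2)).toDF("a", "b")
    val df2 = Seq((3, 4)).toDF("a", "c")

    // Each side's missing column is filled with null instead of failing:
    df1.unionByName(df2, true).show()
    // +---+----+----+
    // |  a|   b|   c|
    // +---+----+----+
    // |  1|   2|null|
    // |  3|null|   4|
    // +---+----+----+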








[GitHub] [spark] cloud-fan commented on a change in pull request #28996: [SPARK-29358][SQL] Make unionByName optionally fill missing columns with nulls

2020-07-06 Thread GitBox


cloud-fan commented on a change in pull request #28996:
URL: https://github.com/apache/spark/pull/28996#discussion_r450162816



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##
@@ -2656,6 +2656,14 @@ object SQLConf {
     .checkValue(_ > 0, "The difference must be positive.")
     .createWithDefault(4)
 
+  val ALLOW_MISSING_COLUMNS_IN_UNION_BY_NAME =
+    buildConf("spark.sql.allowMissingColumnsInUnionByName")
+      .doc("If this config is enabled, `Dataset.unionByName` allows different set of column names " +
+        "between two Datasets. Missing columns at each side, will be filled with null values.")

Review comment:
   It doesn't seem like a breaking change if this case failed before and we now
allow it by filling missing columns with nulls. Do we really need a config? cc
@gatorsmile @bart-samwel
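
If the config route from the diff above were kept, enabling the behavior would look
like the line below; the alternative being discussed is to drop the config and expose
an explicit API parameter instead:

    spark.conf.set("spark.sql.allowMissingColumnsInUnionByName", "true")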




