Github user yhuai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13766#discussion_r67610305
  
    --- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
    @@ -245,29 +245,17 @@ final class DataFrameWriter[T] private[sql](ds: 
Dataset[T]) {
         if (partitioningColumns.isDefined) {
           throw new AnalysisException(
             "insertInto() can't be used together with partitionBy(). " +
    -          "Partition columns are defined by the table into which is being 
inserted."
    +          "Partition columns have already been defined for the table. " +
    +          "It is not necessary to use partitionBy()."
           )
         }
     
    -    val partitions = normalizedParCols.map(_.map(col => col -> 
Option.empty[String]).toMap)
    -    val overwrite = mode == SaveMode.Overwrite
    -
    -    // A partitioned relation's schema can be different from the input 
logicalPlan, since
    -    // partition columns are all moved after data columns. We Project to 
adjust the ordering.
    -    // TODO: this belongs to the analyzer.
    -    val input = normalizedParCols.map { parCols =>
    -      val (inputPartCols, inputDataCols) = df.logicalPlan.output.partition 
{ attr =>
    -        parCols.contains(attr.name)
    -      }
    -      Project(inputDataCols ++ inputPartCols, df.logicalPlan)
    -    }.getOrElse(df.logicalPlan)
    -
    --- End diff --
    
    These lines are no longer needed because we do not allow users to combine
    partitionBy() with insertInto().
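    
    For context, the intended usage after this change can be sketched as
    follows. This is an illustrative example, not code from the PR: the
    SparkSession setup, the table name `logs`, and the column names are
    assumptions. The point is that the target table's own definition fixes
    the partition columns, so insertInto() needs no partitionBy():
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    // Hypothetical setup; assumes a Spark environment is available.
    val spark = SparkSession.builder().appName("insertInto-example").getOrCreate()
    import spark.implicits._
    
    // Sample data with a would-be partition column `dt` (names are illustrative).
    val df = Seq(("a", "2016-06-20"), ("b", "2016-06-21")).toDF("value", "dt")
    
    // OK: partition columns come from the table `logs` itself.
    df.write.mode("overwrite").insertInto("logs")
    
    // Not allowed after this change -- throws AnalysisException:
    // df.write.partitionBy("dt").insertInto("logs")
    ```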


