GitHub user leachbj commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16898#discussion_r208094538
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala ---
    @@ -119,23 +130,45 @@ object FileFormatWriter extends Logging {
           uuid = UUID.randomUUID().toString,
           serializableHadoopConf = new SerializableConfiguration(job.getConfiguration),
           outputWriterFactory = outputWriterFactory,
    -      allColumns = queryExecution.logical.output,
    -      partitionColumns = partitionColumns,
    +      allColumns = allColumns,
           dataColumns = dataColumns,
    -      bucketSpec = bucketSpec,
    +      partitionColumns = partitionColumns,
    +      bucketIdExpression = bucketIdExpression,
           path = outputSpec.outputPath,
           customPartitionLocations = outputSpec.customPartitionLocations,
           maxRecordsPerFile = options.get("maxRecordsPerFile").map(_.toLong)
             .getOrElse(sparkSession.sessionState.conf.maxRecordsPerFile)
         )
     
    +    // We should first sort by partition columns, then bucket id, and finally sorting columns.
    +    val requiredOrdering = partitionColumns ++ bucketIdExpression ++ sortColumns
    +    // the sort order doesn't matter
    +    val actualOrdering = queryExecution.executedPlan.outputOrdering.map(_.child)
    --- End diff ---
    
    @cloud-fan would it be possible to use the logical plan rather than the executedPlan? If the optimizer decides the data is already sorted according to the logical plan, the executedPlan won't include the fields.
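
    To make the concern above concrete: the code that follows this hunk in FileFormatWriter (not shown here) treats the required ordering as satisfied only when it is a prefix of the plan's reported output ordering, and inserts a sort otherwise. Below is a minimal standalone sketch of that prefix-match idea, with plain strings standing in for Catalyst expressions and == standing in for semanticEquals; all names in it are illustrative, not Spark's API.

        // Sketch: decide whether the write path needs an explicit sort.
        // Strings model Catalyst sort expressions; == models semanticEquals.
        object OrderingCheckSketch {
          // The required ordering (partition cols ++ bucket id ++ sort cols) must be
          // a prefix of the plan's actual output ordering for the sort to be skipped.
          def orderingMatched(required: Seq[String], actual: Seq[String]): Boolean =
            required.length <= actual.length &&
              required.zip(actual).forall { case (r, a) => r == a }

          def main(args: Array[String]): Unit = {
            val required = Seq("partCol", "bucketId", "sortCol")
            // Already ordered on a superset of the required prefix: sort can be skipped.
            println(orderingMatched(required, Seq("partCol", "bucketId", "sortCol", "other"))) // true
            // Ordered on the partition column only: an explicit sort must be added.
            println(orderingMatched(required, Seq("partCol"))) // false
          }
        }

    Under this check, if the executedPlan under-reports its output ordering, as the comment suggests it might, orderingMatched would come back false and a redundant sort would be added; a performance concern rather than a correctness one.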


---
