Github user liancheng commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5526#discussion_r28540438
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/sources/interfaces.scala ---
    @@ -197,3 +233,69 @@ trait InsertableRelation {
     trait CatalystScan {
       def buildScan(requiredColumns: Seq[Attribute], filters: Seq[Expression]): RDD[Row]
     }
    +
    +/**
    + * ::Experimental::
    + * [[OutputWriter]] is used together with [[FSBasedRelation]] for persisting rows to the
    + * underlying file system.  An [[OutputWriter]] instance is created when a new output file is
    + * opened.  This instance is used to persist rows to this single output file.
    + */
    +@Experimental
    +trait OutputWriter {
    +  /**
    +   * Persists a single row.  Invoked on the executor side.
    +   */
    +  def write(row: Row): Unit
    --- End diff --
    
    Summary of our offline discussion:
    
    - For dynamic partitioning, partition column values must be retrieved from the given rows. However, when writing to a partition directory, we can drop the dynamic partition columns, so the `row` argument of `write(row: Row): Unit` needn't contain partition columns.
    - Dropping dynamic columns is compatible with Hive
    - Keeping dynamic columns can be more convenient in the sense that the data files can be accessed independently, without extracting partition column values from partition directory paths. However, this would diverge from Hive's behavior.
    
    For this version, we drop all dynamic partition columns for Hive 
compatibility.
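
The projection described above can be sketched as follows. This is a minimal illustration of the idea, not Spark's actual implementation: the `Row` alias, the `dataProjection` helper, and the column names are hypothetical stand-ins for how a writer might strip dynamic partition columns before persisting a row.

```scala
// Minimal sketch (not Spark's actual internals): given a schema and the set of
// dynamic partition columns, build a projection that strips the partition
// columns from each row before it is handed to the OutputWriter.
object DropPartitionColumns {
  // Simplified stand-in for Spark's Row.
  type Row = Seq[Any]

  /** Returns a projection that keeps only the non-partition columns. */
  def dataProjection(schema: Seq[String], partitionCols: Set[String]): Row => Row = {
    val keptIndices = schema.zipWithIndex.collect {
      case (name, i) if !partitionCols.contains(name) => i
    }
    row => keptIndices.map(row)
  }

  def main(args: Array[String]): Unit = {
    val schema  = Seq("id", "value", "year", "month")
    val project = dataProjection(schema, Set("year", "month"))
    // The partition values ("2015", "04") determine the output directory,
    // e.g. .../year=2015/month=04/, and are dropped from the persisted row.
    val row: Row = Seq(1, "a", "2015", "04")
    println(project(row))  // prints List(1, a)
  }
}
```

Readers of the resulting data files must then recover the partition column values from the directory paths, which is exactly the trade-off discussed above.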

