GitHub user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16664#discussion_r100232628
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
    @@ -218,7 +247,14 @@ final class DataFrameWriter[T] private[sql](ds: Dataset[T]) {
           bucketSpec = getBucketSpec,
           options = extraOptions.toMap)
     
    -    dataSource.write(mode, df)
    +    val destination = source match {
    +      case "jdbc" => extraOptions.get(JDBCOptions.JDBC_TABLE_NAME)
    +      case _ => extraOptions.get("path")
    --- End diff ---
    
    In Spark SQL, we store metadata-like info as a key-value map. For example,
[MetadataBuilder](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/types/Metadata.scala)
is used for this purpose. So far, the solution proposed in this PR does not look good
to me; I do not think it is a good design.
    
    Even if we add a structured type, it could still change in the future. If you want
to introduce an external public interface (like our data source APIs), it needs a
careful design and should be done in a separate PR.
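    
    To illustrate the key-value approach, here is a minimal, hypothetical sketch (not
something this PR has to adopt) of carrying such write info through `MetadataBuilder`;
the key names below are examples only, not an existing contract.
    
    ```scala
    import org.apache.spark.sql.types.MetadataBuilder
    
    // Build the info as a key-value map rather than a new structured type.
    val writeInfo = new MetadataBuilder()
      .putString("source", "jdbc")                        // hypothetical key
      .putString("destination", "schemaName.tableName")   // hypothetical key
      .build()
    
    // Callers look up values by key and can ignore keys they do not recognize.
    val destination =
      if (writeInfo.contains("destination")) writeInfo.getString("destination") else ""
    ```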

