[ 
https://issues.apache.org/jira/browse/SPARK-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15238621#comment-15238621
 ] 

Justin Pihony commented on SPARK-14525:
---------------------------------------

I don't mind putting together a PR for this; however, I am curious whether 
there is an opinion on the implementation. I see two options: have the save 
method redirect to the jdbc method, or move the logic from the jdbc method into 
jdbc.DefaultSource so that DataFrameWriter is no longer responsible for it. In 
the second case, jdbc would delegate to save, which would delegate to 
DataSource.write, which would delegate to a new method in jdbc.DefaultSource.

After hesitating over the seemingly unclean choice of having save redirect to 
jdbc, I am leaning towards the second option; I think it is the better design.
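For concreteness, here is a rough sketch of what the second option might look like, assuming jdbc.DefaultSource is extended with Spark's CreatableRelationProvider interface so that DataSource.write can dispatch to it. The method body below is illustrative only (the option handling and the call to JdbcUtils.saveTable are assumptions about how the existing write path could be reused, not an actual patch):

{code}
// Sketch only: jdbc.DefaultSource gains write support via
// CreatableRelationProvider, so save() no longer errors out.
class DefaultSource extends RelationProvider with CreatableRelationProvider {

  // Invoked by DataSource.write when the user calls save() on a
  // DataFrameWriter configured with format("jdbc").
  override def createRelation(
      sqlContext: SQLContext,
      mode: SaveMode,
      parameters: Map[String, String],
      data: DataFrame): BaseRelation = {
    // "url" and "dbtable" are the standard JDBC options the caller
    // must have set up for this to work.
    val url = parameters.getOrElse("url", sys.error("Option 'url' is required"))
    val table = parameters.getOrElse("dbtable", sys.error("Option 'dbtable' is required"))
    val props = new java.util.Properties()
    parameters.foreach { case (k, v) => props.setProperty(k, v) }
    // Reuse the existing JDBC write path rather than duplicating
    // that logic inside DataFrameWriter.
    JdbcUtils.saveTable(data, url, table, props)
    // Hand back a relation for the table that was just written.
    createRelation(sqlContext, parameters)
  }
}
{code}

This keeps DataFrameWriter free of any JDBC-specific branching; the data source itself owns the write behavior, which is consistent with how other built-in sources plug in.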

> DataFrameWriter's save method should delegate to jdbc for jdbc datasource
> -------------------------------------------------------------------------
>
>                 Key: SPARK-14525
>                 URL: https://issues.apache.org/jira/browse/SPARK-14525
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.6.1
>            Reporter: Justin Pihony
>            Priority: Minor
>
> If you call {code}df.write.format("jdbc")...save(){code} then you get the 
> error:
> bq. org.apache.spark.sql.execution.datasources.jdbc.DefaultSource does not 
> allow create table as select
> save is the more intuitive method to call, so the user should not be 
> punished for not knowing about the jdbc method.
> Obviously, this will require the caller to have set up the correct parameters 
> for jdbc to work :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
