[jira] [Commented] (SPARK-30098) Use default datasource as provider for CREATE TABLE command
[ https://issues.apache.org/jira/browse/SPARK-30098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17240897#comment-17240897 ]

Apache Spark commented on SPARK-30098:
--------------------------------------

User 'cloud-fan' has created a pull request for this issue:
https://github.com/apache/spark/pull/30554

> Use default datasource as provider for CREATE TABLE command
> -----------------------------------------------------------
>
>                 Key: SPARK-30098
>                 URL: https://issues.apache.org/jira/browse/SPARK-30098
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: wuyi
>            Priority: Major
>             Fix For: 3.0.0
>
> This change switches the default provider for the CREATE TABLE command from
> `hive` to the value of `spark.sql.sources.default`, making it consistent
> with the DataFrameWriter.saveAsTable API.
> It is also friendlier to end users, since Spark is well known for using
> parquet (the default value of `spark.sql.sources.default`) as its default
> I/O format.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
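The provider-resolution rule the issue describes can be sketched in plain Python, with no Spark dependency. This is an illustrative model only, not Spark's actual internal API: `resolve_provider` and `DEFAULT_SOURCE_CONF`'s dict-based lookup are hypothetical stand-ins for the analyzer logic, but the behavior shown (an explicit `USING` clause wins, otherwise fall back to `spark.sql.sources.default` instead of `hive`) is what the change proposes.

```python
import re

# Spark SQL config key whose value becomes the fallback provider
# (its built-in default is "parquet").
DEFAULT_SOURCE_CONF = "spark.sql.sources.default"

def resolve_provider(create_table_sql: str, conf: dict) -> str:
    """Return the table provider for a CREATE TABLE statement.

    Hypothetical sketch: an explicit USING clause takes precedence;
    otherwise the configured default source is used rather than
    hardcoding 'hive', as SPARK-30098 proposes.
    """
    match = re.search(r"\bUSING\s+(\w+)", create_table_sql, re.IGNORECASE)
    if match:
        return match.group(1).lower()  # explicit provider wins
    # New behavior: fall back to spark.sql.sources.default (parquet by default)
    return conf.get(DEFAULT_SOURCE_CONF, "parquet")

conf = {DEFAULT_SOURCE_CONF: "parquet"}
print(resolve_provider("CREATE TABLE t (id INT)", conf))            # parquet
print(resolve_provider("CREATE TABLE t (id INT) USING orc", conf))  # orc
```

With this rule, `CREATE TABLE t (id INT)` and `df.write.saveAsTable("t")` resolve to the same provider, which is the consistency the issue is after.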
[jira] [Commented] (SPARK-30098) Use default datasource as provider for CREATE TABLE command

[ https://issues.apache.org/jira/browse/SPARK-30098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17240899#comment-17240899 ]

Apache Spark commented on SPARK-30098:
--------------------------------------

User 'cloud-fan' has created a pull request for this issue:
https://github.com/apache/spark/pull/30554

(Duplicate notification for the same pull request; the quoted issue details are
identical to the message above, except that Assignee is now set to Apache Spark.)