[ 
https://issues.apache.org/jira/browse/SPARK-33369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gengliang Wang updated SPARK-33369:
-----------------------------------
    Description: 
For all v2 data sources other than FileDataSourceV2, Spark always infers
the table schema/partitioning on DataFrameWriter.save().
This inference can be expensive, yet there is no trait or flag indicating
that a V2 source can simply use the input DataFrame's schema on
DataFrameWriter.save(). We can resolve the problem by adding a new
expected behavior to the method TableProvider.supportsExternalMetadata():
when TableProvider.supportsExternalMetadata() returns true, Spark will use
the input DataFrame's schema in DataFrameWriter.save() and skip
schema/partitioning inference.
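The proposed dispatch can be modeled with a small self-contained sketch.
Note these are plain Scala stand-ins, not Spark's real connector classes;
TableProviderModel and SavePath are illustrative names only:

```scala
// Minimal model of the proposed save() behavior: when a provider reports
// that it supports external metadata, the input DataFrame's schema is used
// directly and the (potentially expensive) inference step is skipped.
trait TableProviderModel {
  // Mirrors TableProvider.supportsExternalMetadata(), which defaults to false.
  def supportsExternalMetadata: Boolean = false
  // Stand-in for the expensive schema/partitioning inference step.
  def inferSchema(): String
}

object SavePath {
  // Returns the schema save() would use under the proposed rule.
  def schemaForSave(provider: TableProviderModel, inputSchema: String): String =
    if (provider.supportsExternalMetadata) inputSchema
    else provider.inferSchema()
}
```

A provider that overrides supportsExternalMetadata to return true never has
its inferSchema() invoked on the write path.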

  was:
For all the v2 data sources which are not FileDataSourceV2, Spark always infer 
the table schema/partitioning on DataframeWriter.save().
Currently, there is no such trait or flag for indicating a V2 source can use 
the input DataFrame's schema on DataframeWriter.save().  We can resolve the 
problem by adding a new expected behavior for the method 
TableProvider.supportsExternalMetadata():
when TableProvider.supportsExternalMetadata() is true, Spark will use the input 
Dataframe's schema in DataframeWriter.save() and skip schema/partitioning 
inference.


> Skip schema inference in DataFrameWriter.save() if table provider supports 
> external metadata
> --------------------------------------------------------------------------------------------
>
>                 Key: SPARK-33369
>                 URL: https://issues.apache.org/jira/browse/SPARK-33369
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>    Affects Versions: 3.1.0
>            Reporter: Gengliang Wang
>            Assignee: Gengliang Wang
>            Priority: Major
>
> For all v2 data sources other than FileDataSourceV2, Spark always
> infers the table schema/partitioning on DataFrameWriter.save().
> This inference can be expensive, yet there is no trait or flag
> indicating that a V2 source can simply use the input DataFrame's schema
> on DataFrameWriter.save(). We can resolve the problem by adding a new
> expected behavior to the method TableProvider.supportsExternalMetadata():
> when TableProvider.supportsExternalMetadata() returns true, Spark will
> use the input DataFrame's schema in DataFrameWriter.save() and skip
> schema/partitioning inference.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
