[ https://issues.apache.org/jira/browse/SPARK-38230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17677162#comment-17677162 ]

Xiaomin Zhang commented on SPARK-38230:
---------------------------------------

Hello [~coalchan], thanks for working on this. I created a PR based on your work, 
with some improvements as per [~Jackey Lee]'s comment. Now we no longer need a new 
parameter, and Spark will only invoke listPartitions in the case of overwriting 
Hive static partitions.
[~roczei] Could you please review the PR and let me know if I missed anything? 
Thank you.
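
For reference, here is a rough sketch of the shape of the change (simplified and 
illustrative only, not the exact diff in the PR; `listPartitions`, 
`listPartitionNames` and `PartitioningUtils.parsePathFragment` are existing Spark 
APIs, while the surrounding variables mirror those in 
InsertIntoHadoopFsRelationCommand):

{code:scala}
// Sketch only: fetch full partition metadata (a Hive client call that triggers
// several metastore queries per partition) only when overwriting static
// partitions, where custom partition locations matter; otherwise names suffice.
val catalog = sparkSession.sessionState.catalog

if (mode == SaveMode.Overwrite && staticPartitions.nonEmpty) {
  // Expensive path: full CatalogTablePartition objects, needed for locations.
  val matchingPartitions =
    catalog.listPartitions(catalogTable.get.identifier, Some(staticPartitions))
  initialMatchingPartitions = matchingPartitions.map(_.spec)
  customPartitionLocations = getCustomPartitionLocations(
    fs, catalogTable.get, qualifiedOutputPath, matchingPartitions)
} else {
  // Cheap path: a single metastore call returning "k1=v1/k2=v2" strings,
  // parsed back into partition specs; no custom locations are resolved.
  initialMatchingPartitions = catalog
    .listPartitionNames(catalogTable.get.identifier, Some(staticPartitions))
    .map(PartitioningUtils.parsePathFragment)
}
{code}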

> InsertIntoHadoopFsRelationCommand unnecessarily fetches details of partitions 
> in most cases
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-38230
>                 URL: https://issues.apache.org/jira/browse/SPARK-38230
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.0.2
>            Reporter: Coal Chan
>            Priority: Major
>
> In `org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand`,
> `sparkSession.sessionState.catalog.listPartitions` calls the Hive metastore 
> client method `listPartitionsPsWithAuth` (`org.apache.hadoop.hive.metastore`), 
> which issues multiple queries per partition against the Hive metastore database. 
> So when you insert into a table with many partitions (e.g. 10k), it produces a 
> huge number of queries (n queries per partition * 10k partitions = 10,000n 
> queries), which puts a lot of strain on the database.
> In fact, `listPartitions` is called only to get partition locations and compute 
> `customPartitionLocations`. But in most cases there are no custom partition 
> locations, so getting just the partition names is enough, and we can call the 
> method `listPartitionNames` instead.
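>
> For illustration, a minimal sketch of the difference between the two 
> SessionCatalog calls (the database and table names here are made up):
> {code:scala}
> import org.apache.spark.sql.catalyst.TableIdentifier
> import org.apache.spark.sql.catalyst.catalog.CatalogTablePartition
>
> val catalog = spark.sessionState.catalog
> val table = TableIdentifier("events", Some("db"))  // hypothetical table
>
> // Full partition objects (location, storage format, parameters, ...): the Hive
> // client assembles each one with multiple queries on the metastore DB.
> val partitions: Seq[CatalogTablePartition] = catalog.listPartitions(table)
>
> // Partition names only, e.g. "dt=2022-02-16/hour=10": a single cheap call.
> val names: Seq[String] = catalog.listPartitionNames(table)
> {code}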


