[ https://issues.apache.org/jira/browse/SPARK-38230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17735519#comment-17735519 ]

jeanlyn commented on SPARK-38230:
---------------------------------

We found that the Hive metastore crashed frequently after upgrading Spark from 
2.4.7 to 3.3.2. After investigating, I found that 
`InsertIntoHadoopFsRelationCommand` pulls all partitions when 
dynamicPartitionOverwrite is used. I ran into this issue after working around 
the problem in our environment by deriving the partitions from the generated 
output paths instead. So I have submitted a new pull request, hoping it helps.
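
A minimal sketch of that path-based workaround, assuming Hive-style partition 
directories; the helper name and path handling below are illustrative, not the 
actual patch:

import org.apache.hadoop.fs.Path

// Hypothetical helper: recover a partition spec from an output path such as
// ".../table/dt=2024-01-01/hour=03", so the partitions being written can be
// determined without listing them from the metastore. Real code would also
// unescape the path segments (see ExternalCatalogUtils.unescapePathName).
def partitionSpecFromPath(tableLocation: Path, partitionPath: Path): Map[String, String] = {
  val relative = partitionPath.toUri.getPath
    .stripPrefix(tableLocation.toUri.getPath)
    .stripPrefix("/")
  // Each "key=value" path segment contributes one partition column binding.
  relative.split("/").toSeq.collect {
    case segment if segment.contains("=") =>
      val Array(key, value) = segment.split("=", 2)
      key -> value
  }.toMap
}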

> InsertIntoHadoopFsRelationCommand unnecessarily fetches details of partitions 
> in most cases
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-38230
>                 URL: https://issues.apache.org/jira/browse/SPARK-38230
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.0.2, 3.3.0, 3.4.0, 3.5.0
>            Reporter: Coal Chan
>            Priority: Major
>
> In 
> `org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand`,
>  `sparkSession.sessionState.catalog.listPartitions` calls the method 
> `org.apache.hadoop.hive.metastore.listPartitionsPsWithAuth` of the Hive 
> metastore client, and that method issues multiple queries per partition 
> against the Hive metastore database. So when you insert into a table that 
> has many partitions (e.g. 10k), it produces a very large number of queries 
> on the metastore database (i.e. n * 10k = 10nk), which puts a lot of strain 
> on the database.
> In fact, `listPartitions` is only called in order to obtain the partition 
> locations and compute `customPartitionLocations`. But in most cases there 
> are no custom partition locations, so the partition names alone are enough, 
> and we can call the method `listPartitionNames` instead.
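
For comparison, a minimal sketch of the `listPartitionNames` alternative 
described in the report above, assuming Spark's `SessionCatalog` API:

import org.apache.spark.sql.catalyst.TableIdentifier
import org.apache.spark.sql.catalyst.catalog.SessionCatalog

// Fetch only the partition names (strings like "dt=2024-01-01/hour=03");
// the metastore does not have to materialize a full CatalogTablePartition
// (location, serde, storage properties) for every partition.
def existingPartitionNames(catalog: SessionCatalog, table: TableIdentifier): Seq[String] =
  catalog.listPartitionNames(table)

// The current code path, by contrast, fetches full metadata per partition:
//   catalog.listPartitions(table)  // Seq[CatalogTablePartition], expensive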


