[ https://issues.apache.org/jira/browse/SPARK-32118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17810776#comment-17810776 ]
xichenglin commented on SPARK-32118:
------------------------------------

Why does Spark 3 have this problem but Spark 2 does not? Spark 2 does not use read-write locks.

> Use fine-grained read write lock for each database in HiveExternalCatalog
> --------------------------------------------------------------------------
>
>                 Key: SPARK-32118
>                 URL: https://issues.apache.org/jira/browse/SPARK-32118
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.0.0, 3.1.0
>            Reporter: Lantao Jin
>            Priority: Major
>
> In HiveExternalCatalog, all metastore operations are synchronized on the same
> object lock. On a heavily loaded Spark Thrift Server or Spark driver, users'
> queries can be stuck behind any single long-running operation. For example, if
> a user accesses a table with a massive number of partitions,
> loadDynamicPartitions() holds the object lock for a long time, and all other
> queries block waiting for the lock.
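The ticket title proposes replacing the single catalog-wide lock with one read-write lock per database. A minimal sketch of that idea in Scala follows, assuming a standalone lock registry; the names (DbLockRegistry, withDbReadLock, withDbWriteLock) are illustrative only and are not the actual HiveExternalCatalog API.

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.locks.ReentrantReadWriteLock

// Hypothetical sketch: one read-write lock per database instead of one
// object-level lock guarding every metastore call.
object DbLockRegistry {
  private val locks = new ConcurrentHashMap[String, ReentrantReadWriteLock]()

  // Lazily create and cache a lock for each database name.
  private def lockFor(db: String): ReentrantReadWriteLock =
    locks.computeIfAbsent(db, _ => new ReentrantReadWriteLock())

  // Read-only metastore calls (e.g. getTable, listPartitions) can run
  // concurrently against the same database.
  def withDbReadLock[T](db: String)(body: => T): T = {
    val lock = lockFor(db).readLock()
    lock.lock()
    try body finally lock.unlock()
  }

  // Mutating calls (e.g. loadDynamicPartitions, alterTable) take the write
  // lock, so a long-running operation only blocks callers touching the
  // same database, not the whole catalog.
  def withDbWriteLock[T](db: String)(body: => T): T = {
    val lock = lockFor(db).writeLock()
    lock.lock()
    try body finally lock.unlock()
  }
}
```

Per-database granularity is only one possible choice; a finer per-table lock would reduce contention further but adds bookkeeping for operations that span a whole database (e.g. dropDatabase).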