GitHub user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16514#discussion_r95247659
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
    @@ -119,7 +119,30 @@ private[hive] class HiveMetastoreCatalog(sparkSession: SparkSession) extends Log
           qualifiedTableName.database, qualifiedTableName.name)
     
         if (DDLUtils.isDatasourceTable(table)) {
    -      val dataSourceTable = cachedDataSourceTables(qualifiedTableName)
    +      val dataSourceTable =
    +        cachedDataSourceTables(qualifiedTableName) match {
    +          case l @ LogicalRelation(relation: HadoopFsRelation, _, _) =>
    +            // Ignore the scheme difference when comparing the paths
    +            val isSamePath =
    +              table.storage.locationUri.isDefined && relation.location.rootPaths.size == 1 &&
    +                table.storage.locationUri.get == relation.location.rootPaths.head.toUri.getPath
    +            // If we have the same paths, same schema, and same partition spec,
    +            // we will use the cached relation.
    +            val useCached =
    +              isSamePath &&
    +              l.schema == table.schema &&
    +              relation.bucketSpec == table.bucketSpec &&
    +              relation.partitionSchema == table.partitionSchema
    +            if (useCached) {
    +              l
    +            } else {
    +              // If the cached relation is not updated, we invalidate it right away.
    +              cachedDataSourceTables.invalidate(qualifiedTableName)
    +              // Reload it from the external catalog
    +              cachedDataSourceTables(qualifiedTableName)
    +            }
    +          case o => o
    +        }
    --- End diff --
    
    The fix above is intentionally kept small so that it can be backported to the 2.1 branch.
    
    In the master branch, I tried to combine it with the existing [`getCached`](https://github.com/gatorsmile/spark/blob/d7cd667fa7ffdcccb1470c1e2ef6087d5e60f6b3/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala#L164-L212). However, that requires refactoring, so it will be submitted as a separate PR. Thanks!
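
    To illustrate the "ignore the scheme difference" check from the diff in isolation: the catalog stores a bare location path, while the cached relation's root paths carry a full URI (scheme and authority), so the comparison uses `URI.getPath` on the relation side. The sketch below is a hypothetical standalone helper (`PathComparison` and its parameter names are illustrative, not part of the PR), assuming the same comparison logic as the diff:

    ```scala
    import java.net.URI

    object PathComparison {
      // Mirrors the `isSamePath` condition in the diff: the cached relation is
      // path-compatible when the catalog has a location, the relation has exactly
      // one root path, and the catalog location equals that root path with its
      // scheme and authority stripped via URI.getPath.
      def isSamePath(catalogLocation: Option[String], rootPaths: Seq[URI]): Boolean =
        catalogLocation.isDefined &&
          rootPaths.size == 1 &&
          catalogLocation.get == rootPaths.head.getPath

      def main(args: Array[String]): Unit = {
        val loc = Some("/user/hive/warehouse/t1")
        // Same path, but the relation's URI carries an hdfs:// scheme and authority.
        val roots = Seq(new URI("hdfs://namenode:8020/user/hive/warehouse/t1"))
        println(PathComparison.isSamePath(loc, roots)) // prints true
      }
    }
    ```

    Note that a multi-root relation (or a missing catalog location) fails the check, which then falls through to invalidating and reloading the cache entry, matching the `else` branch in the diff.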

