Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17001#discussion_r102648988
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala ---
    @@ -339,10 +340,17 @@ private[hive] class HiveClientImpl(
     
      override def getDatabase(dbName: String): CatalogDatabase = withHiveState {
         Option(client.getDatabase(dbName)).map { d =>
    +      // The default database's location always uses the warehouse path,
    +      // and since database locations stored in the metastore are qualified,
    +      // we also qualify the warehouse location here.
    +      val dbLocation = if (dbName == SessionCatalog.DEFAULT_DATABASE) {
    +        SessionCatalog.makeQualifiedPath(sparkConf.get(WAREHOUSE_PATH), hadoopConf).toString
    --- End diff --
    
    What I want is consistency. Now that we have decided to define the default 
database's location as the warehouse path, we should stick with that definition. 
The main goal of this PR is not to fix the bug that occurs when sharing a 
metastore database, but to change the definition of the default database's 
location.
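    To illustrate the kind of qualification being discussed, here is a hedged 
sketch that mimics what `SessionCatalog.makeQualifiedPath` does with Hadoop's 
`Path`/`FileSystem` API, using only `java.net.URI` so it is self-contained. The 
`qualify` helper and the `hdfs://nn:8020` namenode URI are illustrative 
assumptions, not code from the PR: a scheme-less warehouse path is resolved 
against the default filesystem so it matches the fully qualified locations the 
metastore stores for other databases.

```scala
import java.net.URI

// Hypothetical helper: qualify a possibly scheme-less path against a
// default filesystem URI, roughly analogous to makeQualifiedPath.
def qualify(path: String, defaultFs: URI): String = {
  val uri = new URI(path)
  if (uri.getScheme != null) uri.toString // already fully qualified
  else defaultFs.resolve(uri).toString    // prepend the default FS scheme/authority
}

// A raw warehouse path gains the default filesystem's scheme and authority:
println(qualify("/user/hive/warehouse", new URI("hdfs://nn:8020/")))
// hdfs://nn:8020/user/hive/warehouse

// An already-qualified location is left untouched:
println(qualify("hdfs://other:9000/db.db", new URI("hdfs://nn:8020/")))
// hdfs://other:9000/db.db
```

    Under this behavior, `getDatabase("default")` would return a location in the 
same qualified form as every other database's metastore-stored location, which 
is the consistency the comment argues for.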


