GitHub user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11836#discussion_r56753390
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala ---
    @@ -666,6 +470,241 @@ private[hive] object HiveContext {
         defaultValue = Some(true),
         doc = "When set to true, Hive Thrift server executes SQL queries in an 
asynchronous way.")
     
    +  /**
    +   * The version of the hive client that will be used to communicate with the metastore.  Note that
    +   * this does not necessarily need to be the same version of Hive that is used internally by
    +   * Spark SQL for execution.
    +   */
    +  private def hiveMetastoreVersion(conf: SQLConf): String = {
    +    conf.getConf(HIVE_METASTORE_VERSION)
    +  }
    +
    +  /**
    +   * The location of the jars that should be used to instantiate the HiveMetastoreClient.  This
    +   * property can be one of three options:
    +   *  - a classpath in the standard format for both hive and hadoop.
    +   *  - builtin - attempt to discover the jars that were used to load Spark SQL and use those. This
    +   *              option is only valid when using the execution version of Hive.
    +   *  - maven - download the correct version of hive on demand from maven.
    +   */
    +  private def hiveMetastoreJars(conf: SQLConf): String = {
    +    conf.getConf(HIVE_METASTORE_JARS)
    +  }
    +
    +  /**
    +   * A comma separated list of class prefixes that should be loaded using the classloader that
    +   * is shared between Spark SQL and a specific version of Hive. An example of classes that should
    +   * be shared is JDBC drivers that are needed to talk to the metastore. Other classes that need
    +   * to be shared are those that interact with classes that are already shared.  For example,
    +   * custom appenders that are used by log4j.
    +   */
    +  private def hiveMetastoreSharedPrefixes(conf: SQLConf): Seq[String] = {
    +    conf.getConf(HIVE_METASTORE_SHARED_PREFIXES).filterNot(_ == "")
    +  }
    +
    +  /**
    +   * A comma separated list of class prefixes that should explicitly be reloaded for each version
    +   * of Hive that Spark SQL is communicating with.  For example, Hive UDFs that are declared in a
    +   * prefix that typically would be shared (i.e. org.apache.spark.*)
    +   */
    +  private def hiveMetastoreBarrierPrefixes(conf: SQLConf): Seq[String] = {
    +    conf.getConf(HIVE_METASTORE_BARRIER_PREFIXES).filterNot(_ == "")
    +  }
    +
    +  /**
    +   * Configurations needed to create a [[HiveClient]].
    +   */
    +  private[hive] def hiveClientConfigurations(hiveconf: HiveConf): Map[String, String] = {
    --- End diff --
    
    I don't understand. This is used to create a `HiveClient`, which happens 
before we create `HiveCatalog`.
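    
    For reference, a minimal sketch of how these settings are supplied from the user side. The `spark.sql.hive.metastore.*` keys below are the standard settings that the `HIVE_METASTORE_*` constants in this diff resolve to; the concrete values are made up for illustration:
    
    ```scala
    // Illustrative values only; the keys are the standard
    // spark.sql.hive.metastore.* settings read by the accessors in this diff.
    import org.apache.spark.SparkConf
    
    val conf = new SparkConf()
      // Version of the Hive client used to talk to the metastore (may differ
      // from the version Spark SQL uses internally for execution).
      .set("spark.sql.hive.metastore.version", "1.2.1")
      // Where to find the client jars: a classpath, "builtin", or "maven".
      .set("spark.sql.hive.metastore.jars", "maven")
      // Classes shared between Spark SQL and the Hive client, e.g. a JDBC
      // driver needed to talk to the metastore.
      .set("spark.sql.hive.metastore.sharedPrefixes", "com.mysql.jdbc")
      // Classes reloaded for each Hive version, e.g. Hive UDFs declared in an
      // otherwise shared prefix (hypothetical class prefix below).
      .set("spark.sql.hive.metastore.barrierPrefixes", "org.apache.spark.examples.udf")
    ```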

