GitHub user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14757#discussion_r76016337
  
    --- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveExternalCatalogSuite.scala ---
    @@ -21,26 +21,26 @@ import org.apache.hadoop.conf.Configuration
     
     import org.apache.spark.SparkConf
     import org.apache.spark.sql.catalyst.catalog._
    -import org.apache.spark.sql.hive.client.HiveClient
     
     /**
      * Test suite for the [[HiveExternalCatalog]].
      */
     class HiveExternalCatalogSuite extends ExternalCatalogSuite {
     
    -  private val client: HiveClient = {
    -    // We create a metastore at a temp location to avoid any potential
    -    // conflict of having multiple connections to a single derby instance.
    -    HiveUtils.newClientForExecution(new SparkConf, new Configuration)
    +  private val externalCatalog: HiveExternalCatalog = {
    +    new HiveExternalCatalog(new SparkConf, new Configuration)
       }
     
       protected override val utils: CatalogTestUtils = new CatalogTestUtils {
         override val tableInputFormat: String = "org.apache.hadoop.mapred.SequenceFileInputFormat"
         override val tableOutputFormat: String = "org.apache.hadoop.mapred.SequenceFileOutputFormat"
    -    override def newEmptyCatalog(): ExternalCatalog =
    -      new HiveExternalCatalog(client, new Configuration())
    +    // We create a metastore at a temp location to avoid any potential
    --- End diff ---
    
    is this comment still valid after the change?
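
    For context, the comment being moved is about isolating the embedded Derby metastore so two catalogs never open the same Derby instance. A minimal sketch of that idea, assuming the standard Hive `javax.jdo.option.ConnectionURL` key and a hypothetical temp directory (this is not the PR's actual code; the HiveExternalCatalog constructor is the one shown in the diff above):

        import java.nio.file.Files

        import org.apache.hadoop.conf.Configuration
        import org.apache.spark.SparkConf

        // Sketch: give the suite its own Derby metastore under a fresh temp
        // directory, avoiding conflicts from multiple connections to a
        // single Derby instance.
        val tempDir = Files.createTempDirectory("metastore").toFile
        val hadoopConf = new Configuration()
        // javax.jdo.option.ConnectionURL is the standard Hive metastore JDBC URL key.
        hadoopConf.set(
          "javax.jdo.option.ConnectionURL",
          s"jdbc:derby:;databaseName=${tempDir.getAbsolutePath}/metastore_db;create=true")
        val externalCatalog = new HiveExternalCatalog(new SparkConf, hadoopConf)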

