Github user jinxing64 commented on the issue:

    https://github.com/apache/spark/pull/19068
  
    @yaooqinn 
    This change works well for me, thanks for the fix!
    After this change, the Hive client for execution (which points to a dummy local 
metastore) will never be used when running SQL in `spark-sql`; only the Hive client 
that points to the real metastore is used. Right?
    So why was it designed before to have the Hive client in `SparkSQLCLIDriver` point 
to a dummy local metastore? Does this change break that design?
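
    For readers following along, a sketch of what the "dummy local metastore" vs. 
"real metastore" distinction usually looks like at the configuration level, using 
standard Hive configuration keys (the database path and thrift host below are 
placeholders, not values from this PR):

    ```xml
    <!-- Execution-side client: a local metastore backed by embedded Derby,
         used only for Spark-internal state, not for user tables. -->
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:derby:;databaseName=/tmp/spark-exec-metastore;create=true</value>
    </property>

    <!-- Metadata-side client: the real metastore that queries resolve against.
         (thrift host is a placeholder) -->
    <property>
      <name>hive.metastore.uris</name>
      <value>thrift://metastore-host:9083</value>
    </property>
    ```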


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
