Github user liufengdb commented on the issue:

    https://github.com/apache/spark/pull/20864
  
    I thought the directory is also created from this line: 
https://github.com/apache/spark/blob/master/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/session/HiveSessionImpl.java#L143.
 For that one, we need to think about whether we can remove all of the temp 
directory creation, because the statements are executed by Spark SQL, and Hive 
plays no role in the thrift server.
    
    You are right that `HiveClientImpl` (the Hive client inside Spark SQL) will 
also produce such temp directories. However, the following line alone appears 
to be sufficient to add the jar to the class loader: 
https://github.com/apache/spark/blob/master/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala#L836.
 So I doubt we still need the `runSqlHive(s"ADD JAR $path")` call to download 
the jar to a temp directory.
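    To make the distinction concrete, the class-loader half of `ADD JAR` amounts 
to roughly the following (a minimal Java sketch, not Spark's actual code; the 
`AddJarSketch` class and the `example-udf.jar` path are made up for 
illustration):

    ```java
    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;

    // Sketch: making a jar's classes visible to the session without copying
    // the jar into a Hive scratch/temp directory first.
    public class AddJarSketch {

        // Build a child class loader that reads the jar from its original
        // location. URLClassLoader resolves its URLs lazily, so this succeeds
        // even before any class from the jar is requested.
        static URLClassLoader classLoaderFor(String jarPath) throws Exception {
            URL jarUrl = new File(jarPath).toURI().toURL();
            return new URLClassLoader(
                new URL[] { jarUrl },
                Thread.currentThread().getContextClassLoader());
        }

        public static void main(String[] args) throws Exception {
            URLClassLoader loader = classLoaderFor("example-udf.jar");
            // The jar is referenced in place; nothing was downloaded or staged.
            System.out.println(loader.getURLs()[0]);
        }
    }
    ```

    The point of the sketch: the download-to-temp-dir step is a separate, 
Hive-specific behavior layered on top of this, which is why the class-loader 
registration alone may be enough.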
    
    Overall, I think we need a comprehensive design for removing the Hive 
legacy from both the thrift server and Spark SQL. Adding more temporary fixes 
will only make such a design harder.
    


