Nan Zhu created SPARK-2459:
------------------------------

             Summary: the user should be able to configure the resources used 
by JDBC server
                 Key: SPARK-2459
                 URL: https://issues.apache.org/jira/browse/SPARK-2459
             Project: Spark
          Issue Type: Improvement
          Components: SQL
    Affects Versions: 1.1.0
            Reporter: Nan Zhu


I was trying out the JDBC server and found that it always occupies all cores in the cluster.

The reason is that when the HiveContext is created, nothing related to spark.cores.max or spark.executor.memory is set:

SparkSQLEnv.scala(https://github.com/apache/spark/blob/8032fe2fae3ac40a02c6018c52e76584a14b3438/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala)
  L41-L43
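A minimal sketch (plain Scala, no Spark dependency) of how the server could pick up resource settings from system properties instead of ignoring them. The property names are real Spark settings; the helper name and the default values here are hypothetical, for illustration only:

```scala
// Hypothetical helper: resolve resource settings from JVM system
// properties, falling back to illustrative defaults when unset.
def resourceSettings(): Map[String, String] = {
  val defaults = Map(
    "spark.cores.max"       -> "2",   // hypothetical default
    "spark.executor.memory" -> "512m" // hypothetical default
  )
  defaults.map { case (key, default) =>
    key -> sys.props.getOrElse(key, default)
  }
}
```

The resulting map could then be applied to the SparkConf before the HiveContext is constructed, so that users who pass -Dspark.cores.max=... (or set it in spark-defaults) are not overridden.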

[~liancheng] 



--
This message was sent by Atlassian JIRA
(v6.2#6252)