[ https://issues.apache.org/jira/browse/SPARK-2459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059636#comment-14059636 ]

Nan Zhu edited comment on SPARK-2459 at 7/12/14 4:35 AM:
---------------------------------------------------------

I discussed this with [~liancheng]; he is working on merging the branch into master,
so a new pull request might interrupt his work. He asked me to file a JIRA first.


was (Author: codingcat):
I discussed with [~liancheng], he is working on merging the branch to master, 
so a new merge request may interrupt his work, he asked me to submit a JIRA 
first

> the user should be able to configure the resources used by JDBC server
> ----------------------------------------------------------------------
>
>                 Key: SPARK-2459
>                 URL: https://issues.apache.org/jira/browse/SPARK-2459
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.1.0
>            Reporter: Nan Zhu
>
> I'm trying the JDBC server, and I found that it always occupies all cores in
> the cluster.
> The reason is that when the HiveContext is created, nothing related to
> spark.cores.max or spark.executor.memory is set; see
> SparkSQLEnv.scala (https://github.com/apache/spark/blob/8032fe2fae3ac40a02c6018c52e76584a14b3438/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala),
> L41-L43.
> [~liancheng] 
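
For reference, spark.cores.max and spark.executor.memory are standard Spark properties; a sketch of the kind of configuration a user would expect the JDBC server to honor (the values below are hypothetical examples, not recommendations):

```
# spark-defaults.conf -- example values only
spark.cores.max        4     # cap total cores taken on a standalone cluster
spark.executor.memory  2g    # memory per executor
```

The report is that settings like these are not picked up when SparkSQLEnv creates the HiveContext, so the server grabs every core in the cluster regardless.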



--
This message was sent by Atlassian JIRA
(v6.2#6252)
