[ https://issues.apache.org/jira/browse/MRQL-73?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557854#comment-14557854 ]

Hudson commented on MRQL-73:
----------------------------

SUCCESS: Integrated in mrql-master-snapshot #20 (See [https://builds.apache.org/job/mrql-master-snapshot/20/])
[MRQL-73] Set the max number of tasks in Spark mode (fegaras: https://git-wip-us.apache.org/repos/asf?p=incubator-mrql.git&a=commit&h=5eb81d992cec29084d2a97686f95b08cdc727809)
* conf/mrql-env.sh
* bin/mrql.spark


> Set the max number of tasks in Spark mode
> -----------------------------------------
>
>                 Key: MRQL-73
>                 URL: https://issues.apache.org/jira/browse/MRQL-73
>             Project: MRQL
>          Issue Type: Bug
>          Components: Run-Time/Spark
>    Affects Versions: 0.9.6
>            Reporter: Leonidas Fegaras
>            Assignee: Leonidas Fegaras
>            Priority: Critical
>
> The number of worker nodes in Spark distributed mode, which is specified by
> the MRQL -nodes parameter, must be used to set the environment variables
> SPARK_WORKER_INSTANCES (renamed SPARK_EXECUTOR_INSTANCES in Spark 1.3.*) and
> SPARK_WORKER_CORES; otherwise, Spark will always use all the available cores
> in the cluster.
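
A minimal sketch of how such a cap might look in a shell script such as
conf/mrql-env.sh, assuming one core per worker; the variable `nodes` and its
default of 2 are hypothetical placeholders for the -nodes value, not taken
from the actual commit:

    # Hypothetical excerpt: cap the number of Spark tasks instead of
    # letting Spark claim every available core in the cluster.
    nodes=${nodes:-2}                       # value of the MRQL -nodes parameter (hypothetical)
    export SPARK_WORKER_INSTANCES=$nodes    # worker count (name used before Spark 1.3)
    export SPARK_EXECUTOR_INSTANCES=$nodes  # same setting under its Spark 1.3.* name
    export SPARK_WORKER_CORES=1             # cores per worker (assumption: one core each)

Exporting both the pre-1.3 and 1.3.* variable names keeps the setting
effective regardless of which Spark version the cluster runs.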



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
