Hi Satish,

I am using Spark 2.0.2. And no, I have not passed those flags because I
didn't want to shoot in the dark. According to the documentation it looks
like SPARK_WORKER_CORES is the one that should do it. If not, can you
please explain how these settings interplay? (I've sketched my current
understanding below.)

--num-executors
--executor-cores
--total-executor-cores
SPARK_WORKER_CORES
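
For what it's worth, here is roughly how I would expect to combine them on
a standalone cluster, going by the docs. This is only a sketch, not
something I have verified; the master URL, class name, and jar below are
placeholders:

    # conf/spark-env.sh on each worker (total cores that worker offers):
    export SPARK_WORKER_CORES=16

    # spark-submit flags (standalone mode):
    #   --total-executor-cores  caps cores for the whole application
    #   --executor-cores        cores per executor
    #   (--num-executors applies on YARN only, as far as I can tell)
    ./bin/spark-submit \
      --master spark://<master-host>:7077 \
      --total-executor-cores 16 \
      --executor-cores 4 \
      --class com.example.MyApp \
      my-app.jar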

Thanks!


On Fri, Feb 17, 2017 at 5:13 PM, Satish Lalam <sati...@microsoft.com> wrote:

> Have you tried passing --executor-cores or --total-executor-cores as
> arguments, depending on the Spark version?
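>
> For example (illustrative only; substitute your own master URL and
> application):
>
>     spark-submit --master spark://<master>:7077 --total-executor-cores 16 <your-app>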
>
> From: kant kodali [mailto:kanth...@gmail.com]
> Sent: Friday, February 17, 2017 5:03 PM
> To: Alex Kozlov <ale...@gmail.com>
> Cc: user @spark <user@spark.apache.org>
> Subject: Re: question on SPARK_WORKER_CORES
>
> Standalone.
>
> On Fri, Feb 17, 2017 at 5:01 PM, Alex Kozlov <ale...@gmail.com> wrote:
>
> What Spark mode are you running the program in?
>
> On Fri, Feb 17, 2017 at 4:55 PM, kant kodali <kanth...@gmail.com> wrote:
>
> When I submit a job using spark-shell I get something like this:
>
> [Stage 0:========>    (36814 + 4) / 220129]
>
> Now all I want is to increase the number of parallel tasks running from
> 4 to 16, so I exported an env variable called SPARK_WORKER_CORES=16 in
> conf/spark-env.sh. I thought that should do it, but it doesn't. It still
> shows me 4. Any idea?
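>
> i.e., the line I added to conf/spark-env.sh:
>
>     export SPARK_WORKER_CORES=16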
>
> Thanks much!
>
> --
>
> Alex Kozlov
> (408) 507-4987
> (650) 887-2135 efax
> ale...@gmail.com