One executor per Spark slave should be fine, right? I am not really sure what
benefit one would get from starting more executors (JVMs) on one node. At the
end of the day the JVM creates native/kernel threads through system calls, so
whether those threads are spawned by one process or by several, I don't see
much benefit (in theory it should be the same). With separate processes one
would get separate address spaces in the kernel, but memory isn't an issue so
far.
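That said, if someone did want more than one executor JVM per node in
standalone mode, my understanding is that something like this in
conf/spark-env.sh would do it (just a sketch, the core counts are made up and
I have not needed this myself):

  # run two worker daemons on this node, each offering 8 cores to executors
  export SPARK_WORKER_INSTANCES=2
  export SPARK_WORKER_CORES=8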

On Fri, Feb 17, 2017 at 5:32 PM, Alex Kozlov <ale...@gmail.com> wrote:

> I found that in some previous CDH versions Spark starts only one executor
> per Spark slave, and DECREASING --executor-cores in standalone makes the
> total # of executors go up. Just my 2¢.
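> Something along these lines, if I remember correctly (the master URL, jar
> name, and core counts below are just placeholders):
>
>   spark-submit --master spark://master:7077 --executor-cores 2 myapp.jar
>
> With 16 cores on a slave I would expect --executor-cores 2 to end up with
> more executors than --executor-cores 8 would.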
>
> On Fri, Feb 17, 2017 at 5:20 PM, kant kodali <kanth...@gmail.com> wrote:
>
>> Hi Satish,
>>
>> I am using Spark 2.0.2. And no, I have not passed those options because I
>> didn't want to shoot in the dark. According to the documentation it looks
>> like SPARK_WORKER_CORES is the one that should do it. If not, can you
>> please explain how these settings interact with each other?
>>
>> --num-executors
>> --executor-cores
>> --total-executor-cores
>> SPARK_WORKER_CORES
>>
>> Thanks!
>>
>>
>> On Fri, Feb 17, 2017 at 5:13 PM, Satish Lalam <sati...@microsoft.com>
>> wrote:
>>
>>> Have you tried passing --executor-cores or --total-executor-cores as
>>> arguments, depending on the Spark version?
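>>> For example, something along these lines (the master URL and jar name are
>>> placeholders):
>>>
>>> spark-submit --master spark://master:7077 --executor-cores 4 --total-executor-cores 16 myapp.jar
>>>
>>> In standalone mode those should map to spark.executor.cores and
>>> spark.cores.max, if I recall correctly.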
>>>
>>>
>>>
>>>
>>>
>>> *From:* kant kodali [mailto:kanth...@gmail.com]
>>> *Sent:* Friday, February 17, 2017 5:03 PM
>>> *To:* Alex Kozlov <ale...@gmail.com>
>>> *Cc:* user @spark <user@spark.apache.org>
>>> *Subject:* Re: question on SPARK_WORKER_CORES
>>>
>>>
>>>
>>> Standalone.
>>>
>>>
>>>
>>> On Fri, Feb 17, 2017 at 5:01 PM, Alex Kozlov <ale...@gmail.com> wrote:
>>>
>>> What Spark mode are you running the program in?
>>>
>>>
>>>
>>> On Fri, Feb 17, 2017 at 4:55 PM, kant kodali <kanth...@gmail.com> wrote:
>>>
>>> When I submit a job using spark-shell, I get something like this:
>>>
>>>
>>>
>>> [Stage 0:========>    (36814 + 4) / 220129]
>>>
>>>
>>>
>>> All I want is to increase the number of tasks running in parallel from 4
>>> to 16, so I exported an env variable SPARK_WORKER_CORES=16 in
>>> conf/spark-env.sh. I thought that should do it, but it doesn't. It still
>>> shows me 4. Any idea?
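>>> For reference, this is the only line I added (I am not sure whether the
>>> workers need to be restarted for it to take effect):
>>>
>>> export SPARK_WORKER_CORES=16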
>>>
>>>
>>>
>>> Thanks much!
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>> Alex Kozlov
>>> (408) 507-4987
>>> (650) 887-2135 efax
>>> ale...@gmail.com
>>>
>>>
>>>
>>
>>
>
>
> --
> Alex Kozlov
> (408) 507-4987
> (650) 887-2135 efax
> ale...@gmail.com
>
