Thanks. I'll try that. Hopefully that should work.
On Mon, Jul 4, 2016 at 9:12 PM, Mathieu Longtin <math...@closetwork.org>
wrote:
> I started with a download of 1.6.0. These days, we use a self compiled
> 1.6.2.
>
> On Mon, Jul 4, 2016 at 11:39 AM Ashwin Raaghav <ashraag...@gmail.com>
> wrote:
> 1.6.1.
>
> I have no idea. SPARK_WORKER_CORES should do the same.
>
> On Mon, Jul 4, 2016 at 11:24 AM Ashwin Raaghav <ashraag...@gmail.com>
> wrote:
>
>> Which version of Spark are you using? 1.6.1?
>>
>
Which version of Spark are you using? 1.6.1?
Any ideas as to why it is not working in ours?
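
For reference, the SPARK_WORKER_CORES variable suggested earlier in this
thread is a per-worker setting in Standalone mode. A minimal sketch,
assuming a Standalone deployment where each worker node sources
conf/spark-env.sh (the value 1 is only an example):

```shell
# conf/spark-env.sh on each worker node (Standalone mode).
# Caps the number of cores this worker advertises to the master,
# which in turn caps how many concurrent tasks -- and hence how
# many pyspark.daemon worker processes -- run on the machine.
SPARK_WORKER_CORES=1
```

The worker must be restarted for the change to take effect.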
On Mon, Jul 4, 2016 at 8:51 PM, Mathieu Longtin <math...@closetwork.org>
wrote:
> 16.
>
> On Mon, Jul 4, 2016 at 11:16 AM Ashwin Raaghav <ashraag...@gmail.com>
> wrote:
>
> [...] use more than 1 core per server. However, it seems it will
> start as many pyspark daemons as there are cores, but maybe not use them.
>
> On Mon, Jul 4, 2016 at 10:44 AM Ashwin Raaghav <ashraag...@gmail.com>
> wrote:
>
>> Hi Mathieu,
>>
>> Isn't that the same as setting [...]
>> [...] node to 1. But the number of
>> pyspark.daemon processes is still not coming down. It looks like initially
>> there is one pyspark.daemon process and this in turn spawns as many
>> pyspark.daemon processes as there are cores in the machine.
>>
>> Any help is appreciated.
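
A hedged way to limit the behavior described above from the application
side (rather than the worker side) is to cap cores per executor. This is
a sketch, assuming a Standalone cluster and that the property name below
applies to your 1.6.x version, per the Spark 1.6 configuration docs:

```properties
# conf/spark-defaults.conf -- limit each executor to one core.
# Fewer concurrent task slots per executor should mean fewer
# pyspark.daemon worker processes forked per machine.
spark.executor.cores  1
```

Note that in Standalone mode this caps cores per executor, not per
machine, so combining it with SPARK_WORKER_CORES may still be needed.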
--
Regards,
Ashwin Raaghav