Hi All,
I have a system with 6 physical cores, and each core has 8 hardware threads,
resulting in 48 virtual cores. The following are the settings in the
configuration files.
*spark-env.sh*
export SPARK_WORKER_CORES=1
*spark-defaults.conf*
spark.driver.cores 1
spark.executor.cores 1
spark.cores.max 1
So
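For context, in standalone mode these three settings interact: SPARK_WORKER_CORES caps how many cores the worker daemon offers to executors on that node, spark.executor.cores is the number of cores each executor gets, and spark.cores.max caps the total cores one application may claim across the cluster. With all three set to 1, an application runs a single one-core executor no matter how many of the 48 virtual cores sit idle. A minimal sketch, assuming the goal is to let one application use most of the machine (the numbers are placeholders to adjust for your workload):

*spark-env.sh*
# offer all 48 virtual cores to executors on this node
export SPARK_WORKER_CORES=48

*spark-defaults.conf*
# 8 cores per executor, so up to 6 executors fit on this node
spark.executor.cores 8
# cap this application at all 48 cores
spark.cores.max 48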
Hi,
I am getting the following error while processing a large input.
...
[Stage 18:> (90 + 24) / 240]
16/08/10 19:39:54 WARN TaskSetManager: Lost task 86.1 in stage 18.0 (TID 2517, bscpower8n2-data): FetchFailed(null, shuffleId=0, mapId=-1,
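For what it's worth, a FetchFailed with mapId=-1 and a null block manager address usually means a shuffle map output has gone missing, most often because the executor that wrote it was lost (for example, killed for exceeding its memory) or a fetch timed out under load. A hedged sketch of settings that are commonly tried for this, with placeholder values to adapt to your data size:

*spark-defaults.conf*
# tolerate slower shuffle fetches before declaring failure
spark.network.timeout 600s
# give executors more headroom so they are not killed mid-shuffle
spark.executor.memory 8g
# more, smaller shuffle partitions reduce the size of each fetch
# (use spark.default.parallelism instead for the RDD API)
spark.sql.shuffle.partitions 480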
Hi All,
I need to set the same memory and cores for each worker on the same machine,
and for this purpose I have set the following properties in conf/spark-env.sh:
export SPARK_EXECUTOR_INSTANCE=3
export SPARK_WORKER_CORES=3
export SPARK_WORKER_MEMORY=8g
but only one worker is getting the desired memory and cores.
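If it helps, SPARK_EXECUTOR_INSTANCE does not appear to be a variable that standalone mode reads (spark.executor.instances belongs to YARN); the number of worker daemons started per machine is controlled by SPARK_WORKER_INSTANCES, and SPARK_WORKER_CORES / SPARK_WORKER_MEMORY then apply to each of those workers. A minimal sketch of conf/spark-env.sh under that assumption:

# three worker daemons on this machine, each offering 3 cores and 8g
export SPARK_WORKER_INSTANCES=3
export SPARK_WORKER_CORES=3
export SPARK_WORKER_MEMORY=8g

Note that each worker claims its memory independently, so three workers at 8g each need at least 24g free on the box.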