Thank you Matthias! I'll follow your suggestions. Regarding the TB values, I had 
the mistaken impression that "g" implies units of 512 MB; that's why I had set the 
memory to around 2 TB.
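For my own notes, the Spark memory suffixes are plain JVM-style units, so the 
values I had set were far larger than intended, e.g.:

    spark.executor.memory 512m     # 512 MB
    spark.executor.memory 2g       # 2 GB
    spark.executor.memory 2048g    # 2048 GB, i.e. about 2 TB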


Thanks again!

Arijit

________________________________
From: Matthias Boehm <[email protected]>
Sent: Tuesday, July 11, 2017 10:42:58 PM
To: [email protected]
Subject: Re: Decaying performance of SystemML

Without any specifics of scripts or datasets, it's unfortunately hard,
if not impossible, to help you here. However, note that the memory
configuration seems wrong: why would you configure the driver and
executors with 2TB if you only have 256GB per node? You might be
observing swapping. Also note that the maxResultSize is irrelevant in
case SystemML creates the Spark context, because we would set it to
unlimited anyway.
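
Just as an illustration (the exact values depend on OS and resource-manager
overhead and are not a tuned recommendation), on 256GB nodes you would keep
both settings well below physical memory, for example:

    spark.executor.memory 200g
    spark.driver.memory 20g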

Regarding generally recommended configurations, it's usually a good idea
to use one executor per worker node with the number of cores set to the
number of virtual cores. This allows maximum sharing of broadcasts
across tasks and hence reduces memory pressure.
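
As a rough sketch (the resource numbers are placeholders for a 32-core node,
and the jar and script names are just examples; adjust to your setup), this
could look like:

    spark-submit --master yarn \
      --num-executors <number of worker nodes> \
      --executor-cores 32 \
      --executor-memory 200g \
      --driver-memory 20g \
      SystemML.jar -f your_script.dml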

Regards,
Matthias

On 7/11/2017 9:36 AM, arijit chakraborty wrote:
> Hi,
>
>
> I'm creating a process using SystemML, but after a certain period of time the 
> performance decreases.
>
>
> 1) This warning message: WARN TaskSetManager: Stage 25254 contains a task of 
> very large size (3954 KB). The maximum recommended task size is 100 KB.
>
>
> 2) For Spark, we are using the following settings:
>
>                      spark.executor.memory 2048g
>                      spark.driver.memory 2048g
>                      spark.driver.maxResultSize 2048
>
> Is this good enough, or can we do something else to improve the performance? 
> We tried the Spark setup suggested in the documentation, but it 
> didn't help much.
>
>
> 3) We are running on a system with 244 GB RAM, 32 cores, and 100 GB of hard 
> disk space.
>
>
> It would be great if anyone could guide me on how to improve the performance.
>
>
> Thank you!
>
> Arijit
>
