Re: OOM Error

2019-09-07 Thread Ankit Khettry
Sure folks, will try later today!

Best Regards
Ankit Khettry

On Sat, 7 Sep, 2019, 6:56 PM Sunil Kalra wrote:
> Ankit
>
> Can you try reducing the number of cores or increasing memory? Because with
> the below configuration, each core is getting ~3.5 GB. Otherwise your data is s
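Sunil's ~3.5 GB figure comes from dividing executor memory by executor cores. A minimal sketch of that arithmetic, with hypothetical numbers (the thread does not show Ankit's actual configuration):

```python
def memory_per_core_gb(executor_memory_gb: float, executor_cores: int) -> float:
    """Approximate heap share each concurrently running task gets."""
    return executor_memory_gb / executor_cores

# Hypothetical starting point matching the ~3.5 GB per core quoted above:
print(memory_per_core_gb(17.5, 5))   # 3.5

# The two remedies suggested, in spark-submit terms (values illustrative):
# either lower --executor-cores so fewer tasks share the same heap,
print(memory_per_core_gb(17.5, 2))   # 8.75
# or raise --executor-memory while keeping the core count.
print(memory_per_core_gb(25.0, 5))   # 5.0
```

Either change raises the memory available to each concurrent task, which is what matters for an OOM during wide operations.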

Re: OOM Error

2019-09-07 Thread Ankit Khettry
Thanks Chris. Going to try it soon by setting spark.sql.shuffle.partitions to 2001. Also, I was wondering if it would help to repartition the data by the fields I am using in the group-by and window operations?

Best Regards
Ankit Khettry

On Sat, 7 Sep, 2019, 1:05 PM Chris Teoh wrote:
>
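The two ideas combine naturally: set the shuffle-partition count once, then repartition on the grouping keys before the heavy operations. Choosing 2001 rather than 2000 is deliberate: above 2000 shuffle partitions, Spark tracks shuffle block sizes with the more compact HighlyCompressedMapStatus, which eases driver memory pressure. A PySpark sketch of this pattern; the column names (`user_id`, `event_ts`) and paths are hypothetical, and it requires a Spark runtime, so it is a configuration sketch rather than a standalone script:

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("oom-investigation")
    # >2000 switches Spark to HighlyCompressedMapStatus for shuffle tracking
    .config("spark.sql.shuffle.partitions", "2001")
    .getOrCreate()
)

df = spark.read.parquet("/path/to/input")  # hypothetical input path

# Repartition by the same key used in the groupBy/window operations, so
# those stages can reuse the partitioning instead of shuffling skewed data.
df = df.repartition(2001, "user_id")

w = Window.partitionBy("user_id").orderBy("event_ts")
result = (
    df.withColumn("rn", F.row_number().over(w))
      .groupBy("user_id")
      .agg(F.max("rn").alias("events_per_user"))
)
result.write.mode("overwrite").parquet("/path/to/output")
```

Whether repartitioning helps depends on key skew: if a handful of keys dominate, any partitioning by those keys still concentrates the data on a few tasks.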

Re: OOM Error

2019-09-07 Thread Ankit Khettry
Nope, it's a batch job.

Best Regards
Ankit Khettry

On Sat, 7 Sep, 2019, 6:52 AM Upasana Sharma, <028upasana...@gmail.com> wrote:
> Is it a streaming job?
>
> On Sat, Sep 7, 2019, 5:04 AM Ankit Khettry wrote:
>> I have a Spark job that consists of a large

OOM Error

2019-09-06 Thread Ankit Khettry
of them are even marked resolved. Can someone guide me as to how to approach this problem? I am using Databricks Spark 2.4.1.

Best Regards
Ankit Khettry

Re: An alternative logic to collaborative filtering works fine but we are facing run time issues in executing the job

2019-04-16 Thread Ankit Khettry
node resources. Try running the job in YARN mode and, if the issue persists, try increasing the disk volumes.

Best Regards
Ankit Khettry

On Wed, 17 Apr, 2019, 9:44 AM Balakumar iyer S wrote:
> Hi,
>
> While running the following spark code in the cluster with following
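The suggestion above translates to a master setting plus attention to executor-local storage. A sketch of the YARN part, assuming the job is launched from a node where HADOOP_CONF_DIR points at the cluster configuration (the application name is a placeholder); this is a configuration sketch that needs a live cluster, not a runnable script:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # Run on YARN so the job can use the whole cluster's resources
    # instead of a single node's.
    .master("yarn")
    .appName("collab-filtering-alternative")  # hypothetical app name
    .getOrCreate()
)
```

The disk-volume advice is separate: shuffle and spill data land on the executors' local disks (the directories named by spark.local.dir, or YARN's local dirs), so a job that dies after filling them needs larger volumes or more local directories rather than a Spark memory setting.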