Can you try decreasing the level of parallelism you are passing to
those functions? I had this issue when I used a value of 500, and it went
away when I dropped it to 200.
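For context, the parallelism value here is the numPartitions argument that groupByKey/reduceByKey accept; with Spark's default HashPartitioner, each key is routed to partition hash(key) mod numPartitions. A minimal plain-Python sketch of that routing (illustrative only, not Spark's implementation; the function names are made up):

```python
def partition_for(key, num_partitions):
    # Mirrors a hash partitioner: Python's % yields a non-negative
    # result for a positive modulus, so this is a valid partition id.
    return hash(key) % num_partitions

def partition_keys(keys, num_partitions):
    # Bucket keys the way a shuffle would, to see how many reduce-side
    # partitions (and hence concurrent tasks) a given setting creates.
    buckets = {p: [] for p in range(num_partitions)}
    for k in keys:
        buckets[partition_for(k, num_partitions)].append(k)
    return buckets

keys = [f"user-{i}" for i in range(1000)]
few = partition_keys(keys, 200)   # fewer, larger partitions
many = partition_keys(keys, 500)  # more, smaller partitions
```

Dropping numPartitions from 500 to 200 means fewer shuffle blocks and fewer simultaneous reduce tasks, which can sidestep resource-related failures.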
Thanks
Best Regards
On Wed, Oct 8, 2014 at 9:28 AM, Andrew Ash and...@andrewash.com wrote:
Hi Meethu,
I believe you may [...]
Hi all,
My code was working fine in Spark 1.0.2, but after upgrading to 1.1.0 it
throws exceptions and tasks fail.
The code contains some map and filter transformations followed by groupByKey
(reduceByKey in another version of the code). What I could find out is that
the code works fine [...]
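For reference, the two aggregations mentioned are logically equivalent but differ in shuffle volume: groupByKey ships every value for a key across the network, while reduceByKey combines values on the map side before shuffling. A plain-Python sketch of the semantics (illustrative only, not Spark's implementation):

```python
from collections import defaultdict

def group_by_key(pairs):
    # groupByKey semantics: collect every value for each key.
    out = defaultdict(list)
    for k, v in pairs:
        out[k].append(v)
    return dict(out)

def reduce_by_key(pairs, fn):
    # reduceByKey semantics: fold the values for each key with fn.
    # In Spark this combining also happens pre-shuffle on each mapper,
    # which is why reduceByKey moves far less data than groupByKey.
    out = {}
    for k, v in pairs:
        out[k] = fn(out[k], v) if k in out else v
    return out

pairs = [("a", 1), ("b", 2), ("a", 3)]
grouped = group_by_key(pairs)                      # {"a": [1, 3], "b": [2]}
reduced = reduce_by_key(pairs, lambda x, y: x + y)  # {"a": 4, "b": 2}
```

If the pipeline fails only at the groupByKey/reduceByKey stage, the shuffle is the likely culprit, which fits the regression discussed below.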
Hi Meethu,
I believe you may be hitting a regression in
https://issues.apache.org/jira/browse/SPARK-3633
If you are able, could you please try running a patched version of Spark
1.1.0 that has commit 4fde28c reverted and see if the errors go away?
Posting your results on that bug would be helpful.