Hi,
You should not set these JVM options directly; use `spark.executor.memory` and
`spark.driver.memory` instead to tune memory.
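For context, exit code 143 corresponds to SIGTERM (128 + 15), and on YARN it usually means the container was killed for exceeding its memory limit, which is why raising these settings is the usual first step. A minimal sketch of how they can be passed at submit time (the class name, master, jar name, and memory values here are illustrative assumptions, not taken from your job):

```shell
# Hypothetical spark-submit invocation: raise executor/driver memory
# via the documented configuration keys instead of raw JVM options.
spark-submit \
  --class com.example.MyJob \
  --master yarn \
  --conf spark.executor.memory=4g \
  --conf spark.driver.memory=2g \
  my-job.jar
```

The equivalent `--executor-memory` / `--driver-memory` flags also exist and do the same thing.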
// maropu
On Thu, Apr 14, 2016 at 11:32 AM, Divya Gehlot wrote:
Hi,
I am using Spark 1.5.2 with Scala 2.10. One of my Spark jobs keeps failing
with exit code 143: the job where I am using unionAll and a groupBy operation
on multiple columns.
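For reference, the failing stage presumably has roughly the following shape (the DataFrame and column names are hypothetical; `unionAll` is the pre-2.0 DataFrame API available in Spark 1.5.2):

```scala
// Hypothetical shape of the failing job (names and columns assumed).
// unionAll concatenates two DataFrames with the same schema (Spark 1.x API);
// the subsequent groupBy on multiple columns triggers a shuffle, which is
// typically where memory pressure and container kills (exit code 143) appear.
val combined = df2015.unionAll(df2016)
val aggregated = combined
  .groupBy("customerId", "region")  // multi-column grouping key (assumed)
  .count()
```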
Please advise me on options to optimize it.
The one option I am using now is:
--conf