On 24/01/2017 at 02:43, Matthew Dailey wrote:
In general, Java processes fail with an OutOfMemoryError when your code
and data do not fit into the memory allocated to the runtime. In
Spark, that memory is controlled through the --executor-memory flag.
If you are running Spark on YARN, then YARN also enforces its own limit
on each executor container and will kill containers that exceed their
requested memory, so the JVM heap plus off-heap overhead has to stay
within that allocation.
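
For concreteness, here is a minimal sketch of setting those same limits
programmatically instead of via spark-submit flags. The app name, the 4g
heap, and the 512 MB overhead are illustrative assumptions, not values
from this thread; spark.yarn.executor.memoryOverhead is the YARN overhead
property name used by Spark 1.x/2.x.

  import org.apache.spark.{SparkConf, SparkContext}

  // Illustrative sizing only: 4 GB of executor heap plus 512 MB of
  // off-heap headroom for the YARN container; tune both to your job.
  val conf = new SparkConf()
    .setAppName("memory-sizing-example")
    .set("spark.executor.memory", "4g")               // same knob as --executor-memory
    .set("spark.yarn.executor.memoryOverhead", "512") // in MB

  // The master (e.g. --master yarn) is normally supplied by spark-submit.
  val sc = new SparkContext(conf)

If the container is killed by YARN rather than failing with an
OutOfMemoryError, raising the overhead setting is usually the first thing
to try.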
> ... failures that are caused by incorrect settings on my side (e.g. because my
> data does not fit into memory), and those failures that are caused by
> resource consumption/blocking from other jobs?
>
> Thanks for sharing your thoughts and experiences!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Do-jobs-fail-because-of-other-users-of-a-cluster-tp28318.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.