Re: Do jobs fail because of other users of a cluster?

2017-01-24 Thread David Frese
On 24/01/2017 at 02:43, Matthew Dailey wrote: In general, Java processes fail with an OutOfMemoryError when your code and data do not fit into the memory allocated to the runtime. In Spark, that memory is controlled through the --executor-memory flag. If you are running Spark on YARN, then …
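To make the --executor-memory point concrete, here is a minimal sketch (not from the thread) of the equivalent programmatic configuration, assuming Spark 2.x on YARN. The app name, the 4g heap, and the 512 MB overhead are illustrative values, and spark.yarn.executor.memoryOverhead is the pre-2.3 name of the overhead setting:

    import org.apache.spark.sql.SparkSession

    object MemoryConfigSketch {
      def main(args: Array[String]): Unit = {
        // Programmatic equivalent of `spark-submit --executor-memory 4g`;
        // all values here are illustrative, not from the thread.
        val spark = SparkSession.builder()
          .appName("memory-config-sketch")
          .config("spark.executor.memory", "4g")               // JVM heap per executor
          .config("spark.yarn.executor.memoryOverhead", "512") // off-heap overhead in MB added
                                                               // to the container request (Spark <= 2.2 name)
          .getOrCreate()

        // On YARN, each executor container requests roughly heap + overhead;
        // a container that grows past that limit is killed by the NodeManager.
        println(spark.sparkContext.getConf.get("spark.executor.memory"))
        spark.stop()
      }
    }

On the command line, the same settings would be passed to spark-submit as --executor-memory 4g (and, on Spark <= 2.2, --conf spark.yarn.executor.memoryOverhead=512).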

Re: Do jobs fail because of other users of a cluster?

2017-01-23 Thread Sirisha Cheruvu
> … failures that are caused by incorrect settings on my side (e.g. because my data does not fit into memory), and those failures that are caused by resource consumption/blocking from other jobs? Thanks for sharing your thoughts and experiences! …

Re: Do jobs fail because of other users of a cluster?

2017-01-23 Thread Matthew Dailey
> … failures that are caused by incorrect settings on my side (e.g. because my data does not fit into memory), and those failures that are caused by resource consumption/blocking from other jobs? Thanks for sharing your thoughts and experiences!

Do jobs fail because of other users of a cluster?

2017-01-18 Thread David Frese
… and those failures that are caused by resource consumption/blocking from other jobs? Thanks for sharing your thoughts and experiences! -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Do-jobs-fail-because-of-other-users-of-a-cluster-tp28318.html