Re: Limit on number of simultaneous Spark frameworks on Mesos?

2014-08-21 Thread Martin Weindel
This sounds like you hit the same problem as I did: https://issues.apache.org/jira/browse/MESOS-1688 Note that there is a patch for Spark as a workaround for this deadlock: https://github.com/apache/spark/pull/1860

Regards, Martin

On 20.08.2014 21:39, Cody Koeninger wrote: > I'm seeing situation…

Re: Limit on number of simultaneous Spark frameworks on Mesos?

2014-08-20 Thread Claudiu Barbura
Hi, there's a "framework starvation" thread you should look up… we provided a patch for it in 0.18 and promised Vinod to write a detailed blog about it… but I've been swamped and did not get to it yet. One of these days…

Claudiu

On 8/20/14, 1:23 PM, "Timothy Chen" wrote: > Can you share your…

Re: Limit on number of simultaneous Spark frameworks on Mesos?

2014-08-20 Thread Cody Koeninger
At least some of the jobs are typically doing work that would make it difficult to share, e.g. accessing HDFS. I'll see if I can get a smaller reproducible case.

On Wed, Aug 20, 2014 at 3:23 PM, Timothy Chen wrote: > Can you share your Spark / Mesos configurations and the Spark job? I'd like…

Re: Limit on number of simultaneous Spark frameworks on Mesos?

2014-08-20 Thread Timothy Chen
Can you share your Spark / Mesos configurations and the Spark job? I'd like to repro it.

Tim

> On Aug 20, 2014, at 12:39 PM, Cody Koeninger wrote:
> I'm seeing situations where starting e.g. a 4th Spark job on Mesos results in none of the jobs making progress. This happens even with --executor-memory…

Limit on number of simultaneous Spark frameworks on Mesos?

2014-08-20 Thread Cody Koeninger
I'm seeing situations where starting e.g. a 4th Spark job on Mesos results in none of the jobs making progress. This happens even with --executor-memory set to values that should not come close to exceeding the availability per node, and even if the 4th job is doing something completely trivial (e.g.…
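For context, the settings at issue here are typically supplied either as spark-submit flags (as in Cody's --executor-memory) or via spark-defaults.conf. A minimal sketch of the latter, with values that are illustrative assumptions rather than details taken from this thread:

```
# spark-defaults.conf (illustrative values only; master URL and sizes are assumptions)
spark.master            mesos://mesos-master:5050
spark.executor.memory   2g
# Fine-grained mode (the default in this era of Spark) lets concurrent jobs
# share Mesos resource offers; coarse-grained mode pins resources per job.
spark.mesos.coarse      false
```

Each concurrently running Spark job registers as its own framework with the Mesos master, which is why the interaction of several such jobs with Mesos's offer cycle matters in the starvation scenario discussed above.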