Re: [Yarn] Spark AMs dead lock

2016-04-06 Thread Peter Rudenko
It doesn't matter - just an example. Imagine a yarn cluster with 100GB of RAM and I submit a lot of jobs simultaneously in a loop.

Thanks,
Peter Rudenko

On 4/6/16 7:22 PM, Ted Yu wrote:
> Which hadoop release are you using ?
> bq. yarn cluster with 2GB RAM
> I assume 2GB is per node. Isn't this too
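The deadlock Peter describes can be modeled abstractly: YARN schedules each application's ApplicationMaster first, and executors can only launch from whatever memory is left afterwards. If enough jobs are submitted at once that the AMs alone fill the cluster, every job holds an AM but none can get an executor, so nothing makes progress. A toy simulation of that starvation (all sizes and counts are invented for illustration, not taken from the thread):

```python
def simulate(cluster_mb, am_mb, executor_mb, n_jobs):
    """Toy model: every job's AM is admitted greedily first; executors
    only launch from the memory remaining after all AMs are placed."""
    ams = min(n_jobs, cluster_mb // am_mb)      # AMs admitted until RAM runs out
    free = cluster_mb - ams * am_mb             # memory left over for executors
    runnable = min(ams, free // executor_mb)    # jobs that can actually run
    return ams, runnable

# 100 GB cluster, 1 GB per AM, 4 GB per executor, 200 jobs in a loop:
# 100 AMs fit and consume all memory, zero executors start -> deadlock.
print(simulate(102400, 1024, 4096, 200))   # -> (100, 0)

# Same cluster with only 10 jobs: all 10 get both an AM and an executor.
print(simulate(102400, 1024, 4096, 10))    # -> (10, 10)
```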

Re: [Yarn] Spark AMs dead lock

2016-04-06 Thread Ted Yu
Which hadoop release are you using ?

bq. yarn cluster with 2GB RAM

I assume 2GB is per node. Isn't this too low for your use case ?

Cheers

On Wed, Apr 6, 2016 at 9:19 AM, Peter Rudenko wrote:
> Hi i have a situation, say i have a yarn cluster with 2GB RAM. I'm
>
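One common mitigation for this kind of AM starvation (not discussed in the thread itself, so take it as an assumption about the setup) is to cap the fraction of cluster resources that ApplicationMasters may consume, so that some memory is always left for executors. With the CapacityScheduler this is the `yarn.scheduler.capacity.maximum-am-resource-percent` property in `capacity-scheduler.xml`; the value below is an example, not a recommendation:

```xml
<property>
  <!-- Allow at most 20% of the queue's resources to be used by running
       ApplicationMasters; further app submissions queue up instead of
       starving executors of memory. -->
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.2</value>
</property>
```

With such a cap in place, submitting many jobs in a loop makes the excess applications wait in ACCEPTED state rather than all acquiring AMs and deadlocking.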