It doesn't matter - it was just an example. Imagine a YARN cluster with 100GB of RAM, and I submit a lot of jobs simultaneously in a loop.
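
For concreteness, the kind of loop I mean looks roughly like this (the
jar path, class name, and job count are placeholders); every
spark-submit runs in the background, so all applications compete for
the same YARN memory at once:

# Illustrative only: the jar, class, and number of jobs are made up.
for i in $(seq 1 50); do
  spark-submit \
    --master yarn --deploy-mode cluster \
    --driver-memory 1g \
    --num-executors 2 \
    --executor-memory 1g \
    --class com.example.MyJob \
    /path/to/my-job.jar "$i" &
done
# Wait for all background submissions to finish.
wait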

Thanks,
Peter Rudenko

On 4/6/16 7:22 PM, Ted Yu wrote:
Which Hadoop release are you using?

bq. yarn cluster with 2GB RAM

I assume 2GB is per node. Isn't this too low for your use case?

Cheers

On Wed, Apr 6, 2016 at 9:19 AM, Peter Rudenko <petro.rude...@gmail.com> wrote:

    Hi, I have a situation: say I have a YARN cluster with 2GB RAM. I'm
    submitting 2 Spark jobs with "--driver-memory 1GB --num-executors 2
    --executor-memory 1GB". So I see 2 Spark AMs running, but they are
    unable to allocate executor containers and start the actual job,
    and they hang for a while. Is it possible to set some sort of
    timeout for acquiring executors and otherwise kill the application?
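
    For reference, the two submissions look roughly like this (the jar
    paths and class names are placeholders, not the real jobs):

    # Each application asks for a 1g driver (the AM in cluster mode)
    # plus 2 x 1g executors, so the two AM containers alone can exhaust
    # the 2GB cluster and no executor containers can be allocated.
    spark-submit --master yarn --deploy-mode cluster \
      --driver-memory 1g --num-executors 2 --executor-memory 1g \
      --class com.example.JobA /path/to/job-a.jar &
    spark-submit --master yarn --deploy-mode cluster \
      --driver-memory 1g --num-executors 2 --executor-memory 1g \
      --class com.example.JobB /path/to/job-b.jar &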

    Thanks,
    Peter Rudenko



