Hi Matteo,

It depends on your configuration - yarn-site.xml (the NodeManagers'
memory capacity) and the container memory Spark requests
(spark.yarn.am.memory and the executors' memory), assuming you are not
using Dominant Resource Fairness. If the containers already running
(e.g. the Oozie launcher) consume most of the configured capacity, the
next application waits in the ACCEPTED state until memory frees up.
Could you share those settings?
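
For reference, here is roughly what I mean - the values below are only
placeholder numbers to illustrate the knobs, not recommendations for
your cluster:

In yarn-site.xml:

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>  <!-- total memory YARN may allocate on this node -->
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>  <!-- largest single container it will grant -->
  </property>

And on the Spark side (e.g. in spark-defaults.conf or via --conf):

  spark.yarn.am.memory       512m
  spark.executor.memory      1g
  spark.executor.instances   2

Keep in mind that YARN sizes each executor container as
spark.executor.memory plus spark.yarn.executor.memoryOverhead, so the
effective request is a bit larger than the heap you set.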

Thanks,
- Tsuyoshi

On Wed, Sep 2, 2015 at 7:25 AM, Matteo Luzzi <matteo.lu...@gmail.com> wrote:
> Hi all!
> I'm developing a system where I need to run Spark jobs on YARN. I'm using
> a two-node cluster (one master and one slave) for testing and I'm submitting
> the application through Oozie, but after the first application (the Oozie
> launcher container) starts running, the other one remains in the ACCEPTED
> state. I am new to YARN, so I'm probably missing some concepts about how
> containers are requested and assigned to applications. It seems that I can
> execute only one container at a time, even though there are still free
> resources. When I kill the first running application, the other one moves
> to the RUNNING state.
> I'm also using the Fair Scheduler since, according to the documentation, it
> should avoid starvation problems.
> I don't know if this is a problem with Spark or with YARN. Please share any
> suggestions you may have.
>
>