r which YARN stops spawning tasks. I may be wrong here.
>
>
> Regards,
> Sandeep
>
> On Fri, Nov 6, 2015 at 6:15 PM, Laxman Ch <laxman@gmail.com> wrote:
>
>> Hi Sandeep,
>>
>> Please configure the following items to the cores and memory per node you
>
Hi Sandeep,
Please configure the following items to the cores and memory per node you
wanted to allocate for Yarn containers.
Their defaults are 8 cores and 8 GB. That's why you were stuck at
31 containers (4 nodes * 8 cores - 1 ApplicationMaster).
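The list of items Laxman refers to was truncated above; most likely these are the NodeManager resource settings, whose defaults are indeed 8 vcores and 8192 MB. A sketch of a yarn-site.xml fragment, with the caveat that the property names are my reconstruction and the values are illustrative:

```xml
<!-- yarn-site.xml: per-node resources available to YARN containers.
     Property names reconstructed; the original list was truncated. -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>16</value> <!-- default is 8 -->
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>32768</value> <!-- default is 8192 (8 GB) -->
</property>
```

With the defaults, 4 nodes * 8 vcores = 32 containers, minus 1 for the ApplicationMaster, matches the 31 observed.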
2015 at 07:37, Harsh J <ha...@cloudera.com> wrote:
> If all your Apps are MR, then what you are looking for is MAPREDUCE-5583
> (it can be set per-job).
>
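For reference, MAPREDUCE-5583 (Hadoop 2.7+) added per-job caps on simultaneously running tasks. A hedged sketch of setting them, using the property names that JIRA introduced (values are illustrative):

```xml
<!-- Per-job cap on simultaneously running tasks (MAPREDUCE-5583).
     0 (the default) means no limit; can be set per job with -D. -->
<property>
  <name>mapreduce.job.running.map.limit</name>
  <value>50</value>
</property>
<property>
  <name>mapreduce.job.running.reduce.limit</name>
  <value>10</value>
</property>
```

The same limits can be passed on the command line per job, e.g. `-Dmapreduce.job.running.map.limit=50`.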
> On Thu, Oct 1, 2015 at 3:03 PM Laxman Ch <laxman@gmail.com> wrote:
>
>> Hi Naga,
>>
>> Lik
rced from queue/scheduler side.
>
> + Naga
>
> --
> *From:* Laxman Ch [laxman@gmail.com]
> *Sent:* Tuesday, September 29, 2015 16:52
>
> *To:* user@hadoop.apache.org
> *Subject:* Re: Concurrency control
>
> IMO, it's better to have an application-level configuration t
Bouncing this thread again. Any other thoughts please?
On 17 September 2015 at 23:21, Laxman Ch <laxman@gmail.com> wrote:
> No Naga. That won't help.
>
> I am running two applications (app1 - 100 vcores, app2 - 100 vcores) as the
> same user, which run in the same queue (
h high
> > demand may be prioritized ahead of an application with less usage. This is
> > to offset the tendency to favor small apps, which could result in starvation
> > for large apps if many small ones enter and leave the queue continuously
> > (optional,
pecting it to be configured per app by the user.
>
> And as for Rohith's suggestion of the FairOrdering policy, I think it should
> solve the problem if the app that is submitted first has not already hogged
> all the queue's resources.
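The FairOrdering policy discussed here is enabled per leaf queue in capacity-scheduler.xml. A sketch, assuming a hypothetical queue named root.default:

```xml
<!-- capacity-scheduler.xml: fair ordering within a leaf queue.
     Queue name is illustrative. -->
<property>
  <name>yarn.scheduler.capacity.root.default.ordering-policy</name>
  <value>fair</value>
</property>
<property>
  <!-- optional size-based weighting, per the quoted docs above -->
  <name>yarn.scheduler.capacity.root.default.ordering-policy.fair.enable-size-based-weight</name>
  <value>true</value>
</property>
```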
>
> + Naga
>
> -
Hi,
In YARN, do we have any way to control the amount of resources (vcores,
memory) used by an application SIMULTANEOUSLY?
- In my cluster, I noticed some large, long-running MR apps occupying all
the slots of the queue and blocking other apps from getting started.
- I'm using the Capacity Scheduler.
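One Capacity Scheduler knob for keeping a single user from occupying a whole queue is the user-limit settings. A sketch for a hypothetical queue root.default (values illustrative); note these act per user, so two apps submitted by the same user, as later in this thread, would still share one limit:

```xml
<!-- capacity-scheduler.xml: limit how much of the queue one user can take.
     Queue name and values are illustrative. -->
<property>
  <!-- each active user is guaranteed at least 25% of the queue -->
  <name>yarn.scheduler.capacity.root.default.minimum-user-limit-percent</name>
  <value>25</value>
</property>
<property>
  <!-- a single user may consume at most 0.5x the queue's configured capacity -->
  <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
  <value>0.5</value>
</property>
```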
guration, it seems the capacity configuration and splitting of the queue
> is not done right, or you might refer to the Fair Scheduler if you want more
> fairness in container allocation across different apps.
>
> On Thu, Sep 17, 2015 at 4:10 PM, Laxman Ch <laxman@gmail.com> wrote:
>
Hi Chetna,
All Capacity Scheduler queue configurations are expressed as percentages
only, not absolute values (as you asked). This is done to auto-scale the
queues when new nodes are added to the cluster. The Capacity Scheduler
enforces the following:
- The sum total of all allocations at any given level of
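Concretely, the capacities at each level of the queue hierarchy must sum to 100% of the parent. A sketch with two hypothetical child queues of root:

```xml
<!-- capacity-scheduler.xml: capacities are percentages of the parent queue.
     Queue names are illustrative. -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>prod,adhoc</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.prod.capacity</name>
  <value>70</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.adhoc.capacity</name>
  <value>30</value> <!-- 70 + 30 = 100 at this level -->
</property>
```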
; will enforce strict CPU usage for a given container if required.
>
> + Naga
>
> On Thu, Sep 17, 2015 at 4:42 PM, Laxman Ch <laxman@gmail.com> wrote:
>
>> Yes. I'm already using cgroups. Cgroups help in controlling resources
>> at the container level. But my require
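For reference, the strict per-container CPU enforcement Naga mentions is enabled through the LinuxContainerExecutor's cgroups handler. A sketch of the relevant yarn-site.xml settings:

```xml
<!-- yarn-site.xml: cgroups-based CPU enforcement
     (requires a properly configured LinuxContainerExecutor). -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
<property>
  <!-- cap each container at its allocated vcores even when the node is idle -->
  <name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
  <value>true</value>
</property>
```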