Bouncing this thread again. Any other thoughts please?
On 17 September 2015 at 23:21, Laxman Ch wrote:
> No Naga. That won't help.
>
> I am running two applications (app1 - 100 vcores, app2 - 100 vcores) with
> the same user, in the same queue (capacity = 100 vcores). In
Because none of the key Hadoop contributors have a signature that
contains the unsubscribe address, these emails could technically be
classified as spam. Note that all mailing lists, whether in a key
member's signature or otherwise, always explain how you can
unsubscribe.
---
Thanks Rohith for your thoughts,
but I think this configuration might not completely solve the scenario
mentioned by Laxman: if there is some time gap between the first and the
second app, then even though we have fairness or priority set for the apps,
there will still be starvation.
IIUC we
I think Laxman should also tell us more about which application type
he is running. The normal use case of MAPREDUCE should be working as
intended, but if he has, for example, one map using 100 vcores, then the
second map will have to wait until the app completes. The same would
happen if the
Hi Laxman,
What I meant was: suppose we support and configure
yarn.scheduler.capacity..app-limit-factor to 0.25; then a single app
should not take more than 25% of the resources in the queue.
This would be a more generic configuration which the admin can enforce,
rather than expecting it to be
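A rough sketch of how such a setting might look in capacity-scheduler.xml, assuming the proposed property followed the usual per-queue naming convention. Note this is purely illustrative: app-limit-factor is a proposal discussed in this thread, not an existing CapacityScheduler property, and the queue path "root.default" is an example.

```xml
<!-- Hypothetical sketch only: app-limit-factor is the proposed setting
     discussed above, not an existing Hadoop configuration key.
     "root.default" is an illustrative queue path. -->
<property>
  <name>yarn.scheduler.capacity.root.default.app-limit-factor</name>
  <value>0.25</value>
  <description>
    A single application may use at most 25% of this queue's resources.
  </description>
</property>
```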
Hi Ted Yu,
I'm using HBase 0.96.
Hi Anubhav,
Yes, I can ping the machine. Also, I have added the IP
in the hosts file. I'm able to telnet to the ZooKeeper port on the host machine.
Thanks
On Fri, Sep 25, 2015 at 7:32 PM, Anubhav Agarwal
well ... why, when they can always send UNSUBSCRIBE to the whole group :)
On Tue, Sep 22, 2015 at 5:31 PM, Namikaze Minato
wrote:
> Step 1:
> Send an e-mail to user-unsubscr...@hadoop.apache.org
>
>
> Done.
>
Thanks Rohith, Naga and Lloyd for the responses.
> I think Laxman should also tell us more about which application type he
is running.
We run MR jobs mostly with the default core/memory allocation (1 vcore, 1.5 GB).
Our problem is more about controlling the *resources used simultaneously by
all
IMO, it's better to have an application-level configuration than a
scheduler/queue-level configuration.
Having a queue-level configuration will restrict every single application
that runs in that queue.
But we may want to configure these limits for only some set of jobs and
also for every
Hello,
I would like to know how this works with the Fair Scheduler in YARN/Hadoop. I'm
trying to configure a parent queue with maxResources in the allocations file, i.e.
fair-scheduler.xml. Then I want to create child queues. As the documentation
about the Fair Scheduler says, "Queues can be arranged in a hierarchy to
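For reference, a minimal fair-scheduler.xml sketch of what the setup described above might look like: a parent queue capped with maxResources and two child queues under it. The queue names, limits, and weights here are made-up examples, not values from this thread.

```xml
<?xml version="1.0"?>
<!-- Illustrative allocations file: a parent queue capped with maxResources
     and two child queues sharing it by weight. Names and numbers are
     examples only. -->
<allocations>
  <queue name="parent">
    <maxResources>100000 mb,100 vcores</maxResources>
    <queue name="child1">
      <weight>2.0</weight>
    </queue>
    <queue name="child2">
      <weight>1.0</weight>
    </queue>
  </queue>
</allocations>
```

Child queues inherit their parent's cap, so apps in child1 and child2 together cannot exceed the parent's maxResources.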
Hi Laxman,
Ideally, I understand it would be better if this were available at the application
level, but then each user is expected to ensure that he gives the right
configuration, within the limits of max capacity.
And what if a user submits some app (a query-execution kind of app) without this