Thanks Jörn,

FairScheduler is already enabled in yarn-site.xml:

yarn.resourcemanager.scheduler.class =
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler

yarn.scheduler.fair.allow-undeclared-pools = true

yarn.scheduler.fair.user-as-default-queue = true

yarn.scheduler.fair.preemption = true

yarn.scheduler.fair.preemption.cluster-utilization-threshold = 0.8
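
For readability, here are the same properties as they appear as entries in yarn-site.xml (values exactly as listed above):

    <property>
      <name>yarn.resourcemanager.scheduler.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
    <property>
      <name>yarn.scheduler.fair.allow-undeclared-pools</name>
      <value>true</value>
    </property>
    <property>
      <name>yarn.scheduler.fair.user-as-default-queue</name>
      <value>true</value>
    </property>
    <property>
      <name>yarn.scheduler.fair.preemption</name>
      <value>true</value>
    </property>
    <property>
      <name>yarn.scheduler.fair.preemption.cluster-utilization-threshold</name>
      <value>0.8</value>
    </property>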
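
On the dynamicAllocation idea from my earlier mail further down the thread: as far as I understand it is a Spark setting rather than a YARN one. A rough, untested sketch of what the spark-defaults.conf entries for the thrift server might look like, assuming the external shuffle service is registered with the NodeManagers; the executor counts below are placeholder values I have not tuned:

    spark.dynamicAllocation.enabled              true
    spark.shuffle.service.enabled                true
    spark.dynamicAllocation.minExecutors         1
    spark.dynamicAllocation.maxExecutors         10
    spark.dynamicAllocation.executorIdleTimeout  60s

    # On each NodeManager, yarn-site.xml also needs the shuffle service:
    #   yarn.nodemanager.aux-services                      mapreduce_shuffle,spark_shuffle
    #   yarn.nodemanager.aux-services.spark_shuffle.class  org.apache.spark.network.yarn.YarnShuffleService

With that in place, idle thrift executors should be released back to YARN after the idle timeout and re-acquired when queries come in, which sounds like the behaviour I was asking about. I would appreciate confirmation from anyone who runs the thrift server this way.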

On Sat, Feb 24, 2018 at 6:26 PM, Jörn Franke <jornfra...@gmail.com> wrote:

> FairScheduler in YARN gives you the possibility to use more resources
> than configured, if they are available
>
> On 24. Feb 2018, at 13:47, akshay naidu <akshaynaid...@gmail.com> wrote:
>
>> it sure is not able to get sufficient resources from YARN to start the
>> containers.
>>
> That's right. It worked when I reduced the thrift executors, but it also
> reduced thrift's performance.
>
> But that is not the solution I am looking for. My sqoop import job runs
> just once a day, while the thrift apps run 24/7 fetching, processing, and
> displaying online reports on the website. Reducing the thrift executors
> and keeping some in spare does help run jobs other than thrift in
> parallel, but it wastes those cores when no other jobs are running.
>
> Is there something that can help allocate resources dynamically? Something
> that would automatically give thrift the maximum resources when no other
> jobs are running, and automatically share resources with jobs/apps other
> than thrift?
>
> I've heard of a property in yarn, dynamicAllocation; can this help?
>
>
> Thanks.
>
> On Sat, Feb 24, 2018 at 7:14 AM, vijay.bvp <bvpsa...@gmail.com> wrote:
>
>> It sure is not able to get sufficient resources from YARN to start the
>> containers.
>> Is it only this import job, or does any other job you submit also fail
>> to start?
>>
>> As a test, just try to run another Spark job or a MapReduce job and see
>> if it can be started.
>>
>> Reduce the thrift server executors and check whether there is then enough
>> cluster capacity available overall for new jobs.
>>
>> --
>> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>>
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>
>>
>
