Thanks Jörn,

FairScheduler is already enabled in yarn-site.xml:

yarn.resourcemanager.scheduler.class =
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
yarn.scheduler.fair.allow-undeclared-pools = true
yarn.scheduler.fair.user-as-default-queue = true
yarn.scheduler.fair.
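For reference, the flattened property lines above correspond to yarn-site.xml entries of this shape (a sketch reconstructed only from the names and values already listed):

```xml
<!-- yarn-site.xml: FairScheduler settings as listed above -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
<property>
  <name>yarn.scheduler.fair.allow-undeclared-pools</name>
  <value>true</value>
</property>
<property>
  <name>yarn.scheduler.fair.user-as-default-queue</name>
  <value>true</value>
</property>
```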
The FairScheduler in YARN gives you the possibility to use more resources than configured, if they are available.
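One common way to stop a long-running Thrift server from starving other jobs under the FairScheduler is to cap its queue in fair-scheduler.xml. A sketch, not from this thread; the queue names and limits below are hypothetical placeholders:

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml sketch: queue names and sizes are hypothetical -->
<allocations>
  <queue name="thrift">
    <!-- cap the Thrift server queue so it can never take the whole cluster -->
    <maxResources>8192 mb,4 vcores</maxResources>
    <weight>1.0</weight>
  </queue>
  <queue name="batch">
    <!-- sqoop/mapreduce imports would land here -->
    <weight>2.0</weight>
  </queue>
</allocations>
```

With `yarn.scheduler.fair.user-as-default-queue` set to true, jobs otherwise land in per-user queues, so the import job would need to be submitted to the capped-off cluster's other queue explicitly.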
On 24. Feb 2018, at 13:47, akshay naidu wrote:
>> it sure is not able to get sufficient resources from YARN to start the
>> containers.
>
> that's right. It worked when I reduced executors from thrift but it also
> reduced thrift's performance.
that's right. It worked when I reduced executors from thrift but it also
reduced thrift's performance.
But that is not the solution I am looking for. My sqoop import job
runs just once a day, and thrift
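If permanently reducing the Thrift server's executors hurts its performance, one alternative worth testing (a common approach, not something proposed in this thread) is Spark dynamic allocation, so the Thrift server releases idle executors back to YARN between queries. A launch sketch; the numbers are placeholders, and the external shuffle service must be enabled on the NodeManagers for this to work:

```shell
# Sketch: start the Spark Thrift Server with dynamic allocation so idle
# executors are returned to YARN (all values below are placeholders)
$SPARK_HOME/sbin/start-thriftserver.sh \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s
```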
is it only with this import job, or does any other job you submit also fail
to start? As a test, just try to run another Spark job or a MapReduce job
and see if the job can be started.
Reduce the thrift server executors
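That test can be run with the stock examples jar; the second command then shows whether the new application is stuck waiting for containers. The jar path and version glob are placeholders for your installation:

```shell
# Submit a trivial MapReduce job while the Thrift server is running
yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10

# If it hangs at "Running job", check whether it is sitting in ACCEPTED,
# i.e. waiting for YARN to allocate containers
yarn application -list -appStates ACCEPTED
```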
hello vijay,
appreciate your reply.

> what was the error when you tried to run the mapreduce import job while
> the thrift server was running?

it didn't throw any error, it just gets stuck at
INFO mapreduce.Job: Running job: job_151911053
and resumes the moment I kill Thrift.
thanks
On Tue,
what was the error when you tried to run the mapreduce import job while the
thrift server was running?
is this the only config you changed? what was the config before...
also share the spark thrift server job config, such as number of executors,
cores, memory etc.
My guess is your mapreduce job is unable to get