Hi Karthik,
Yes, all the queues are always active (at least one job is running at a
time), so the fair share of each queue ends up very low. How should the fair
scheduler be designed for this kind of case? Do you have any best practices
for designing fair-scheduler.xml?
Weights - is the correct way to
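Not knowing your actual queue layout, a minimal sketch of a fair-scheduler.xml that uses weights might look like the following (the queue names, weights, and minResources values are placeholders to adapt to your workload):

```xml
<?xml version="1.0"?>
<allocations>
  <!-- Hypothetical queues: weights are relative shares, so when all
       queues are busy, "etl" gets 3/6 of the cluster, "adhoc" 2/6,
       and "reporting" 1/6 -->
  <queue name="etl">
    <weight>3.0</weight>
    <minResources>4096 mb,4 vcores</minResources>
  </queue>
  <queue name="adhoc">
    <weight>2.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
  </queue>
  <queue name="reporting">
    <weight>1.0</weight>
  </queue>
</allocations>
```

Since weights only express relative priority, they work even when every queue is continuously active: each queue's share stays proportional to its weight rather than collapsing toward an equal split.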
Hi,
Sorry to resurrect an old thread, but my Hadoop 2.6.0 setup is still proving
troublesome. (I was diverted onto another project after my original email
and am now returning to the subject.)
I've got Hadoop 2.6.0 set up in pseudo-distributed mode and can't view logs
properly.
My Overview is
Hi Kumar,
This has to do with yarn.scheduler.capacity.maximum-am-resource-percent,
which defaults to 0.1. It limits the fraction of cluster resources that can
be used by ApplicationMasters, and thereby controls the number of
concurrently running applications.
Refer to
https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html#Queue_Properties
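As a sketch, if you wanted to allow roughly three times as many concurrent applications, you could raise the cap in capacity-scheduler.xml (the value 0.3 here is an illustrative choice, not a recommendation for your cluster):

```xml
<!-- capacity-scheduler.xml: raise the AM resource cap from the 0.1 default -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.3</value>
  <!-- Up to 30% of cluster resources may now be consumed by
       ApplicationMasters, so more apps can run concurrently -->
</property>
```

Note that this can also be set per queue via yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent if only some queues need more concurrency.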
Hi,
I'm trying to build a fresh Hadoop cluster using Cloudera Manager 5.5.1 and
CDH 5.5.2. HDFS is up and running, but when I start MapReduce (MR1), the
JobTracker comes up while none of the TaskTrackers start.
The error is:
Command aborted because of exception: Command timed-out after 150 seconds