Here's how to do this, step by step:

1. Add the following properties inside the <configuration> element of mapred-site.xml:


  <property>
    <name>mapred.fairscheduler.preemption</name>
    <final>true</final>
    <value>true</value> 
  </property>
  <property>
    <name>mapred.jobtracker.taskScheduler</name>   
    <final>true</final>
    <value>org.apache.hadoop.mapred.FairScheduler</value>
  </property>
  <property>
    <name>mapred.acls.enabled</name>
    <final>true</final>
    <value>true</value>
  </property>
  <property>
    <name>mapred.fairscheduler.allow.undeclared.pools</name>
    <final>true</final>
    <value>false</value>
  </property>
  <property>
    <name>mapred.fairscheduler.poolnameproperty</name>
    <final>true</final>
    <value>mapred.job.queue.name</value>
  </property>
  <property>
    <name>mapred.fairscheduler.allocation.file</name>
    <final>true</final>
    <value>/etc/hadoop/conf/allocations.xml</value>
  </property>
  <property>
    <name>mapred.queue.names</name>
    <final>true</final>
    <value>sqoop,default</value>
  </property>
  <property>
    <name>mapreduce.job.acl-view-job</name>
    <final>true</final>
    <value>*</value>
  </property>


2. Create mapred-queue-acls.xml in the same directory as mapred-site.xml, where 
you define the ACLs for all the queues (a way to verify them follows the example):

<configuration>
  <property>
    <name>mapred.queue.sqoop.acl-submit-job</name>
    <value>usera, userb</value>
  </property>
  <property>
    <name>mapred.queue.sqoop.acl-administer-jobs</name>
    <value>usera, userc</value>
  </property>
  <!-- Repeat the above two properties for every queue defined in mapred-site.xml. -->
</configuration>
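
Once the JobTracker is up, the Hadoop 1.x CLI's "hadoop queue" command is a 
quick way to verify the queues and ACLs (run it as one of the users in 
question; exact output varies by version):

# List the configured queues and their scheduling information
hadoop queue -list

# Show which queue operations the current user is allowed to perform
hadoop queue -showacls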


3. Define the fair scheduler allocations file (at the location specified in 
mapred-site.xml) to use the queues you defined and assign resources to them 
(a quick way to validate the file follows the example):

<?xml version="1.0"?>
<allocations>
  <defaultMinSharePreemptionTimeout>300</defaultMinSharePreemptionTimeout>
  <pool name="sqoop">
    <minMaps>700</minMaps>
    <minReduces>175</minReduces>
    <maxRunningJobs>25</maxRunningJobs>
  </pool>
  <pool name="default">
    <minMaps>120</minMaps>
    <minReduces>40</minReduces>
    <maxRunningJobs>40</maxRunningJobs>
  </pool>
  <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
</allocations>
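
It can help to sanity-check this file before restarting anything; a minimal 
check, assuming xmllint (from libxml2) is available on the JobTracker host and 
the path above is used:

# Exits non-zero and prints an error if the XML is not well-formed
xmllint --noout /etc/hadoop/conf/allocations.xml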


4. After this, restart the JobTracker.
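
How exactly depends on your install; on a plain tarball install of Hadoop 1.x 
it would look roughly like this (assumes $HADOOP_HOME is set; packaged 
installs such as CDH use their own service scripts instead):

# Stop and start the JobTracker daemon so it picks up the new scheduler config
$HADOOP_HOME/bin/hadoop-daemon.sh stop jobtracker
$HADOOP_HOME/bin/hadoop-daemon.sh start jobtracker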


Your queues should now be in place. You can then submit a job to the right 
queue by passing -Dmapred.job.queue.name=sqoop (or default), for example:
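
With the stock examples jar (the jar path and the input/output paths below 
are illustrative; the same -D option works for Sqoop and other 
ToolRunner-based jobs when placed right after the tool name):

# Submit a wordcount job to the "sqoop" queue
hadoop jar $HADOOP_HOME/hadoop-examples-1.2.1.jar wordcount \
  -Dmapred.job.queue.name=sqoop \
  /user/foo/input /user/foo/output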

You can then check which queue a job actually went to at:
http://<job tracker hostname>:50030/scheduler


Also note the property mapreduce.job.acl-view-job in mapred-site.xml: the 
value * means anyone can view the jobs.
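
If you would rather restrict who can view jobs, the value format is a 
comma-separated list of users, a space, then a comma-separated list of 
groups; a sketch with hypothetical names:

  <property>
    <name>mapreduce.job.acl-view-job</name>
    <value>usera,userb admingroup</value>
  </property>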


Try the above and let us know about any issues you still face.


================================


 
Anurag Tangri


Never wear your best trousers when you go out to fight for freedom and
truth.- Henrik Ibsen



On Thursday, November 21, 2013 11:21 AM, anurag tangri 
<anurag_tan...@yahoo.com> wrote:
 
Hi Viswanathan,

What steps have you followed to set fair scheduler ?

Thanks,

 Anurag Tangri


Never wear your best trousers when you go out to fight for freedom and
truth.- Henrik Ibsen



On Thursday, November 21, 2013 10:25 AM, Viswanathan J 
<jayamviswanat...@gmail.com> wrote:
 
Hi,
I'm running Hadoop 1.2.1 and all my jobs are running in a single queue (Queue 
1) all the time, even though I have configured default, queue 1 and queue 2.
Why aren't jobs being scheduled to all the queues?
Please help. Will running like this cause any issues?
Thanks,
