Fwd: RE: Making Mumak work with capacity scheduler

2011-09-23 Thread arun k
Hi guys!

I have run Mumak as sudo and it works fine.
I am trying to run the job trace in test/data with the capacity scheduler.
I have done the following:
1. Built contrib/capacity-scheduler.
2. Copied the hadoop-*-capacity-scheduler jar from build/contrib/capacity-scheduler to lib/.
3. Added mapred.jobtracker.taskScheduler and mapred.queue.names in
mapred-site.xml.
4. In conf/capacity-scheduler.xml, set the property values for the 2 queues
   (sketched below):
   mapred.capacity-scheduler.queue.default.capacity = 20
   mapred.capacity-scheduler.queue.myqueue2.capacity = 80
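
For reference, the entries from steps 3 and 4 would look roughly like this. This is only a sketch built from the values listed above (queue name myqueue2, 20/80 split); the scheduler class name is the one that appears in the console output below:

  <!-- mapred-site.xml: use the capacity scheduler and declare the queues -->
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
  </property>
  <property>
    <name>mapred.queue.names</name>
    <value>default,myqueue2</value>
  </property>

  <!-- conf/capacity-scheduler.xml: per-queue capacities (should sum to 100) -->
  <property>
    <name>mapred.capacity-scheduler.queue.default.capacity</name>
    <value>20</value>
  </property>
  <property>
    <name>mapred.capacity-scheduler.queue.myqueue2.capacity</name>
    <value>80</value>
  </property>

The "No capacity specified for queue ..." INFO lines below suggest the scheduler never actually reads these values; one thing worth checking (an assumption, not something verified here) is whether capacity-scheduler.xml is on the conf path that mumak.sh picks up.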

When I run mumak.sh, I see the following in the console:
11/09/23 11:51:19 INFO mapred.QueueHierarchyBuilder: No capacity specified
for queue default
11/09/23 11:51:19 INFO mapred.QueueHierarchyBuilder: Created a jobQueue
default and added it as a child to
11/09/23 11:51:19 INFO mapred.QueueHierarchyBuilder: No capacity specified
for queue myqueue2
11/09/23 11:51:19 INFO mapred.QueueHierarchyBuilder: Created a jobQueue
myqueue2 and added it as a child to
11/09/23 11:51:19 INFO mapred.AbstractQueue: Total capacity to be
distributed among the others are  100.0
11/09/23 11:51:19 INFO mapred.AbstractQueue: Capacity share for un
configured queue default is 50.0
11/09/23 11:51:19 INFO mapred.AbstractQueue: Capacity share for un
configured queue myqueue2 is 50.0
11/09/23 11:51:19 INFO mapred.CapacityTaskScheduler: Capacity scheduler
started successfully

Two questions:

1. In the web GUI of the JobTracker I see both the queues but CAPACITIES ARE
REFLECTED
2. All the jobs by default are submitted to the default queue. How can I submit
jobs to various queues in Mumak?

Regards,
Arun
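
On a regular (non-simulated) cluster running the capacity scheduler, a job is normally pointed at a queue by setting mapred.job.queue.name on the job; a minimal sketch (the examples jar, input/output paths and queue name are placeholders):

  $ bin/hadoop jar hadoop-*-examples.jar wordcount \
      -Dmapred.job.queue.name=myqueue2 input output

Mumak, however, replays a Rumen trace rather than accepting live submissions, so spreading the simulated jobs across queues has to come from the simulator/trace side, which appears to be what the MAPREDUCE-1253 patch discussed later in this thread is about, rather than from a client-side setting.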

On Fri, Sep 23, 2011 at 10:12 AM, arun k arunk...@gmail.com wrote:

 Hi !

 I have changed the permissions for the hadoop extract and the /jobstory and
 /history/done dirs recursively:
 $ chmod -R 777 branch-0.22
 $ chmod -R 777 logs
 $ chmod -R 777 jobtracker
 but I still get the same problem.
 The permissions are like this http://pastebin.com/sw3UPM8t
 The log is here http://pastebin.com/CztUPywB.
 I am able to run as sudo.

 Arun

 On Thu, Sep 22, 2011 at 7:19 PM, Uma Maheswara Rao G 72686 
 mahesw...@huawei.com wrote:

 Yes Devaraj,
 From the logs, looks it failed to create /jobtracker/jobsInfo



 code snippet:

 if (!fs.exists(path)) {
   if (!fs.mkdirs(path, new FsPermission(JOB_STATUS_STORE_DIR_PERMISSION))) {
     throw new IOException(
         "CompletedJobStatusStore mkdirs failed to create " + path.toString());
   }
 }

 @Arun, can you check that you have the correct permissions, as Devaraj said?


 2011-09-22 15:53:57.598::INFO:  Started
 SelectChannelConnector@0.0.0.0:50030
 11/09/22 15:53:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
 processName=JobTracker, sessionId=
 11/09/22 15:53:57 WARN conf.Configuration: mapred.task.cache.levels is
 deprecated. Instead, use mapreduce.jobtracker.taskcache.levels
 11/09/22 15:53:57 WARN mapred.SimulatorJobTracker: Error starting tracker:
 java.io.IOException: CompletedJobStatusStore mkdirs failed to create
 /jobtracker/jobsInfo
at
 org.apache.hadoop.mapred.CompletedJobStatusStore.init(CompletedJobStatusStore.java:83)
at org.apache.hadoop.mapred.JobTracker.init(JobTracker.java:4684)
at
 org.apache.hadoop.mapred.SimulatorJobTracker.init(SimulatorJobTracker.java:81)
at
 org.apache.hadoop.mapred.SimulatorJobTracker.startTracker(SimulatorJobTracker.java:100)
at
 org.apache.hadoop.mapred.SimulatorEngine.init(SimulatorEngine.java:210)
at
 org.apache.hadoop.mapred.SimulatorEngine.init(SimulatorEngine.java:184)
at
 org.apache.hadoop.mapred.SimulatorEngine.run(SimulatorEngine.java:292)
at
 org.apache.hadoop.mapred.SimulatorEngine.run(SimulatorEngine.java:323)

 I cc'ed to Mapreduce user mailing list as well.

 Regards,
 Uma

 - Original Message -
 From: Devaraj K devara...@huawei.com
 Date: Thursday, September 22, 2011 6:01 pm
 Subject: RE: Making Mumak work with capacity scheduler
 To: common-user@hadoop.apache.org

  Hi Arun,
 
  I have gone through the logs. The Mumak simulator is trying to start the
  job tracker, and the job tracker is failing to start because it is not
  able to create the /jobtracker/jobsInfo directory.
 
  I think the directory doesn't have enough permissions. Please check the
  permissions, or look for any other reason why it is failing to create the
  dir.
 
 
  Devaraj K
 
 
  -Original Message-
  From: arun k [mailto:arunk...@gmail.com]
  Sent: Thursday, September 22, 2011 3:57 PM
  To: common-user@hadoop.apache.org
  Subject: Re: Making Mumak work with capacity scheduler
 
  Hi Uma !
 
  You got me right!
  Actually, without any patch, I modified mapred-site.xml and
  capacity-scheduler.xml appropriately and copied the capacity-scheduler jar
  accordingly.
  I am able to see the queues in the JobTracker GUI, but both queues show the
  same set of jobs executing.
  I ran with the trace and topology files from test/data:
  $ bin/mumak.sh trace_file topology_file
  Is it because I am not submitting jobs to a particular queue?
  If so, how can I do it?
 
  Got

Re: RE: Making Mumak work with capacity scheduler

2011-09-23 Thread arun k
Sorry,

1Q: In the web GUI of the JobTracker I see both the queues but CAPACITIES ARE NOT
REFLECTED.
2Q: All the jobs by default are submitted to the default queue. How can I submit
jobs to various queues in Mumak?


regards,
Arun

On Fri, Sep 23, 2011 at 11:57 AM, arun k arunk...@gmail.com wrote:

 Hi guys !

 I have run Mumak as sudo and it works fine.
 I am trying to run the job trace in test/data with the capacity scheduler.
 I have done the following:
 1. Built contrib/capacity-scheduler.
 2. Copied the hadoop-*-capacity-scheduler jar from build/contrib/capacity-scheduler
 to lib/.
 3. Added mapred.jobtracker.taskScheduler and mapred.queue.names in
 mapred-site.xml.
 4. In conf/capacity-scheduler.xml, set the property values for the 2 queues:
    mapred.capacity-scheduler.queue.default.capacity = 20
    mapred.capacity-scheduler.queue.myqueue2.capacity = 80

 When I run mumak.sh, I see the following in the console:
 11/09/23 11:51:19 INFO mapred.QueueHierarchyBuilder: No capacity specified
 for queue default
 11/09/23 11:51:19 INFO mapred.QueueHierarchyBuilder: Created a jobQueue
 default and added it as a child to
 11/09/23 11:51:19 INFO mapred.QueueHierarchyBuilder: No capacity specified
 for queue myqueue2
 11/09/23 11:51:19 INFO mapred.QueueHierarchyBuilder: Created a jobQueue
 myqueue2 and added it as a child to
 11/09/23 11:51:19 INFO mapred.AbstractQueue: Total capacity to be
 distributed among the others are  100.0
 11/09/23 11:51:19 INFO mapred.AbstractQueue: Capacity share for un
 configured queue default is 50.0
 11/09/23 11:51:19 INFO mapred.AbstractQueue: Capacity share for un
 configured queue myqueue2 is 50.0
 11/09/23 11:51:19 INFO mapred.CapacityTaskScheduler: Capacity scheduler
 started successfully

 Two questions:

 1. In the web GUI of the JobTracker I see both the queues but CAPACITIES ARE
 REFLECTED
 2. All the jobs by default are submitted to the default queue. How can I
 submit jobs to various queues in Mumak?

 Regards,
 Arun


 On Fri, Sep 23, 2011 at 10:12 AM, arun k arunk...@gmail.com wrote:

 Hi !

 I have changed the permissions for the hadoop extract and the /jobstory and
 /history/done dirs recursively:
 $ chmod -R 777 branch-0.22
 $ chmod -R 777 logs
 $ chmod -R 777 jobtracker
 but I still get the same problem.
 The permissions are like this http://pastebin.com/sw3UPM8t
 The log is here http://pastebin.com/CztUPywB.
 I am able to run as sudo.

 Arun

 On Thu, Sep 22, 2011 at 7:19 PM, Uma Maheswara Rao G 72686 
 mahesw...@huawei.com wrote:

 Yes Devaraj,
 From the logs, looks it failed to create /jobtracker/jobsInfo



 code snippet:

 if (!fs.exists(path)) {
   if (!fs.mkdirs(path, new FsPermission(JOB_STATUS_STORE_DIR_PERMISSION))) {
     throw new IOException(
         "CompletedJobStatusStore mkdirs failed to create " + path.toString());
   }
 }

 @Arun, can you check that you have the correct permissions, as Devaraj said?


 2011-09-22 15:53:57.598::INFO:  Started
 SelectChannelConnector@0.0.0.0:50030
 11/09/22 15:53:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
 processName=JobTracker, sessionId=
 11/09/22 15:53:57 WARN conf.Configuration: mapred.task.cache.levels is
 deprecated. Instead, use mapreduce.jobtracker.taskcache.levels
 11/09/22 15:53:57 WARN mapred.SimulatorJobTracker: Error starting
 tracker: java.io.IOException: CompletedJobStatusStore mkdirs failed to
 create /jobtracker/jobsInfo
at
 org.apache.hadoop.mapred.CompletedJobStatusStore.init(CompletedJobStatusStore.java:83)
at
 org.apache.hadoop.mapred.JobTracker.init(JobTracker.java:4684)
at
 org.apache.hadoop.mapred.SimulatorJobTracker.init(SimulatorJobTracker.java:81)
at
 org.apache.hadoop.mapred.SimulatorJobTracker.startTracker(SimulatorJobTracker.java:100)
at
 org.apache.hadoop.mapred.SimulatorEngine.init(SimulatorEngine.java:210)
at
 org.apache.hadoop.mapred.SimulatorEngine.init(SimulatorEngine.java:184)
at
 org.apache.hadoop.mapred.SimulatorEngine.run(SimulatorEngine.java:292)
at
 org.apache.hadoop.mapred.SimulatorEngine.run(SimulatorEngine.java:323)

 I cc'ed to Mapreduce user mailing list as well.

 Regards,
 Uma

 - Original Message -
 From: Devaraj K devara...@huawei.com
 Date: Thursday, September 22, 2011 6:01 pm
 Subject: RE: Making Mumak work with capacity scheduler
 To: common-user@hadoop.apache.org

  Hi Arun,
 
  I have gone through the logs. The Mumak simulator is trying to start the
  job tracker, and the job tracker is failing to start because it is not
  able to create the /jobtracker/jobsInfo directory.
 
  I think the directory doesn't have enough permissions. Please check the
  permissions, or look for any other reason why it is failing to create the
  dir.
 
 
  Devaraj K
 
 
  -Original Message-
  From: arun k [mailto:arunk...@gmail.com]
  Sent: Thursday, September 22, 2011 3:57 PM
  To: common-user@hadoop.apache.org
  Subject: Re: Making Mumak work with capacity scheduler
 
  Hi Uma !
 
  You got me right!
  Actually, without any patch, I modified mapred-site.xml and
  capacity-scheduler.xml appropriately and copied

Re: Making Mumak work with capacity scheduler

2011-09-22 Thread arun k
Hi Uma!

You got me right!
Actually, without any patch, I modified mapred-site.xml and capacity-scheduler.xml
appropriately and copied the capacity-scheduler jar accordingly.
I am able to see the queues in the JobTracker GUI, but both queues show the same
set of jobs executing.
I ran with the trace and topology files from test/data:
$ bin/mumak.sh trace_file topology_file
Is it because I am not submitting jobs to a particular queue?
If so, how can I do it?

I got hadoop-0.22 from
http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/
and built all three components, but when I run
arun@arun-Presario-C500-RU914PA-ACJ:~/hadoop22/branch-0.22/mapreduce/src/contrib/mumak$
bin/mumak.sh src/test/data/19-jobs.trace.json.gz
src/test/data/19-jobs.topology.json.gz
it gets stuck at some point. The log is here: http://pastebin.com/9SNUHLFy

Thanks,
Arun





On Wed, Sep 21, 2011 at 2:03 PM, Uma Maheswara Rao G 72686 
mahesw...@huawei.com wrote:


 Hello Arun,
  If you want to apply MAPREDUCE-1253 on 21 version,
  applying patch directly using commands may not work because of codebase
 changes.

  So, you take the patch and apply the lines in your code base manually. I
 am not sure any otherway for this.

 Did i understand wrongly your intention?

 Regards,
 Uma


 - Original Message -
 From: ArunKumar arunk...@gmail.com
 Date: Wednesday, September 21, 2011 1:52 pm
 Subject: Re: Making Mumak work with capacity scheduler
 To: hadoop-u...@lucene.apache.org

  Hi Uma !
 
  Mumak is not part of stable versions yet. It comes from Hadoop-
  0.21 onwards.
  Can u describe in detail You may need to merge them logically (
  back port
  them) ?
  I don't get it .
 
  Arun
 
 
  On Wed, Sep 21, 2011 at 12:07 PM, Uma Maheswara Rao G [via Lucene] 
  ml-node+s472066n3354668...@n3.nabble.com wrote:
 
   Looks that patchs are based on 0.22 version. So, you can not
  apply them
   directly.
   You may need to merge them logically ( back port them).
  
   one more point to note here 0.21 version of hadoop is not a
  stable version.
  
   Presently 0.20xx versions are stable.
  
   Regards,
   Uma
   - Original Message -
   From: ArunKumar [hidden
  email]http://user/SendEmail.jtp?type=nodenode=3354668i=0
   Date: Wednesday, September 21, 2011 12:01 pm
   Subject: Re: Making Mumak work with capacity scheduler
   To: [hidden email]
  http://user/SendEmail.jtp?type=nodenode=3354668i=1
Hi Uma !
   
I am applying patch to mumak in hadoop-0.21 version.
   
   
Arun
   
On Wed, Sep 21, 2011 at 11:55 AM, Uma Maheswara Rao G [via
  Lucene] 
[hidden email]
  http://user/SendEmail.jtp?type=nodenode=3354668i=2 wrote:
   
 Hello Arun,

  On which code base you are trying to apply the patch.
  Code should match to apply the patch.

 Regards,
 Uma

 - Original Message -
 From: ArunKumar [hidden
email]http://user/SendEmail.jtp?type=nodenode=3354652i=0
 Date: Wednesday, September 21, 2011 11:33 am
 Subject: Making Mumak work with capacity scheduler
 To: [hidden email]
http://user/SendEmail.jtp?type=nodenode=3354652i=1
  Hi !
 
  I have set up mumak and able to run it in terminal and in
  eclipse.I have modified the mapred-site.xml and capacity-
  scheduler.xml as
  necessary.I tried to apply patch MAPREDUCE-1253-
  20100804.patch in
  https://issues.apache.org/jira/browse/MAPREDUCE-1253
  https://issues.apache.org/jira/browse/MAPREDUCE-1253  as
  follows{HADOOP_HOME}contrib/mumak$patch -p0 
  patch_file_locationbut i get error
  3 out of 3 HUNK failed.
 
  Thanks,
  Arun
 
 
 



   
   
  
  
  
 
 

RE: Making Mumak work with capacity scheduler

2011-09-22 Thread Devaraj K
Hi Arun,

I have gone through the logs. The Mumak simulator is trying to start the job
tracker, and the job tracker is failing to start because it is not able to
create the /jobtracker/jobsInfo directory.

I think the directory doesn't have enough permissions. Please check the
permissions, or look for any other reason why it is failing to create the dir.



Devaraj K 
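
One way to rule the permission problem in or out (a sketch only, assuming the default store location /jobtracker/jobsInfo from the stack trace; adjust if mapred.job.tracker.persist.jobstatus.dir points somewhere else) is to pre-create the directory with open permissions on whichever file system the job tracker is using:

  # if the job-status store resolves to HDFS
  $ bin/hadoop fs -mkdir /jobtracker/jobsInfo
  $ bin/hadoop fs -chmod -R 777 /jobtracker

  # if it resolves to the local file system (plausible for a local Mumak run)
  $ sudo mkdir -p /jobtracker/jobsInfo
  $ sudo chown -R $USER /jobtracker

Alternatively, if job-status persistence is not needed for the simulation, setting mapred.job.tracker.persist.jobstatus.active (or its newer mapreduce.* equivalent) to false in mapred-site.xml should keep this code path from being exercised, assuming that is the switch that enabled it here.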


-Original Message-
From: arun k [mailto:arunk...@gmail.com] 
Sent: Thursday, September 22, 2011 3:57 PM
To: common-user@hadoop.apache.org
Subject: Re: Making Mumak work with capacity scheduler

Hi Uma !

You got me right!
Actually, without any patch, I modified mapred-site.xml and capacity-scheduler.xml
appropriately and copied the capacity-scheduler jar accordingly.
I am able to see the queues in the JobTracker GUI, but both queues show the same
set of jobs executing.
I ran with the trace and topology files from test/data:
$ bin/mumak.sh trace_file topology_file
Is it because I am not submitting jobs to a particular queue?
If so, how can I do it?

I got hadoop-0.22 from
http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/
and built all three components, but when I run
arun@arun-Presario-C500-RU914PA-ACJ:~/hadoop22/branch-0.22/mapreduce/src/contrib/mumak$
bin/mumak.sh src/test/data/19-jobs.trace.json.gz
src/test/data/19-jobs.topology.json.gz
it gets stuck at some point. The log is here: http://pastebin.com/9SNUHLFy

Thanks,
Arun





On Wed, Sep 21, 2011 at 2:03 PM, Uma Maheswara Rao G 72686 
mahesw...@huawei.com wrote:


 Hello Arun,
  If you want to apply MAPREDUCE-1253 on 21 version,
  applying patch directly using commands may not work because of codebase
 changes.

  So, you take the patch and apply the lines in your code base manually. I
 am not sure any otherway for this.

 Did i understand wrongly your intention?

 Regards,
 Uma


 - Original Message -
 From: ArunKumar arunk...@gmail.com
 Date: Wednesday, September 21, 2011 1:52 pm
 Subject: Re: Making Mumak work with capacity scheduler
 To: hadoop-u...@lucene.apache.org

  Hi Uma !
 
  Mumak is not part of stable versions yet. It comes from Hadoop-
  0.21 onwards.
  Can u describe in detail You may need to merge them logically (
  back port
  them) ?
  I don't get it .
 
  Arun
 
 
  On Wed, Sep 21, 2011 at 12:07 PM, Uma Maheswara Rao G [via Lucene] 
  ml-node+s472066n3354668...@n3.nabble.com wrote:
 
   Looks that patchs are based on 0.22 version. So, you can not
  apply them
   directly.
   You may need to merge them logically ( back port them).
  
   one more point to note here 0.21 version of hadoop is not a
  stable version.
  
   Presently 0.20xx versions are stable.
  
   Regards,
   Uma
   - Original Message -
   From: ArunKumar [hidden
  email]http://user/SendEmail.jtp?type=nodenode=3354668i=0
   Date: Wednesday, September 21, 2011 12:01 pm
   Subject: Re: Making Mumak work with capacity scheduler
   To: [hidden email]
  http://user/SendEmail.jtp?type=nodenode=3354668i=1
Hi Uma !
   
I am applying patch to mumak in hadoop-0.21 version.
   
   
Arun
   
On Wed, Sep 21, 2011 at 11:55 AM, Uma Maheswara Rao G [via
  Lucene] 
[hidden email]
  http://user/SendEmail.jtp?type=nodenode=3354668i=2 wrote:
   
 Hello Arun,

  On which code base you are trying to apply the patch.
  Code should match to apply the patch.

 Regards,
 Uma

 - Original Message -
 From: ArunKumar [hidden
email]http://user/SendEmail.jtp?type=nodenode=3354652i=0
 Date: Wednesday, September 21, 2011 11:33 am
 Subject: Making Mumak work with capacity scheduler
 To: [hidden email]
http://user/SendEmail.jtp?type=nodenode=3354652i=1
  Hi !
 
  I have set up mumak and able to run it in terminal and in
  eclipse.I have modified the mapred-site.xml and capacity-
  scheduler.xml as
  necessary.I tried to apply patch MAPREDUCE-1253-
  20100804.patch in
  https://issues.apache.org/jira/browse/MAPREDUCE-1253
  https://issues.apache.org/jira/browse/MAPREDUCE-1253  as
  follows{HADOOP_HOME}contrib/mumak$patch -p0 
  patch_file_locationbut i get error
  3 out of 3 HUNK failed.
 
  Thanks,
  Arun
 
 
 




   
   

Re: RE: Making Mumak work with capacity scheduler

2011-09-22 Thread Uma Maheswara Rao G 72686
Yes Devaraj,
From the logs, it looks like it failed to create /jobtracker/jobsInfo.



code snippet:

if (!fs.exists(path)) {
  if (!fs.mkdirs(path, new FsPermission(JOB_STATUS_STORE_DIR_PERMISSION))) {
    throw new IOException(
        "CompletedJobStatusStore mkdirs failed to create " + path.toString());
  }
}

@Arun, can you check that you have the correct permissions, as Devaraj said?


2011-09-22 15:53:57.598::INFO:  Started SelectChannelConnector@0.0.0.0:50030
11/09/22 15:53:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
processName=JobTracker, sessionId=
11/09/22 15:53:57 WARN conf.Configuration: mapred.task.cache.levels is 
deprecated. Instead, use mapreduce.jobtracker.taskcache.levels
11/09/22 15:53:57 WARN mapred.SimulatorJobTracker: Error starting tracker: 
java.io.IOException: CompletedJobStatusStore mkdirs failed to create 
/jobtracker/jobsInfo
at 
org.apache.hadoop.mapred.CompletedJobStatusStore.init(CompletedJobStatusStore.java:83)
at org.apache.hadoop.mapred.JobTracker.init(JobTracker.java:4684)
at 
org.apache.hadoop.mapred.SimulatorJobTracker.init(SimulatorJobTracker.java:81)
at 
org.apache.hadoop.mapred.SimulatorJobTracker.startTracker(SimulatorJobTracker.java:100)
at 
org.apache.hadoop.mapred.SimulatorEngine.init(SimulatorEngine.java:210)
at 
org.apache.hadoop.mapred.SimulatorEngine.init(SimulatorEngine.java:184)
at 
org.apache.hadoop.mapred.SimulatorEngine.run(SimulatorEngine.java:292)
at 
org.apache.hadoop.mapred.SimulatorEngine.run(SimulatorEngine.java:323)

I cc'ed to Mapreduce user mailing list as well.

Regards,
Uma

- Original Message -
From: Devaraj K devara...@huawei.com
Date: Thursday, September 22, 2011 6:01 pm
Subject: RE: Making Mumak work with capacity scheduler
To: common-user@hadoop.apache.org

 Hi Arun,
 
 I have gone through the logs. The Mumak simulator is trying to start the
 job tracker, and the job tracker is failing to start because it is not
 able to create the /jobtracker/jobsInfo directory.
 
 I think the directory doesn't have enough permissions. Please check the
 permissions, or look for any other reason why it is failing to create the
 dir.
 
 
 Devaraj K 
 
 
 -Original Message-
 From: arun k [mailto:arunk...@gmail.com] 
 Sent: Thursday, September 22, 2011 3:57 PM
 To: common-user@hadoop.apache.org
 Subject: Re: Making Mumak work with capacity scheduler
 
 Hi Uma !
 
 You got me right!
 Actually, without any patch, I modified mapred-site.xml and
 capacity-scheduler.xml appropriately and copied the capacity-scheduler jar
 accordingly.
 I am able to see the queues in the JobTracker GUI, but both queues show the
 same set of jobs executing.
 I ran with the trace and topology files from test/data:
 $ bin/mumak.sh trace_file topology_file
 Is it because I am not submitting jobs to a particular queue?
 If so, how can I do it?
 
 I got hadoop-0.22 from
 http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/
 and built all three components, but when I run
 arun@arun-Presario-C500-RU914PA-ACJ:~/hadoop22/branch-0.22/mapreduce/src/contrib/mumak$
 bin/mumak.sh src/test/data/19-jobs.trace.json.gz
 src/test/data/19-jobs.topology.json.gz
 it gets stuck at some point. The log is here: http://pastebin.com/9SNUHLFy
 Thanks,
 Arun
 
 
 
 
 
 On Wed, Sep 21, 2011 at 2:03 PM, Uma Maheswara Rao G 72686 
 mahesw...@huawei.com wrote:
 
 
  Hello Arun,
   If you want to apply MAPREDUCE-1253 on 21 version,
   applying patch directly using commands may not work because of 
 codebase changes.
 
   So, you take the patch and apply the lines in your code base 
 manually. I
  am not sure any otherway for this.
 
  Did i understand wrongly your intention?
 
  Regards,
  Uma
 
 
  - Original Message -
  From: ArunKumar arunk...@gmail.com
  Date: Wednesday, September 21, 2011 1:52 pm
  Subject: Re: Making Mumak work with capacity scheduler
  To: hadoop-u...@lucene.apache.org
 
   Hi Uma !
  
   Mumak is not part of stable versions yet. It comes from Hadoop-
   0.21 onwards.
   Can u describe in detail You may need to merge them logically (
   back port
   them) ?
   I don't get it .
  
   Arun
  
  
   On Wed, Sep 21, 2011 at 12:07 PM, Uma Maheswara Rao G [via 
 Lucene] 
   ml-node+s472066n3354668...@n3.nabble.com wrote:
  
Looks that patchs are based on 0.22 version. So, you can not
   apply them
directly.
You may need to merge them logically ( back port them).
   
one more point to note here 0.21 version of hadoop is not a
   stable version.
   
Presently 0.20xx versions are stable.
   
Regards,
Uma
- Original Message -
From: ArunKumar [hidden
   email]http://user/SendEmail.jtp?type=nodenode=3354668i=0
Date: Wednesday, September 21, 2011 12:01 pm
Subject: Re: Making Mumak work with capacity scheduler
To: [hidden email]
   http://user/SendEmail.jtp?type=nodenode=3354668i=1
 Hi Uma !

 I am applying patch to mumak in hadoop-0.21

Making Mumak work with capacity scheduler

2011-09-21 Thread ArunKumar
Hi !

I have set up Mumak and am able to run it in the terminal and in Eclipse.
I have modified mapred-site.xml and capacity-scheduler.xml as necessary.
I tried to apply the patch MAPREDUCE-1253-20100804.patch from
https://issues.apache.org/jira/browse/MAPREDUCE-1253 as follows:
{HADOOP_HOME}/contrib/mumak$ patch -p0 < patch_file_location
but I get the error:
3 out of 3 hunks FAILED.

Thanks,
Arun
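
When every hunk fails like this, it usually means the patch was generated against a different code base, or was applied from a directory other than the one its paths are relative to. A quick way to check before changing anything (a sketch; the directory below is a placeholder for wherever the MapReduce source tree lives, and MAPREDUCE patches are usually rooted at the mapreduce project directory rather than contrib/mumak):

  $ head -n 20 MAPREDUCE-1253-20100804.patch   # see which paths the hunks expect
  $ cd /path/to/mapreduce                      # placeholder: root of the source tree
  $ patch -p0 --dry-run < MAPREDUCE-1253-20100804.patch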





Re: Making Mumak work with capacity scheduler

2011-09-21 Thread Uma Maheswara Rao G 72686
Hello Arun,

 On which code base are you trying to apply the patch?
 The code should match for the patch to apply.

Regards,
Uma

- Original Message -
From: ArunKumar arunk...@gmail.com
Date: Wednesday, September 21, 2011 11:33 am
Subject: Making Mumak work with capacity scheduler
To: hadoop-u...@lucene.apache.org

 Hi !
 
 I have set up Mumak and am able to run it in the terminal and in Eclipse.
 I have modified mapred-site.xml and capacity-scheduler.xml as necessary.
 I tried to apply the patch MAPREDUCE-1253-20100804.patch from
 https://issues.apache.org/jira/browse/MAPREDUCE-1253 as follows:
 {HADOOP_HOME}/contrib/mumak$ patch -p0 < patch_file_location
 but I get the error:
 3 out of 3 hunks FAILED.
 
 Thanks,
 Arun
 
 
 
 


Re: Making Mumak work with capacity scheduler

2011-09-21 Thread ArunKumar
Hi Uma !

I am applying the patch to Mumak in the hadoop-0.21 version.


Arun

On Wed, Sep 21, 2011 at 11:55 AM, Uma Maheswara Rao G [via Lucene] 
ml-node+s472066n3354652...@n3.nabble.com wrote:

 Hello Arun,

  On which code base you are trying to apply the patch.
  Code should match to apply the patch.

 Regards,
 Uma

 - Original Message -
 From: ArunKumar [hidden 
 email]http://user/SendEmail.jtp?type=nodenode=3354652i=0

 Date: Wednesday, September 21, 2011 11:33 am
 Subject: Making Mumak work with capacity scheduler
 To: [hidden email] http://user/SendEmail.jtp?type=nodenode=3354652i=1

  Hi !
 
  I have set up mumak and able to run it in terminal and in eclipse.
  I have modified the mapred-site.xml and capacity-scheduler.xml as
  necessary.I tried to apply patch MAPREDUCE-1253-20100804.patch in
  https://issues.apache.org/jira/browse/MAPREDUCE-1253
  https://issues.apache.org/jira/browse/MAPREDUCE-1253  as follows
  {HADOOP_HOME}contrib/mumak$patch -p0  patch_file_location
  but i get error
  3 out of 3 HUNK failed.
 
  Thanks,
  Arun
 
 
 
 







Re: Making Mumak work with capacity scheduler

2011-09-21 Thread Uma Maheswara Rao G 72686
It looks like those patches are based on the 0.22 version, so you cannot apply them
directly.
You may need to merge them logically (back-port them).

One more point to note here: the 0.21 version of Hadoop is not a stable version.
Presently the 0.20.x versions are stable.

Regards,
Uma
- Original Message -
From: ArunKumar arunk...@gmail.com
Date: Wednesday, September 21, 2011 12:01 pm
Subject: Re: Making Mumak work with capacity scheduler
To: hadoop-u...@lucene.apache.org

 Hi Uma !
 
 I am applying patch to mumak in hadoop-0.21 version.
 
 
 Arun
 
 On Wed, Sep 21, 2011 at 11:55 AM, Uma Maheswara Rao G [via Lucene] 
 ml-node+s472066n3354652...@n3.nabble.com wrote:
 
  Hello Arun,
 
   On which code base you are trying to apply the patch.
   Code should match to apply the patch.
 
  Regards,
  Uma
 
  - Original Message -
  From: ArunKumar [hidden 
 email]http://user/SendEmail.jtp?type=nodenode=3354652i=0
  Date: Wednesday, September 21, 2011 11:33 am
  Subject: Making Mumak work with capacity scheduler
  To: [hidden email] 
 http://user/SendEmail.jtp?type=nodenode=3354652i=1
   Hi !
  
   I have set up mumak and able to run it in terminal and in eclipse.
   I have modified the mapred-site.xml and capacity-scheduler.xml as
   necessary.I tried to apply patch MAPREDUCE-1253-20100804.patch in
   https://issues.apache.org/jira/browse/MAPREDUCE-1253
   https://issues.apache.org/jira/browse/MAPREDUCE-1253  as follows
   {HADOOP_HOME}contrib/mumak$patch -p0  patch_file_location
   but i get error
   3 out of 3 HUNK failed.
  
   Thanks,
   Arun
  
  
  
 
 
 
 
 
 


Re: Making Mumak work with capacity scheduler

2011-09-21 Thread ArunKumar
Hi Uma !

Mumak is not part of the stable versions yet; it comes from Hadoop 0.21 onwards.
Can you describe in more detail what you mean by "You may need to merge them
logically (back-port them)"?
I don't get it.

Arun


On Wed, Sep 21, 2011 at 12:07 PM, Uma Maheswara Rao G [via Lucene] 
ml-node+s472066n3354668...@n3.nabble.com wrote:

 Looks that patchs are based on 0.22 version. So, you can not apply them
 directly.
 You may need to merge them logically ( back port them).

 one more point to note here 0.21 version of hadoop is not a stable version.

 Presently 0.20xx versions are stable.

 Regards,
 Uma
 - Original Message -
 From: ArunKumar [hidden 
 email]http://user/SendEmail.jtp?type=nodenode=3354668i=0

 Date: Wednesday, September 21, 2011 12:01 pm
 Subject: Re: Making Mumak work with capacity scheduler
 To: [hidden email] http://user/SendEmail.jtp?type=nodenode=3354668i=1

  Hi Uma !
 
  I am applying patch to mumak in hadoop-0.21 version.
 
 
  Arun
 
  On Wed, Sep 21, 2011 at 11:55 AM, Uma Maheswara Rao G [via Lucene] 
  [hidden email] http://user/SendEmail.jtp?type=nodenode=3354668i=2
 wrote:
 
   Hello Arun,
  
On which code base you are trying to apply the patch.
Code should match to apply the patch.
  
   Regards,
   Uma
  
   - Original Message -
   From: ArunKumar [hidden
  email]http://user/SendEmail.jtp?type=nodenode=3354652i=0
   Date: Wednesday, September 21, 2011 11:33 am
   Subject: Making Mumak work with capacity scheduler
   To: [hidden email]
  http://user/SendEmail.jtp?type=nodenode=3354652i=1
Hi !
   
I have set up mumak and able to run it in terminal and in eclipse.
I have modified the mapred-site.xml and capacity-scheduler.xml as
necessary.I tried to apply patch MAPREDUCE-1253-20100804.patch in
https://issues.apache.org/jira/browse/MAPREDUCE-1253
https://issues.apache.org/jira/browse/MAPREDUCE-1253  as follows
{HADOOP_HOME}contrib/mumak$patch -p0  patch_file_location
but i get error
3 out of 3 HUNK failed.
   
Thanks,
Arun
   
   
   
  
  
  
 
 







Re: Making Mumak work with capacity scheduler

2011-09-21 Thread Uma Maheswara Rao G 72686

Hello Arun,
  If you want to apply MAPREDUCE-1253 on the 0.21 version, applying the patch
directly with the patch command may not work because of codebase changes.

 So, take the patch and apply the changed lines in your code base manually. I am
not sure of any other way to do this.

Did I understand your intention wrongly?

Regards,
Uma
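
In practice a manual back-port like this usually comes down to applying whatever still applies and hand-editing the rejected hunks; a rough sketch of the mechanics (the patch file name is the one from this thread, everything else is generic):

  $ patch -p0 < MAPREDUCE-1253-20100804.patch   # rejected hunks are written to *.rej files
  $ find . -name '*.rej'                        # list the files that need manual attention
  # open each .rej next to the corresponding 0.21 source file and apply the
  # changed lines by hand, adjusting for the 0.21 vs 0.22 differences

A dry run first (patch -p0 --dry-run < MAPREDUCE-1253-20100804.patch) shows how much would apply cleanly without modifying anything.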


- Original Message -
From: ArunKumar arunk...@gmail.com
Date: Wednesday, September 21, 2011 1:52 pm
Subject: Re: Making Mumak work with capacity scheduler
To: hadoop-u...@lucene.apache.org

 Hi Uma !
 
 Mumak is not part of stable versions yet. It comes from Hadoop-
 0.21 onwards.
 Can u describe in detail You may need to merge them logically ( 
 back port
 them) ?
 I don't get it .
 
 Arun
 
 
 On Wed, Sep 21, 2011 at 12:07 PM, Uma Maheswara Rao G [via Lucene] 
 ml-node+s472066n3354668...@n3.nabble.com wrote:
 
  Looks that patchs are based on 0.22 version. So, you can not 
 apply them
  directly.
  You may need to merge them logically ( back port them).
 
  one more point to note here 0.21 version of hadoop is not a 
 stable version.
 
  Presently 0.20xx versions are stable.
 
  Regards,
  Uma
  - Original Message -
  From: ArunKumar [hidden 
 email]http://user/SendEmail.jtp?type=nodenode=3354668i=0
  Date: Wednesday, September 21, 2011 12:01 pm
  Subject: Re: Making Mumak work with capacity scheduler
  To: [hidden email] 
 http://user/SendEmail.jtp?type=nodenode=3354668i=1
   Hi Uma !
  
   I am applying patch to mumak in hadoop-0.21 version.
  
  
   Arun
  
   On Wed, Sep 21, 2011 at 11:55 AM, Uma Maheswara Rao G [via 
 Lucene] 
   [hidden email] 
 http://user/SendEmail.jtp?type=nodenode=3354668i=2 wrote:
  
Hello Arun,
   
 On which code base you are trying to apply the patch.
 Code should match to apply the patch.
   
Regards,
Uma
   
- Original Message -
From: ArunKumar [hidden
   email]http://user/SendEmail.jtp?type=nodenode=3354652i=0
Date: Wednesday, September 21, 2011 11:33 am
Subject: Making Mumak work with capacity scheduler
To: [hidden email]
   http://user/SendEmail.jtp?type=nodenode=3354652i=1
 Hi !

 I have set up mumak and able to run it in terminal and in 
 eclipse.I have modified the mapred-site.xml and capacity-
 scheduler.xml as
 necessary.I tried to apply patch MAPREDUCE-1253-
 20100804.patch in
 https://issues.apache.org/jira/browse/MAPREDUCE-1253
 https://issues.apache.org/jira/browse/MAPREDUCE-1253  as 
 follows{HADOOP_HOME}contrib/mumak$patch -p0  
 patch_file_locationbut i get error
 3 out of 3 HUNK failed.

 Thanks,
 Arun



   
   
   
   
  
  
 
 
 
 
 