Hi !

I have changed the permissions for the hadoop extract and the /jobstory and
/history/done dirs recursively:
$ chmod -R 777 branch-0.22
$ chmod -R 777 logs
$ chmod -R 777 jobtracker
but I still get the same problem.
The permissions are shown here: <http://pastebin.com/sw3UPM8t>
The log is here: <http://pastebin.com/CztUPywB>.
I am able to run it with sudo.
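For reference, a minimal sketch of pre-creating the status-store directory and opening up its permissions, assuming the job tracker is using the local filesystem; the /tmp path below is a stand-in for illustration, since the directory named in the log is /jobtracker/jobsInfo on whatever filesystem fs.defaultFS points to. If that is HDFS, the local chmod won't help and the equivalent `hadoop fs` commands would be needed instead:

```shell
# Sketch: pre-create the job-status store directory and open its permissions.
# /tmp/jobtracker is a local stand-in; the directory named in the log is
# /jobtracker/jobsInfo on whatever filesystem fs.defaultFS points to.
# On HDFS the equivalent would be:
#   hadoop fs -mkdir /jobtracker/jobsInfo
#   hadoop fs -chmod -R 777 /jobtracker
STORE=/tmp/jobtracker/jobsInfo
mkdir -p "$STORE"
chmod -R 777 /tmp/jobtracker
ls -ld "$STORE"
```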

Arun

On Thu, Sep 22, 2011 at 7:19 PM, Uma Maheswara Rao G 72686 <mahesw...@huawei.com> wrote:

> Yes Devaraj,
> From the logs, it looks like it failed to create /jobtracker/jobsInfo
>
>
>
> code snippet:
>
>     if (!fs.exists(path)) {
>       if (!fs.mkdirs(path, new FsPermission(JOB_STATUS_STORE_DIR_PERMISSION))) {
>         throw new IOException(
>             "CompletedJobStatusStore mkdirs failed to create " + path.toString());
>       }
>     }
>
> @ Arun, can you check that you have the correct permissions, as Devaraj said?
>
>
> 2011-09-22 15:53:57.598::INFO:  Started SelectChannelConnector@0.0.0.0:50030
> 11/09/22 15:53:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
> 11/09/22 15:53:57 WARN conf.Configuration: mapred.task.cache.levels is deprecated. Instead, use mapreduce.jobtracker.taskcache.levels
> 11/09/22 15:53:57 WARN mapred.SimulatorJobTracker: Error starting tracker:
> java.io.IOException: CompletedJobStatusStore mkdirs failed to create /jobtracker/jobsInfo
>         at org.apache.hadoop.mapred.CompletedJobStatusStore.<init>(CompletedJobStatusStore.java:83)
>         at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:4684)
>         at org.apache.hadoop.mapred.SimulatorJobTracker.<init>(SimulatorJobTracker.java:81)
>         at org.apache.hadoop.mapred.SimulatorJobTracker.startTracker(SimulatorJobTracker.java:100)
>         at org.apache.hadoop.mapred.SimulatorEngine.init(SimulatorEngine.java:210)
>         at org.apache.hadoop.mapred.SimulatorEngine.init(SimulatorEngine.java:184)
>         at org.apache.hadoop.mapred.SimulatorEngine.run(SimulatorEngine.java:292)
>         at org.apache.hadoop.mapred.SimulatorEngine.run(SimulatorEngine.java:323)
>
> I have cc'ed the MapReduce user mailing list as well.
>
> Regards,
> Uma
>
> ----- Original Message -----
> From: Devaraj K <devara...@huawei.com>
> Date: Thursday, September 22, 2011 6:01 pm
> Subject: RE: Making Mumak work with capacity scheduler
> To: common-u...@hadoop.apache.org
>
> > Hi Arun,
> >
> >    I have gone through the logs. The Mumak simulator is trying to start
> > the job tracker, and the job tracker is failing to start because it is
> > not able to create the "/jobtracker/jobsInfo" directory.
> >
> > I think the directory doesn't have sufficient permissions. Please check
> > the permissions, or look for any other reason why it fails to create the
> > dir.
> >
> >
> > Devaraj K
> >
> >
> > -----Original Message-----
> > From: arun k [mailto:arunk...@gmail.com]
> > Sent: Thursday, September 22, 2011 3:57 PM
> > To: common-u...@hadoop.apache.org
> > Subject: Re: Making Mumak work with capacity scheduler
> >
> > Hi Uma!
> >
> > You got me right!
> > Actually, without any patch, when I modified mapred-site.xml and
> > capacity-scheduler.xml appropriately and copied the capacity jar
> > accordingly, I am able to see the queues in the JobTracker GUI, but both
> > queues show the same set of jobs executing.
> > I ran with the trace and topology files from test/data:
> > $ bin/mumak.sh trace_file topology_file
> > Is it because I am not submitting jobs to a particular queue?
> > If so, how can I do it?
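On the queue question: in 0.21-era Hadoop the capacity scheduler picks a job's queue from its mapred.job.queue.name property. A sketch of the job-configuration fragment follows; the queue name here is hypothetical, and whether Mumak honors this per-job setting when replaying a trace is not confirmed:

```xml
<!-- Hypothetical job-configuration fragment: submit to a queue named "queue1".
     The queue itself must also be declared via mapred.queue.names in
     mapred-site.xml and configured in capacity-scheduler.xml. -->
<property>
  <name>mapred.job.queue.name</name>
  <value>queue1</value>
</property>
```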
> >
> > I got hadoop-0.22 from
> > http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/
> > and built all three components, but when I run
> > arun@arun-Presario-C500-RU914PA-ACJ:~/hadoop22/branch-0.22/mapreduce/src/contrib/mumak$
> > bin/mumak.sh src/test/data/19-jobs.trace.json.gz src/test/data/19-jobs.topology.json.gz
> > it gets stuck at some point. The log is here: <http://pastebin.com/9SNUHLFy>
> > Thanks,
> > Arun
> >
> >
> >
> >
> >
> > On Wed, Sep 21, 2011 at 2:03 PM, Uma Maheswara Rao G 72686 <mahesw...@huawei.com> wrote:
> >
> > >
> > > Hello Arun,
> > >  If you want to apply MAPREDUCE-1253 to the 0.21 version, applying the
> > > patch directly with the patch command may not work because of code base
> > > changes.
> > >
> > >  So, take the patch and apply its changes to your code base manually. I
> > > am not aware of any other way to do this.
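One way to see exactly which hunks need such manual merging is patch's dry-run mode: hunks that fail to apply are written to *.rej files, which can then be merged by hand. A self-contained sketch using a throwaway file (all names here are hypothetical):

```shell
# Sketch: exercise `patch --dry-run` on a tiny throwaway example.
# With a real JIRA patch this would be run from the source root, e.g.
#   patch -p0 --dry-run < MAPREDUCE-1253-20100804.patch
# (hunks that fail to apply are written to *.rej files for manual merging).
WORK=/tmp/patch-demo
rm -rf "$WORK" && mkdir -p "$WORK" && cd "$WORK"
printf 'line1\nline2\n' > Example.java
printf 'line1\nline2 changed\n' > Example.java.new
diff -u Example.java Example.java.new > example.patch || true  # diff exits 1 when files differ
patch -p0 --dry-run < example.patch   # reports what would happen; changes nothing
patch -p0 < example.patch             # actually applies the hunk
grep 'changed' Example.java
```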
> > >
> > > Did I misunderstand your intention?
> > >
> > > Regards,
> > > Uma
> > >
> > >
> > > ----- Original Message -----
> > > From: ArunKumar <arunk...@gmail.com>
> > > Date: Wednesday, September 21, 2011 1:52 pm
> > > Subject: Re: Making Mumak work with capacity scheduler
> > > To: hadoop-u...@lucene.apache.org
> > >
> > > > Hi Uma!
> > > >
> > > > Mumak is not part of the stable versions yet; it is available from
> > > > Hadoop 0.21 onwards.
> > > > Can you describe in detail what you mean by "You may need to merge
> > > > them logically (back port them)"? I don't get it.
> > > >
> > > > Arun
> > > >
> > > >
> > > > On Wed, Sep 21, 2011 at 12:07 PM, Uma Maheswara Rao G [via Lucene] <ml-node+s472066n3354668...@n3.nabble.com> wrote:
> > > >
> > > > > It looks like those patches are based on the 0.22 version, so you
> > > > > cannot apply them directly.
> > > > > You may need to merge them logically (back port them).
> > > > >
> > > > > One more point to note: the 0.21 version of Hadoop is not a stable
> > > > > version. Presently the 0.20.xx versions are stable.
> > > > >
> > > > > Regards,
> > > > > Uma
> > > > > ----- Original Message -----
> > > > > From: ArunKumar <[hidden email]>
> > > > > Date: Wednesday, September 21, 2011 12:01 pm
> > > > > Subject: Re: Making Mumak work with capacity scheduler
> > > > > To: [hidden email]
> > > > > > Hi Uma !
> > > > > >
> > > > > > I am applying the patch to Mumak in the hadoop-0.21 version.
> > > > > >
> > > > > >
> > > > > > Arun
> > > > > >
> > > > > > On Wed, Sep 21, 2011 at 11:55 AM, Uma Maheswara Rao G [via Lucene] <[hidden email]> wrote:
> > > > > >
> > > > > > > Hello Arun,
> > > > > > >
> > > > > > >  On which code base are you trying to apply the patch?
> > > > > > >  The code must match for the patch to apply.
> > > > > > >
> > > > > > > Regards,
> > > > > > > Uma
> > > > > > >
> > > > > > > ----- Original Message -----
> > > > > > > From: ArunKumar <[hidden email]>
> > > > > > > Date: Wednesday, September 21, 2011 11:33 am
> > > > > > > Subject: Making Mumak work with capacity scheduler
> > > > > > > To: [hidden email]
> > > > > > > > Hi!
> > > > > > > >
> > > > > > > > I have set up Mumak and am able to run it in a terminal and in
> > > > > > > > Eclipse. I have modified mapred-site.xml and
> > > > > > > > capacity-scheduler.xml as necessary. I tried to apply the patch
> > > > > > > > MAPREDUCE-1253-20100804.patch from
> > > > > > > > https://issues.apache.org/jira/browse/MAPREDUCE-1253 as follows:
> > > > > > > > {HADOOP_HOME}contrib/mumak$ patch -p0 < patch_file_location
> > > > > > > > but I get the error
> > > > > > > > "3 out of 3 hunks FAILED."
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Arun
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > View this message in context:
> > > > > > > > http://lucene.472066.n3.nabble.com/Making-Mumak-work-with-capacity-scheduler-tp3354615p3354615.html
> > > > > > > > Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> >
> >
>
