Where can I find those logs?
-Vipul
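
For reference, the NN logs Bejoy mentions are the NameNode daemon logs; on a
default layout they live under ${HADOOP_LOG_DIR} (which falls back to
$HADOOP_HOME/logs) in files named hadoop-<user>-namenode-<host>.log. A rough
sketch of where to look, assuming that standard layout (paths and host names
are assumptions, so adjust for your install; the delete itself may only show
up if state-change logging is verbose enough):

    # assuming the default log directory on the NameNode host
    cd ${HADOOP_HOME}/logs
    ls hadoop-*-namenode-*.log*
    # look for any removal of the job's temporary output directory
    grep "_temporary" hadoop-*-namenode-*.log* | grep -i delete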

On Fri, Mar 16, 2012 at 1:33 PM, Bejoy Ks <bejoy.had...@gmail.com> wrote:

> Hi Vipul
>      AFAIK, the clean up should happen after job completion, not task
> completion. As for what is causing this clean up, maybe you can get some
> info from the NN logs.
>
> Regards
> Bejoy
>
> On Sat, Mar 17, 2012 at 12:48 AM, Vipul Bharakhada <vipulr...@gmail.com> wrote:
>
> > It's one of the servers I don't have permission to upgrade, but any help is
> > fine. I saw a bug filed against 0.20 where MapReduce doesn't check for the
> > existence of the _temporary directory, but I'm not sure whether it is Hadoop
> > itself or one of the scheduled jobs on the server that is cleaning it up. I
> > am curious why the _temporary folder is being deleted before the task has
> > finished; the clean up should happen at task completion, if I am not wrong.
> > Correct me if I am wrong; I am new to Hadoop.
> > Thank you.
> > -Vipul
> >
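One way to rule out an external cleanup is to inspect the cron entries and the
scripts they call on the server. A rough sketch, assuming the cleanup (if there
is one) lives in the usual cron locations and deletes HDFS paths with the old
hadoop dfs -rmr style of command; the output path is the one from the exception
further down and is only an example:

    # cron entries of the user that runs the Hadoop jobs
    crontab -l
    # system-wide cron jobs, if you have access
    ls /etc/cron.d /etc/cron.daily /etc/cron.hourly 2>/dev/null
    # search for anything touching the job's output path or issuing HDFS deletes
    grep -r "matcher/output" /etc/cron* 2>/dev/null
    grep -r "rmr" /etc/cron* 2>/dev/null
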
> > On Fri, Mar 16, 2012 at 12:09 PM, Bejoy Ks <bejoy.had...@gmail.com> wrote:
> >
> > > Hi Vipul
> > >      Is there any reason you are on the 0.17 version of Hadoop? It is a
> > > pretty old version (more than two years old), and tons of bug fixes and
> > > optimizations have gone into trunk since then. You should really upgrade
> > > to one of the 1.0.x releases. It would be hard for anyone on the list to
> > > help you out with such an outdated version. Try an upgrade and see whether
> > > this issue still persists.
> > >
> > > Regards
> > > Bejoy
> > >
> > > On Sat, Mar 17, 2012 at 12:27 AM, Vipul Bharakhada <vipulr...@gmail.com> wrote:
> > >
> > > > One more observation: this job usually takes 3 to 4 minutes, but when it
> > > > fails it takes more than 42 to 50 minutes.
> > > > -Vipul
> > > >
> > > > On Fri, Mar 16, 2012 at 11:38 AM, Vipul Bharakhada <vipulr...@gmail.com> wrote:
> > > >
> > > > > Hi,
> > > > > I am using the old Hadoop version 0.17.2 and I am getting the
> > > > > following exception when I try to run a job. It only happens at a
> > > > > particular time: cron jobs run these tasks at regular intervals, but
> > > > > they only fail at one particular time of day.
> > > > >
> > > > > Mar 14 06:49:23 7 08884: java.io.IOException: The directory hdfs://<{IPADDRESS}>:<{PORT}>/myserver/matcher/output/_temporary doesnt exist
> > > > > Mar 14 06:49:23 7 08884:     at org.apache.hadoop.mapred.TaskTracker$TaskInProgress.localizeTask(TaskTracker.java:1439)
> > > > > Mar 14 06:49:23 7 08884:     at org.apache.hadoop.mapred.TaskTracker$TaskInProgress.launchTask(TaskTracker.java:1511)
> > > > > Mar 14 06:49:23 7 08884:     at org.apache.hadoop.mapred.TaskTracker.launchTaskForJob(TaskTracker.java:723)
> > > > > Mar 14 06:49:23 7 08884:     at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:716)
> > > > > Mar 14 06:49:23 7 08884:     at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:1274)
> > > > > Mar 14 06:49:23 7 08884:     at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:915)
> > > > > Mar 14 06:49:23 7 08884:     at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:1310)
> > > > > Mar 14 06:49:23 7 08884:     at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2251)
> > > > >
> > > > > What could the problem be? This folder is created and used internally
> > > > > by Hadoop, and its clean up is also done internally by Hadoop, so why
> > > > > is this directory missing at that particular time?
> > > > > Any clue?
> > > > > -Vipul
> > > > >
> > > > >
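For what it's worth: as far as I recall, on these old mapred branches the
${mapred.output.dir}/_temporary directory is created by the framework when the
job is set up and removed again when the job completes or is killed, so a
TaskTracker that still tries to localize a task after the job has been cleaned
up, or after something outside Hadoop removed the output directory, fails
exactly like the trace above. A rough way to narrow it down, assuming default
log locations and reusing the path and timestamp from the exception:

    # does the output directory (and its _temporary child) still exist right now?
    hadoop dfs -ls /myserver/matcher/output
    # what did the JobTracker do around the failure time? job cleanup is where
    # _temporary is normally removed
    grep "06:49" ${HADOOP_HOME}/logs/hadoop-*-jobtracker-*.log*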
> > > >
> > >
> >
>
