From handheld, please excuse typos.
> ------
> *From: * Abhay Ratnaparkhi
> *Date: *Fri, 24 Aug 2012 12:58:41 +0530
> *To: *
> *ReplyTo: * user@hadoop.apache.org
> *Subject: *namenode not starting
>
> Hello,
>
> I had a running hadoop cluster.
Hello Everyone,
I have specified the secondary namenode in the masters file.
Which property specifies the path the secondary namenode uses to store HDFS
data?
What happens if I don't specify that property (or what is the default
location the secondary namenode uses)?
Regards,
Abhay
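In Hadoop 1.x the relevant property is `fs.checkpoint.dir` (later 2.x releases renamed it `dfs.namenode.checkpoint.dir`); if it is unset, the secondary namenode defaults to `${hadoop.tmp.dir}/dfs/namesecondary`. A minimal sketch, assuming a 1.x core-site.xml; the path shown is hypothetical:

```xml
<!-- core-site.xml: where the secondary namenode stores checkpoint data.
     /data/hdfs/namesecondary is a hypothetical path; when this property
     is unset, the default is ${hadoop.tmp.dir}/dfs/namesecondary. -->
<property>
  <name>fs.checkpoint.dir</name>
  <value>/data/hdfs/namesecondary</value>
</property>
```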
Is recovery possible now?
>
> On Fri, Aug 24, 2012 at 1:10 PM, Abhay Ratnaparkhi
> wrote:
> > Hello,
> >
> > I was using the cluster for a long time and never formatted the namenode.
> > I ran only the bin/stop-all.sh and bin/start-all.sh scripts.
> >
> > I am using
> parallel copies) to recommend reducing it, but a lower value might work. The
> only long-term recommendation is for your system to undergo node maintenance.
>
> Thanks
> Rekha
>
> From: Abhay Ratnaparkhi
> Reply-To:
> Date: Tue, 28 Aug 2012 14:52:27 +0530
> To:
>
Hello,
I have an MR job with 4 reducers running.
One of the reduce attempts has been pending for a long time in the
reduce->copy phase.
The job cannot complete because of this.
I have seen that the child Java process on the tasktracker is running.
Is it possible to run the same attempt again? Does
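One option, assuming the Hadoop 1.x CLI, is to fail the stuck attempt so that the JobTracker schedules a fresh one. The job and attempt IDs below are hypothetical (the real attempt ID is visible on the JobTracker web UI):

```shell
# Sketch, assuming Hadoop 1.x; the attempt ID is hypothetical.
ATTEMPT="attempt_201208280001_0042_r_000003_0"

# The parent task ID is the attempt ID minus the trailing attempt counter:
TASK="task_${ATTEMPT#attempt_}"
TASK="${TASK%_*}"
echo "$TASK"    # task_201208280001_0042_r_000003

# Mark the attempt failed so the framework reschedules it; use -kill-task
# instead if the attempt should not count toward mapred.reduce.max.attempts.
# (Commented out here because it needs a live cluster.)
# hadoop job -fail-task "$ATTEMPT"
```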
Hello,
How can one find out which nodes the reduce tasks will run on?
One of my jobs is running and is finishing its map tasks.
My map tasks write lots of intermediate data. The intermediate directory is
getting full on all the nodes.
If the reduce tasks can take any node from the cluster then it
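Two settings that usually help with intermediate data filling the disks, assuming Hadoop 1.x property names: `mapred.local.dir` can list several directories (ideally on separate disks) to spread the intermediate output, and `mapred.compress.map.output` shrinks it. The paths below are hypothetical:

```xml
<!-- mapred-site.xml: spread and compress intermediate map output.
     The /disk1 and /disk2 paths are hypothetical. -->
<property>
  <name>mapred.local.dir</name>
  <value>/disk1/mapred/local,/disk2/mapred/local</value>
</property>
<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
```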
e to configure each of your nodes with the right
> number of map and reduce slots based on the resources available on each
> machine.
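For reference, the per-node slot counts mentioned above are set in each tasktracker's mapred-site.xml, assuming Hadoop 1.x property names; the values 4 and 2 are purely illustrative and should be sized to each machine's CPU and memory:

```xml
<!-- mapred-site.xml on each tasktracker: per-node slot counts.
     The values below are illustrative, not recommendations. -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>
```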
>
>
> On Mon, Sep 3, 2012 at 7:49 PM, Abhay Ratnaparkhi <
> abhay.ratnapar...@gmail.com> wrote:
>
>> Hello,
>>
>> How can on
ing a combiner, or some other form of local aggregation?
>
> Thanks
> hemanth
>
>
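As a toy illustration of the local aggregation a combiner performs (plain shell, not Hadoop itself): pre-summing per key on the map side collapses repeated records, shrinking the intermediate data that has to be copied to the reducers.

```shell
# Five map output records collapse to two combined (key, count) records.
printf 'a\nb\na\na\nb\n' | sort | uniq -c | awk '{print $2 "\t" $1}'
# a	3
# b	2
```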
> On Mon, Sep 3, 2012 at 9:06 PM, Abhay Ratnaparkhi <
> abhay.ratnapar...@gmail.com> wrote:
>
>> How can I set 'mapred.tasktracker.reduce.tasks.maximum' to "0"
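Assuming Hadoop 1.x, that property is set per tasktracker in its mapred-site.xml and takes effect after the tasktracker is restarted; a sketch:

```xml
<!-- mapred-site.xml on the tasktracker that should run no reduce tasks;
     restart the tasktracker for the change to take effect. -->
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>0</value>
</property>
```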