+1
On Sat, Apr 20, 2013 at 1:23 PM, Ravindranath Akila <
ravindranathak...@gmail.com> wrote:
> +1
>
> R. A.
> On 20 Apr 2013 12:07, "Viral Bajaria" wrote:
>
> > +1!
> >
> >
> > On Fri, Apr 19, 2013 at 4:09 PM, Marcos Luis Ortiz Valmaseda <
> > marcosluis2...@gmail.com> wrote:
> >
> > > Wow, gre
Hi Jean,
Steps to follow when migrating the NameNode:
1. Build a new server with the same hostname.
2. Install Hadoop.
3. Copy the metadata from the old server to the new server.
4. Make sure all the DataNodes are down.
5. Stop the old NameNode.
6. Start the new NameNode with the old metadata.
7. If it comes up
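Roughly, steps 3 through 6 above correspond to a command sequence like the following. The hostnames OLD_NN/NEW_NN and the dfs.name.dir path /data/dfs/name are placeholders I am assuming for illustration, not values from this thread; substitute your own.

```shell
# Sketch of the NameNode metadata move -- all paths/hosts are placeholders.
OLD_NN=old-namenode-host
NEW_NN=new-namenode-host

# Stop the old NameNode first so the fsimage/edits are quiescent.
ssh "$OLD_NN" 'hadoop-daemon.sh stop namenode'

# Copy the NameNode metadata (fsimage + edits) to the new server.
scp -r "$OLD_NN:/data/dfs/name/" "$NEW_NN:/data/dfs/name/"

# Start the new NameNode against the copied metadata.
ssh "$NEW_NN" 'hadoop-daemon.sh start namenode'

# Verify it came up before bringing the DataNodes back.
ssh "$NEW_NN" 'hdfs dfsadmin -report'
```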
Hi Varun,
Try increasing the heap memory.
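For an HBase region server, the heap is usually raised in conf/hbase-env.sh. A minimal sketch follows; the 8000 MB value is only an illustration, size it to your hardware:

```shell
# conf/hbase-env.sh -- illustrative values, not a recommendation
export HBASE_HEAPSIZE=8000   # region server heap, in MB

# GC tuning that often accompanies a larger heap under heavy writes:
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
```

Restart the region server after changing these values.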
Regards,
Varun Kumar
On Thu, Jan 24, 2013 at 11:10 PM, Varun Sharma wrote:
> Hi,
>
> I have a region server which has the following logs. As you can see from
> the log, ParNew is sufficiently big (450M) and there are heavy writes goin
Hi Tian,
What replication factor have you configured in HDFS?
Regards,
Varun Kumar.P
On Mon, Jan 21, 2013 at 12:17 PM, tgh wrote:
> Hi
> I use hbase to store Data, and I have an observation, that is,
> When hbase store 1Gb data, hdfs use 10Gb disk space, and when data
> is 60Gb, hdf
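For context: the HDFS default replication factor is 3, so 1 GB of logical data already occupies roughly 3 GB on disk, and WALs, uncompacted store files and multiple cell versions add more. A quick way to check the configured and effective replication (the path /hbase is the default HBase root directory and may differ in your setup):

```shell
# Show the configured default replication factor
hdfs getconf -confKey dfs.replication

# Inspect the replication actually applied to HBase's files
hdfs fsck /hbase -files | head -20

# If needed, lower replication on existing files (e.g. to 2)
hdfs dfs -setrep -R 2 /hbase
```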
Hi Dalia,
Safemode is on.
Turn off safemode and you will be able to write files to that cluster.
The cluster leaves safemode automatically once the NameNode has received
reports for the required number of blocks.
In your scenario, also try starting 2 more region servers.
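You can check and clear safemode from the command line with the standard dfsadmin commands:

```shell
# Check whether the NameNode is in safemode
hdfs dfsadmin -safemode get

# Force it off manually -- only do this if you are sure the missing
# blocks are expected (e.g. DataNodes that will never come back)
hdfs dfsadmin -safemode leave
```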
Regards,
Varun Kumar.P
On Wed, Jan 2, 2013 at 11:30 PM, Dalia Sob