Re: Re: HFile backup while cluster running

2010-03-15 Thread Vaibhav Puranik
That is correct. There is no need to reconfigure the property files if you use an Elastic IP for each node. Regards, Vaibhav On Mon, Mar 15, 2010 at 2:23 PM, wrote: > I mentioned this on a previous thread, but I think it's worth restating - > in EC2, the public DNS hostnames follow a well-known

Re: Re: HFile backup while cluster running

2010-03-15 Thread charleswoerner
I mentioned this on a previous thread, but I think it's worth restating - in EC2, the public DNS hostnames follow a well-known naming convention and the internal DNS servers automatically convert the public hostnames to the internal ip addresses. So I believe that if you assign elastic ip add
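The behavior described above can be checked directly. The following is a hedged sketch (the hostname and addresses are made up for illustration): inside EC2, an instance's public DNS name resolves to its private IP, while outside EC2 it resolves to the public (elastic) address, which is why configs keyed on public hostnames keep working inside the cluster.

```shell
# Run from a shell INSIDE an EC2 instance: the public hostname resolves
# to the private address, so intra-cluster traffic stays internal.
dig +short ec2-203-0-113-25.compute-1.amazonaws.com
# e.g. 10.0.0.25

# Run the same query from OUTSIDE EC2: it resolves to the public
# (elastic) IP instead.
dig +short ec2-203-0-113-25.compute-1.amazonaws.com
# e.g. 203.0.113.25
```

Because the split-horizon resolution happens in Amazon's DNS, the HBase/Hadoop property files can list the public hostnames once and behave correctly in both contexts.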

Re: HFile backup while cluster running

2010-03-15 Thread Vaibhav Puranik
>> and do an HDFS copy the same way. HBase doesn't actually have to be shutdown, >> that's just recommended to prevent things from changing mid-backup. If you're >> careful to not write data it should be ok.

Re: HFile backup while cluster running

2010-03-14 Thread prasenjit mukherjee
's >> just recommended to prevent things from changing mid-backup.  If you're >> careful to not write data it should be ok. >> >> JG >> >> -Original Message- >> From: Ted Yu [mailto:yuzhih...@gmail.com] >> Sent: Wednesday,

Re: HFile backup while cluster running

2010-03-03 Thread Vaibhav Puranik
> > JG > > -Original Message- > From: Ted Yu [mailto:yuzhih...@gmail.com] > Sent: Wednesday, March 03, 2010 11:40 AM > To: hbase-user@hadoop.apache.org > Subject: Re: HFile backup while cluster running > > If you disable writing, you can use > org.apache.h

RE: HFile backup while cluster running

2010-03-03 Thread Jonathan Gray
kup. If you're careful to not write data it should be ok. JG -Original Message- From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Wednesday, March 03, 2010 11:40 AM To: hbase-user@hadoop.apache.org Subject: Re: HFile backup while cluster running If you disable
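The raw "HDFS copy" route JG mentions is typically done with distcp. A minimal sketch, assuming placeholder cluster addresses and the default `/hbase` root directory; as the thread warns, writes should be quiesced first, since HFiles and the WAL can change mid-copy on a live cluster:

```shell
# Copy the HBase root directory from the old cluster's HDFS to the new
# one. Cluster hostnames, ports, and paths here are illustrative.
hadoop distcp \
    hdfs://old-cluster-namenode:9000/hbase \
    hdfs://new-cluster-namenode:9000/hbase
```

distcp runs as a MapReduce job, so the copy is parallelized across the cluster rather than funneled through a single machine.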

Re: HFile backup while cluster running

2010-03-03 Thread Ted Yu
If you disable writing, you can use org.apache.hadoop.hbase.mapreduce.Export to export all your data, copy them to your new HDFS, then use org.apache.hadoop.hbase.mapreduce.Import, finally switch your clients to the new HBase cluster. On Wed, Mar 3, 2010 at 11:27 AM, Kevin Peterson wrote: > My cu
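The Export/Import path Ted describes can be sketched as the three commands below. Table name, output paths, and cluster URIs are placeholders; the target table is assumed to already exist on the new cluster before the import runs.

```shell
# 1. Export the table to sequence files on the source cluster's HDFS
#    (runs as a MapReduce job; writes should be disabled first).
hbase org.apache.hadoop.hbase.mapreduce.Export mytable /backup/mytable

# 2. Copy the exported files to the new cluster's HDFS.
hadoop distcp \
    hdfs://old-cluster-namenode:9000/backup/mytable \
    hdfs://new-cluster-namenode:9000/backup/mytable

# 3. On the new cluster, import the files into the (pre-created) table,
#    then point clients at the new cluster.
hbase org.apache.hadoop.hbase.mapreduce.Import mytable /backup/mytable
```

Unlike the raw HFile copy, Export/Import goes through the normal HBase write path, so it is safe against internal file-format differences between the two clusters.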

HFile backup while cluster running

2010-03-03 Thread Kevin Peterson
My current setup in EC2 is a Hadoop Map Reduce cluster and HBase cluster sharing the same HDFS. That is, I have a batch of nodes that run datanode and tasktracker and a bunch of nodes that run datanode and regionserver. I'm trying to move HBase off this cluster to a new cluster with its own HDFS.