+1 for taking snapshots and exporting them to the DR cluster if there is no requirement for the DR cluster to stay up to date in real time. I am not sure whether an incremental snapshot feature is out yet, but taking snapshots on a periodic basis is also not that heavy.
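For what it's worth, a periodic snapshot-and-export cycle can be sketched roughly as below. The table name, snapshot name, and DR NameNode address are placeholders, not from this thread; verify the exact invocation against your HBase version's docs.

```shell
# Take a snapshot on the primary cluster
# (MY_TABLE and the snapshot name are hypothetical placeholders).
echo "snapshot 'MY_TABLE', 'MY_TABLE-snap-$(date +%Y%m%d)'" | hbase shell

# Export the snapshot to the DR cluster's HDFS. This copies the
# underlying HFiles via a MapReduce job, so it does not put much
# load on the source RegionServers.
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot "MY_TABLE-snap-$(date +%Y%m%d)" \
  -copy-to hdfs://dr-namenode:8020/hbase \
  -mappers 16

# On the DR cluster, restore or clone when needed, e.g. from hbase shell:
#   clone_snapshot 'MY_TABLE-snap-20151224', 'MY_TABLE'
```

Cron-scheduling the two commands above gives a simple, eventually-consistent DR copy without touching the write path.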
On Thu, Dec 24, 2015 at 3:44 PM, Sandeep Nemuri <[email protected]> wrote:

> You can take incremental HBase snapshots for the required tables and store
> them in the DR cluster. Restoring doesn't take much time in this case.
>
> Thanks
> Sandeep Nemuri
>
> On Thu, Dec 24, 2015 at 11:49 AM, Vasudevan, Ramkrishna S <
> [email protected]> wrote:
>
>> I am not very sure whether Phoenix directly has any replication support
>> now. In your case, because you are bulk loading the tables, you are not
>> able to replicate them, but that problem is addressed in HBase as part of
>> https://issues.apache.org/jira/browse/HBASE-13153,
>> where bulk-loaded files can be replicated directly to the remote cluster,
>> just as WAL edits get replicated.
>>
>> Regards
>> Ram
>>
>> -----Original Message-----
>> From: Krishnasamy Rajan [mailto:[email protected]]
>> Sent: Tuesday, December 22, 2015 8:04 AM
>> To: [email protected]
>> Subject: Backup and Recovery for disaster recovery
>>
>> Hi,
>>
>> We're using HBase under Phoenix and need to set up a DR site with ongoing
>> replication. The Phoenix tables are salted. In this scenario, what is the
>> best method to copy data to a remote cluster?
>> People give different opinions. Replication will not work for us because
>> we're using bulk loading.
>>
>> Can you advise what our options are for copying data to a remote cluster
>> and keeping it up to date?
>> Thanks for your inputs.
>>
>> Regards
>> Krishna
>
> --
> Regards
> Sandeep Nemuri
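Following up on the HBASE-13153 pointer above: once your HBase version includes that change, replication of bulk-loaded HFiles is turned on through configuration rather than code. A rough sketch of the source-cluster settings is below; the property names come from the JIRA, the cluster id value is a placeholder, and you should confirm both against the reference guide for your release.

```xml
<!-- hbase-site.xml on the source cluster (sketch; check your version,
     as HBASE-13153 only ships in sufficiently recent HBase releases) -->
<property>
  <name>hbase.replication.bulkload.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- A unique id for this cluster; placeholder value -->
  <name>hbase.replication.cluster.id</name>
  <value>source-cluster-1</value>
</property>
```

With this enabled, peers configured for normal WAL replication also receive the bulk-loaded files, which removes the usual "replication doesn't see bulk loads" limitation mentioned in the thread.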
