Yes, that's the link. None of the options in there is entirely satisfactory...
I think StumbleUpon is using replication extensively; I'll let them provide more details if they like. At Salesforce we will also use replication for HA and DR, and have been working on replication features (HBASE-2195, 2196, 3130) and also HBASE-4071 (which allows backups with a TTL set, while at least keeping some versions around), and partially HBASE-4363. One sore spot is HBASE-2611, but I am sure that can be fixed, too. For HA and DR this should be a far better option than backup/restore.

That all said, the reason why I posted the initial message was exactly that we need safety that cannot be guaranteed by replication alone. Software errors, operator errors, data corruption, etc., would just be replicated to the remote site.

Long story short: I think replication is production ready and can be used for HA and DR, but backup/restore is still needed.

-- Lars

----- Original Message -----
From: Steinmaurer Thomas <[email protected]>
To: [email protected]; lars hofhansl <[email protected]>
Cc:
Sent: Thursday, September 15, 2011 11:24 PM
Subject: RE: Hooks for WAL archiving

Hello Lars,

if you talk about HBase replication, it's a rather new feature and during our evaluation we aren't sure if it's ready for production. While I agree that regular backups in the PB area aren't an option, IMHO they are in the low TB area. The system operating team is demanding a reliable backup solution if it is affordable disk-space-wise.

I think you meant the backup options discussed here:
http://blog.sematext.com/2011/03/11/hbase-backup-options/

I can imagine that in the distributed data management area, a consistent (incremental) snapshot backup, while the system is in use, isn't that easy to implement.

Thomas

-----Original Message-----
From: lars hofhansl [mailto:[email protected]]
Sent: Friday, 16 September 2011 08:13
To: [email protected]
Subject: Re: Hooks for WAL archiving

Hallo Thomas,

I guess the general sentiment is/was that a store that scales to petabytes by adding more and more machines does not lend itself to conventional backup.

I think you can get fairly far with replication. That gives you HA and DR (disaster recovery), but does not guard against software or operator error. It's also asynchronous, so there's a window of losing data.

There was a link posted here recently to a blog listing the various HBase backup strategies. Can't find it just now, but if you look through this list for the past 10 days or so, you'll find it.

-- Lars

----- Original Message -----
From: Steinmaurer Thomas <[email protected]>
To: [email protected]; lars hofhansl <[email protected]>
Cc:
Sent: Thursday, September 15, 2011 10:38 PM
Subject: RE: Hooks for WAL archiving

Hello,

as our major point in giving HBase a go for production is (incremental) online/snapshot backups, I find a public discussion rather interesting. ;-)

I really wonder how others use HBase in production with a backup/restore scenario in place? Or are they all that big, data-wise, that regular backups like in the RDBMS world aren't really possible?

Thanks,
Thomas

-----Original Message-----
From: lars hofhansl [mailto:[email protected]]
Sent: Friday, 16 September 2011 07:30
To: [email protected]
Subject: Re: Hooks for WAL archiving

Ah yes, HBASE-4132 is essentially covering the same idea I had for the WALObserver (and I meant the ...coprocessor.WALObserver). HBASE-50 seems only tangentially related to what I had in mind; it allows snapshots, but does not help with PITR (I think).

So you'd create a manifest to avoid copying newer files? That would lock the earliest recovery time to when we start the copy (not when we end it), which is nice. What about compactions? They'd remove some of the old files, and the data is now in new files. Copying the WALs should really be a separate process, I think.
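[Editor's sketch: the manifest idea discussed above - list the hfiles once, then have a background process copy only the listed files - can be illustrated with a small toy program. This is a hypothetical sketch, not HBASE-50's actual code; all paths and file names are made up. Note how a file created after the manifest (e.g. by a compaction) never reaches the backup, which is what pins the recovery point to the start of the copy, and how a file deleted by a compaction mid-copy is simply skipped.]

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.*;

// Hypothetical snapshot-by-manifest sketch (not HBASE-50's implementation).
public class ManifestBackup {

    // Phase 1: record the current set of store files (the "snapshot").
    static List<Path> writeManifest(Path storeDir) throws IOException {
        try (Stream<Path> files = Files.list(storeDir)) {
            return files.filter(Files::isRegularFile)
                        .sorted()
                        .collect(Collectors.toList());
        }
    }

    // Phase 2 (may run long after phase 1): copy only the manifested files.
    static List<Path> backgroundCopy(List<Path> manifest, Path backupDir)
            throws IOException {
        Files.createDirectories(backupDir);
        List<Path> copied = new ArrayList<>();
        for (Path f : manifest) {
            // A compaction may have removed the file in the meantime.
            if (Files.exists(f)) {
                Path dest = backupDir.resolve(f.getFileName());
                Files.copy(f, dest, StandardCopyOption.REPLACE_EXISTING);
                copied.add(dest);
            }
        }
        return copied;
    }

    static int demo() throws IOException {
        Path store = Files.createTempDirectory("store");
        Path backup = Files.createTempDirectory("backup");
        Files.writeString(store.resolve("hfile-1"), "a");
        Files.writeString(store.resolve("hfile-2"), "b");

        List<Path> manifest = writeManifest(store);

        // A "compaction" writes a new file after the manifest was taken;
        // it is not in the manifest and must not end up in the backup.
        Files.writeString(store.resolve("hfile-3-post-manifest"), "c");

        return backgroundCopy(manifest, backup).size();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // prints 2: only the manifested files copied
    }
}
```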
The base backup would maybe not even copy them. The scenario I have in mind is one where you take a base backup whenever it makes sense (once a day, a week, a month) and always archive all WALs. The last base backup with its WALs could be on disk, and previous ones might be spooled to tape. Maybe I am just dreaming, but then one could restore a base backup and replay the necessary WALs to bring the state to *any* given point in time.

Then one of the harder issues - as you say - would be to associate the correct WALs with the snapshot. I wonder how precise we really need to be, though. As every edit is timestamped - in theory - WAL replay would be idempotent.

And what about new tables? .META. changes are probably logged, but when replaying those we'd somehow need to create the tables. And then the .META. changes have to be in strict order w.r.t. the other WAL replays. Are the sequenceIds globally ordered? I wonder if they even need to be.

Maybe have an offline discussion?

-- Lars

________________________________
From: Stack <[email protected]>
To: [email protected]; lars hofhansl <[email protected]>
Sent: Thursday, September 15, 2011 9:29 PM
Subject: Re: Hooks for WAL archiving

On Thu, Sep 15, 2011 at 4:11 PM, lars hofhansl <[email protected]> wrote:
> A typical scenario for relational databases is to take periodic base backups
> and also archive the log files.
> Would that even work in HBase currently? Say I have a distcp copy of all
> HBase files that was done while HBase was running, and I also have an archive
> of all WALs since the time when the distcp started.
>
> Could I theoretically restore HBase to a consistent state (at any time
> after the distcp finished)? Or are there changes that are not WAL logged that
> I would miss (like admin actions)?
>

I'm interested in this topic too. Related work was done up in HBASE-50, snapshotting. Have you seen that, Lars? It'd roll WALs and make a manifest of all hfiles.
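[Editor's sketch: Lars's remark above - that because every edit carries a timestamp, WAL replay would in theory be idempotent - can be shown with a toy model. This is not HBase code; the types are invented for illustration. Each cell keeps whichever value has the highest timestamp, so re-applying an already-applied edit changes nothing, and replaying the same WAL once or twice yields the same state.]

```java
import java.util.*;

// Toy model of timestamp-based (last-write-wins) idempotent WAL replay.
public class ReplayDemo {

    record Edit(String row, long ts, String value) {}

    // One cell: a value plus the timestamp it was written with.
    static final class Cell {
        final long ts; final String value;
        Cell(long ts, String value) { this.ts = ts; this.value = value; }
    }

    static void apply(Map<String, Cell> store, Edit e) {
        Cell cur = store.get(e.row());
        // Keep the edit only if its timestamp is at least as new;
        // re-applying the same edit is therefore a no-op.
        if (cur == null || e.ts() >= cur.ts) {
            store.put(e.row(), new Cell(e.ts(), e.value()));
        }
    }

    // Replay the whole WAL `times` times and return the resulting state.
    static Map<String, String> replay(List<Edit> wal, int times) {
        Map<String, Cell> store = new HashMap<>();
        for (int i = 0; i < times; i++) {
            for (Edit e : wal) apply(store, e);
        }
        Map<String, String> result = new TreeMap<>();
        store.forEach((row, cell) -> result.put(row, cell.value));
        return result;
    }

    public static void main(String[] args) {
        List<Edit> wal = List.of(
            new Edit("row1", 100, "a"),
            new Edit("row1", 200, "b"),
            new Edit("row2", 150, "x"));
        // Replaying once and replaying twice produce identical state.
        System.out.println(replay(wal, 1).equals(replay(wal, 2))); // true
    }
}
```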
A background process could then copy off the hfiles in the manifest, and the WALs. IIRC, there was a restore-from-snapshot mechanism too. We'd need to figure out stuff like the most recent sequenceid for a region and then discard all WAL edits that were done before that sequenceid. Reading the head and tail of a WAL we could figure out what sequenceids it had (we should probably get the sequenceid into the name of the file, at least the start sequenceid... and perhaps even an accompanying metadata file, or an entry at the end of the WAL that had the list of regions for which the WAL had edits - maybe this is more trouble than it's worth, since there will be times when we don't close the WAL properly). Sequenceids are kept by the regionserver; hfiles are by region, which can move among regionservers.

> If that works, a backup would involve these steps:
> 1. Flush all stores.

Flush would be nice but could take a good while to complete... could jeopardize your snapshot.

> 2. Copy the files.

I'd dump a manifest and background-copy. Copy is going to be heavy-duty too, I'd say, if you let it run full belt.

> 3. Roll all logs.
>
>
> #1 and #3 are really optional; #3 is good because it would make all logs
> eligible for archiving right after the backup is done.
>
>
> In any case, some hooks to act upon HLog actions would be a good thing anyway.
> For example we could add four new methods to WALObserver (or a new observer
> type):
>
> boolean preLogRoll(Path newFile)
> void postLogRoll(Path newFile)
>
> boolean preLogArchive(Path oldFile)
> void postLogArchive(Path oldFile)
>

Is HBASE-4132 related?
St.Ack
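[Editor's sketch: the four hook methods Lars proposes above might look roughly like this as a coprocessor-style observer. This is a standalone illustration against a hypothetical interface, not the actual HBase WALObserver API; `String` stands in for Hadoop's `org.apache.hadoop.fs.Path`, and all names are invented. The boolean `pre*` hooks can veto the action, while a backup tool would implement `postLogArchive` to pick up WALs for copying before they are deleted.]

```java
import java.util.*;

// Sketch of the proposed WAL roll/archive hooks as a plain interface.
public class WalHooksDemo {

    interface WalArchivalObserver {
        // pre* hooks return false to veto the action.
        default boolean preLogRoll(String newFile)    { return true; }
        default void    postLogRoll(String newFile)   {}
        default boolean preLogArchive(String oldFile) { return true; }
        default void    postLogArchive(String oldFile){}
    }

    // Backup-oriented observer: remember every WAL that was archived so a
    // backup process can copy it off before the cleaner deletes it.
    static final class WalBackupObserver implements WalArchivalObserver {
        final List<String> toCopy = new ArrayList<>();
        @Override public void postLogArchive(String oldFile) {
            toCopy.add(oldFile);
        }
    }

    // Simulate the roll/archive sequence the hooks would be invoked from.
    static List<String> simulate(WalBackupObserver obs) {
        String oldWal = "wal.0001", newWal = "wal.0002";
        if (obs.preLogRoll(newWal))    obs.postLogRoll(newWal);    // log roll
        if (obs.preLogArchive(oldWal)) obs.postLogArchive(oldWal); // archival
        return obs.toCopy;
    }

    public static void main(String[] args) {
        System.out.println(simulate(new WalBackupObserver())); // [wal.0001]
    }
}
```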
