[
https://issues.apache.org/jira/browse/HBASE-50?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12852446#action_12852446
]
Jonathan Gray commented on HBASE-50:
------------------------------------
One idea floated around here was to use HDFS-level hard links on store files,
which would prevent HBase from moving or deleting them out from under a snapshot.
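HDFS had no hard-link API at the time, so as a rough illustration of the idea only, here is a minimal sketch against a local POSIX filesystem using `java.nio.file.Files.createLink`: a snapshot directory hard-links a store file, and the link keeps the bytes alive even after the original path is deleted (e.g. by a compaction). All names here are hypothetical, not HBase code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class HardLinkSnapshotSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical stand-ins for a region's data dir and a snapshot dir.
        Path dataDir = Files.createTempDirectory("table");
        Path snapDir = Files.createTempDirectory("snapshot");

        // Simulate a flushed store file.
        Path storeFile = dataDir.resolve("storefile-0001");
        Files.write(storeFile, "row1,cf:qual,value1\n".getBytes());

        // "Snapshot" the file by hard-linking it; no data is copied.
        Path link = snapDir.resolve(storeFile.getFileName());
        Files.createLink(link, storeFile);

        // Even if a later compaction deletes the original path...
        Files.delete(storeFile);

        // ...the snapshot still reads the same bytes through its link.
        System.out.print(new String(Files.readAllBytes(link)));
    }
}
```

The design point is that a snapshot becomes a set of extra names for immutable files rather than a copy, so taking one is cheap; the open question in this thread is how to get equivalent semantics on HDFS, which only tracks a single name per file.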
Why do you say we cannot flush and instead need to use the WAL? Is it just an
overhead thing? If so, that makes sense, but I'm wondering whether you see it as
a requirement or just a different strategy.
It's likely that we will need to spend some effort on this JIRA in the near
future. Thanks for freshening it up.
> Snapshot of table
> -----------------
>
> Key: HBASE-50
> URL: https://issues.apache.org/jira/browse/HBASE-50
> Project: Hadoop HBase
> Issue Type: New Feature
> Reporter: Billy Pearson
> Priority: Minor
>
> Having an option to take a snapshot of a table would be very useful in
> production.
> What I would like this option to do is merge all the data into one or more
> files stored in the same folder on the DFS. This way we could save the data
> in case of a software bug in Hadoop or user code.
> The other advantage would be the ability to export a table to multiple
> locations. Say I had a read-only table that must stay online. I could take a
> snapshot of it when needed, export it to a separate data center, and load it
> there; then I would have it online at multiple data centers for load
> balancing and failover.
> I understand that Hadoop removes the need for backups to protect against
> failed servers, but this does not protect us from software bugs that might
> delete or alter data in ways we did not plan. We should have a way to roll
> back a dataset.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.