On 29 May 2008, at 17:52, Chris Siebenmann wrote:

> The first issue alone makes 'zfs send' completely unsuitable for the
> purposes that we currently use ufsdump for. I don't believe that
> we've lost a complete filesystem in years, but we restore accidentally
> deleted files all the time. (And snapshots are not the answer, as it
> is common that a user doesn't notice the problem until well after the
> fact.)
>
> ('zfs send' to live disks is not the answer, because we cannot afford
> the space, heat, power, disks, enclosures, and servers to spin as many
> disks as we have tape space, especially if we want the fault isolation
> that separate tapes give us. Most especially if we have to build a
> second, physically separate machine room in another building to put
> the backups in.)

However, the original poster did say they wanted to back up to another
disk and wanted something lightweight, cheap, and easy. zfs
send/receive would seem to fit the bill in that case. Let's answer the
question rather than getting into an argument about whether zfs
send/receive is suitable for an enterprise archival solution.

Using snapshots is a useful practice in itself: they cost fairly little
disk space and provide immediate access to recent, accidentally deleted
files. If one is already taking snapshots, sending the streams to the
backup pool is a simple procedure. One can then keep as many snapshots
on the backup pool as necessary to provide the required amount of
history. All of the files are kept in identical form on the backup pool
for easy browsing when something needs to be restored. In the event of
catastrophic failure of the primary pool, one can quickly move the
backup disk to the primary system and import it as the new primary
pool.
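
For concreteness, here is a minimal sketch of that procedure. The pool
and filesystem names (tank/home on the primary pool, backup for the
backup pool) and the snapshot names are placeholders for illustration:

   # Take a snapshot of the filesystem to be backed up.
   zfs snapshot tank/home@2008-05-29

   # First run: send the full stream, creating backup/home.
   zfs send tank/home@2008-05-29 | zfs receive backup/home

   # Subsequent runs: snapshot again and send only the changes.
   # -F rolls the destination back to its most recent snapshot
   # first, in case it was browsed or modified in the meantime.
   zfs snapshot tank/home@2008-05-30
   zfs send -i tank/home@2008-05-29 tank/home@2008-05-30 | \
       zfs receive -F backup/home

   # Old snapshots can then be destroyed on the primary pool, while
   # the backup pool retains as much history as you need.
   zfs destroy tank/home@2008-05-29

   # After a catastrophic failure of the primary pool, move the
   # backup disk to the primary system and import it.
   zpool import backup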

It's a bit-perfect incremental backup strategy that requires no
additional tools.

Jonathan
